From: "Arisu Tachibana" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Reply-To: gentoo-dev@lists.gentoo.org, "Arisu Tachibana" <alicef@gentoo.org>
Message-ID: <1757649448.853f3d7e17c00d28786b9e13f8357dc856779587.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1151_linux-6.1.152.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Arisu Tachibana
X-VCS-Revision: 853f3d7e17c00d28786b9e13f8357dc856779587
X-VCS-Branch: 6.1
Date: Fri, 12 Sep 2025 03:57:41 +0000 (UTC)

commit:     853f3d7e17c00d28786b9e13f8357dc856779587
Author:     Arisu Tachibana <alicef@gentoo.org>
AuthorDate: Fri Sep 12 03:57:28 2025 +0000
Commit:     Arisu Tachibana <alicef@gentoo.org>
CommitDate: Fri Sep 12 03:57:28 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=853f3d7e

Linux patch 6.1.152

Signed-off-by: Arisu Tachibana <alicef@gentoo.org>

 0000_README              |   4 +
 1151_linux-6.1.152.patch | 744 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 748 insertions(+)

diff --git a/0000_README b/0000_README
index 9712c9c8..9696c806 100644
--- a/0000_README
+++ b/0000_README
@@ -647,6 +647,10 @@ Patch:  1150_linux-6.1.151.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.1.151
 
+Patch:  1151_linux-6.1.152.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.1.152
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1151_linux-6.1.152.patch b/1151_linux-6.1.152.patch
new file mode 100644
index 00000000..d0a00de0
--- /dev/null
+++ b/1151_linux-6.1.152.patch
@@ -0,0 +1,744 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 97e695efa95995..6eeaec81824298 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -528,6 +528,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/srbds
+ 		/sys/devices/system/cpu/vulnerabilities/tsa
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
++		/sys/devices/system/cpu/vulnerabilities/vmscape
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index dc69ba0b05e474..4f6c1a695fa91f 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -23,3 +23,4 @@ are configurable at compile, boot or run time.
+    srso
+    reg-file-data-sampling
+    indirect-target-selection
++   vmscape
+diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
+new file mode 100644
+index 00000000000000..d9b9a2b6c114c0
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
+@@ -0,0 +1,110 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++VMSCAPE
++=======
++
++VMSCAPE is a vulnerability that may allow a guest to influence the branch
++prediction in host userspace. It particularly affects hypervisors like QEMU.
++
++Even if a hypervisor does not hold sensitive data like disk encryption keys,
++guest userspace may be able to attack the guest kernel using the hypervisor as
++a confused deputy.
++
++Affected processors
++-------------------
++
++The following CPU families are affected by VMSCAPE:
++
++**Intel processors:**
++  - Skylake generation (Parts without Enhanced-IBRS)
++  - Cascade Lake generation (Parts affected by ITS guest/host separation)
++  - Alder Lake and newer (Parts affected by BHI)
++
++Note that BHI-affected parts that use the BHB clearing software mitigation,
++e.g. Ice Lake, are not vulnerable to VMSCAPE.
++
++**AMD processors:**
++  - Zen series (families 0x17, 0x19, 0x1a)
++
++**Hygon processors:**
++  - Family 0x18
++
++Mitigation
++----------
++
++Conditional IBPB
++----------------
++
++The kernel tracks when a CPU has run a potentially malicious guest and issues
++an IBPB before the first exit to userspace after VM-exit. If userspace did not
++run between VM-exit and the next VM-entry, no IBPB is issued.
++
++Note that the existing userspace mitigations against Spectre-v2 are effective
++in protecting userspace from other userspace, but they are insufficient to
++protect userspace VMMs from a malicious guest. This is because Spectre-v2
++mitigations are applied at context switch time, while a userspace VMM can run
++after a VM-exit without any context switch.
++
++Vulnerability enumeration and mitigation are not applied inside a guest. This
++is because nested hypervisors should already be deploying IBPB to isolate
++themselves from nested guests.
++
++SMT considerations
++------------------
++
++When Simultaneous Multi-Threading (SMT) is enabled, hypervisors can be
++vulnerable to cross-thread attacks. For complete protection against VMSCAPE
++attacks in SMT environments, STIBP should be enabled.
++
++The kernel will issue a warning if SMT is enabled without adequate STIBP
++protection. The warning is not issued when:
++
++- SMT is disabled
++- STIBP is enabled system-wide
++- Intel eIBRS is enabled (which implies STIBP protection)
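The Conditional IBPB scheme described above reduces to a per-CPU flag: set it
when a guest has run, consume it on the next transition to userspace. The
following standalone C sketch models just that protocol; it is illustrative
only, not kernel code (issue_ibpb() stands in for the real PRED_CMD MSR write,
and a single flag stands in for the kernel's per-CPU variable):

        #include <stdbool.h>
        #include <stdio.h>

        static bool ibpb_pending;          /* per-CPU in the kernel */

        static void issue_ibpb(void)
        {
                /* kernel: wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB) */
                puts("IBPB issued");
        }

        static void vm_exit(void)
        {
                /* a potentially malicious guest has run on this CPU */
                ibpb_pending = true;
        }

        static void exit_to_user(void)
        {
                /* flush once, and only if a guest ran since the last user exit */
                if (ibpb_pending) {
                        issue_ibpb();
                        ibpb_pending = false;
                }
        }

        int main(void)
        {
                vm_exit();
                vm_exit();         /* back-to-back VM-exits, no userspace in between */
                exit_to_user();    /* one IBPB covers both */
                exit_to_user();    /* no guest ran since: no IBPB */
                return 0;
        }

Note how repeated VM-exits with no intervening userspace cost a single IBPB,
which is the point of making the barrier conditional.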
++
++System information and options
++------------------------------
++
++The sysfs file showing VMSCAPE mitigation status is:
++
++  /sys/devices/system/cpu/vulnerabilities/vmscape
++
++The possible values in this file are:
++
++ * 'Not affected':
++
++   The processor is not vulnerable to VMSCAPE attacks.
++
++ * 'Vulnerable':
++
++   The processor is vulnerable and no mitigation has been applied.
++
++ * 'Mitigation: IBPB before exit to userspace':
++
++   Conditional IBPB mitigation is enabled. The kernel tracks when a CPU has
++   run a potentially malicious guest and issues an IBPB before the first
++   exit to userspace after VM-exit.
++
++ * 'Mitigation: IBPB on VMEXIT':
++
++   IBPB is issued on every VM-exit. This occurs when other mitigations like
++   RETBLEED or SRSO are already issuing IBPB on VM-exit.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The mitigation can be controlled via the ``vmscape=`` command line parameter:
++
++ * ``vmscape=off``:
++
++   Disable the VMSCAPE mitigation.
++
++ * ``vmscape=ibpb``:
++
++   Enable conditional IBPB mitigation (default when CONFIG_MITIGATION_VMSCAPE=y).
++
++ * ``vmscape=force``:
++
++   Force vulnerability detection and mitigation even on processors that are
++   not known to be affected.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index eaeabff9beff6a..cce27317273925 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3297,6 +3297,7 @@
+ 			       srbds=off [X86,INTEL]
+ 			       ssbd=force-off [ARM64]
+ 			       tsx_async_abort=off [X86]
++			       vmscape=off [X86]
+ 
+ 			Exceptions:
+ 			       This does not have any effect on
+@@ -6813,6 +6814,16 @@
+ 	vmpoff=		[KNL,S390] Perform z/VM CP command after power off.
+ 			Format: <string>
+ 
++	vmscape=	[X86] Controls mitigation for VMscape attacks.
++			VMscape attacks can leak information from a userspace
++			hypervisor to a guest via speculative side-channels.
++
++			off    - disable the mitigation
++			ibpb   - use Indirect Branch Prediction Barrier
++				 (IBPB) mitigation (default)
++			force  - force vulnerability detection even on
++				 unaffected processors
++
+ 	vsyscall=	[X86-64]
+ 			Controls the behavior of vsyscalls (i.e. calls to
+ 			fixed addresses of 0xffffffffff600x00 from legacy
+diff --git a/Makefile b/Makefile
+index 565d019f9de06b..5d7fd3b481b3d2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 1
+-SUBLEVEL = 151
++SUBLEVEL = 152
+ EXTRAVERSION =
+ NAME = Curry Ramen
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index f2351027507696..c7dfe2a5f162cb 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2595,6 +2595,15 @@ config MITIGATION_TSA
+ 	  security vulnerability on AMD CPUs which can lead to forwarding of
+ 	  invalid info to subsequent instructions and thus can affect their
+ 	  timing and thereby cause a leakage.
++
++config MITIGATION_VMSCAPE
++	bool "Mitigate VMSCAPE"
++	depends on KVM
++	default y
++	help
++	  Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security
++	  vulnerability on Intel and AMD CPUs that may allow a guest to do
++	  Spectre v2 style attacks on the userspace hypervisor.
+ endif
+ 
+ config ARCH_HAS_ADD_PAGES
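The vmscape= option documented above is parsed with a plain strcmp() dispatch;
the real parser, vmscape_parse_cmdline(), appears further down in
arch/x86/kernel/cpu/bugs.c. A compilable standalone sketch of the same pattern
follows (the force_detect variable is a stand-in for setup_force_cpu_bug(),
which has no userspace equivalent):

        #include <stdio.h>
        #include <string.h>

        enum vmscape_mitigations {
                VMSCAPE_MITIGATION_NONE,
                VMSCAPE_MITIGATION_AUTO,
                VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
        };

        static enum vmscape_mitigations vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
        static int force_detect;

        static int vmscape_parse(const char *str)
        {
                if (!str)
                        return -1;

                if (!strcmp(str, "off"))
                        vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
                else if (!strcmp(str, "ibpb"))
                        vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
                else if (!strcmp(str, "force"))
                        force_detect = 1;   /* detection forced, selection stays AUTO */
                else
                        fprintf(stderr, "Ignoring unknown vmscape=%s option.\n", str);

                return 0;
        }

        int main(void)
        {
                vmscape_parse("ibpb");
                printf("mitigation=%d force=%d\n", vmscape_mitigation, force_detect);
                return 0;
        }

Unknown values are deliberately ignored rather than fatal, matching how the
kernel treats unrecognized mitigation options.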
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index c48a9733e906ab..f86e100cf56bae 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -452,6 +452,7 @@
+ #define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* "" AMD CPU not vulnerable to TSA-SQ */
+ #define X86_FEATURE_TSA_L1_NO		(21*32+12) /* "" AMD CPU not vulnerable to TSA-L1 */
+ #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* "" Clear CPU buffers using VERW before VMRUN */
++#define X86_FEATURE_IBPB_EXIT_TO_USER	(21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
+ 
+ /*
+  * BUG word(s)
+@@ -505,4 +506,5 @@
+ #define X86_BUG_ITS			X86_BUG(1*32 + 5) /* CPU is affected by Indirect Target Selection */
+ #define X86_BUG_ITS_NATIVE_ONLY		X86_BUG(1*32 + 6) /* CPU is affected by ITS, VMX is not affected */
+ #define X86_BUG_TSA			X86_BUG(1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
++#define X86_BUG_VMSCAPE			X86_BUG( 1*32+10) /* "vmscape" CPU is affected by VMSCAPE attacks from guests */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
+index ebdf5c97f53a81..7dedda82f4992d 100644
+--- a/arch/x86/include/asm/entry-common.h
++++ b/arch/x86/include/asm/entry-common.h
+@@ -83,6 +83,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+ 	 * 8 (ia32) bits.
+ 	 */
+ 	choose_random_kstack_offset(rdtsc());
++
++	/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
++	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
++	    this_cpu_read(x86_ibpb_exit_to_user)) {
++		indirect_branch_prediction_barrier();
++		this_cpu_write(x86_ibpb_exit_to_user, false);
++	}
+ }
+ #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
+ 
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index c77a65a3e5f14a..818a5913f21950 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -394,6 +394,8 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
+ 
+ extern u64 x86_pred_cmd;
+ 
++DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
++
+ static inline void indirect_branch_prediction_barrier(void)
+ {
+ 	alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_USE_IBPB);
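The exit-to-user hook above is deliberately double-gated: cpu_feature_enabled()
compiles down to a boot-time-patched static branch, so unaffected systems never
touch the per-CPU flag at all, and affected systems pay the memory read only
behind that cheap check. A small C model of the ordering follows; this is a
sketch under that assumption, not kernel code (a const stands in for the
patched branch):

        #include <stdbool.h>
        #include <stdio.h>

        /* decided once at boot; cpu_feature_enabled() is patched accordingly */
        static const bool ibpb_exit_to_user_enabled = true;

        static bool ibpb_pending = true;      /* per-CPU flag in the kernel */

        static void exit_to_user_mode_prepare(void)
        {
                /* cheap compile/boot-time gate first ... */
                if (ibpb_exit_to_user_enabled &&
                    ibpb_pending) {           /* ... per-CPU read only behind it */
                        puts("IBPB");         /* indirect_branch_prediction_barrier() */
                        ibpb_pending = false; /* one barrier per guest run */
                }
        }

        int main(void)
        {
                exit_to_user_mode_prepare();   /* flushes */
                exit_to_user_mode_prepare();   /* flag already clear: no flush */
                return 0;
        }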
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 4fbb5b15ab7516..ff8965bce6c900 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -50,6 +50,7 @@ static void __init gds_select_mitigation(void);
+ static void __init srso_select_mitigation(void);
+ static void __init its_select_mitigation(void);
+ static void __init tsa_select_mitigation(void);
++static void __init vmscape_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -59,6 +60,14 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
+ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_current);
+ 
++/*
++ * Set when the CPU has run a potentially malicious guest. An IBPB will be
++ * needed before running userspace. That IBPB will flush the branch
++ * predictor content.
++ */
++DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
++EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
++
+ u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
+ EXPORT_SYMBOL_GPL(x86_pred_cmd);
+ 
+@@ -185,6 +194,7 @@ void __init cpu_select_mitigations(void)
+ 	gds_select_mitigation();
+ 	its_select_mitigation();
+ 	tsa_select_mitigation();
++	vmscape_select_mitigation();
+ }
+ 
+ /*
+@@ -2128,80 +2138,6 @@ static void __init tsa_select_mitigation(void)
+ 	pr_info("%s\n", tsa_strings[tsa_mitigation]);
+ }
+ 
+-void cpu_bugs_smt_update(void)
+-{
+-	mutex_lock(&spec_ctrl_mutex);
+-
+-	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+-	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+-		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+-
+-	switch (spectre_v2_user_stibp) {
+-	case SPECTRE_V2_USER_NONE:
+-		break;
+-	case SPECTRE_V2_USER_STRICT:
+-	case SPECTRE_V2_USER_STRICT_PREFERRED:
+-		update_stibp_strict();
+-		break;
+-	case SPECTRE_V2_USER_PRCTL:
+-	case SPECTRE_V2_USER_SECCOMP:
+-		update_indir_branch_cond();
+-		break;
+-	}
+-
+-	switch (mds_mitigation) {
+-	case MDS_MITIGATION_FULL:
+-	case MDS_MITIGATION_VMWERV:
+-		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
+-			pr_warn_once(MDS_MSG_SMT);
+-		update_mds_branch_idle();
+-		break;
+-	case MDS_MITIGATION_OFF:
+-		break;
+-	}
+-
+-	switch (taa_mitigation) {
+-	case TAA_MITIGATION_VERW:
+-	case TAA_MITIGATION_UCODE_NEEDED:
+-		if (sched_smt_active())
+-			pr_warn_once(TAA_MSG_SMT);
+-		break;
+-	case TAA_MITIGATION_TSX_DISABLED:
+-	case TAA_MITIGATION_OFF:
+-		break;
+-	}
+-
+-	switch (mmio_mitigation) {
+-	case MMIO_MITIGATION_VERW:
+-	case MMIO_MITIGATION_UCODE_NEEDED:
+-		if (sched_smt_active())
+-			pr_warn_once(MMIO_MSG_SMT);
+-		break;
+-	case MMIO_MITIGATION_OFF:
+-		break;
+-	}
+-
+-	switch (tsa_mitigation) {
+-	case TSA_MITIGATION_USER_KERNEL:
+-	case TSA_MITIGATION_VM:
+-	case TSA_MITIGATION_FULL:
+-	case TSA_MITIGATION_UCODE_NEEDED:
+-		/*
+-		 * TSA-SQ can potentially lead to info leakage between
+-		 * SMT threads.
+-		 */
+-		if (sched_smt_active())
+-			static_branch_enable(&cpu_buf_idle_clear);
+-		else
+-			static_branch_disable(&cpu_buf_idle_clear);
+-		break;
+-	case TSA_MITIGATION_NONE:
+-		break;
+-	}
+-
+-	mutex_unlock(&spec_ctrl_mutex);
+-}
+-
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Speculative Store Bypass: " fmt
+ 
+@@ -2890,9 +2826,169 @@ static void __init srso_select_mitigation(void)
+ 		x86_pred_cmd = PRED_CMD_SBPB;
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt) "VMSCAPE: " fmt
++
++enum vmscape_mitigations {
++	VMSCAPE_MITIGATION_NONE,
++	VMSCAPE_MITIGATION_AUTO,
++	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
++	VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
++};
++
++static const char * const vmscape_strings[] = {
++	[VMSCAPE_MITIGATION_NONE]		= "Vulnerable",
++	/* [VMSCAPE_MITIGATION_AUTO] */
++	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]	= "Mitigation: IBPB before exit to userspace",
++	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT",
++};
++
++static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
++	IS_ENABLED(CONFIG_MITIGATION_VMSCAPE) ? VMSCAPE_MITIGATION_AUTO : VMSCAPE_MITIGATION_NONE;
++
++static int __init vmscape_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off")) {
++		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
++	} else if (!strcmp(str, "ibpb")) {
++		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
++	} else if (!strcmp(str, "force")) {
++		setup_force_cpu_bug(X86_BUG_VMSCAPE);
++		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
++	} else {
++		pr_err("Ignoring unknown vmscape=%s option.\n", str);
++	}
++
++	return 0;
++}
++early_param("vmscape", vmscape_parse_cmdline);
++
++static void __init vmscape_select_mitigation(void)
++{
++	if (cpu_mitigations_off() ||
++	    !boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
++	    !boot_cpu_has(X86_FEATURE_IBPB)) {
++		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
++		return;
++	}
++
++	if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO)
++		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
++
++	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB ||
++	    srso_mitigation == SRSO_MITIGATION_IBPB_ON_VMEXIT)
++		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_ON_VMEXIT;
++
++	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
++		setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
++
++	pr_info("%s\n", vmscape_strings[vmscape_mitigation]);
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) fmt
+ 
++#define VMSCAPE_MSG_SMT "VMSCAPE: SMT on, STIBP is required for full protection. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/vmscape.html for more details.\n"
++
++void cpu_bugs_smt_update(void)
++{
++	mutex_lock(&spec_ctrl_mutex);
++
++	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
++	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
++		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
++
++	switch (spectre_v2_user_stibp) {
++	case SPECTRE_V2_USER_NONE:
++		break;
++	case SPECTRE_V2_USER_STRICT:
++	case SPECTRE_V2_USER_STRICT_PREFERRED:
++		update_stibp_strict();
++		break;
++	case SPECTRE_V2_USER_PRCTL:
++	case SPECTRE_V2_USER_SECCOMP:
++		update_indir_branch_cond();
++		break;
++	}
++
++	switch (mds_mitigation) {
++	case MDS_MITIGATION_FULL:
++	case MDS_MITIGATION_VMWERV:
++		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
++			pr_warn_once(MDS_MSG_SMT);
++		update_mds_branch_idle();
++		break;
++	case MDS_MITIGATION_OFF:
++		break;
++	}
++
++	switch (taa_mitigation) {
++	case TAA_MITIGATION_VERW:
++	case TAA_MITIGATION_UCODE_NEEDED:
++		if (sched_smt_active())
++			pr_warn_once(TAA_MSG_SMT);
++		break;
++	case TAA_MITIGATION_TSX_DISABLED:
++	case TAA_MITIGATION_OFF:
++		break;
++	}
++
++	switch (mmio_mitigation) {
++	case MMIO_MITIGATION_VERW:
++	case MMIO_MITIGATION_UCODE_NEEDED:
++		if (sched_smt_active())
++			pr_warn_once(MMIO_MSG_SMT);
++		break;
++	case MMIO_MITIGATION_OFF:
++		break;
++	}
++
++	switch (tsa_mitigation) {
++	case TSA_MITIGATION_USER_KERNEL:
++	case TSA_MITIGATION_VM:
++	case TSA_MITIGATION_FULL:
++	case TSA_MITIGATION_UCODE_NEEDED:
++		/*
++		 * TSA-SQ can potentially lead to info leakage between
++		 * SMT threads.
++		 */
++		if (sched_smt_active())
++			static_branch_enable(&cpu_buf_idle_clear);
++		else
++			static_branch_disable(&cpu_buf_idle_clear);
++		break;
++	case TSA_MITIGATION_NONE:
++		break;
++	}
++
++	switch (vmscape_mitigation) {
++	case VMSCAPE_MITIGATION_NONE:
++	case VMSCAPE_MITIGATION_AUTO:
++		break;
++	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
++	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
++		/*
++		 * Hypervisors can be attacked across threads; warn for SMT when
++		 * STIBP is not already enabled system-wide.
++		 *
++		 * Intel eIBRS (!AUTOIBRS) implies STIBP on.
++		 */
++		if (!sched_smt_active() ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
++		    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
++		     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
++			break;
++		pr_warn_once(VMSCAPE_MSG_SMT);
++		break;
++	}
++
++	mutex_unlock(&spec_ctrl_mutex);
++}
++
+ #ifdef CONFIG_SYSFS
+ 
+ #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
+@@ -3138,6 +3234,11 @@ static ssize_t tsa_show_state(char *buf)
+ 	return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
+ }
+ 
++static ssize_t vmscape_show_state(char *buf)
++{
++	return sysfs_emit(buf, "%s\n", vmscape_strings[vmscape_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -3202,6 +3303,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 	case X86_BUG_TSA:
+ 		return tsa_show_state(buf);
+ 
++	case X86_BUG_VMSCAPE:
++		return vmscape_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -3291,4 +3395,9 @@ ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
+ }
++
++ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_VMSCAPE);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9c849a4160cda7..19c9087e2b84dc 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1258,42 +1258,54 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define ITS_NATIVE_ONLY	BIT(9)
+ /* CPU is affected by Transient Scheduler Attacks */
+ #define TSA		BIT(10)
++/* CPU is affected by VMSCAPE */
++#define VMSCAPE		BIT(11)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+-	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(HASWELL_X,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(BROADWELL_D,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+-	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPINGS(0x0, 0x5),	MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | ITS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xb),	MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS | ITS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0xc),	MMIO | RETBLEED | GDS | SRBDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS | ITS),
+-	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED),
++	VULNBL_INTEL_STEPPINGS(SANDYBRIDGE_X,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(SANDYBRIDGE,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(IVYBRIDGE_X,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(HASWELL_X,	X86_STEPPING_ANY,		MMIO | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_D,	X86_STEPPING_ANY,		MMIO | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPINGS(0x0, 0x5),	MMIO | RETBLEED | GDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xb),	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0xc),	MMIO | RETBLEED | GDS | SRBDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED | VMSCAPE),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO | GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,		MMIO | GDS | ITS | ITS_NATIVE_ONLY),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | RETBLEED | ITS),
+-	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | RETBLEED | ITS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(COMETLAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | VMSCAPE),
+ 	VULNBL_INTEL_STEPPINGS(TIGERLAKE_L,	X86_STEPPING_ANY,		GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPPINGS(TIGERLAKE,	X86_STEPPING_ANY,		GDS | ITS | ITS_NATIVE_ONLY),
+ 	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
+-	VULNBL_INTEL_STEPPINGS(ALDERLAKE,	X86_STEPPING_ANY,		RFDS),
+-	VULNBL_INTEL_STEPPINGS(ALDERLAKE_L,	X86_STEPPING_ANY,		RFDS),
+-	VULNBL_INTEL_STEPPINGS(RAPTORLAKE,	X86_STEPPING_ANY,		RFDS),
+-	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_P,	X86_STEPPING_ANY,		RFDS),
+-	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_S,	X86_STEPPING_ANY,		RFDS),
+-	VULNBL_INTEL_STEPPINGS(ALDERLAKE_N,	X86_STEPPING_ANY,		RFDS),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE,	X86_STEPPING_ANY,		RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE_L,	X86_STEPPING_ANY,		RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE,	X86_STEPPING_ANY,		RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_P,	X86_STEPPING_ANY,		RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(RAPTORLAKE_S,	X86_STEPPING_ANY,		RFDS | VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(METEORLAKE_L,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(ARROWLAKE_H,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(ARROWLAKE,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(LUNARLAKE_M,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(SAPPHIRERAPIDS_X,X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(GRANITERAPIDS_X,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(EMERALDRAPIDS_X,	X86_STEPPING_ANY,		VMSCAPE),
++	VULNBL_INTEL_STEPPINGS(ALDERLAKE_N,	X86_STEPPING_ANY,		RFDS | VMSCAPE),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RFDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO | RFDS),
+ 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RFDS),
+@@ -1303,9 +1315,9 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+-	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
+-	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
+-	VULNBL_AMD(0x19, SRSO | TSA),
++	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
++	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
++	VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE),
+ 	{}
+ };
+ 
+@@ -1520,6 +1532,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 		}
+ 	}
+ 
++	/*
++	 * Set the bug only on bare-metal. A nested hypervisor should already be
++	 * deploying IBPB to isolate itself from nested guests.
++	 */
++	if (cpu_matches(cpu_vuln_blacklist, VMSCAPE) &&
++	    !boot_cpu_has(X86_FEATURE_HYPERVISOR))
++		setup_force_cpu_bug(X86_BUG_VMSCAPE);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 57ba9071841ea3..11ca05d830e725 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10925,6 +10925,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 	if (vcpu->arch.guest_fpu.xfd_err)
+ 		wrmsrl(MSR_IA32_XFD_ERR, 0);
+ 
++	/*
++	 * Mark this CPU as needing a branch predictor flush before running
++	 * userspace. Must be done before enabling preemption to ensure it gets
++	 * set for the CPU that actually ran the guest, and not the CPU that it
++	 * may migrate to.
++	 */
++	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
++		this_cpu_write(x86_ibpb_exit_to_user, true);
++
+ 	/*
+ 	 * Consume any pending interrupts, including the possible source of
+ 	 * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
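The comment in the vcpu_enter_guest() hunk above is about per-CPU correctness:
the flag must land on the CPU that actually executed the guest, because branch
predictor state is physical, not per-task. A toy model with two fake CPUs shows
the semantics; it is illustrative only (real migration is driven by the
scheduler, not by passing CPU numbers around):

        #include <stdbool.h>
        #include <stdio.h>

        #define NR_CPUS 2
        static bool ibpb_pending[NR_CPUS];    /* models DEFINE_PER_CPU(bool, ...) */

        static void run_guest(int cpu)
        {
                /* guest code polluted the predictors of this CPU; mark it
                 * before any migration can move the task elsewhere */
                ibpb_pending[cpu] = true;
        }

        static void return_to_user(int cpu)
        {
                if (ibpb_pending[cpu]) {
                        printf("IBPB on CPU%d before exit to userspace\n", cpu);
                        ibpb_pending[cpu] = false;
                }
        }

        int main(void)
        {
                run_guest(0);         /* guest ran on CPU 0 */
                return_to_user(1);    /* task migrated: CPU 1 is clean, no IBPB */
                return_to_user(0);    /* next user exit on CPU 0 still flushes */
                return 0;
        }

Setting the flag after preemption was re-enabled could tag the wrong CPU and
leave CPU 0's polluted predictors unflushed for its next userspace task.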
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index d68c60f357640c..186bd680e74165 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -606,6 +606,11 @@ ssize_t __weak cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
+ 	return sysfs_emit(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -622,6 +627,7 @@ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);
+ static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
+ static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
+ static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
++static DEVICE_ATTR(vmscape, 0444, cpu_show_vmscape, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -640,6 +646,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_reg_file_data_sampling.attr,
+ 	&dev_attr_indirect_target_selection.attr,
+ 	&dev_attr_tsa.attr,
++	&dev_attr_vmscape.attr,
+ 	NULL
+ };
+ 
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 3d3ceccf822450..4883ce43d90ab5 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -79,6 +79,7 @@ extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
+ extern ssize_t cpu_show_indirect_target_selection(struct device *dev,
+ 						  struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_vmscape(struct device *dev, struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
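Once a kernel with this patch is running, the new sysfs file is the quickest
way to confirm the mitigation state. A minimal reader follows; the file path
comes from the patch above, and pre-6.1.152 kernels simply do not have it:

        #include <stdio.h>

        int main(void)
        {
                char line[128];
                FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/vmscape", "r");

                if (!f) {
                        perror("vmscape");   /* ENOENT on kernels without the patch */
                        return 1;
                }
                if (fgets(line, sizeof(line), f))
                        printf("vmscape: %s", line);
                fclose(f);
                return 0;
        }

Expected output is one of the strings listed in vmscape.rst above, e.g.
"Mitigation: IBPB before exit to userspace".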