From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id C5A12138359
	for ; Sun, 22 Nov 2020 19:17:16 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 06AFBE077F;
	Sun, 22 Nov 2020 19:17:16 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id D5B22E077F
	for ; Sun, 22 Nov 2020 19:17:15 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id A3F2833D0AF
	for ; Sun, 22 Nov 2020 19:17:14 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 5498042C
	for ; Sun, 22 Nov 2020 19:17:13 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1606072621.db50aaa6fd34b8904d947ad56404c9ab569198ce.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1207_linux-4.14.208.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: db50aaa6fd34b8904d947ad56404c9ab569198ce
X-VCS-Branch: 4.14
Date: Sun, 22 Nov 2020 19:17:13 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 608f8781-5c0d-4a06-a4cf-0d57eac61c93
X-Archives-Hash: aabc3af618c369f907abba066cc353c7

commit:     db50aaa6fd34b8904d947ad56404c9ab569198ce
Author:     Mike Pagano  gentoo org>
AuthorDate: Sun Nov 22 19:17:01 2020 +0000
Commit:     Mike Pagano  gentoo org>
CommitDate: Sun Nov 22 19:17:01 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=db50aaa6

Linux patch 4.14.208

Signed-off-by: Mike Pagano  gentoo.org>

 0000_README               |    4 +
 1207_linux-4.14.208.patch | 1563 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1567 insertions(+)

diff --git a/0000_README b/0000_README
index 12d9e49..24f69ff 100644
--- a/0000_README
+++ b/0000_README
@@ -871,6 +871,10 @@ Patch:  1206_linux-4.14.207.patch
 From:   https://www.kernel.org
 Desc:   Linux 4.14.207
 
+Patch:  1207_linux-4.14.208.patch
+From:   https://www.kernel.org
+Desc:   Linux 4.14.208
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1207_linux-4.14.208.patch b/1207_linux-4.14.208.patch new file mode 100644 index 0000000..dd6958f --- /dev/null +++ b/1207_linux-4.14.208.patch @@ -0,0 +1,1563 @@ +diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt +index e0ce14f028d82..357c64b53cdc7 100644 +--- a/Documentation/admin-guide/kernel-parameters.txt ++++ b/Documentation/admin-guide/kernel-parameters.txt +@@ -2446,6 +2446,8 @@ + mds=off [X86] + tsx_async_abort=off [X86] + kvm.nx_huge_pages=off [X86] ++ no_entry_flush [PPC] ++ no_uaccess_flush [PPC] + + Exceptions: + This does not have any effect on +@@ -2749,6 +2751,8 @@ + + noefi Disable EFI runtime services support. + ++ no_entry_flush [PPC] Don't flush the L1-D cache when entering the kernel. ++ + noexec [IA-64] + + noexec [X86] +@@ -2798,6 +2802,9 @@ + nospec_store_bypass_disable + [HW] Disable all mitigations for the Speculative Store Bypass vulnerability + ++ no_uaccess_flush ++ [PPC] Don't flush the L1-D cache after accessing user data. ++ + noxsave [BUGS=X86] Disables x86 extended register state save + and restore using xsave. The kernel will fallback to + enabling legacy floating-point and sse state. +diff --git a/Makefile b/Makefile +index c4bb19c1e4c7b..7133039972b87 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 14 +-SUBLEVEL = 207 ++SUBLEVEL = 208 + EXTRAVERSION = + NAME = Petit Gorille + +diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h +new file mode 100644 +index 0000000000000..aa54ac2e5659e +--- /dev/null ++++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h +@@ -0,0 +1,22 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++#ifndef _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H ++#define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H ++ ++DECLARE_STATIC_KEY_FALSE(uaccess_flush_key); ++ ++/* Prototype for function defined in exceptions-64s.S */ ++void do_uaccess_flush(void); ++ ++static __always_inline void allow_user_access(void __user *to, const void __user *from, ++ unsigned long size) ++{ ++} ++ ++static inline void prevent_user_access(void __user *to, const void __user *from, ++ unsigned long size) ++{ ++ if (static_branch_unlikely(&uaccess_flush_key)) ++ do_uaccess_flush(); ++} ++ ++#endif /* _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H */ +diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h +index c3bdd2d8ec903..8825459786514 100644 +--- a/arch/powerpc/include/asm/exception-64s.h ++++ b/arch/powerpc/include/asm/exception-64s.h +@@ -84,11 +84,18 @@ + nop; \ + nop + ++#define ENTRY_FLUSH_SLOT \ ++ ENTRY_FLUSH_FIXUP_SECTION; \ ++ nop; \ ++ nop; \ ++ nop; ++ + /* + * r10 must be free to use, r13 must be paca + */ + #define INTERRUPT_TO_KERNEL \ +- STF_ENTRY_BARRIER_SLOT ++ STF_ENTRY_BARRIER_SLOT; \ ++ ENTRY_FLUSH_SLOT + + /* + * Macros for annotating the expected destination of (h)rfid +@@ -645,6 +652,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943) + EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec); \ + EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV) + ++#define MASKABLE_RELON_EXCEPTION_PSERIES_OOL(vec, label) \ ++ EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec); \ ++ EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD) ++ + /* + * Our exception common code can be passed various "additions" + * to specify the behaviour of interrupts, whether to kick the +diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h +index 
b1d478acbaecf..745c017b8de60 100644 +--- a/arch/powerpc/include/asm/feature-fixups.h ++++ b/arch/powerpc/include/asm/feature-fixups.h +@@ -203,6 +203,22 @@ label##3: \ + FTR_ENTRY_OFFSET 955b-956b; \ + .popsection; + ++#define UACCESS_FLUSH_FIXUP_SECTION \ ++959: \ ++ .pushsection __uaccess_flush_fixup,"a"; \ ++ .align 2; \ ++960: \ ++ FTR_ENTRY_OFFSET 959b-960b; \ ++ .popsection; ++ ++#define ENTRY_FLUSH_FIXUP_SECTION \ ++957: \ ++ .pushsection __entry_flush_fixup,"a"; \ ++ .align 2; \ ++958: \ ++ FTR_ENTRY_OFFSET 957b-958b; \ ++ .popsection; ++ + #define RFI_FLUSH_FIXUP_SECTION \ + 951: \ + .pushsection __rfi_flush_fixup,"a"; \ +@@ -235,8 +251,11 @@ label##3: \ + #include + + extern long stf_barrier_fallback; ++extern long entry_flush_fallback; + extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup; + extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup; ++extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup; ++extern long __start___entry_flush_fixup, __stop___entry_flush_fixup; + extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup; + extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; + extern long __start__btb_flush_fixup, __stop__btb_flush_fixup; +diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h +index 3c7d859452294..cbcb97c43d82b 100644 +--- a/arch/powerpc/include/asm/futex.h ++++ b/arch/powerpc/include/asm/futex.h +@@ -35,6 +35,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, + { + int oldval = 0, ret; + ++ allow_write_to_user(uaddr, sizeof(*uaddr)); + pagefault_disable(); + + switch (op) { +@@ -61,6 +62,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, + + *oval = oldval; + ++ prevent_write_to_user(uaddr, sizeof(*uaddr)); + return ret; + } + +@@ -74,6 +76,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, + if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32))) + return -EFAULT; + ++ allow_write_to_user(uaddr, sizeof(*uaddr)); + __asm__ __volatile__ ( + PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %1,0,%3 # futex_atomic_cmpxchg_inatomic\n\ +@@ -94,6 +97,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, + : "cc", "memory"); + + *uval = prev; ++ prevent_write_to_user(uaddr, sizeof(*uaddr)); + return ret; + } + +diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h +new file mode 100644 +index 0000000000000..f0f8e36ad71f5 +--- /dev/null ++++ b/arch/powerpc/include/asm/kup.h +@@ -0,0 +1,40 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++#ifndef _ASM_POWERPC_KUP_H_ ++#define _ASM_POWERPC_KUP_H_ ++ ++#ifndef __ASSEMBLY__ ++ ++#include ++ ++#ifdef CONFIG_PPC_BOOK3S_64 ++#include ++#else ++static inline void allow_user_access(void __user *to, const void __user *from, ++ unsigned long size) { } ++static inline void prevent_user_access(void __user *to, const void __user *from, ++ unsigned long size) { } ++#endif /* CONFIG_PPC_BOOK3S_64 */ ++ ++static inline void allow_read_from_user(const void __user *from, unsigned long size) ++{ ++ allow_user_access(NULL, from, size); ++} ++ ++static inline void allow_write_to_user(void __user *to, unsigned long size) ++{ ++ allow_user_access(to, NULL, size); ++} ++ ++static inline void prevent_read_from_user(const void __user *from, unsigned long size) ++{ ++ prevent_user_access(NULL, from, size); ++} ++ ++static inline void prevent_write_to_user(void __user *to, unsigned long size) ++{ ++ prevent_user_access(to, NULL, 
size); ++} ++ ++#endif /* !__ASSEMBLY__ */ ++ ++#endif /* _ASM_POWERPC_KUP_H_ */ +diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h +index ccf44c135389a..3b45a64e491e5 100644 +--- a/arch/powerpc/include/asm/security_features.h ++++ b/arch/powerpc/include/asm/security_features.h +@@ -84,12 +84,19 @@ static inline bool security_ftr_enabled(unsigned long feature) + // Software required to flush link stack on context switch + #define SEC_FTR_FLUSH_LINK_STACK 0x0000000000001000ull + ++// The L1-D cache should be flushed when entering the kernel ++#define SEC_FTR_L1D_FLUSH_ENTRY 0x0000000000004000ull ++ ++// The L1-D cache should be flushed after user accesses from the kernel ++#define SEC_FTR_L1D_FLUSH_UACCESS 0x0000000000008000ull + + // Features enabled by default + #define SEC_FTR_DEFAULT \ + (SEC_FTR_L1D_FLUSH_HV | \ + SEC_FTR_L1D_FLUSH_PR | \ + SEC_FTR_BNDS_CHK_SPEC_BAR | \ ++ SEC_FTR_L1D_FLUSH_ENTRY | \ ++ SEC_FTR_L1D_FLUSH_UACCESS | \ + SEC_FTR_FAVOUR_SECURITY) + + #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */ +diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h +index 5ceab440ecb9b..6750ad3cd3b1a 100644 +--- a/arch/powerpc/include/asm/setup.h ++++ b/arch/powerpc/include/asm/setup.h +@@ -51,12 +51,16 @@ enum l1d_flush_type { + }; + + void setup_rfi_flush(enum l1d_flush_type, bool enable); ++void setup_entry_flush(bool enable); ++void setup_uaccess_flush(bool enable); + void do_rfi_flush_fixups(enum l1d_flush_type types); + #ifdef CONFIG_PPC_BARRIER_NOSPEC + void setup_barrier_nospec(void); + #else + static inline void setup_barrier_nospec(void) { }; + #endif ++void do_uaccess_flush_fixups(enum l1d_flush_type types); ++void do_entry_flush_fixups(enum l1d_flush_type types); + void do_barrier_nospec_fixups(bool enable); + extern bool barrier_nospec_enabled; + +diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h +index 3865d1d235976..95f060cb7a09e 100644 +--- a/arch/powerpc/include/asm/uaccess.h ++++ b/arch/powerpc/include/asm/uaccess.h +@@ -7,6 +7,7 @@ + #include + #include + #include ++#include + + /* + * The fs value determines whether argument validity checking should be +@@ -82,9 +83,14 @@ + __put_user_check((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr))) + + #define __get_user(x, ptr) \ +- __get_user_nocheck((x), (ptr), sizeof(*(ptr))) ++ __get_user_nocheck((x), (ptr), sizeof(*(ptr)), true) + #define __put_user(x, ptr) \ +- __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr))) ++ __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), true) ++ ++#define __get_user_allowed(x, ptr) \ ++ __get_user_nocheck((x), (ptr), sizeof(*(ptr)), false) ++#define __put_user_allowed(x, ptr) \ ++ __put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), false) + + #define __get_user_inatomic(x, ptr) \ + __get_user_nosleep((x), (ptr), sizeof(*(ptr))) +@@ -129,7 +135,7 @@ extern long __put_user_bad(void); + : "r" (x), "b" (addr), "i" (-EFAULT), "0" (err)) + #endif /* __powerpc64__ */ + +-#define __put_user_size(x, ptr, size, retval) \ ++#define __put_user_size_allowed(x, ptr, size, retval) \ + do { \ + retval = 0; \ + switch (size) { \ +@@ -141,14 +147,28 @@ do { \ + } \ + } while (0) + +-#define __put_user_nocheck(x, ptr, size) \ ++#define __put_user_size(x, ptr, size, retval) \ ++do { \ ++ allow_write_to_user(ptr, size); \ ++ __put_user_size_allowed(x, ptr, size, retval); \ ++ prevent_write_to_user(ptr, size); \ ++} while (0) ++ ++#define 
__put_user_nocheck(x, ptr, size, do_allow) \ + ({ \ + long __pu_err; \ + __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ ++ __typeof__(*(ptr)) __pu_val = (x); \ ++ __typeof__(size) __pu_size = (size); \ ++ \ + if (!is_kernel_addr((unsigned long)__pu_addr)) \ + might_fault(); \ +- __chk_user_ptr(ptr); \ +- __put_user_size((x), __pu_addr, (size), __pu_err); \ ++ __chk_user_ptr(__pu_addr); \ ++ if (do_allow) \ ++ __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \ ++ else \ ++ __put_user_size_allowed(__pu_val, __pu_addr, __pu_size, __pu_err); \ ++ \ + __pu_err; \ + }) + +@@ -156,9 +176,13 @@ do { \ + ({ \ + long __pu_err = -EFAULT; \ + __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ ++ __typeof__(*(ptr)) __pu_val = (x); \ ++ __typeof__(size) __pu_size = (size); \ ++ \ + might_fault(); \ +- if (access_ok(VERIFY_WRITE, __pu_addr, size)) \ +- __put_user_size((x), __pu_addr, (size), __pu_err); \ ++ if (access_ok(VERIFY_WRITE, __pu_addr, __pu_size)) \ ++ __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \ ++ \ + __pu_err; \ + }) + +@@ -166,8 +190,12 @@ do { \ + ({ \ + long __pu_err; \ + __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ +- __chk_user_ptr(ptr); \ +- __put_user_size((x), __pu_addr, (size), __pu_err); \ ++ __typeof__(*(ptr)) __pu_val = (x); \ ++ __typeof__(size) __pu_size = (size); \ ++ \ ++ __chk_user_ptr(__pu_addr); \ ++ __put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \ ++ \ + __pu_err; \ + }) + +@@ -208,7 +236,7 @@ extern long __get_user_bad(void); + : "b" (addr), "i" (-EFAULT), "0" (err)) + #endif /* __powerpc64__ */ + +-#define __get_user_size(x, ptr, size, retval) \ ++#define __get_user_size_allowed(x, ptr, size, retval) \ + do { \ + retval = 0; \ + __chk_user_ptr(ptr); \ +@@ -223,6 +251,13 @@ do { \ + } \ + } while (0) + ++#define __get_user_size(x, ptr, size, retval) \ ++do { \ ++ allow_read_from_user(ptr, size); \ ++ __get_user_size_allowed(x, ptr, size, retval); \ ++ prevent_read_from_user(ptr, size); \ ++} while (0) ++ + /* + * This is a type: either unsigned long, if the argument fits into + * that type, or otherwise unsigned long long. 
+@@ -230,17 +265,23 @@ do { \ + #define __long_type(x) \ + __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL)) + +-#define __get_user_nocheck(x, ptr, size) \ ++#define __get_user_nocheck(x, ptr, size, do_allow) \ + ({ \ + long __gu_err; \ + __long_type(*(ptr)) __gu_val; \ + __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ +- __chk_user_ptr(ptr); \ ++ __typeof__(size) __gu_size = (size); \ ++ \ ++ __chk_user_ptr(__gu_addr); \ + if (!is_kernel_addr((unsigned long)__gu_addr)) \ + might_fault(); \ + barrier_nospec(); \ +- __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ ++ if (do_allow) \ ++ __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \ ++ else \ ++ __get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \ + (x) = (__typeof__(*(ptr)))__gu_val; \ ++ \ + __gu_err; \ + }) + +@@ -249,12 +290,15 @@ do { \ + long __gu_err = -EFAULT; \ + __long_type(*(ptr)) __gu_val = 0; \ + __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ ++ __typeof__(size) __gu_size = (size); \ ++ \ + might_fault(); \ +- if (access_ok(VERIFY_READ, __gu_addr, (size))) { \ ++ if (access_ok(VERIFY_READ, __gu_addr, __gu_size)) { \ + barrier_nospec(); \ +- __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ ++ __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \ + } \ + (x) = (__force __typeof__(*(ptr)))__gu_val; \ ++ \ + __gu_err; \ + }) + +@@ -263,10 +307,13 @@ do { \ + long __gu_err; \ + __long_type(*(ptr)) __gu_val; \ + __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ +- __chk_user_ptr(ptr); \ ++ __typeof__(size) __gu_size = (size); \ ++ \ ++ __chk_user_ptr(__gu_addr); \ + barrier_nospec(); \ +- __get_user_size(__gu_val, __gu_addr, (size), __gu_err); \ ++ __get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \ + (x) = (__force __typeof__(*(ptr)))__gu_val; \ ++ \ + __gu_err; \ + }) + +@@ -280,16 +327,22 @@ extern unsigned long __copy_tofrom_user(void __user *to, + static inline unsigned long + raw_copy_in_user(void __user *to, const void __user *from, unsigned long n) + { ++ unsigned long ret; ++ + barrier_nospec(); +- return __copy_tofrom_user(to, from, n); ++ allow_user_access(to, from, n); ++ ret = __copy_tofrom_user(to, from, n); ++ prevent_user_access(to, from, n); ++ return ret; + } + #endif /* __powerpc64__ */ + + static inline unsigned long raw_copy_from_user(void *to, + const void __user *from, unsigned long n) + { ++ unsigned long ret; + if (__builtin_constant_p(n) && (n <= 8)) { +- unsigned long ret = 1; ++ ret = 1; + + switch (n) { + case 1: +@@ -314,27 +367,30 @@ static inline unsigned long raw_copy_from_user(void *to, + } + + barrier_nospec(); +- return __copy_tofrom_user((__force void __user *)to, from, n); ++ allow_read_from_user(from, n); ++ ret = __copy_tofrom_user((__force void __user *)to, from, n); ++ prevent_read_from_user(from, n); ++ return ret; + } + +-static inline unsigned long raw_copy_to_user(void __user *to, +- const void *from, unsigned long n) ++static inline unsigned long ++raw_copy_to_user_allowed(void __user *to, const void *from, unsigned long n) + { + if (__builtin_constant_p(n) && (n <= 8)) { + unsigned long ret = 1; + + switch (n) { + case 1: +- __put_user_size(*(u8 *)from, (u8 __user *)to, 1, ret); ++ __put_user_size_allowed(*(u8 *)from, (u8 __user *)to, 1, ret); + break; + case 2: +- __put_user_size(*(u16 *)from, (u16 __user *)to, 2, ret); ++ __put_user_size_allowed(*(u16 *)from, (u16 __user *)to, 2, ret); + break; + case 4: +- __put_user_size(*(u32 *)from, (u32 __user *)to, 4, ret); ++ __put_user_size_allowed(*(u32 *)from, 
(u32 __user *)to, 4, ret); + break; + case 8: +- __put_user_size(*(u64 *)from, (u64 __user *)to, 8, ret); ++ __put_user_size_allowed(*(u64 *)from, (u64 __user *)to, 8, ret); + break; + } + if (ret == 0) +@@ -344,17 +400,47 @@ static inline unsigned long raw_copy_to_user(void __user *to, + return __copy_tofrom_user(to, (__force const void __user *)from, n); + } + +-extern unsigned long __clear_user(void __user *addr, unsigned long size); ++static inline unsigned long ++raw_copy_to_user(void __user *to, const void *from, unsigned long n) ++{ ++ unsigned long ret; ++ ++ allow_write_to_user(to, n); ++ ret = raw_copy_to_user_allowed(to, from, n); ++ prevent_write_to_user(to, n); ++ return ret; ++} ++ ++unsigned long __arch_clear_user(void __user *addr, unsigned long size); + + static inline unsigned long clear_user(void __user *addr, unsigned long size) + { ++ unsigned long ret = size; + might_fault(); +- if (likely(access_ok(VERIFY_WRITE, addr, size))) +- return __clear_user(addr, size); +- return size; ++ if (likely(access_ok(VERIFY_WRITE, addr, size))) { ++ allow_write_to_user(addr, size); ++ ret = __arch_clear_user(addr, size); ++ prevent_write_to_user(addr, size); ++ } ++ return ret; ++} ++ ++static inline unsigned long __clear_user(void __user *addr, unsigned long size) ++{ ++ return clear_user(addr, size); + } + + extern long strncpy_from_user(char *dst, const char __user *src, long count); + extern __must_check long strnlen_user(const char __user *str, long n); + ++ ++#define user_access_begin(type, ptr, len) access_ok(type, ptr, len) ++#define user_access_end() prevent_user_access(NULL, NULL, ~0ul) ++ ++#define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0) ++#define unsafe_get_user(x, p, e) unsafe_op_wrap(__get_user_allowed(x, p), e) ++#define unsafe_put_user(x, p, e) unsafe_op_wrap(__put_user_allowed(x, p), e) ++#define unsafe_copy_to_user(d, s, l, e) \ ++ unsafe_op_wrap(raw_copy_to_user_allowed(d, s, l), e) ++ + #endif /* _ARCH_POWERPC_UACCESS_H */ +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S +index cdc53fd905977..b313628966adb 100644 +--- a/arch/powerpc/kernel/exceptions-64s.S ++++ b/arch/powerpc/kernel/exceptions-64s.S +@@ -484,7 +484,7 @@ EXC_COMMON_BEGIN(unrecover_mce) + b 1b + + +-EXC_REAL(data_access, 0x300, 0x80) ++EXC_REAL_OOL(data_access, 0x300, 0x80) + EXC_VIRT(data_access, 0x4300, 0x80, 0x300) + TRAMP_KVM_SKIP(PACA_EXGEN, 0x300) + +@@ -516,13 +516,16 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) + EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80) + SET_SCRATCH0(r13) + EXCEPTION_PROLOG_0(PACA_EXSLB) ++ b tramp_data_access_slb ++EXC_REAL_END(data_access_slb, 0x380, 0x80) ++ ++TRAMP_REAL_BEGIN(tramp_data_access_slb) + EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380) + mr r12,r3 /* save r3 */ + mfspr r3,SPRN_DAR + mfspr r11,SPRN_SRR1 + crset 4*cr6+eq + BRANCH_TO_COMMON(r10, slb_miss_common) +-EXC_REAL_END(data_access_slb, 0x380, 0x80) + + EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80) + SET_SCRATCH0(r13) +@@ -537,7 +540,7 @@ EXC_VIRT_END(data_access_slb, 0x4380, 0x80) + TRAMP_KVM_SKIP(PACA_EXSLB, 0x380) + + +-EXC_REAL(instruction_access, 0x400, 0x80) ++EXC_REAL_OOL(instruction_access, 0x400, 0x80) + EXC_VIRT(instruction_access, 0x4400, 0x80, 0x400) + TRAMP_KVM(PACA_EXGEN, 0x400) + +@@ -560,13 +563,16 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX) + EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x80) + SET_SCRATCH0(r13) + EXCEPTION_PROLOG_0(PACA_EXSLB) ++ b tramp_instruction_access_slb 
++EXC_REAL_END(instruction_access_slb, 0x480, 0x80) ++ ++TRAMP_REAL_BEGIN(tramp_instruction_access_slb) + EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480) + mr r12,r3 /* save r3 */ + mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */ + mfspr r11,SPRN_SRR1 + crclr 4*cr6+eq + BRANCH_TO_COMMON(r10, slb_miss_common) +-EXC_REAL_END(instruction_access_slb, 0x480, 0x80) + + EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80) + SET_SCRATCH0(r13) +@@ -830,13 +836,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM) + + + EXC_REAL_OOL_MASKABLE(decrementer, 0x900, 0x80) +-EXC_VIRT_MASKABLE(decrementer, 0x4900, 0x80, 0x900) ++EXC_VIRT_OOL_MASKABLE(decrementer, 0x4900, 0x80, 0x900) + TRAMP_KVM(PACA_EXGEN, 0x900) + EXC_COMMON_ASYNC(decrementer_common, 0x900, timer_interrupt) + + +-EXC_REAL_HV(hdecrementer, 0x980, 0x80) +-EXC_VIRT_HV(hdecrementer, 0x4980, 0x80, 0x980) ++EXC_REAL_OOL_HV(hdecrementer, 0x980, 0x80) ++EXC_VIRT_OOL_HV(hdecrementer, 0x4980, 0x80, 0x980) + TRAMP_KVM_HV(PACA_EXGEN, 0x980) + EXC_COMMON(hdecrementer_common, 0x980, hdec_interrupt) + +@@ -1453,15 +1459,8 @@ TRAMP_REAL_BEGIN(stf_barrier_fallback) + .endr + blr + +-TRAMP_REAL_BEGIN(rfi_flush_fallback) +- SET_SCRATCH0(r13); +- GET_PACA(r13); +- std r1,PACA_EXRFI+EX_R12(r13) +- ld r1,PACAKSAVE(r13) +- std r9,PACA_EXRFI+EX_R9(r13) +- std r10,PACA_EXRFI+EX_R10(r13) +- std r11,PACA_EXRFI+EX_R11(r13) +- mfctr r9 ++/* Clobbers r10, r11, ctr */ ++.macro L1D_DISPLACEMENT_FLUSH + ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) + ld r11,PACA_L1D_FLUSH_SIZE(r13) + srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ +@@ -1472,7 +1471,7 @@ TRAMP_REAL_BEGIN(rfi_flush_fallback) + sync + + /* +- * The load adresses are at staggered offsets within cachelines, ++ * The load addresses are at staggered offsets within cachelines, + * which suits some pipelines better (on others it should not + * hurt). + */ +@@ -1487,7 +1486,30 @@ TRAMP_REAL_BEGIN(rfi_flush_fallback) + ld r11,(0x80 + 8)*7(r10) + addi r10,r10,0x80*8 + bdnz 1b ++.endm ++ ++TRAMP_REAL_BEGIN(entry_flush_fallback) ++ std r9,PACA_EXRFI+EX_R9(r13) ++ std r10,PACA_EXRFI+EX_R10(r13) ++ std r11,PACA_EXRFI+EX_R11(r13) ++ mfctr r9 ++ L1D_DISPLACEMENT_FLUSH ++ mtctr r9 ++ ld r9,PACA_EXRFI+EX_R9(r13) ++ ld r10,PACA_EXRFI+EX_R10(r13) ++ ld r11,PACA_EXRFI+EX_R11(r13) ++ blr + ++TRAMP_REAL_BEGIN(rfi_flush_fallback) ++ SET_SCRATCH0(r13); ++ GET_PACA(r13); ++ std r1,PACA_EXRFI+EX_R12(r13) ++ ld r1,PACAKSAVE(r13) ++ std r9,PACA_EXRFI+EX_R9(r13) ++ std r10,PACA_EXRFI+EX_R10(r13) ++ std r11,PACA_EXRFI+EX_R11(r13) ++ mfctr r9 ++ L1D_DISPLACEMENT_FLUSH + mtctr r9 + ld r9,PACA_EXRFI+EX_R9(r13) + ld r10,PACA_EXRFI+EX_R10(r13) +@@ -1505,32 +1527,7 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback) + std r10,PACA_EXRFI+EX_R10(r13) + std r11,PACA_EXRFI+EX_R11(r13) + mfctr r9 +- ld r10,PACA_RFI_FLUSH_FALLBACK_AREA(r13) +- ld r11,PACA_L1D_FLUSH_SIZE(r13) +- srdi r11,r11,(7 + 3) /* 128 byte lines, unrolled 8x */ +- mtctr r11 +- DCBT_STOP_ALL_STREAM_IDS(r11) /* Stop prefetch streams */ +- +- /* order ld/st prior to dcbt stop all streams with flushing */ +- sync +- +- /* +- * The load adresses are at staggered offsets within cachelines, +- * which suits some pipelines better (on others it should not +- * hurt). 
+- */ +-1: +- ld r11,(0x80 + 8)*0(r10) +- ld r11,(0x80 + 8)*1(r10) +- ld r11,(0x80 + 8)*2(r10) +- ld r11,(0x80 + 8)*3(r10) +- ld r11,(0x80 + 8)*4(r10) +- ld r11,(0x80 + 8)*5(r10) +- ld r11,(0x80 + 8)*6(r10) +- ld r11,(0x80 + 8)*7(r10) +- addi r10,r10,0x80*8 +- bdnz 1b +- ++ L1D_DISPLACEMENT_FLUSH + mtctr r9 + ld r9,PACA_EXRFI+EX_R9(r13) + ld r10,PACA_EXRFI+EX_R10(r13) +@@ -1539,6 +1536,19 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback) + GET_SCRATCH0(r13); + hrfid + ++USE_TEXT_SECTION() ++ ++_GLOBAL(do_uaccess_flush) ++ UACCESS_FLUSH_FIXUP_SECTION ++ nop ++ nop ++ nop ++ blr ++ L1D_DISPLACEMENT_FLUSH ++ blr ++_ASM_NOKPROBE_SYMBOL(do_uaccess_flush) ++EXPORT_SYMBOL(do_uaccess_flush) ++ + /* + * Real mode exceptions actually use this too, but alternate + * instruction code patches (which end up in the common .text area) +diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S +index 2d0d89e2cb9a8..43884af0e35c4 100644 +--- a/arch/powerpc/kernel/head_8xx.S ++++ b/arch/powerpc/kernel/head_8xx.S +@@ -398,11 +398,9 @@ _ENTRY(ITLBMiss_cmp) + #if defined (CONFIG_HUGETLB_PAGE) && defined (CONFIG_PPC_4K_PAGES) + rlwimi r10, r11, 1, MI_SPS16K + #endif +-#ifdef CONFIG_SWAP +- rlwinm r11, r10, 32-5, _PAGE_PRESENT ++ rlwinm r11, r10, 32-11, _PAGE_PRESENT + and r11, r11, r10 + rlwimi r10, r11, 0, _PAGE_PRESENT +-#endif + li r11, RPN_PATTERN + /* The Linux PTE won't go exactly into the MMU TLB. + * Software indicator bits 20-23 and 28 must be clear. +@@ -528,11 +526,9 @@ _ENTRY(DTLBMiss_jmp) + * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5)); + * r10 = (r10 & ~PRESENT) | r11; + */ +-#ifdef CONFIG_SWAP +- rlwinm r11, r10, 32-5, _PAGE_PRESENT ++ rlwinm r11, r10, 32-11, _PAGE_PRESENT + and r11, r11, r10 + rlwimi r10, r11, 0, _PAGE_PRESENT +-#endif + /* The Linux PTE won't go exactly into the MMU TLB. + * Software indicator bits 22 and 28 must be clear. + * Software indicator bits 24, 25, 26, and 27 must be +diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c +index a1e336901cc83..a1eec409695e4 100644 +--- a/arch/powerpc/kernel/setup_64.c ++++ b/arch/powerpc/kernel/setup_64.c +@@ -792,7 +792,13 @@ early_initcall(disable_hardlockup_detector); + static enum l1d_flush_type enabled_flush_types; + static void *l1d_flush_fallback_area; + static bool no_rfi_flush; ++static bool no_entry_flush; ++static bool no_uaccess_flush; + bool rfi_flush; ++bool entry_flush; ++bool uaccess_flush; ++DEFINE_STATIC_KEY_FALSE(uaccess_flush_key); ++EXPORT_SYMBOL(uaccess_flush_key); + + static int __init handle_no_rfi_flush(char *p) + { +@@ -802,6 +808,22 @@ static int __init handle_no_rfi_flush(char *p) + } + early_param("no_rfi_flush", handle_no_rfi_flush); + ++static int __init handle_no_entry_flush(char *p) ++{ ++ pr_info("entry-flush: disabled on command line."); ++ no_entry_flush = true; ++ return 0; ++} ++early_param("no_entry_flush", handle_no_entry_flush); ++ ++static int __init handle_no_uaccess_flush(char *p) ++{ ++ pr_info("uaccess-flush: disabled on command line."); ++ no_uaccess_flush = true; ++ return 0; ++} ++early_param("no_uaccess_flush", handle_no_uaccess_flush); ++ + /* + * The RFI flush is not KPTI, but because users will see doco that says to use + * nopti we hijack that option here to also disable the RFI flush. 
+@@ -833,6 +855,32 @@ void rfi_flush_enable(bool enable) + rfi_flush = enable; + } + ++void entry_flush_enable(bool enable) ++{ ++ if (enable) { ++ do_entry_flush_fixups(enabled_flush_types); ++ on_each_cpu(do_nothing, NULL, 1); ++ } else { ++ do_entry_flush_fixups(L1D_FLUSH_NONE); ++ } ++ ++ entry_flush = enable; ++} ++ ++void uaccess_flush_enable(bool enable) ++{ ++ if (enable) { ++ do_uaccess_flush_fixups(enabled_flush_types); ++ static_branch_enable(&uaccess_flush_key); ++ on_each_cpu(do_nothing, NULL, 1); ++ } else { ++ static_branch_disable(&uaccess_flush_key); ++ do_uaccess_flush_fixups(L1D_FLUSH_NONE); ++ } ++ ++ uaccess_flush = enable; ++} ++ + static void __ref init_fallback_flush(void) + { + u64 l1d_size, limit; +@@ -874,10 +922,28 @@ void setup_rfi_flush(enum l1d_flush_type types, bool enable) + + enabled_flush_types = types; + +- if (!no_rfi_flush && !cpu_mitigations_off()) ++ if (!cpu_mitigations_off() && !no_rfi_flush) + rfi_flush_enable(enable); + } + ++void setup_entry_flush(bool enable) ++{ ++ if (cpu_mitigations_off()) ++ return; ++ ++ if (!no_entry_flush) ++ entry_flush_enable(enable); ++} ++ ++void setup_uaccess_flush(bool enable) ++{ ++ if (cpu_mitigations_off()) ++ return; ++ ++ if (!no_uaccess_flush) ++ uaccess_flush_enable(enable); ++} ++ + #ifdef CONFIG_DEBUG_FS + static int rfi_flush_set(void *data, u64 val) + { +@@ -905,9 +971,63 @@ static int rfi_flush_get(void *data, u64 *val) + + DEFINE_SIMPLE_ATTRIBUTE(fops_rfi_flush, rfi_flush_get, rfi_flush_set, "%llu\n"); + ++static int entry_flush_set(void *data, u64 val) ++{ ++ bool enable; ++ ++ if (val == 1) ++ enable = true; ++ else if (val == 0) ++ enable = false; ++ else ++ return -EINVAL; ++ ++ /* Only do anything if we're changing state */ ++ if (enable != entry_flush) ++ entry_flush_enable(enable); ++ ++ return 0; ++} ++ ++static int entry_flush_get(void *data, u64 *val) ++{ ++ *val = entry_flush ? 1 : 0; ++ return 0; ++} ++ ++DEFINE_SIMPLE_ATTRIBUTE(fops_entry_flush, entry_flush_get, entry_flush_set, "%llu\n"); ++ ++static int uaccess_flush_set(void *data, u64 val) ++{ ++ bool enable; ++ ++ if (val == 1) ++ enable = true; ++ else if (val == 0) ++ enable = false; ++ else ++ return -EINVAL; ++ ++ /* Only do anything if we're changing state */ ++ if (enable != uaccess_flush) ++ uaccess_flush_enable(enable); ++ ++ return 0; ++} ++ ++static int uaccess_flush_get(void *data, u64 *val) ++{ ++ *val = uaccess_flush ? 1 : 0; ++ return 0; ++} ++ ++DEFINE_SIMPLE_ATTRIBUTE(fops_uaccess_flush, uaccess_flush_get, uaccess_flush_set, "%llu\n"); ++ + static __init int rfi_flush_debugfs_init(void) + { + debugfs_create_file("rfi_flush", 0600, powerpc_debugfs_root, NULL, &fops_rfi_flush); ++ debugfs_create_file("entry_flush", 0600, powerpc_debugfs_root, NULL, &fops_entry_flush); ++ debugfs_create_file("uaccess_flush", 0600, powerpc_debugfs_root, NULL, &fops_uaccess_flush); + return 0; + } + device_initcall(rfi_flush_debugfs_init); +diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S +index e4da937d6cf91..efb9a6982561c 100644 +--- a/arch/powerpc/kernel/vmlinux.lds.S ++++ b/arch/powerpc/kernel/vmlinux.lds.S +@@ -140,6 +140,20 @@ SECTIONS + __stop___stf_entry_barrier_fixup = .; + } + ++ . = ALIGN(8); ++ __uaccess_flush_fixup : AT(ADDR(__uaccess_flush_fixup) - LOAD_OFFSET) { ++ __start___uaccess_flush_fixup = .; ++ *(__uaccess_flush_fixup) ++ __stop___uaccess_flush_fixup = .; ++ } ++ ++ . 
= ALIGN(8); ++ __entry_flush_fixup : AT(ADDR(__entry_flush_fixup) - LOAD_OFFSET) { ++ __start___entry_flush_fixup = .; ++ *(__entry_flush_fixup) ++ __stop___entry_flush_fixup = .; ++ } ++ + . = ALIGN(8); + __stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) { + __start___stf_exit_barrier_fixup = .; +diff --git a/arch/powerpc/lib/checksum_wrappers.c b/arch/powerpc/lib/checksum_wrappers.c +index a0cb63fb76a1a..8d83c39be7e49 100644 +--- a/arch/powerpc/lib/checksum_wrappers.c ++++ b/arch/powerpc/lib/checksum_wrappers.c +@@ -29,6 +29,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst, + unsigned int csum; + + might_sleep(); ++ allow_read_from_user(src, len); + + *err_ptr = 0; + +@@ -60,6 +61,7 @@ __wsum csum_and_copy_from_user(const void __user *src, void *dst, + } + + out: ++ prevent_read_from_user(src, len); + return (__force __wsum)csum; + } + EXPORT_SYMBOL(csum_and_copy_from_user); +@@ -70,6 +72,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len, + unsigned int csum; + + might_sleep(); ++ allow_write_to_user(dst, len); + + *err_ptr = 0; + +@@ -97,6 +100,7 @@ __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len, + } + + out: ++ prevent_write_to_user(dst, len); + return (__force __wsum)csum; + } + EXPORT_SYMBOL(csum_and_copy_to_user); +diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c +index de7861e09b41c..6ebc3c9e7abb7 100644 +--- a/arch/powerpc/lib/feature-fixups.c ++++ b/arch/powerpc/lib/feature-fixups.c +@@ -232,6 +232,110 @@ void do_stf_barrier_fixups(enum stf_barrier_type types) + do_stf_exit_barrier_fixups(types); + } + ++void do_uaccess_flush_fixups(enum l1d_flush_type types) ++{ ++ unsigned int instrs[4], *dest; ++ long *start, *end; ++ int i; ++ ++ start = PTRRELOC(&__start___uaccess_flush_fixup); ++ end = PTRRELOC(&__stop___uaccess_flush_fixup); ++ ++ instrs[0] = 0x60000000; /* nop */ ++ instrs[1] = 0x60000000; /* nop */ ++ instrs[2] = 0x60000000; /* nop */ ++ instrs[3] = 0x4e800020; /* blr */ ++ ++ i = 0; ++ if (types == L1D_FLUSH_FALLBACK) { ++ instrs[3] = 0x60000000; /* nop */ ++ /* fallthrough to fallback flush */ ++ } ++ ++ if (types & L1D_FLUSH_ORI) { ++ instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */ ++ instrs[i++] = 0x63de0000; /* ori 30,30,0 L1d flush*/ ++ } ++ ++ if (types & L1D_FLUSH_MTTRIG) ++ instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ ++ ++ for (i = 0; start < end; start++, i++) { ++ dest = (void *)start + *start; ++ ++ pr_devel("patching dest %lx\n", (unsigned long)dest); ++ ++ patch_instruction(dest, instrs[0]); ++ ++ patch_instruction((dest + 1), instrs[1]); ++ patch_instruction((dest + 2), instrs[2]); ++ patch_instruction((dest + 3), instrs[3]); ++ } ++ ++ printk(KERN_DEBUG "uaccess-flush: patched %d locations (%s flush)\n", i, ++ (types == L1D_FLUSH_NONE) ? "no" : ++ (types == L1D_FLUSH_FALLBACK) ? "fallback displacement" : ++ (types & L1D_FLUSH_ORI) ? (types & L1D_FLUSH_MTTRIG) ++ ? "ori+mttrig type" ++ : "ori type" : ++ (types & L1D_FLUSH_MTTRIG) ? 
"mttrig type" ++ : "unknown"); ++} ++ ++void do_entry_flush_fixups(enum l1d_flush_type types) ++{ ++ unsigned int instrs[3], *dest; ++ long *start, *end; ++ int i; ++ ++ start = PTRRELOC(&__start___entry_flush_fixup); ++ end = PTRRELOC(&__stop___entry_flush_fixup); ++ ++ instrs[0] = 0x60000000; /* nop */ ++ instrs[1] = 0x60000000; /* nop */ ++ instrs[2] = 0x60000000; /* nop */ ++ ++ i = 0; ++ if (types == L1D_FLUSH_FALLBACK) { ++ instrs[i++] = 0x7d4802a6; /* mflr r10 */ ++ instrs[i++] = 0x60000000; /* branch patched below */ ++ instrs[i++] = 0x7d4803a6; /* mtlr r10 */ ++ } ++ ++ if (types & L1D_FLUSH_ORI) { ++ instrs[i++] = 0x63ff0000; /* ori 31,31,0 speculation barrier */ ++ instrs[i++] = 0x63de0000; /* ori 30,30,0 L1d flush*/ ++ } ++ ++ if (types & L1D_FLUSH_MTTRIG) ++ instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ ++ ++ for (i = 0; start < end; start++, i++) { ++ dest = (void *)start + *start; ++ ++ pr_devel("patching dest %lx\n", (unsigned long)dest); ++ ++ patch_instruction(dest, instrs[0]); ++ ++ if (types == L1D_FLUSH_FALLBACK) ++ patch_branch((dest + 1), (unsigned long)&entry_flush_fallback, ++ BRANCH_SET_LINK); ++ else ++ patch_instruction((dest + 1), instrs[1]); ++ ++ patch_instruction((dest + 2), instrs[2]); ++ } ++ ++ printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i, ++ (types == L1D_FLUSH_NONE) ? "no" : ++ (types == L1D_FLUSH_FALLBACK) ? "fallback displacement" : ++ (types & L1D_FLUSH_ORI) ? (types & L1D_FLUSH_MTTRIG) ++ ? "ori+mttrig type" ++ : "ori type" : ++ (types & L1D_FLUSH_MTTRIG) ? "mttrig type" ++ : "unknown"); ++} ++ + void do_rfi_flush_fixups(enum l1d_flush_type types) + { + unsigned int instrs[3], *dest; +diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S +index 0378def28d411..7ef5497f3f976 100644 +--- a/arch/powerpc/lib/string.S ++++ b/arch/powerpc/lib/string.S +@@ -88,7 +88,7 @@ _GLOBAL(memchr) + EXPORT_SYMBOL(memchr) + + #ifdef CONFIG_PPC32 +-_GLOBAL(__clear_user) ++_GLOBAL(__arch_clear_user) + addi r6,r3,-4 + li r3,0 + li r5,0 +@@ -128,5 +128,5 @@ _GLOBAL(__clear_user) + EX_TABLE(1b, 91b) + EX_TABLE(8b, 92b) + +-EXPORT_SYMBOL(__clear_user) ++EXPORT_SYMBOL(__arch_clear_user) + #endif +diff --git a/arch/powerpc/lib/string_64.S b/arch/powerpc/lib/string_64.S +index 56aac4c220257..ea3798f4f25f2 100644 +--- a/arch/powerpc/lib/string_64.S ++++ b/arch/powerpc/lib/string_64.S +@@ -29,7 +29,7 @@ PPC64_CACHES: + .section ".text" + + /** +- * __clear_user: - Zero a block of memory in user space, with less checking. ++ * __arch_clear_user: - Zero a block of memory in user space, with less checking. + * @to: Destination address, in user space. + * @n: Number of bytes to zero. + * +@@ -70,7 +70,7 @@ err3; stb r0,0(r3) + mr r3,r4 + blr + +-_GLOBAL_TOC(__clear_user) ++_GLOBAL_TOC(__arch_clear_user) + cmpdi r4,32 + neg r6,r3 + li r0,0 +@@ -193,4 +193,4 @@ err1; dcbz 0,r3 + cmpdi r4,32 + blt .Lshort_clear + b .Lmedium_clear +-EXPORT_SYMBOL(__clear_user) ++EXPORT_SYMBOL(__arch_clear_user) +diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c +index 888aa9584e94f..0693fd16e2c95 100644 +--- a/arch/powerpc/platforms/powernv/setup.c ++++ b/arch/powerpc/platforms/powernv/setup.c +@@ -124,12 +124,29 @@ static void pnv_setup_rfi_flush(void) + type = L1D_FLUSH_ORI; + } + ++ /* ++ * If we are non-Power9 bare metal, we don't need to flush on kernel ++ * entry or after user access: they fix a P9 specific vulnerability. 
++ */ ++ if (!pvr_version_is(PVR_POWER9)) { ++ security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY); ++ security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS); ++ } ++ + enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && \ + (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) || \ + security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV)); + + setup_rfi_flush(type, enable); + setup_count_cache_flush(); ++ ++ enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && ++ security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY); ++ setup_entry_flush(enable); ++ ++ enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && ++ security_ftr_enabled(SEC_FTR_L1D_FLUSH_UACCESS); ++ setup_uaccess_flush(enable); + } + + static void __init pnv_setup_arch(void) +diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c +index 7a9945b350536..ab85fac02c046 100644 +--- a/arch/powerpc/platforms/pseries/setup.c ++++ b/arch/powerpc/platforms/pseries/setup.c +@@ -544,6 +544,14 @@ void pseries_setup_rfi_flush(void) + + setup_rfi_flush(types, enable); + setup_count_cache_flush(); ++ ++ enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && ++ security_ftr_enabled(SEC_FTR_L1D_FLUSH_ENTRY); ++ setup_entry_flush(enable); ++ ++ enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) && ++ security_ftr_enabled(SEC_FTR_L1D_FLUSH_UACCESS); ++ setup_uaccess_flush(enable); + } + + static void __init pSeries_setup_arch(void) +diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c +index 46559812da24e..23d3329e1c739 100644 +--- a/arch/x86/kvm/emulate.c ++++ b/arch/x86/kvm/emulate.c +@@ -3949,6 +3949,12 @@ static int em_clflush(struct x86_emulate_ctxt *ctxt) + return X86EMUL_CONTINUE; + } + ++static int em_clflushopt(struct x86_emulate_ctxt *ctxt) ++{ ++ /* emulating clflushopt regardless of cpuid */ ++ return X86EMUL_CONTINUE; ++} ++ + static int em_movsxd(struct x86_emulate_ctxt *ctxt) + { + ctxt->dst.val = (s32) ctxt->src.val; +@@ -4463,7 +4469,7 @@ static const struct opcode group11[] = { + }; + + static const struct gprefix pfx_0f_ae_7 = { +- I(SrcMem | ByteOp, em_clflush), N, N, N, ++ I(SrcMem | ByteOp, em_clflush), I(SrcMem | ByteOp, em_clflushopt), N, N, + }; + + static const struct group_dual group15 = { { +diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c +index 339e6d3dba7c3..73116acd391d1 100644 +--- a/drivers/acpi/evged.c ++++ b/drivers/acpi/evged.c +@@ -104,7 +104,7 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares, + + switch (gsi) { + case 0 ... 255: +- sprintf(ev_name, "_%c%02hhX", ++ sprintf(ev_name, "_%c%02X", + trigger == ACPI_EDGE_SENSITIVE ? 
'E' : 'L', gsi); + + if (ACPI_SUCCESS(acpi_get_handle(handle, ev_name, &evt_handle))) +diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c +index d99c8d8da9a05..a09a1334afbf3 100644 +--- a/drivers/gpio/gpio-mockup.c ++++ b/drivers/gpio/gpio-mockup.c +@@ -350,6 +350,7 @@ static int __init mock_device_init(void) + err = platform_driver_register(&gpio_mockup_driver); + if (err) { + platform_device_unregister(pdev); ++ debugfs_remove_recursive(gpio_mockup_dbg_dir); + return err; + } + +diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c +index 26f83029f64ae..ce7a2bfd1dd84 100644 +--- a/drivers/i2c/busses/i2c-imx.c ++++ b/drivers/i2c/busses/i2c-imx.c +@@ -194,6 +194,7 @@ struct imx_i2c_dma { + struct imx_i2c_struct { + struct i2c_adapter adapter; + struct clk *clk; ++ struct notifier_block clk_change_nb; + void __iomem *base; + wait_queue_head_t queue; + unsigned long i2csr; +@@ -468,15 +469,14 @@ static int i2c_imx_acked(struct imx_i2c_struct *i2c_imx) + return 0; + } + +-static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx) ++static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, ++ unsigned int i2c_clk_rate) + { + struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div; +- unsigned int i2c_clk_rate; + unsigned int div; + int i; + + /* Divider value calculation */ +- i2c_clk_rate = clk_get_rate(i2c_imx->clk); + if (i2c_imx->cur_clk == i2c_clk_rate) + return; + +@@ -511,6 +511,20 @@ static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx) + #endif + } + ++static int i2c_imx_clk_notifier_call(struct notifier_block *nb, ++ unsigned long action, void *data) ++{ ++ struct clk_notifier_data *ndata = data; ++ struct imx_i2c_struct *i2c_imx = container_of(&ndata->clk, ++ struct imx_i2c_struct, ++ clk); ++ ++ if (action & POST_RATE_CHANGE) ++ i2c_imx_set_clk(i2c_imx, ndata->new_rate); ++ ++ return NOTIFY_OK; ++} ++ + static int i2c_imx_start(struct imx_i2c_struct *i2c_imx) + { + unsigned int temp = 0; +@@ -518,8 +532,6 @@ static int i2c_imx_start(struct imx_i2c_struct *i2c_imx) + + dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__); + +- i2c_imx_set_clk(i2c_imx); +- + imx_i2c_write_reg(i2c_imx->ifdr, i2c_imx, IMX_I2C_IFDR); + /* Enable I2C controller */ + imx_i2c_write_reg(i2c_imx->hwdata->i2sr_clr_opcode, i2c_imx, IMX_I2C_I2SR); +@@ -1099,14 +1111,6 @@ static int i2c_imx_probe(struct platform_device *pdev) + return ret; + } + +- /* Request IRQ */ +- ret = devm_request_irq(&pdev->dev, irq, i2c_imx_isr, IRQF_SHARED, +- pdev->name, i2c_imx); +- if (ret) { +- dev_err(&pdev->dev, "can't claim irq %d\n", irq); +- goto clk_disable; +- } +- + /* Init queue */ + init_waitqueue_head(&i2c_imx->queue); + +@@ -1125,12 +1129,23 @@ static int i2c_imx_probe(struct platform_device *pdev) + if (ret < 0) + goto rpm_disable; + ++ /* Request IRQ */ ++ ret = request_threaded_irq(irq, i2c_imx_isr, NULL, IRQF_SHARED, ++ pdev->name, i2c_imx); ++ if (ret) { ++ dev_err(&pdev->dev, "can't claim irq %d\n", irq); ++ goto rpm_disable; ++ } ++ + /* Set up clock divider */ + i2c_imx->bitrate = IMX_I2C_BIT_RATE; + ret = of_property_read_u32(pdev->dev.of_node, + "clock-frequency", &i2c_imx->bitrate); + if (ret < 0 && pdata && pdata->bitrate) + i2c_imx->bitrate = pdata->bitrate; ++ i2c_imx->clk_change_nb.notifier_call = i2c_imx_clk_notifier_call; ++ clk_notifier_register(i2c_imx->clk, &i2c_imx->clk_change_nb); ++ i2c_imx_set_clk(i2c_imx, clk_get_rate(i2c_imx->clk)); + + /* Set up chip registers to defaults */ + imx_i2c_write_reg(i2c_imx->hwdata->i2cr_ien_opcode ^ I2CR_IEN, +@@ 
-1141,12 +1156,12 @@ static int i2c_imx_probe(struct platform_device *pdev) + ret = i2c_imx_init_recovery_info(i2c_imx, pdev); + /* Give it another chance if pinctrl used is not ready yet */ + if (ret == -EPROBE_DEFER) +- goto rpm_disable; ++ goto clk_notifier_unregister; + + /* Add I2C adapter */ + ret = i2c_add_numbered_adapter(&i2c_imx->adapter); + if (ret < 0) +- goto rpm_disable; ++ goto clk_notifier_unregister; + + pm_runtime_mark_last_busy(&pdev->dev); + pm_runtime_put_autosuspend(&pdev->dev); +@@ -1162,13 +1177,14 @@ static int i2c_imx_probe(struct platform_device *pdev) + + return 0; /* Return OK */ + ++clk_notifier_unregister: ++ clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb); ++ free_irq(irq, i2c_imx); + rpm_disable: + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_disable(&pdev->dev); + pm_runtime_set_suspended(&pdev->dev); + pm_runtime_dont_use_autosuspend(&pdev->dev); +- +-clk_disable: + clk_disable_unprepare(i2c_imx->clk); + return ret; + } +@@ -1176,7 +1192,7 @@ clk_disable: + static int i2c_imx_remove(struct platform_device *pdev) + { + struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev); +- int ret; ++ int irq, ret; + + ret = pm_runtime_get_sync(&pdev->dev); + if (ret < 0) +@@ -1195,6 +1211,10 @@ static int i2c_imx_remove(struct platform_device *pdev) + imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR); + imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR); + ++ clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb); ++ irq = platform_get_irq(pdev, 0); ++ if (irq >= 0) ++ free_irq(irq, i2c_imx); + clk_disable_unprepare(i2c_imx->clk); + + pm_runtime_put_noidle(&pdev->dev); +diff --git a/drivers/input/keyboard/sunkbd.c b/drivers/input/keyboard/sunkbd.c +index c95707ea26567..b1c3be1f0dfce 100644 +--- a/drivers/input/keyboard/sunkbd.c ++++ b/drivers/input/keyboard/sunkbd.c +@@ -115,7 +115,8 @@ static irqreturn_t sunkbd_interrupt(struct serio *serio, + switch (data) { + + case SUNKBD_RET_RESET: +- schedule_work(&sunkbd->tq); ++ if (sunkbd->enabled) ++ schedule_work(&sunkbd->tq); + sunkbd->reset = -1; + break; + +@@ -216,16 +217,12 @@ static int sunkbd_initialize(struct sunkbd *sunkbd) + } + + /* +- * sunkbd_reinit() sets leds and beeps to a state the computer remembers they +- * were in. ++ * sunkbd_set_leds_beeps() sets leds and beeps to a state the computer remembers ++ * they were in. + */ + +-static void sunkbd_reinit(struct work_struct *work) ++static void sunkbd_set_leds_beeps(struct sunkbd *sunkbd) + { +- struct sunkbd *sunkbd = container_of(work, struct sunkbd, tq); +- +- wait_event_interruptible_timeout(sunkbd->wait, sunkbd->reset >= 0, HZ); +- + serio_write(sunkbd->serio, SUNKBD_CMD_SETLED); + serio_write(sunkbd->serio, + (!!test_bit(LED_CAPSL, sunkbd->dev->led) << 3) | +@@ -238,11 +235,39 @@ static void sunkbd_reinit(struct work_struct *work) + SUNKBD_CMD_BELLOFF - !!test_bit(SND_BELL, sunkbd->dev->snd)); + } + ++ ++/* ++ * sunkbd_reinit() wait for the keyboard reset to complete and restores state ++ * of leds and beeps. ++ */ ++ ++static void sunkbd_reinit(struct work_struct *work) ++{ ++ struct sunkbd *sunkbd = container_of(work, struct sunkbd, tq); ++ ++ /* ++ * It is OK that we check sunkbd->enabled without pausing serio, ++ * as we only want to catch true->false transition that will ++ * happen once and we will be woken up for it. 
++ */ ++ wait_event_interruptible_timeout(sunkbd->wait, ++ sunkbd->reset >= 0 || !sunkbd->enabled, ++ HZ); ++ ++ if (sunkbd->reset >= 0 && sunkbd->enabled) ++ sunkbd_set_leds_beeps(sunkbd); ++} ++ + static void sunkbd_enable(struct sunkbd *sunkbd, bool enable) + { + serio_pause_rx(sunkbd->serio); + sunkbd->enabled = enable; + serio_continue_rx(sunkbd->serio); ++ ++ if (!enable) { ++ wake_up_interruptible(&sunkbd->wait); ++ cancel_work_sync(&sunkbd->tq); ++ } + } + + /* +diff --git a/net/can/proc.c b/net/can/proc.c +index 83045f00c63c1..f98bf94ff1212 100644 +--- a/net/can/proc.c ++++ b/net/can/proc.c +@@ -554,6 +554,9 @@ void can_init_proc(struct net *net) + */ + void can_remove_proc(struct net *net) + { ++ if (!net->can.proc_dir) ++ return; ++ + if (net->can.pde_version) + remove_proc_entry(CAN_PROC_VERSION, net->can.proc_dir); + +@@ -581,6 +584,5 @@ void can_remove_proc(struct net *net) + if (net->can.pde_rcvlist_sff) + remove_proc_entry(CAN_PROC_RCVLIST_SFF, net->can.proc_dir); + +- if (net->can.proc_dir) +- remove_proc_entry("can", net->proc_net); ++ remove_proc_entry("can", net->proc_net); + } +diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c +index 2a18687019003..b74551323f5fb 100644 +--- a/net/mac80211/sta_info.c ++++ b/net/mac80211/sta_info.c +@@ -244,6 +244,24 @@ struct sta_info *sta_info_get_by_idx(struct ieee80211_sub_if_data *sdata, + */ + void sta_info_free(struct ieee80211_local *local, struct sta_info *sta) + { ++ /* ++ * If we had used sta_info_pre_move_state() then we might not ++ * have gone through the state transitions down again, so do ++ * it here now (and warn if it's inserted). ++ * ++ * This will clear state such as fast TX/RX that may have been ++ * allocated during state transitions. ++ */ ++ while (sta->sta_state > IEEE80211_STA_NONE) { ++ int ret; ++ ++ WARN_ON_ONCE(test_sta_flag(sta, WLAN_STA_INSERTED)); ++ ++ ret = sta_info_move_state(sta, sta->sta_state - 1); ++ if (WARN_ONCE(ret, "sta_info_move_state() returned %d\n", ret)) ++ break; ++ } ++ + if (sta->rate_ctrl) + rate_control_free_sta(sta); +