From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 37273138350 for ; Fri, 17 Apr 2020 11:45:42 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 6F7CBE089A; Fri, 17 Apr 2020 11:45:41 +0000 (UTC) Received: from smtp.gentoo.org (smtp.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 3B1FEE089A for ; Fri, 17 Apr 2020 11:45:41 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 005B434F19C for ; Fri, 17 Apr 2020 11:45:40 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 90D651DC for ; Fri, 17 Apr 2020 11:45:38 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1587123920.7030455eec2aa5ef288c04c76d5b9cd61dd6a529.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1115_linux-4.19.116.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 7030455eec2aa5ef288c04c76d5b9cd61dd6a529 X-VCS-Branch: 4.19 Date: Fri, 17 Apr 2020 11:45:38 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org 
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: ace2b605-56cb-4721-907c-efede68be4e1 X-Archives-Hash: 4d9f5eb27d2bbe325a90fb03c74ec7bd commit: 7030455eec2aa5ef288c04c76d5b9cd61dd6a529 Author: Mike Pagano gentoo org> AuthorDate: Fri Apr 17 11:45:20 2020 +0000 Commit: Mike Pagano gentoo org> CommitDate: Fri Apr 17 11:45:20 2020 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7030455e Linux patch 4.19.116 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1115_linux-4.19.116.patch | 5582 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 5586 insertions(+) diff --git a/0000_README b/0000_README index 65d1a80..d15813b 100644 --- a/0000_README +++ b/0000_README @@ -499,6 +499,10 @@ Patch: 1114_linux-4.19.115.patch From: https://www.kernel.org Desc: Linux 4.19.115 +Patch: 1115_linux-4.19.116.patch +From: https://www.kernel.org +Desc: Linux 4.19.116 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. 
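For context before the full diff: most of this patch's new documentation is realtek-pc-beep.rst, which describes a 16-bit vendor register (coefficient 0x36 on NID 20h) whose reset value is 0x3717. As a quick illustration of the bit layout that file documents, the sketch below decodes that value. The field positions are inferred from the ASCII diagram in the .rst file and the helper name is purely illustrative; this is not kernel code.

```python
# Decode the "PC Beep Hidden Register" fields described in the
# realtek-pc-beep.rst document added by this patch. Bit positions are
# read off the document's diagram (MSB = bit 15) and are an inference,
# not taken from any datasheet.

def decode_beep_register(val: int) -> dict:
    """Split a 16-bit register value into the documented named fields."""
    return {
        "h": (val >> 14) & 1,   # loopback 1Ah -> 21h, active LOW
        "S": (val >> 13) & 1,   # loopback 1Ah -> 14h
        "L": (val >> 12) & 1,   # amplify left channel of 1Ah loopback
        "B": (val >> 5) & 0x3,  # 1Ah input select (0 = PC Beep line)
        "R": (val >> 4) & 1,    # amplify right channel of 1Ah loopback
    }

# The documented reset value: PC Beep selected (B == 0), both channels
# amplified (L == R == 1), and looped to both 21h (h == 0, active low)
# and 14h (S == 1) -- matching the .rst file's description of 0x3717.
fields = decode_beep_register(0x3717)
```

This matches the document's claim that the reset value routes an amplified PC Beep to both the headphone and internal-speaker pins, which is why drivers must reprogram the register rather than ignore it.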
diff --git a/1115_linux-4.19.116.patch b/1115_linux-4.19.116.patch new file mode 100644 index 0000000..b5f61a1 --- /dev/null +++ b/1115_linux-4.19.116.patch @@ -0,0 +1,5582 @@ +diff --git a/Documentation/sound/hd-audio/index.rst b/Documentation/sound/hd-audio/index.rst +index f8a72ffffe66..6e12de9fc34e 100644 +--- a/Documentation/sound/hd-audio/index.rst ++++ b/Documentation/sound/hd-audio/index.rst +@@ -8,3 +8,4 @@ HD-Audio + models + controls + dp-mst ++ realtek-pc-beep +diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst +index e06238131f77..8c0de54b5649 100644 +--- a/Documentation/sound/hd-audio/models.rst ++++ b/Documentation/sound/hd-audio/models.rst +@@ -216,8 +216,6 @@ alc298-dell-aio + ALC298 fixups on Dell AIO machines + alc275-dell-xps + ALC275 fixups on Dell XPS models +-alc256-dell-xps13 +- ALC256 fixups on Dell XPS13 + lenovo-spk-noise + Workaround for speaker noise on Lenovo machines + lenovo-hotkey +diff --git a/Documentation/sound/hd-audio/realtek-pc-beep.rst b/Documentation/sound/hd-audio/realtek-pc-beep.rst +new file mode 100644 +index 000000000000..be47c6f76a6e +--- /dev/null ++++ b/Documentation/sound/hd-audio/realtek-pc-beep.rst +@@ -0,0 +1,129 @@ ++=============================== ++Realtek PC Beep Hidden Register ++=============================== ++ ++This file documents the "PC Beep Hidden Register", which is present in certain ++Realtek HDA codecs and controls a muxer and pair of passthrough mixers that can ++route audio between pins but aren't themselves exposed as HDA widgets. As far ++as I can tell, these hidden routes are designed to allow flexible PC Beep output ++for codecs that don't have mixer widgets in their output paths. Why it's easier ++to hide a mixer behind an undocumented vendor register than to just expose it ++as a widget, I have no idea. ++ ++Register Description ++==================== ++ ++The register is accessed via processing coefficient 0x36 on NID 20h. 
Bits not ++identified below have no discernible effect on my machine, a Dell XPS 13 9350:: ++ ++ MSB LSB ++ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ++ | |h|S|L| | B |R| | Known bits ++ +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ ++ |0|0|1|1| 0x7 |0|0x0|1| 0x7 | Reset value ++ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ++ ++1Ah input select (B): 2 bits ++ When zero, expose the PC Beep line (from the internal beep generator, when ++ enabled with the Set Beep Generation verb on NID 01h, or else from the ++ external PCBEEP pin) on the 1Ah pin node. When nonzero, expose the headphone ++ jack (or possibly Line In on some machines) input instead. If PC Beep is ++ selected, the 1Ah boost control has no effect. ++ ++Amplify 1Ah loopback, left (L): 1 bit ++ Amplify the left channel of 1Ah before mixing it into outputs as specified ++ by h and S bits. Does not affect the level of 1Ah exposed to other widgets. ++ ++Amplify 1Ah loopback, right (R): 1 bit ++ Amplify the right channel of 1Ah before mixing it into outputs as specified ++ by h and S bits. Does not affect the level of 1Ah exposed to other widgets. ++ ++Loopback 1Ah to 21h [active low] (h): 1 bit ++ When zero, mix 1Ah (possibly with amplification, depending on L and R bits) ++ into 21h (headphone jack on my machine). Mixed signal respects the mute ++ setting on 21h. ++ ++Loopback 1Ah to 14h (S): 1 bit ++ When one, mix 1Ah (possibly with amplification, depending on L and R bits) ++ into 14h (internal speaker on my machine). Mixed signal **ignores** the mute ++ setting on 14h and is present whenever 14h is configured as an output. 
++ ++Path diagrams ++============= ++ ++1Ah input selection (DIV is the PC Beep divider set on NID 01h):: ++ ++ ++ | | | ++ +--DIV--+--!DIV--+ {1Ah boost control} ++ | | ++ +--(b == 0)--+--(b != 0)--+ ++ | ++ >1Ah (Beep/Headphone Mic/Line In)< ++ ++Loopback of 1Ah to 21h/14h:: ++ ++ <1Ah (Beep/Headphone Mic/Line In)> ++ | ++ {amplify if L/R} ++ | ++ +-----!h-----+-----S-----+ ++ | | ++ {21h mute control} | ++ | | ++ >21h (Headphone)< >14h (Internal Speaker)< ++ ++Background ++========== ++ ++All Realtek HDA codecs have a vendor-defined widget with node ID 20h which ++provides access to a bank of registers that control various codec functions. ++Registers are read and written via the standard HDA processing coefficient ++verbs (Set/Get Coefficient Index, Set/Get Processing Coefficient). The node is ++named "Realtek Vendor Registers" in public datasheets' verb listings and, ++apart from that, is entirely undocumented. ++ ++This particular register, exposed at coefficient 0x36 and named in commits from ++Realtek, is of note: unlike most registers, which seem to control detailed ++amplifier parameters not in scope of the HDA specification, it controls audio ++routing which could just as easily have been defined using standard HDA mixer ++and selector widgets. ++ ++Specifically, it selects between two sources for the input pin widget with Node ++ID (NID) 1Ah: the widget's signal can come either from an audio jack (on my ++laptop, a Dell XPS 13 9350, it's the headphone jack, but comments in Realtek ++commits indicate that it might be a Line In on some machines) or from the PC ++Beep line (which is itself multiplexed between the codec's internal beep ++generator and external PCBEEP pin, depending on if the beep generator is ++enabled via verbs on NID 01h). Additionally, it can mix (with optional ++amplification) that signal onto the 21h and/or 14h output pins. 
++ ++The register's reset value is 0x3717, corresponding to PC Beep on 1Ah that is ++then amplified and mixed into both the headphones and the speakers. Not only ++does this violate the HDA specification, which says that "[a vendor defined ++beep input pin] connection may be maintained *only* while the Link reset ++(**RST#**) is asserted", it means that we cannot ignore the register if we care ++about the input that 1Ah would otherwise expose or if the PCBEEP trace is ++poorly shielded and picks up chassis noise (both of which are the case on my ++machine). ++ ++Unfortunately, there are lots of ways to get this register configuration wrong. ++Linux, it seems, has gone through most of them. For one, the register resets ++after S3 suspend: judging by existing code, this isn't the case for all vendor ++registers, and it's led to some fixes that improve behavior on cold boot but ++don't last after suspend. Other fixes have successfully switched the 1Ah input ++away from PC Beep but have failed to disable both loopback paths. On my ++machine, this means that the headphone input is amplified and looped back to ++the headphone output, which uses the exact same pins! As you might expect, this ++causes terrible headphone noise, the character of which is controlled by the ++1Ah boost control. (If you've seen instructions online to fix XPS 13 headphone ++noise by changing "Headphone Mic Boost" in ALSA, now you know why.) ++ ++The information here has been obtained through black-box reverse engineering of ++the ALC256 codec's behavior and is not guaranteed to be correct. It likely ++also applies for the ALC255, ALC257, ALC235, and ALC236, since those codecs ++seem to be close relatives of the ALC256. (They all share one initialization ++function.) Additionally, other codecs like the ALC225 and ALC285 also have this ++register, judging by existing fixups in ``patch_realtek.c``, but specific ++data (e.g. 
node IDs, bit positions, pin mappings) for those codecs may differ ++from what I've described here. +diff --git a/Makefile b/Makefile +index 9830a71e9192..d85ff698f5b9 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 19 +-SUBLEVEL = 115 ++SUBLEVEL = 116 + EXTRAVERSION = + NAME = "People's Front" + +diff --git a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts +index 49547a43cc90..54cbdaf7ffdc 100644 +--- a/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts ++++ b/arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts +@@ -318,8 +318,8 @@ + }; + + ®_dldo3 { +- regulator-min-microvolt = <2800000>; +- regulator-max-microvolt = <2800000>; ++ regulator-min-microvolt = <1800000>; ++ regulator-max-microvolt = <1800000>; + regulator-name = "vdd-csi"; + }; + +diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi +index bd4391269611..6e4e45907738 100644 +--- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi ++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi +@@ -70,8 +70,7 @@ + }; + + pmu { +- compatible = "arm,cortex-a53-pmu", +- "arm,armv8-pmuv3"; ++ compatible = "arm,cortex-a53-pmu"; + interrupts = , + , + , +diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c +index 39dc98dd78eb..181c29af5617 100644 +--- a/arch/arm64/kernel/armv8_deprecated.c ++++ b/arch/arm64/kernel/armv8_deprecated.c +@@ -604,7 +604,7 @@ static struct undef_hook setend_hooks[] = { + }, + { + /* Thumb mode */ +- .instr_mask = 0x0000fff7, ++ .instr_mask = 0xfffffff7, + .instr_val = 0x0000b650, + .pstate_mask = (PSR_AA32_T_BIT | PSR_AA32_MODE_MASK), + .pstate_val = (PSR_AA32_T_BIT | PSR_AA32_MODE_USR), +diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c +index 8272d8c648ca..43e4fc1b373c 100644 +--- a/arch/mips/cavium-octeon/octeon-irq.c ++++ b/arch/mips/cavium-octeon/octeon-irq.c +@@ -2199,6 
+2199,9 @@ static int octeon_irq_cib_map(struct irq_domain *d, + } + + cd = kzalloc(sizeof(*cd), GFP_KERNEL); ++ if (!cd) ++ return -ENOMEM; ++ + cd->host_data = host_data; + cd->bit = hw; + +diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c +index 3944c49eee0c..620abc968624 100644 +--- a/arch/mips/mm/tlbex.c ++++ b/arch/mips/mm/tlbex.c +@@ -1479,6 +1479,7 @@ static void build_r4000_tlb_refill_handler(void) + + static void setup_pw(void) + { ++ unsigned int pwctl; + unsigned long pgd_i, pgd_w; + #ifndef __PAGETABLE_PMD_FOLDED + unsigned long pmd_i, pmd_w; +@@ -1505,6 +1506,7 @@ static void setup_pw(void) + + pte_i = ilog2(_PAGE_GLOBAL); + pte_w = 0; ++ pwctl = 1 << 30; /* Set PWDirExt */ + + #ifndef __PAGETABLE_PMD_FOLDED + write_c0_pwfield(pgd_i << 24 | pmd_i << 12 | pt_i << 6 | pte_i); +@@ -1515,8 +1517,9 @@ static void setup_pw(void) + #endif + + #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT +- write_c0_pwctl(1 << 6 | psn); ++ pwctl |= (1 << 6 | psn); + #endif ++ write_c0_pwctl(pwctl); + write_c0_kpgd((long)swapper_pg_dir); + kscratch_used_mask |= (1 << 7); /* KScratch6 is used for KPGD */ + } +diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h +index 9a3798660cef..fc68c0fc08b5 100644 +--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h ++++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h +@@ -145,6 +145,12 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm, + extern int hash__has_transparent_hugepage(void); + #endif + ++static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd) ++{ ++ BUG(); ++ return pmd; ++} ++ + #endif /* !__ASSEMBLY__ */ + + #endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */ +diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h +index f82ee8a3b561..790e4a946f6e 100644 +--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h ++++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h +@@ -233,7 +233,7 @@ static inline void 
mark_hpte_slot_valid(unsigned char *hpte_slot_array, + */ + static inline int hash__pmd_trans_huge(pmd_t pmd) + { +- return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) == ++ return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)) == + (_PAGE_PTE | H_PAGE_THP_HUGE)); + } + +@@ -259,6 +259,12 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm, + unsigned long addr, pmd_t *pmdp); + extern int hash__has_transparent_hugepage(void); + #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ ++ ++static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd) ++{ ++ return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)); ++} ++ + #endif /* __ASSEMBLY__ */ + + #endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */ +diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h +index 855dbae6d351..2aea6efc2e63 100644 +--- a/arch/powerpc/include/asm/book3s/64/pgtable.h ++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h +@@ -1253,7 +1253,9 @@ extern void serialize_against_pte_lookup(struct mm_struct *mm); + + static inline pmd_t pmd_mkdevmap(pmd_t pmd) + { +- return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP)); ++ if (radix_enabled()) ++ return radix__pmd_mkdevmap(pmd); ++ return hash__pmd_mkdevmap(pmd); + } + + static inline int pmd_devmap(pmd_t pmd) +diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h +index 7d1a3d1543fc..da01badef0cb 100644 +--- a/arch/powerpc/include/asm/book3s/64/radix.h ++++ b/arch/powerpc/include/asm/book3s/64/radix.h +@@ -255,6 +255,11 @@ extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm, + extern int radix__has_transparent_hugepage(void); + #endif + ++static inline pmd_t radix__pmd_mkdevmap(pmd_t pmd) ++{ ++ return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP)); ++} ++ + extern int __meminit radix__vmemmap_create_mapping(unsigned long start, + unsigned long page_size, + unsigned long phys); +diff --git 
a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h +index 7c1d8e74b25d..9e516fe3daab 100644 +--- a/arch/powerpc/include/asm/drmem.h ++++ b/arch/powerpc/include/asm/drmem.h +@@ -28,12 +28,12 @@ struct drmem_lmb_info { + extern struct drmem_lmb_info *drmem_info; + + #define for_each_drmem_lmb_in_range(lmb, start, end) \ +- for ((lmb) = (start); (lmb) <= (end); (lmb)++) ++ for ((lmb) = (start); (lmb) < (end); (lmb)++) + + #define for_each_drmem_lmb(lmb) \ + for_each_drmem_lmb_in_range((lmb), \ + &drmem_info->lmbs[0], \ +- &drmem_info->lmbs[drmem_info->n_lmbs - 1]) ++ &drmem_info->lmbs[drmem_info->n_lmbs]) + + /* + * The of_drconf_cell_v1 struct defines the layout of the LMB data +diff --git a/arch/powerpc/include/asm/setjmp.h b/arch/powerpc/include/asm/setjmp.h +index 279d03a1eec6..6941fe202bc8 100644 +--- a/arch/powerpc/include/asm/setjmp.h ++++ b/arch/powerpc/include/asm/setjmp.h +@@ -12,7 +12,9 @@ + + #define JMP_BUF_LEN 23 + +-extern long setjmp(long *); +-extern void longjmp(long *, long); ++typedef long jmp_buf[JMP_BUF_LEN]; ++ ++extern int setjmp(jmp_buf env) __attribute__((returns_twice)); ++extern void longjmp(jmp_buf env, int val) __attribute__((noreturn)); + + #endif /* _ASM_POWERPC_SETJMP_H */ +diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile +index d450280e5c29..1e64cfe22a83 100644 +--- a/arch/powerpc/kernel/Makefile ++++ b/arch/powerpc/kernel/Makefile +@@ -5,9 +5,6 @@ + + CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"' + +-# Avoid clang warnings around longjmp/setjmp declarations +-CFLAGS_crash.o += -ffreestanding +- + subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror + + ifdef CONFIG_PPC64 +diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S +index 36178000a2f2..4a860d3b9229 100644 +--- a/arch/powerpc/kernel/idle_book3s.S ++++ b/arch/powerpc/kernel/idle_book3s.S +@@ -170,8 +170,11 @@ core_idle_lock_held: + bne- core_idle_lock_held + blr + +-/* Reuse an unused pt_regs slot for 
IAMR */ ++/* Reuse some unused pt_regs slots for AMR/IAMR/UAMOR/UAMOR */ ++#define PNV_POWERSAVE_AMR _TRAP + #define PNV_POWERSAVE_IAMR _DAR ++#define PNV_POWERSAVE_UAMOR _DSISR ++#define PNV_POWERSAVE_AMOR RESULT + + /* + * Pass requested state in r3: +@@ -205,8 +208,16 @@ pnv_powersave_common: + SAVE_NVGPRS(r1) + + BEGIN_FTR_SECTION ++ mfspr r4, SPRN_AMR + mfspr r5, SPRN_IAMR ++ mfspr r6, SPRN_UAMOR ++ std r4, PNV_POWERSAVE_AMR(r1) + std r5, PNV_POWERSAVE_IAMR(r1) ++ std r6, PNV_POWERSAVE_UAMOR(r1) ++BEGIN_FTR_SECTION_NESTED(42) ++ mfspr r7, SPRN_AMOR ++ std r7, PNV_POWERSAVE_AMOR(r1) ++END_FTR_SECTION_NESTED_IFSET(CPU_FTR_HVMODE, 42) + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) + + mfcr r5 +@@ -935,12 +946,20 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) + REST_GPR(2, r1) + + BEGIN_FTR_SECTION +- /* IAMR was saved in pnv_powersave_common() */ ++ /* These regs were saved in pnv_powersave_common() */ ++ ld r4, PNV_POWERSAVE_AMR(r1) + ld r5, PNV_POWERSAVE_IAMR(r1) ++ ld r6, PNV_POWERSAVE_UAMOR(r1) ++ mtspr SPRN_AMR, r4 + mtspr SPRN_IAMR, r5 ++ mtspr SPRN_UAMOR, r6 ++BEGIN_FTR_SECTION_NESTED(42) ++ ld r7, PNV_POWERSAVE_AMOR(r1) ++ mtspr SPRN_AMOR, r7 ++END_FTR_SECTION_NESTED_IFSET(CPU_FTR_HVMODE, 42) + /* +- * We don't need an isync here because the upcoming mtmsrd is +- * execution synchronizing. ++ * We don't need an isync here after restoring IAMR because the upcoming ++ * mtmsrd is execution synchronizing. 
+ */ + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) + +diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c +index 5c60bb0f927f..53a39661eb13 100644 +--- a/arch/powerpc/kernel/kprobes.c ++++ b/arch/powerpc/kernel/kprobes.c +@@ -277,6 +277,9 @@ int kprobe_handler(struct pt_regs *regs) + if (user_mode(regs)) + return 0; + ++ if (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR)) ++ return 0; ++ + /* + * We don't want to be preempted for the entire + * duration of kprobe processing +diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c +index b088b0700d0d..7aba592bba36 100644 +--- a/arch/powerpc/kernel/signal_64.c ++++ b/arch/powerpc/kernel/signal_64.c +@@ -477,8 +477,10 @@ static long restore_tm_sigcontexts(struct task_struct *tsk, + err |= __get_user(tsk->thread.ckpt_regs.ccr, + &sc->gp_regs[PT_CCR]); + ++ /* Don't allow userspace to set the trap value */ ++ regs->trap = 0; ++ + /* These regs are not checkpointed; they can go in 'regs'. */ +- err |= __get_user(regs->trap, &sc->gp_regs[PT_TRAP]); + err |= __get_user(regs->dar, &sc->gp_regs[PT_DAR]); + err |= __get_user(regs->dsisr, &sc->gp_regs[PT_DSISR]); + err |= __get_user(regs->result, &sc->gp_regs[PT_RESULT]); +diff --git a/arch/powerpc/mm/tlb_nohash_low.S b/arch/powerpc/mm/tlb_nohash_low.S +index e066a658acac..56f58a362ea5 100644 +--- a/arch/powerpc/mm/tlb_nohash_low.S ++++ b/arch/powerpc/mm/tlb_nohash_low.S +@@ -402,7 +402,7 @@ _GLOBAL(set_context) + * extern void loadcam_entry(unsigned int index) + * + * Load TLBCAM[index] entry in to the L2 CAM MMU +- * Must preserve r7, r8, r9, and r10 ++ * Must preserve r7, r8, r9, r10 and r11 + */ + _GLOBAL(loadcam_entry) + mflr r5 +@@ -438,6 +438,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS) + */ + _GLOBAL(loadcam_multi) + mflr r8 ++ /* Don't switch to AS=1 if already there */ ++ mfmsr r11 ++ andi. 
r11,r11,MSR_IS ++ bne 10f + + /* + * Set up temporary TLB entry that is the same as what we're +@@ -463,6 +467,7 @@ _GLOBAL(loadcam_multi) + mtmsr r6 + isync + ++10: + mr r9,r3 + add r10,r3,r4 + 2: bl loadcam_entry +@@ -471,6 +476,10 @@ _GLOBAL(loadcam_multi) + mr r3,r9 + blt 2b + ++ /* Don't return to AS=0 if we were in AS=1 at function start */ ++ andi. r11,r11,MSR_IS ++ bne 3f ++ + /* Return to AS=0 and clear the temporary entry */ + mfmsr r6 + rlwinm. r6,r6,0,~(MSR_IS|MSR_DS) +@@ -486,6 +495,7 @@ _GLOBAL(loadcam_multi) + tlbwe + isync + ++3: + mtlr r8 + blr + #endif +diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c +index fc01a2c0f8ed..b168c3742b43 100644 +--- a/arch/powerpc/platforms/pseries/hotplug-memory.c ++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c +@@ -227,7 +227,7 @@ static int get_lmb_range(u32 drc_index, int n_lmbs, + struct drmem_lmb **end_lmb) + { + struct drmem_lmb *lmb, *start, *end; +- struct drmem_lmb *last_lmb; ++ struct drmem_lmb *limit; + + start = NULL; + for_each_drmem_lmb(lmb) { +@@ -240,10 +240,10 @@ static int get_lmb_range(u32 drc_index, int n_lmbs, + if (!start) + return -EINVAL; + +- end = &start[n_lmbs - 1]; ++ end = &start[n_lmbs]; + +- last_lmb = &drmem_info->lmbs[drmem_info->n_lmbs - 1]; +- if (end > last_lmb) ++ limit = &drmem_info->lmbs[drmem_info->n_lmbs]; ++ if (end > limit) + return -EINVAL; + + *start_lmb = start; +diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c +index 49e3a88b6a0c..d660a90616cd 100644 +--- a/arch/powerpc/platforms/pseries/lpar.c ++++ b/arch/powerpc/platforms/pseries/lpar.c +@@ -1056,7 +1056,7 @@ static int __init vpa_debugfs_init(void) + { + char name[16]; + long i; +- static struct dentry *vpa_dir; ++ struct dentry *vpa_dir; + + if (!firmware_has_feature(FW_FEATURE_SPLPAR)) + return 0; +diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c +index 
3c939b9de488..1c31a08cdd54 100644 +--- a/arch/powerpc/sysdev/xive/common.c ++++ b/arch/powerpc/sysdev/xive/common.c +@@ -72,13 +72,6 @@ static u32 xive_ipi_irq; + /* Xive state for each CPU */ + static DEFINE_PER_CPU(struct xive_cpu *, xive_cpu); + +-/* +- * A "disabled" interrupt should never fire, to catch problems +- * we set its logical number to this +- */ +-#define XIVE_BAD_IRQ 0x7fffffff +-#define XIVE_MAX_IRQ (XIVE_BAD_IRQ - 1) +- + /* An invalid CPU target */ + #define XIVE_INVALID_TARGET (-1) + +@@ -1074,7 +1067,7 @@ static int xive_setup_cpu_ipi(unsigned int cpu) + xc = per_cpu(xive_cpu, cpu); + + /* Check if we are already setup */ +- if (xc->hw_ipi != 0) ++ if (xc->hw_ipi != XIVE_BAD_IRQ) + return 0; + + /* Grab an IPI from the backend, this will populate xc->hw_ipi */ +@@ -1111,7 +1104,7 @@ static void xive_cleanup_cpu_ipi(unsigned int cpu, struct xive_cpu *xc) + /* Disable the IPI and free the IRQ data */ + + /* Already cleaned up ? */ +- if (xc->hw_ipi == 0) ++ if (xc->hw_ipi == XIVE_BAD_IRQ) + return; + + /* Mask the IPI */ +@@ -1267,6 +1260,7 @@ static int xive_prepare_cpu(unsigned int cpu) + if (np) + xc->chip_id = of_get_ibm_chip_id(np); + of_node_put(np); ++ xc->hw_ipi = XIVE_BAD_IRQ; + + per_cpu(xive_cpu, cpu) = xc; + } +diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c +index 6d5b28022452..cb1f51ad48e4 100644 +--- a/arch/powerpc/sysdev/xive/native.c ++++ b/arch/powerpc/sysdev/xive/native.c +@@ -311,7 +311,7 @@ static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc) + s64 rc; + + /* Free the IPI */ +- if (!xc->hw_ipi) ++ if (xc->hw_ipi == XIVE_BAD_IRQ) + return; + for (;;) { + rc = opal_xive_free_irq(xc->hw_ipi); +@@ -319,7 +319,7 @@ static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc) + msleep(OPAL_BUSY_DELAY_MS); + continue; + } +- xc->hw_ipi = 0; ++ xc->hw_ipi = XIVE_BAD_IRQ; + break; + } + } +diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c 
+index e3ebf6469392..5566bbc86f4a 100644 +--- a/arch/powerpc/sysdev/xive/spapr.c ++++ b/arch/powerpc/sysdev/xive/spapr.c +@@ -509,11 +509,11 @@ static int xive_spapr_get_ipi(unsigned int cpu, struct xive_cpu *xc) + + static void xive_spapr_put_ipi(unsigned int cpu, struct xive_cpu *xc) + { +- if (!xc->hw_ipi) ++ if (xc->hw_ipi == XIVE_BAD_IRQ) + return; + + xive_irq_bitmap_free(xc->hw_ipi); +- xc->hw_ipi = 0; ++ xc->hw_ipi = XIVE_BAD_IRQ; + } + #endif /* CONFIG_SMP */ + +diff --git a/arch/powerpc/sysdev/xive/xive-internal.h b/arch/powerpc/sysdev/xive/xive-internal.h +index f34abed0c05f..48808dbb25dc 100644 +--- a/arch/powerpc/sysdev/xive/xive-internal.h ++++ b/arch/powerpc/sysdev/xive/xive-internal.h +@@ -9,6 +9,13 @@ + #ifndef __XIVE_INTERNAL_H + #define __XIVE_INTERNAL_H + ++/* ++ * A "disabled" interrupt should never fire, to catch problems ++ * we set its logical number to this ++ */ ++#define XIVE_BAD_IRQ 0x7fffffff ++#define XIVE_MAX_IRQ (XIVE_BAD_IRQ - 1) ++ + /* Each CPU carry one of these with various per-CPU state */ + struct xive_cpu { + #ifdef CONFIG_SMP +diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile +index 365e711bebab..ab193cd7dbf9 100644 +--- a/arch/powerpc/xmon/Makefile ++++ b/arch/powerpc/xmon/Makefile +@@ -1,9 +1,6 @@ + # SPDX-License-Identifier: GPL-2.0 + # Makefile for xmon + +-# Avoid clang warnings around longjmp/setjmp declarations +-subdir-ccflags-y := -ffreestanding +- + subdir-ccflags-$(CONFIG_PPC_WERROR) += -Werror + + GCOV_PROFILE := n +diff --git a/arch/s390/kernel/diag.c b/arch/s390/kernel/diag.c +index 53a5316cc4b7..35c842aa8705 100644 +--- a/arch/s390/kernel/diag.c ++++ b/arch/s390/kernel/diag.c +@@ -79,7 +79,7 @@ static int show_diag_stat(struct seq_file *m, void *v) + + static void *show_diag_stat_start(struct seq_file *m, loff_t *pos) + { +- return *pos <= nr_cpu_ids ? (void *)((unsigned long) *pos + 1) : NULL; ++ return *pos <= NR_DIAG_STAT ? 
(void *)((unsigned long) *pos + 1) : NULL; + } + + static void *show_diag_stat_next(struct seq_file *m, void *v, loff_t *pos) +diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c +index a2b28cd1e3fe..17d73b71df1d 100644 +--- a/arch/s390/kvm/vsie.c ++++ b/arch/s390/kvm/vsie.c +@@ -1024,6 +1024,7 @@ static int vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page) + scb_s->iprcc = PGM_ADDRESSING; + scb_s->pgmilc = 4; + scb_s->gpsw.addr = __rewind_psw(scb_s->gpsw, 4); ++ rc = 1; + } + return rc; + } +diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c +index 911c7ded35f1..b56c4fdb1517 100644 +--- a/arch/s390/mm/gmap.c ++++ b/arch/s390/mm/gmap.c +@@ -787,14 +787,18 @@ static void gmap_call_notifier(struct gmap *gmap, unsigned long start, + static inline unsigned long *gmap_table_walk(struct gmap *gmap, + unsigned long gaddr, int level) + { ++ const int asce_type = gmap->asce & _ASCE_TYPE_MASK; + unsigned long *table; + + if ((gmap->asce & _ASCE_TYPE_MASK) + 4 < (level * 4)) + return NULL; + if (gmap_is_shadow(gmap) && gmap->removed) + return NULL; +- if (gaddr & (-1UL << (31 + ((gmap->asce & _ASCE_TYPE_MASK) >> 2)*11))) ++ ++ if (asce_type != _ASCE_TYPE_REGION1 && ++ gaddr & (-1UL << (31 + (asce_type >> 2) * 11))) + return NULL; ++ + table = gmap->table; + switch (gmap->asce & _ASCE_TYPE_MASK) { + case _ASCE_TYPE_REGION1: +diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S +index 37380c0d5999..01d628ea3402 100644 +--- a/arch/x86/boot/compressed/head_32.S ++++ b/arch/x86/boot/compressed/head_32.S +@@ -106,7 +106,7 @@ ENTRY(startup_32) + notl %eax + andl %eax, %ebx + cmpl $LOAD_PHYSICAL_ADDR, %ebx +- jge 1f ++ jae 1f + #endif + movl $LOAD_PHYSICAL_ADDR, %ebx + 1: +diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S +index 4eaa724afce3..9fa644c62839 100644 +--- a/arch/x86/boot/compressed/head_64.S ++++ b/arch/x86/boot/compressed/head_64.S +@@ -106,7 +106,7 @@ ENTRY(startup_32) + notl %eax + 
andl %eax, %ebx + cmpl $LOAD_PHYSICAL_ADDR, %ebx +- jge 1f ++ jae 1f + #endif + movl $LOAD_PHYSICAL_ADDR, %ebx + 1: +@@ -297,7 +297,7 @@ ENTRY(startup_64) + notq %rax + andq %rax, %rbp + cmpq $LOAD_PHYSICAL_ADDR, %rbp +- jge 1f ++ jae 1f + #endif + movq $LOAD_PHYSICAL_ADDR, %rbp + 1: +diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S +index d07432062ee6..37d9016d4768 100644 +--- a/arch/x86/entry/entry_32.S ++++ b/arch/x86/entry/entry_32.S +@@ -1489,6 +1489,7 @@ ENTRY(int3) + END(int3) + + ENTRY(general_protection) ++ ASM_CLAC + pushl $do_general_protection + jmp common_exception + END(general_protection) +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index 067288d4ef6e..5c99b9bfce04 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -1070,7 +1070,7 @@ struct kvm_x86_ops { + bool (*xsaves_supported)(void); + bool (*umip_emulated)(void); + +- int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr); ++ int (*check_nested_events)(struct kvm_vcpu *vcpu); + void (*request_immediate_exit)(struct kvm_vcpu *vcpu); + + void (*sched_in)(struct kvm_vcpu *kvm, int cpu); +diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h +index 690c0307afed..2e1ed12c65f8 100644 +--- a/arch/x86/include/asm/pgtable.h ++++ b/arch/x86/include/asm/pgtable.h +@@ -608,12 +608,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) + return __pmd(val); + } + +-/* mprotect needs to preserve PAT bits when updating vm_page_prot */ ++/* ++ * mprotect needs to preserve PAT and encryption bits when updating ++ * vm_page_prot ++ */ + #define pgprot_modify pgprot_modify + static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot) + { + pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK; +- pgprotval_t addbits = pgprot_val(newprot); ++ pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK; + return __pgprot(preservebits | addbits); + } + 
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h +index 106b7d0e2dae..71ea49e7db74 100644 +--- a/arch/x86/include/asm/pgtable_types.h ++++ b/arch/x86/include/asm/pgtable_types.h +@@ -124,7 +124,7 @@ + */ + #define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ + _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \ +- _PAGE_SOFT_DIRTY | _PAGE_DEVMAP) ++ _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC) + #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE) + + /* +diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c +index 3b20607d581b..7303bb398862 100644 +--- a/arch/x86/kernel/acpi/boot.c ++++ b/arch/x86/kernel/acpi/boot.c +@@ -1752,7 +1752,7 @@ int __acpi_acquire_global_lock(unsigned int *lock) + new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1)); + val = cmpxchg(lock, old, new); + } while (unlikely (val != old)); +- return (new < 3) ? -1 : 0; ++ return ((new & 0x3) < 3) ? -1 : 0; + } + + int __acpi_release_global_lock(unsigned int *lock) +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c +index cc8f3b41a1b2..df2274414640 100644 +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -1917,6 +1917,10 @@ static void __unregister_enc_region_locked(struct kvm *kvm, + static struct kvm *svm_vm_alloc(void) + { + struct kvm_svm *kvm_svm = vzalloc(sizeof(struct kvm_svm)); ++ ++ if (!kvm_svm) ++ return NULL; ++ + return &kvm_svm->kvm; + } + +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index a81d7d9ce9d6..d37b48173e9c 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -2156,43 +2156,15 @@ static void vmcs_load(struct vmcs *vmcs) + } + + #ifdef CONFIG_KEXEC_CORE +-/* +- * This bitmap is used to indicate whether the vmclear +- * operation is enabled on all cpus. All disabled by +- * default. 
+- */ +-static cpumask_t crash_vmclear_enabled_bitmap = CPU_MASK_NONE; +- +-static inline void crash_enable_local_vmclear(int cpu) +-{ +- cpumask_set_cpu(cpu, &crash_vmclear_enabled_bitmap); +-} +- +-static inline void crash_disable_local_vmclear(int cpu) +-{ +- cpumask_clear_cpu(cpu, &crash_vmclear_enabled_bitmap); +-} +- +-static inline int crash_local_vmclear_enabled(int cpu) +-{ +- return cpumask_test_cpu(cpu, &crash_vmclear_enabled_bitmap); +-} +- + static void crash_vmclear_local_loaded_vmcss(void) + { + int cpu = raw_smp_processor_id(); + struct loaded_vmcs *v; + +- if (!crash_local_vmclear_enabled(cpu)) +- return; +- + list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu), + loaded_vmcss_on_cpu_link) + vmcs_clear(v->vmcs); + } +-#else +-static inline void crash_enable_local_vmclear(int cpu) { } +-static inline void crash_disable_local_vmclear(int cpu) { } + #endif /* CONFIG_KEXEC_CORE */ + + static void __loaded_vmcs_clear(void *arg) +@@ -2204,19 +2176,24 @@ static void __loaded_vmcs_clear(void *arg) + return; /* vcpu migration can race with cpu offline */ + if (per_cpu(current_vmcs, cpu) == loaded_vmcs->vmcs) + per_cpu(current_vmcs, cpu) = NULL; +- crash_disable_local_vmclear(cpu); ++ ++ vmcs_clear(loaded_vmcs->vmcs); ++ if (loaded_vmcs->shadow_vmcs && loaded_vmcs->launched) ++ vmcs_clear(loaded_vmcs->shadow_vmcs); ++ + list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link); + + /* +- * we should ensure updating loaded_vmcs->loaded_vmcss_on_cpu_link +- * is before setting loaded_vmcs->vcpu to -1 which is done in +- * loaded_vmcs_init. Otherwise, other cpu can see vcpu = -1 fist +- * then adds the vmcs into percpu list before it is deleted. ++ * Ensure all writes to loaded_vmcs, including deleting it from its ++ * current percpu list, complete before setting loaded_vmcs->vcpu to ++ * -1, otherwise a different cpu can see vcpu == -1 first and add ++ * loaded_vmcs to its percpu list before it's deleted from this cpu's ++ * list. 
Pairs with the smp_rmb() in vmx_vcpu_load_vmcs(). + */ + smp_wmb(); + +- loaded_vmcs_init(loaded_vmcs); +- crash_enable_local_vmclear(cpu); ++ loaded_vmcs->cpu = -1; ++ loaded_vmcs->launched = 0; + } + + static void loaded_vmcs_clear(struct loaded_vmcs *loaded_vmcs) +@@ -3067,18 +3044,17 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) + if (!already_loaded) { + loaded_vmcs_clear(vmx->loaded_vmcs); + local_irq_disable(); +- crash_disable_local_vmclear(cpu); + + /* +- * Read loaded_vmcs->cpu should be before fetching +- * loaded_vmcs->loaded_vmcss_on_cpu_link. +- * See the comments in __loaded_vmcs_clear(). ++ * Ensure loaded_vmcs->cpu is read before adding loaded_vmcs to ++ * this cpu's percpu list, otherwise it may not yet be deleted ++ * from its previous cpu's percpu list. Pairs with the ++ * smb_wmb() in __loaded_vmcs_clear(). + */ + smp_rmb(); + + list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link, + &per_cpu(loaded_vmcss_on_cpu, cpu)); +- crash_enable_local_vmclear(cpu); + local_irq_enable(); + } + +@@ -4422,21 +4398,6 @@ static int hardware_enable(void) + !hv_get_vp_assist_page(cpu)) + return -EFAULT; + +- INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu)); +- INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu)); +- spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu)); +- +- /* +- * Now we can enable the vmclear operation in kdump +- * since the loaded_vmcss_on_cpu list on this cpu +- * has been initialized. +- * +- * Though the cpu is not in VMX operation now, there +- * is no problem to enable the vmclear operation +- * for the loaded_vmcss_on_cpu list is empty! 
+- */ +- crash_enable_local_vmclear(cpu); +- + rdmsrl(MSR_IA32_FEATURE_CONTROL, old); + + test_bits = FEATURE_CONTROL_LOCKED; +@@ -6954,8 +6915,13 @@ static int vmx_nmi_allowed(struct kvm_vcpu *vcpu) + + static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu) + { +- return (!to_vmx(vcpu)->nested.nested_run_pending && +- vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) && ++ if (to_vmx(vcpu)->nested.nested_run_pending) ++ return false; ++ ++ if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) ++ return true; ++ ++ return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) && + !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & + (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS)); + } +@@ -11016,6 +10982,10 @@ STACK_FRAME_NON_STANDARD(vmx_vcpu_run); + static struct kvm *vmx_vm_alloc(void) + { + struct kvm_vmx *kvm_vmx = vzalloc(sizeof(struct kvm_vmx)); ++ ++ if (!kvm_vmx) ++ return NULL; ++ + return &kvm_vmx->kvm; + } + +@@ -12990,7 +12960,7 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu, + } + } + +-static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr) ++static int vmx_check_nested_events(struct kvm_vcpu *vcpu) + { + struct vcpu_vmx *vmx = to_vmx(vcpu); + unsigned long exit_qual; +@@ -13028,8 +12998,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr) + return 0; + } + +- if ((kvm_cpu_has_interrupt(vcpu) || external_intr) && +- nested_exit_on_intr(vcpu)) { ++ if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(vcpu)) { + if (block_nested_events) + return -EBUSY; + nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0); +@@ -13607,17 +13576,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason, + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; + + if (likely(!vmx->fail)) { +- /* +- * TODO: SDM says that with acknowledge interrupt on +- * exit, bit 31 of the VM-exit interrupt information +- * (valid interrupt) is always set to 1 on +- * EXIT_REASON_EXTERNAL_INTERRUPT, so we shouldn't +- * need 
kvm_cpu_has_interrupt(). See the commit +- * message for details. +- */ +- if (nested_exit_intr_ack_set(vcpu) && +- exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT && +- kvm_cpu_has_interrupt(vcpu)) { ++ if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT && ++ nested_exit_intr_ack_set(vcpu)) { + int irq = kvm_cpu_get_interrupt(vcpu); + WARN_ON(irq < 0); + vmcs12->vm_exit_intr_info = irq | +@@ -14590,7 +14550,7 @@ module_exit(vmx_exit); + + static int __init vmx_init(void) + { +- int r; ++ int r, cpu; + + #if IS_ENABLED(CONFIG_HYPERV) + /* +@@ -14641,6 +14601,12 @@ static int __init vmx_init(void) + } + } + ++ for_each_possible_cpu(cpu) { ++ INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu)); ++ INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu)); ++ spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu)); ++ } ++ + #ifdef CONFIG_KEXEC_CORE + rcu_assign_pointer(crash_vmclear_loaded_vmcss, + crash_vmclear_local_loaded_vmcss); +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 2cb379e261c0..1a6e1aa2fb29 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -7124,7 +7124,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu) + kvm_x86_ops->update_cr8_intercept(vcpu, tpr, max_irr); + } + +-static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win) ++static int inject_pending_event(struct kvm_vcpu *vcpu) + { + int r; + +@@ -7160,7 +7160,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win) + * from L2 to L1. + */ + if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) { +- r = kvm_x86_ops->check_nested_events(vcpu, req_int_win); ++ r = kvm_x86_ops->check_nested_events(vcpu); + if (r != 0) + return r; + } +@@ -7210,7 +7210,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win) + * KVM_REQ_EVENT only on certain events and not unconditionally? 
+ */ + if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) { +- r = kvm_x86_ops->check_nested_events(vcpu, req_int_win); ++ r = kvm_x86_ops->check_nested_events(vcpu); + if (r != 0) + return r; + } +@@ -7683,7 +7683,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) + goto out; + } + +- if (inject_pending_event(vcpu, req_int_win) != 0) ++ if (inject_pending_event(vcpu) != 0) + req_immediate_exit = true; + else { + /* Enable SMI/NMI/IRQ window open exits if needed. +@@ -7894,7 +7894,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu) + static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu) + { + if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) +- kvm_x86_ops->check_nested_events(vcpu, false); ++ kvm_x86_ops->check_nested_events(vcpu); + + return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE && + !vcpu->arch.apf.halted); +@@ -9229,6 +9229,13 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, + { + int i; + ++ /* ++ * Clear out the previous array pointers for the KVM_MR_MOVE case. The ++ * old arrays will be freed by __kvm_set_memory_region() if installing ++ * the new memslot is successful. 
++ */ ++ memset(&slot->arch, 0, sizeof(slot->arch)); ++ + for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) { + struct kvm_lpage_info *linfo; + unsigned long ugfn; +@@ -9303,6 +9310,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, + const struct kvm_userspace_memory_region *mem, + enum kvm_mr_change change) + { ++ if (change == KVM_MR_MOVE) ++ return kvm_arch_create_memslot(kvm, memslot, ++ mem->memory_size >> PAGE_SHIFT); ++ + return 0; + } + +diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c +index 2a9a703ef4a0..52dd59af873e 100644 +--- a/arch/x86/platform/efi/efi_64.c ++++ b/arch/x86/platform/efi/efi_64.c +@@ -833,7 +833,7 @@ efi_thunk_set_variable(efi_char16_t *name, efi_guid_t *vendor, + phys_vendor = virt_to_phys_or_null(vnd); + phys_data = virt_to_phys_or_null_size(data, data_size); + +- if (!phys_name || !phys_data) ++ if (!phys_name || (data && !phys_data)) + status = EFI_INVALID_PARAMETER; + else + status = efi_thunk(set_variable, phys_name, phys_vendor, +@@ -864,7 +864,7 @@ efi_thunk_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor, + phys_vendor = virt_to_phys_or_null(vnd); + phys_data = virt_to_phys_or_null_size(data, data_size); + +- if (!phys_name || !phys_data) ++ if (!phys_name || (data && !phys_data)) + status = EFI_INVALID_PARAMETER; + else + status = efi_thunk(set_variable, phys_name, phys_vendor, +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index 66b1ebc21ce4..5198ed1b3669 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -5156,20 +5156,28 @@ static struct bfq_queue *bfq_init_rq(struct request *rq) + return bfqq; + } + +-static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq) ++static void ++bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq) + { +- struct bfq_data *bfqd = bfqq->bfqd; + enum bfqq_expiration reason; + unsigned long flags; + + spin_lock_irqsave(&bfqd->lock, flags); +- bfq_clear_bfqq_wait_request(bfqq); + ++ /* ++ * Considering that bfqq 
may be in race, we should firstly check ++ * whether bfqq is in service before doing something on it. If ++ * the bfqq in race is not in service, it has already been expired ++ * through __bfq_bfqq_expire func and its wait_request flags has ++ * been cleared in __bfq_bfqd_reset_in_service func. ++ */ + if (bfqq != bfqd->in_service_queue) { + spin_unlock_irqrestore(&bfqd->lock, flags); + return; + } + ++ bfq_clear_bfqq_wait_request(bfqq); ++ + if (bfq_bfqq_budget_timeout(bfqq)) + /* + * Also here the queue can be safely expired +@@ -5214,7 +5222,7 @@ static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer) + * early. + */ + if (bfqq) +- bfq_idle_slice_timer_body(bfqq); ++ bfq_idle_slice_timer_body(bfqd, bfqq); + + return HRTIMER_NORESTART; + } +diff --git a/block/blk-ioc.c b/block/blk-ioc.c +index 01580f88fcb3..4c810969c3e2 100644 +--- a/block/blk-ioc.c ++++ b/block/blk-ioc.c +@@ -87,6 +87,7 @@ static void ioc_destroy_icq(struct io_cq *icq) + * making it impossible to determine icq_cache. Record it in @icq. 
+ */ + icq->__rcu_icq_cache = et->icq_cache; ++ icq->flags |= ICQ_DESTROYED; + call_rcu(&icq->__rcu_head, icq_free_icq_rcu); + } + +@@ -230,15 +231,21 @@ static void __ioc_clear_queue(struct list_head *icq_list) + { + unsigned long flags; + ++ rcu_read_lock(); + while (!list_empty(icq_list)) { + struct io_cq *icq = list_entry(icq_list->next, + struct io_cq, q_node); + struct io_context *ioc = icq->ioc; + + spin_lock_irqsave(&ioc->lock, flags); ++ if (icq->flags & ICQ_DESTROYED) { ++ spin_unlock_irqrestore(&ioc->lock, flags); ++ continue; ++ } + ioc_destroy_icq(icq); + spin_unlock_irqrestore(&ioc->lock, flags); + } ++ rcu_read_unlock(); + } + + /** +diff --git a/block/blk-settings.c b/block/blk-settings.c +index be9b39caadbd..01093b8f3e62 100644 +--- a/block/blk-settings.c ++++ b/block/blk-settings.c +@@ -717,6 +717,9 @@ void disk_stack_limits(struct gendisk *disk, struct block_device *bdev, + printk(KERN_NOTICE "%s: Warning: Device %s is misaligned\n", + top, bottom); + } ++ ++ t->backing_dev_info->io_pages = ++ t->limits.max_sectors >> (PAGE_SHIFT - 9); + } + EXPORT_SYMBOL(disk_stack_limits); + +diff --git a/drivers/ata/libata-pmp.c b/drivers/ata/libata-pmp.c +index 2ae1799f4992..51eeaea65833 100644 +--- a/drivers/ata/libata-pmp.c ++++ b/drivers/ata/libata-pmp.c +@@ -764,6 +764,7 @@ static int sata_pmp_eh_recover_pmp(struct ata_port *ap, + + if (dev->flags & ATA_DFLAG_DETACH) { + detach = 1; ++ rc = -ENODEV; + goto fail; + } + +diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c +index 3a64fa4aaf7e..0c1572a1cc5e 100644 +--- a/drivers/ata/libata-scsi.c ++++ b/drivers/ata/libata-scsi.c +@@ -4570,22 +4570,19 @@ int ata_scsi_add_hosts(struct ata_host *host, struct scsi_host_template *sht) + */ + shost->max_host_blocked = 1; + +- rc = scsi_add_host_with_dma(ap->scsi_host, +- &ap->tdev, ap->host->dev); ++ rc = scsi_add_host_with_dma(shost, &ap->tdev, ap->host->dev); + if (rc) +- goto err_add; ++ goto err_alloc; + } + + return 0; + +- err_add: +- 
scsi_host_put(host->ports[i]->scsi_host); + err_alloc: + while (--i >= 0) { + struct Scsi_Host *shost = host->ports[i]->scsi_host; + ++ /* scsi_host_put() is in ata_devres_release() */ + scsi_remove_host(shost); +- scsi_host_put(shost); + } + return rc; + } +diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c +index 818d8c37d70a..3b7b748c4d4f 100644 +--- a/drivers/base/firmware_loader/fallback.c ++++ b/drivers/base/firmware_loader/fallback.c +@@ -572,7 +572,7 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, + } + + retval = fw_sysfs_wait_timeout(fw_priv, timeout); +- if (retval < 0) { ++ if (retval < 0 && retval != -ENOENT) { + mutex_lock(&fw_lock); + fw_load_abort(fw_sysfs); + mutex_unlock(&fw_lock); +diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c +index c5c0b7c89481..d2d7dc9cd58d 100644 +--- a/drivers/block/null_blk_main.c ++++ b/drivers/block/null_blk_main.c +@@ -571,6 +571,7 @@ static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq) + if (tag != -1U) { + cmd = &nq->cmds[tag]; + cmd->tag = tag; ++ cmd->error = BLK_STS_OK; + cmd->nq = nq; + if (nq->dev->irqmode == NULL_IRQ_TIMER) { + hrtimer_init(&cmd->timer, CLOCK_MONOTONIC, +@@ -1433,6 +1434,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx, + cmd->timer.function = null_cmd_timer_expired; + } + cmd->rq = bd->rq; ++ cmd->error = BLK_STS_OK; + cmd->nq = nq; + + blk_mq_start_request(bd->rq); +@@ -1480,7 +1482,12 @@ static void cleanup_queues(struct nullb *nullb) + + static void null_del_dev(struct nullb *nullb) + { +- struct nullb_device *dev = nullb->dev; ++ struct nullb_device *dev; ++ ++ if (!nullb) ++ return; ++ ++ dev = nullb->dev; + + ida_simple_remove(&nullb_indexes, nullb->index); + +@@ -1844,6 +1851,7 @@ out_cleanup_queues: + cleanup_queues(nullb); + out_free_nullb: + kfree(nullb); ++ dev->nullb = NULL; + out: + return rv; + } +diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c 
+index 9a57af79f330..adc0e3ed01c2 100644 +--- a/drivers/block/xen-blkfront.c ++++ b/drivers/block/xen-blkfront.c +@@ -47,6 +47,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -2188,10 +2189,12 @@ static void blkfront_setup_discard(struct blkfront_info *info) + + static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) + { +- unsigned int psegs, grants; ++ unsigned int psegs, grants, memflags; + int err, i; + struct blkfront_info *info = rinfo->dev_info; + ++ memflags = memalloc_noio_save(); ++ + if (info->max_indirect_segments == 0) { + if (!HAS_EXTRA_REQ) + grants = BLKIF_MAX_SEGMENTS_PER_REQUEST; +@@ -2223,7 +2226,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) + + BUG_ON(!list_empty(&rinfo->indirect_pages)); + for (i = 0; i < num; i++) { +- struct page *indirect_page = alloc_page(GFP_NOIO); ++ struct page *indirect_page = alloc_page(GFP_KERNEL); + if (!indirect_page) + goto out_of_memory; + list_add(&indirect_page->lru, &rinfo->indirect_pages); +@@ -2234,15 +2237,15 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) + rinfo->shadow[i].grants_used = + kvcalloc(grants, + sizeof(rinfo->shadow[i].grants_used[0]), +- GFP_NOIO); ++ GFP_KERNEL); + rinfo->shadow[i].sg = kvcalloc(psegs, + sizeof(rinfo->shadow[i].sg[0]), +- GFP_NOIO); ++ GFP_KERNEL); + if (info->max_indirect_segments) + rinfo->shadow[i].indirect_grants = + kvcalloc(INDIRECT_GREFS(grants), + sizeof(rinfo->shadow[i].indirect_grants[0]), +- GFP_NOIO); ++ GFP_KERNEL); + if ((rinfo->shadow[i].grants_used == NULL) || + (rinfo->shadow[i].sg == NULL) || + (info->max_indirect_segments && +@@ -2251,6 +2254,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) + sg_init_table(rinfo->shadow[i].sg, psegs); + } + ++ memalloc_noio_restore(memflags); + + return 0; + +@@ -2270,6 +2274,9 @@ out_of_memory: + __free_page(indirect_page); + } + } ++ ++ memalloc_noio_restore(memflags); ++ + return -ENOMEM; + } + +diff 
--git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c +index 1b76d9585902..2ca2cc56bcef 100644 +--- a/drivers/bus/sunxi-rsb.c ++++ b/drivers/bus/sunxi-rsb.c +@@ -345,7 +345,7 @@ static int sunxi_rsb_read(struct sunxi_rsb *rsb, u8 rtaddr, u8 addr, + if (ret) + goto unlock; + +- *buf = readl(rsb->regs + RSB_DATA); ++ *buf = readl(rsb->regs + RSB_DATA) & GENMASK(len * 8 - 1, 0); + + unlock: + mutex_unlock(&rsb->lock); +diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c +index 980eb7c60952..69734b1df792 100644 +--- a/drivers/char/ipmi/ipmi_msghandler.c ++++ b/drivers/char/ipmi/ipmi_msghandler.c +@@ -3134,8 +3134,8 @@ static void __get_guid(struct ipmi_smi *intf) + if (rv) + /* Send failed, no GUID available. */ + bmc->dyn_guid_set = 0; +- +- wait_event(intf->waitq, bmc->dyn_guid_set != 2); ++ else ++ wait_event(intf->waitq, bmc->dyn_guid_set != 2); + + /* dyn_guid_set makes the guid data available. */ + smp_rmb(); +diff --git a/drivers/char/tpm/eventlog/common.c b/drivers/char/tpm/eventlog/common.c +index 5a8720df2b51..7d70b654df04 100644 +--- a/drivers/char/tpm/eventlog/common.c ++++ b/drivers/char/tpm/eventlog/common.c +@@ -104,11 +104,8 @@ static int tpm_read_log(struct tpm_chip *chip) + * + * If an event log is found then the securityfs files are setup to + * export it to userspace, otherwise nothing is done. +- * +- * Returns -ENODEV if the firmware has no event log or securityfs is not +- * supported. 
+ */ +-int tpm_bios_log_setup(struct tpm_chip *chip) ++void tpm_bios_log_setup(struct tpm_chip *chip) + { + const char *name = dev_name(&chip->dev); + unsigned int cnt; +@@ -117,7 +114,7 @@ int tpm_bios_log_setup(struct tpm_chip *chip) + + rc = tpm_read_log(chip); + if (rc < 0) +- return rc; ++ return; + log_version = rc; + + cnt = 0; +@@ -163,13 +160,12 @@ int tpm_bios_log_setup(struct tpm_chip *chip) + cnt++; + } + +- return 0; ++ return; + + err: +- rc = PTR_ERR(chip->bios_dir[cnt]); + chip->bios_dir[cnt] = NULL; + tpm_bios_log_teardown(chip); +- return rc; ++ return; + } + + void tpm_bios_log_teardown(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/eventlog/tpm1.c b/drivers/char/tpm/eventlog/tpm1.c +index 58c84784ba25..a4621c83e2bf 100644 +--- a/drivers/char/tpm/eventlog/tpm1.c ++++ b/drivers/char/tpm/eventlog/tpm1.c +@@ -129,6 +129,7 @@ static void *tpm1_bios_measurements_next(struct seq_file *m, void *v, + u32 converted_event_size; + u32 converted_event_type; + ++ (*pos)++; + converted_event_size = do_endian_conversion(event->event_size); + + v += sizeof(struct tcpa_event) + converted_event_size; +@@ -146,7 +147,6 @@ static void *tpm1_bios_measurements_next(struct seq_file *m, void *v, + ((v + sizeof(struct tcpa_event) + converted_event_size) >= limit)) + return NULL; + +- (*pos)++; + return v; + } + +diff --git a/drivers/char/tpm/eventlog/tpm2.c b/drivers/char/tpm/eventlog/tpm2.c +index 41b9f6c92da7..aec49c925cee 100644 +--- a/drivers/char/tpm/eventlog/tpm2.c ++++ b/drivers/char/tpm/eventlog/tpm2.c +@@ -143,6 +143,7 @@ static void *tpm2_bios_measurements_next(struct seq_file *m, void *v, + size_t event_size; + void *marker; + ++ (*pos)++; + event_header = log->bios_event_log; + + if (v == SEQ_START_TOKEN) { +@@ -167,7 +168,6 @@ static void *tpm2_bios_measurements_next(struct seq_file *m, void *v, + if (((v + event_size) >= limit) || (event_size == 0)) + return NULL; + +- (*pos)++; + return v; + } + +diff --git a/drivers/char/tpm/tpm-chip.c 
b/drivers/char/tpm/tpm-chip.c +index 0b01eb7b14e5..4946c5b37d04 100644 +--- a/drivers/char/tpm/tpm-chip.c ++++ b/drivers/char/tpm/tpm-chip.c +@@ -463,9 +463,7 @@ int tpm_chip_register(struct tpm_chip *chip) + + tpm_sysfs_add_device(chip); + +- rc = tpm_bios_log_setup(chip); +- if (rc != 0 && rc != -ENODEV) +- return rc; ++ tpm_bios_log_setup(chip); + + tpm_add_ppi(chip); + +diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h +index f3501d05264f..289221d653cb 100644 +--- a/drivers/char/tpm/tpm.h ++++ b/drivers/char/tpm/tpm.h +@@ -602,6 +602,6 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u32 cc, + int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, + u32 cc, u8 *buf, size_t *bufsiz); + +-int tpm_bios_log_setup(struct tpm_chip *chip); ++void tpm_bios_log_setup(struct tpm_chip *chip); + void tpm_bios_log_teardown(struct tpm_chip *chip); + #endif +diff --git a/drivers/clk/ingenic/jz4770-cgu.c b/drivers/clk/ingenic/jz4770-cgu.c +index bf46a0df2004..e3057bb5ffd8 100644 +--- a/drivers/clk/ingenic/jz4770-cgu.c ++++ b/drivers/clk/ingenic/jz4770-cgu.c +@@ -436,8 +436,10 @@ static void __init jz4770_cgu_init(struct device_node *np) + + cgu = ingenic_cgu_new(jz4770_cgu_clocks, + ARRAY_SIZE(jz4770_cgu_clocks), np); +- if (!cgu) ++ if (!cgu) { + pr_err("%s: failed to initialise CGU\n", __func__); ++ return; ++ } + + retval = ingenic_cgu_register_clocks(cgu); + if (retval) +diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c +index d8c3595e9023..a0cbbdfc7735 100644 +--- a/drivers/cpufreq/imx6q-cpufreq.c ++++ b/drivers/cpufreq/imx6q-cpufreq.c +@@ -310,6 +310,9 @@ static int imx6ul_opp_check_speed_grading(struct device *dev) + void __iomem *base; + + np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp"); ++ if (!np) ++ np = of_find_compatible_node(NULL, NULL, ++ "fsl,imx6ull-ocotp"); + if (!np) + return -ENOENT; + +diff --git a/drivers/cpufreq/powernv-cpufreq.c 
b/drivers/cpufreq/powernv-cpufreq.c +index 5fff39dae625..687c92ef7644 100644 +--- a/drivers/cpufreq/powernv-cpufreq.c ++++ b/drivers/cpufreq/powernv-cpufreq.c +@@ -1081,6 +1081,12 @@ free_and_return: + + static inline void clean_chip_info(void) + { ++ int i; ++ ++ /* flush any pending work items */ ++ if (chips) ++ for (i = 0; i < nr_chips; i++) ++ cancel_work_sync(&chips[i].throttle); + kfree(chips); + } + +diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c +index edacf9b39b63..ceb033930535 100644 +--- a/drivers/crypto/caam/caamalg_desc.c ++++ b/drivers/crypto/caam/caamalg_desc.c +@@ -1457,7 +1457,13 @@ EXPORT_SYMBOL(cnstr_shdsc_ablkcipher_givencap); + */ + void cnstr_shdsc_xts_ablkcipher_encap(u32 * const desc, struct alginfo *cdata) + { +- __be64 sector_size = cpu_to_be64(512); ++ /* ++ * Set sector size to a big value, practically disabling ++ * sector size segmentation in xts implementation. We cannot ++ * take full advantage of this HW feature with existing ++ * crypto API / dm-crypt SW architecture. ++ */ ++ __be64 sector_size = cpu_to_be64(BIT(15)); + u32 *key_jump_cmd; + + init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX); +@@ -1509,7 +1515,13 @@ EXPORT_SYMBOL(cnstr_shdsc_xts_ablkcipher_encap); + */ + void cnstr_shdsc_xts_ablkcipher_decap(u32 * const desc, struct alginfo *cdata) + { +- __be64 sector_size = cpu_to_be64(512); ++ /* ++ * Set sector size to a big value, practically disabling ++ * sector size segmentation in xts implementation. We cannot ++ * take full advantage of this HW feature with existing ++ * crypto API / dm-crypt SW architecture. 
++ */ ++ __be64 sector_size = cpu_to_be64(BIT(15)); + u32 *key_jump_cmd; + + init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX); +diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c +index aa6b45bc13b9..57aac15a335f 100644 +--- a/drivers/crypto/ccree/cc_aead.c ++++ b/drivers/crypto/ccree/cc_aead.c +@@ -731,7 +731,7 @@ static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode, + dev_dbg(dev, "ASSOC buffer type DLLI\n"); + hw_desc_init(&desc[idx]); + set_din_type(&desc[idx], DMA_DLLI, sg_dma_address(areq->src), +- areq->assoclen, NS_BIT); ++ areq_ctx->assoclen, NS_BIT); + set_flow_mode(&desc[idx], flow_mode); + if (ctx->auth_mode == DRV_HASH_XCBC_MAC && + areq_ctx->cryptlen > 0) +@@ -1080,9 +1080,11 @@ static void cc_proc_header_desc(struct aead_request *req, + struct cc_hw_desc desc[], + unsigned int *seq_size) + { ++ struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + unsigned int idx = *seq_size; ++ + /* Hash associated data */ +- if (req->assoclen > 0) ++ if (areq_ctx->assoclen > 0) + cc_set_assoc_desc(req, DIN_HASH, desc, &idx); + + /* Hash IV */ +@@ -1310,7 +1312,7 @@ static int validate_data_size(struct cc_aead_ctx *ctx, + { + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + struct device *dev = drvdata_to_dev(ctx->drvdata); +- unsigned int assoclen = req->assoclen; ++ unsigned int assoclen = areq_ctx->assoclen; + unsigned int cipherlen = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ? 
+ (req->cryptlen - ctx->authsize) : req->cryptlen; + +@@ -1469,7 +1471,7 @@ static int cc_ccm(struct aead_request *req, struct cc_hw_desc desc[], + idx++; + + /* process assoc data */ +- if (req->assoclen > 0) { ++ if (req_ctx->assoclen > 0) { + cc_set_assoc_desc(req, DIN_HASH, desc, &idx); + } else { + hw_desc_init(&desc[idx]); +@@ -1561,7 +1563,7 @@ static int config_ccm_adata(struct aead_request *req) + * NIST Special Publication 800-38C + */ + *b0 |= (8 * ((m - 2) / 2)); +- if (req->assoclen > 0) ++ if (req_ctx->assoclen > 0) + *b0 |= 64; /* Enable bit 6 if Adata exists. */ + + rc = set_msg_len(b0 + 16 - l, cryptlen, l); /* Write L'. */ +@@ -1572,7 +1574,7 @@ static int config_ccm_adata(struct aead_request *req) + /* END of "taken from crypto/ccm.c" */ + + /* l(a) - size of associated data. */ +- req_ctx->ccm_hdr_size = format_ccm_a0(a0, req->assoclen); ++ req_ctx->ccm_hdr_size = format_ccm_a0(a0, req_ctx->assoclen); + + memset(req->iv + 15 - req->iv[0], 0, req->iv[0] + 1); + req->iv[15] = 1; +@@ -1604,7 +1606,7 @@ static void cc_proc_rfc4309_ccm(struct aead_request *req) + memcpy(areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET, req->iv, + CCM_BLOCK_IV_SIZE); + req->iv = areq_ctx->ctr_iv; +- req->assoclen -= CCM_BLOCK_IV_SIZE; ++ areq_ctx->assoclen -= CCM_BLOCK_IV_SIZE; + } + + static void cc_set_ghash_desc(struct aead_request *req, +@@ -1812,7 +1814,7 @@ static int cc_gcm(struct aead_request *req, struct cc_hw_desc desc[], + // for gcm and rfc4106. 
+ cc_set_ghash_desc(req, desc, seq_size); + /* process(ghash) assoc data */ +- if (req->assoclen > 0) ++ if (req_ctx->assoclen > 0) + cc_set_assoc_desc(req, DIN_HASH, desc, seq_size); + cc_set_gctr_desc(req, desc, seq_size); + /* process(gctr+ghash) */ +@@ -1836,8 +1838,8 @@ static int config_gcm_context(struct aead_request *req) + (req->cryptlen - ctx->authsize); + __be32 counter = cpu_to_be32(2); + +- dev_dbg(dev, "%s() cryptlen = %d, req->assoclen = %d ctx->authsize = %d\n", +- __func__, cryptlen, req->assoclen, ctx->authsize); ++ dev_dbg(dev, "%s() cryptlen = %d, req_ctx->assoclen = %d ctx->authsize = %d\n", ++ __func__, cryptlen, req_ctx->assoclen, ctx->authsize); + + memset(req_ctx->hkey, 0, AES_BLOCK_SIZE); + +@@ -1853,7 +1855,7 @@ static int config_gcm_context(struct aead_request *req) + if (!req_ctx->plaintext_authenticate_only) { + __be64 temp64; + +- temp64 = cpu_to_be64(req->assoclen * 8); ++ temp64 = cpu_to_be64(req_ctx->assoclen * 8); + memcpy(&req_ctx->gcm_len_block.len_a, &temp64, sizeof(temp64)); + temp64 = cpu_to_be64(cryptlen * 8); + memcpy(&req_ctx->gcm_len_block.len_c, &temp64, 8); +@@ -1863,8 +1865,8 @@ static int config_gcm_context(struct aead_request *req) + */ + __be64 temp64; + +- temp64 = cpu_to_be64((req->assoclen + GCM_BLOCK_RFC4_IV_SIZE + +- cryptlen) * 8); ++ temp64 = cpu_to_be64((req_ctx->assoclen + ++ GCM_BLOCK_RFC4_IV_SIZE + cryptlen) * 8); + memcpy(&req_ctx->gcm_len_block.len_a, &temp64, sizeof(temp64)); + temp64 = 0; + memcpy(&req_ctx->gcm_len_block.len_c, &temp64, 8); +@@ -1884,7 +1886,7 @@ static void cc_proc_rfc4_gcm(struct aead_request *req) + memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_IV_OFFSET, req->iv, + GCM_BLOCK_RFC4_IV_SIZE); + req->iv = areq_ctx->ctr_iv; +- req->assoclen -= GCM_BLOCK_RFC4_IV_SIZE; ++ areq_ctx->assoclen -= GCM_BLOCK_RFC4_IV_SIZE; + } + + static int cc_proc_aead(struct aead_request *req, +@@ -1909,7 +1911,7 @@ static int cc_proc_aead(struct aead_request *req, + /* Check data length according to mode */ + 
if (validate_data_size(ctx, direct, req)) { + dev_err(dev, "Unsupported crypt/assoc len %d/%d.\n", +- req->cryptlen, req->assoclen); ++ req->cryptlen, areq_ctx->assoclen); + crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN); + return -EINVAL; + } +@@ -2058,8 +2060,11 @@ static int cc_aead_encrypt(struct aead_request *req) + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + areq_ctx->is_gcm4543 = false; + +@@ -2087,8 +2092,11 @@ static int cc_rfc4309_ccm_encrypt(struct aead_request *req) + goto out; + } + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + areq_ctx->is_gcm4543 = true; + +@@ -2106,8 +2114,11 @@ static int cc_aead_decrypt(struct aead_request *req) + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + areq_ctx->is_gcm4543 = false; + +@@ -2133,8 +2144,11 @@ static int cc_rfc4309_ccm_decrypt(struct aead_request *req) + goto out; + } + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + + areq_ctx->is_gcm4543 = true; +@@ -2250,8 +2264,11 @@ static int cc_rfc4106_gcm_encrypt(struct aead_request *req) + goto out; + } + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + + areq_ctx->plaintext_authenticate_only = false; +@@ -2273,11 +2290,14 @@ static int 
cc_rfc4543_gcm_encrypt(struct aead_request *req) + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + //plaintext is not encryped with rfc4543 + areq_ctx->plaintext_authenticate_only = true; + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + + cc_proc_rfc4_gcm(req); +@@ -2305,8 +2325,11 @@ static int cc_rfc4106_gcm_decrypt(struct aead_request *req) + goto out; + } + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + + areq_ctx->plaintext_authenticate_only = false; +@@ -2328,11 +2351,14 @@ static int cc_rfc4543_gcm_decrypt(struct aead_request *req) + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + int rc; + ++ memset(areq_ctx, 0, sizeof(*areq_ctx)); ++ + //plaintext is not decryped with rfc4543 + areq_ctx->plaintext_authenticate_only = true; + + /* No generated IV required */ + areq_ctx->backup_iv = req->iv; ++ areq_ctx->assoclen = req->assoclen; + areq_ctx->backup_giv = NULL; + + cc_proc_rfc4_gcm(req); +diff --git a/drivers/crypto/ccree/cc_aead.h b/drivers/crypto/ccree/cc_aead.h +index 5edf3b351fa4..74bc99067f18 100644 +--- a/drivers/crypto/ccree/cc_aead.h ++++ b/drivers/crypto/ccree/cc_aead.h +@@ -67,6 +67,7 @@ struct aead_req_ctx { + u8 backup_mac[MAX_MAC_SIZE]; + u8 *backup_iv; /*store iv for generated IV flow*/ + u8 *backup_giv; /*store iv for rfc3686(ctr) flow*/ ++ u32 assoclen; /* internal assoclen */ + dma_addr_t mac_buf_dma_addr; /* internal ICV DMA buffer */ + /* buffer for internal ccm configurations */ + dma_addr_t ccm_iv0_dma_addr; +diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c +index 90b4870078fb..630020255941 100644 +--- a/drivers/crypto/ccree/cc_buffer_mgr.c ++++ b/drivers/crypto/ccree/cc_buffer_mgr.c +@@ -65,7 +65,7 
@@ static void cc_copy_mac(struct device *dev, struct aead_request *req, + { + struct aead_req_ctx *areq_ctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); +- u32 skip = req->assoclen + req->cryptlen; ++ u32 skip = areq_ctx->assoclen + req->cryptlen; + + if (areq_ctx->is_gcm4543) + skip += crypto_aead_ivsize(tfm); +@@ -460,10 +460,8 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, + /* Map the src SGL */ + rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents, + LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents); +- if (rc) { +- rc = -ENOMEM; ++ if (rc) + goto cipher_exit; +- } + if (mapped_nents > 1) + req_ctx->dma_buf_type = CC_DMA_BUF_MLLI; + +@@ -477,12 +475,11 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, + } + } else { + /* Map the dst sg */ +- if (cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL, +- &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, +- &dummy, &mapped_nents)) { +- rc = -ENOMEM; ++ rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL, ++ &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, ++ &dummy, &mapped_nents); ++ if (rc) + goto cipher_exit; +- } + if (mapped_nents > 1) + req_ctx->dma_buf_type = CC_DMA_BUF_MLLI; + +@@ -577,8 +574,8 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req) + + dev_dbg(dev, "Unmapping src sgl: req->src=%pK areq_ctx->src.nents=%u areq_ctx->assoc.nents=%u assoclen:%u cryptlen=%u\n", + sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents, +- req->assoclen, req->cryptlen); +- size_to_unmap = req->assoclen + req->cryptlen; ++ areq_ctx->assoclen, req->cryptlen); ++ size_to_unmap = areq_ctx->assoclen + req->cryptlen; + if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) + size_to_unmap += areq_ctx->req_authsize; + if (areq_ctx->is_gcm4543) +@@ -720,7 +717,7 @@ static int cc_aead_chain_assoc(struct cc_drvdata *drvdata, + struct scatterlist *current_sg = req->src; + struct crypto_aead *tfm = 
crypto_aead_reqtfm(req); + unsigned int sg_index = 0; +- u32 size_of_assoc = req->assoclen; ++ u32 size_of_assoc = areq_ctx->assoclen; + struct device *dev = drvdata_to_dev(drvdata); + + if (areq_ctx->is_gcm4543) +@@ -731,7 +728,7 @@ static int cc_aead_chain_assoc(struct cc_drvdata *drvdata, + goto chain_assoc_exit; + } + +- if (req->assoclen == 0) { ++ if (areq_ctx->assoclen == 0) { + areq_ctx->assoc_buff_type = CC_DMA_BUF_NULL; + areq_ctx->assoc.nents = 0; + areq_ctx->assoc.mlli_nents = 0; +@@ -791,7 +788,7 @@ static int cc_aead_chain_assoc(struct cc_drvdata *drvdata, + cc_dma_buf_type(areq_ctx->assoc_buff_type), + areq_ctx->assoc.nents); + cc_add_sg_entry(dev, sg_data, areq_ctx->assoc.nents, req->src, +- req->assoclen, 0, is_last, ++ areq_ctx->assoclen, 0, is_last, + &areq_ctx->assoc.mlli_nents); + areq_ctx->assoc_buff_type = CC_DMA_BUF_MLLI; + } +@@ -975,11 +972,11 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata, + u32 src_mapped_nents = 0, dst_mapped_nents = 0; + u32 offset = 0; + /* non-inplace mode */ +- unsigned int size_for_map = req->assoclen + req->cryptlen; ++ unsigned int size_for_map = areq_ctx->assoclen + req->cryptlen; + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + u32 sg_index = 0; + bool is_gcm4543 = areq_ctx->is_gcm4543; +- u32 size_to_skip = req->assoclen; ++ u32 size_to_skip = areq_ctx->assoclen; + + if (is_gcm4543) + size_to_skip += crypto_aead_ivsize(tfm); +@@ -1023,9 +1020,13 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata, + areq_ctx->src_offset = offset; + + if (req->src != req->dst) { +- size_for_map = req->assoclen + req->cryptlen; +- size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? 
+- authsize : 0; ++ size_for_map = areq_ctx->assoclen + req->cryptlen; ++ ++ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ++ size_for_map += authsize; ++ else ++ size_for_map -= authsize; ++ + if (is_gcm4543) + size_for_map += crypto_aead_ivsize(tfm); + +@@ -1033,10 +1034,8 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata, + &areq_ctx->dst.nents, + LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes, + &dst_mapped_nents); +- if (rc) { +- rc = -ENOMEM; ++ if (rc) + goto chain_data_exit; +- } + } + + dst_mapped_nents = cc_get_sgl_nents(dev, req->dst, size_for_map, +@@ -1190,11 +1189,10 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req) + } + areq_ctx->ccm_iv0_dma_addr = dma_addr; + +- if (cc_set_aead_conf_buf(dev, areq_ctx, areq_ctx->ccm_config, +- &sg_data, req->assoclen)) { +- rc = -ENOMEM; ++ rc = cc_set_aead_conf_buf(dev, areq_ctx, areq_ctx->ccm_config, ++ &sg_data, areq_ctx->assoclen); ++ if (rc) + goto aead_map_failure; +- } + } + + if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) { +@@ -1243,10 +1241,12 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req) + areq_ctx->gcm_iv_inc2_dma_addr = dma_addr; + } + +- size_to_map = req->cryptlen + req->assoclen; +- if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) ++ size_to_map = req->cryptlen + areq_ctx->assoclen; ++ /* If we do in-place encryption, we also need the auth tag */ ++ if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) && ++ (req->src == req->dst)) { + size_to_map += authsize; +- ++ } + if (is_gcm4543) + size_to_map += crypto_aead_ivsize(tfm); + rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL, +@@ -1254,10 +1254,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req) + (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES + + LLI_MAX_NUM_OF_DATA_ENTRIES), + &dummy, &mapped_nents); +- if (rc) { +- rc = -ENOMEM; ++ if (rc) + goto aead_map_failure; +- } + + if (areq_ctx->is_single_pass) { + /* +@@ -1341,6 
+1339,7 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx, + struct mlli_params *mlli_params = &areq_ctx->mlli_params; + struct buffer_array sg_data; + struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle; ++ int rc = 0; + u32 dummy = 0; + u32 mapped_nents = 0; + +@@ -1360,18 +1359,18 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx, + /*TODO: copy data in case that buffer is enough for operation */ + /* map the previous buffer */ + if (*curr_buff_cnt) { +- if (cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt, +- &sg_data)) { +- return -ENOMEM; +- } ++ rc = cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt, ++ &sg_data); ++ if (rc) ++ return rc; + } + + if (src && nbytes > 0 && do_update) { +- if (cc_map_sg(dev, src, nbytes, DMA_TO_DEVICE, +- &areq_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, +- &dummy, &mapped_nents)) { ++ rc = cc_map_sg(dev, src, nbytes, DMA_TO_DEVICE, ++ &areq_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, ++ &dummy, &mapped_nents); ++ if (rc) + goto unmap_curr_buff; +- } + if (src && mapped_nents == 1 && + areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) { + memcpy(areq_ctx->buff_sg, src, +@@ -1390,7 +1389,8 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx, + /* add the src data to the sg_data */ + cc_add_sg_entry(dev, &sg_data, areq_ctx->in_nents, src, nbytes, + 0, true, &areq_ctx->mlli_nents); +- if (cc_generate_mlli(dev, &sg_data, mlli_params, flags)) ++ rc = cc_generate_mlli(dev, &sg_data, mlli_params, flags); ++ if (rc) + goto fail_unmap_din; + } + /* change the buffer index for the unmap function */ +@@ -1406,7 +1406,7 @@ unmap_curr_buff: + if (*curr_buff_cnt) + dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE); + +- return -ENOMEM; ++ return rc; + } + + int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx, +@@ -1425,6 +1425,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx, + struct buffer_array sg_data; + struct 
buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle; + unsigned int swap_index = 0; ++ int rc = 0; + u32 dummy = 0; + u32 mapped_nents = 0; + +@@ -1469,21 +1470,21 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx, + } + + if (*curr_buff_cnt) { +- if (cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt, +- &sg_data)) { +- return -ENOMEM; +- } ++ rc = cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt, ++ &sg_data); ++ if (rc) ++ return rc; + /* change the buffer index for next operation */ + swap_index = 1; + } + + if (update_data_len > *curr_buff_cnt) { +- if (cc_map_sg(dev, src, (update_data_len - *curr_buff_cnt), +- DMA_TO_DEVICE, &areq_ctx->in_nents, +- LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, +- &mapped_nents)) { ++ rc = cc_map_sg(dev, src, (update_data_len - *curr_buff_cnt), ++ DMA_TO_DEVICE, &areq_ctx->in_nents, ++ LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, ++ &mapped_nents); ++ if (rc) + goto unmap_curr_buff; +- } + if (mapped_nents == 1 && + areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) { + /* only one entry in the SG and no previous data */ +@@ -1503,7 +1504,8 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx, + cc_add_sg_entry(dev, &sg_data, areq_ctx->in_nents, src, + (update_data_len - *curr_buff_cnt), 0, true, + &areq_ctx->mlli_nents); +- if (cc_generate_mlli(dev, &sg_data, mlli_params, flags)) ++ rc = cc_generate_mlli(dev, &sg_data, mlli_params, flags); ++ if (rc) + goto fail_unmap_din; + } + areq_ctx->buff_index = (areq_ctx->buff_index ^ swap_index); +@@ -1517,7 +1519,7 @@ unmap_curr_buff: + if (*curr_buff_cnt) + dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE); + +- return -ENOMEM; ++ return rc; + } + + void cc_unmap_hash_request(struct device *dev, void *ctx, +diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c +index b926098f70ff..5c5c504dacb6 100644 +--- a/drivers/crypto/mxs-dcp.c ++++ b/drivers/crypto/mxs-dcp.c +@@ -25,6 +25,7 @@ + #include + #include + #include ++#include + + 
#define DCP_MAX_CHANS 4 + #define DCP_BUF_SZ PAGE_SIZE +@@ -621,49 +622,46 @@ static int dcp_sha_req_to_buf(struct crypto_async_request *arq) + struct dcp_async_ctx *actx = crypto_ahash_ctx(tfm); + struct dcp_sha_req_ctx *rctx = ahash_request_ctx(req); + struct hash_alg_common *halg = crypto_hash_alg_common(tfm); +- const int nents = sg_nents(req->src); + + uint8_t *in_buf = sdcp->coh->sha_in_buf; + uint8_t *out_buf = sdcp->coh->sha_out_buf; + +- uint8_t *src_buf; +- + struct scatterlist *src; + +- unsigned int i, len, clen; ++ unsigned int i, len, clen, oft = 0; + int ret; + + int fin = rctx->fini; + if (fin) + rctx->fini = 0; + +- for_each_sg(req->src, src, nents, i) { +- src_buf = sg_virt(src); +- len = sg_dma_len(src); +- +- do { +- if (actx->fill + len > DCP_BUF_SZ) +- clen = DCP_BUF_SZ - actx->fill; +- else +- clen = len; +- +- memcpy(in_buf + actx->fill, src_buf, clen); +- len -= clen; +- src_buf += clen; +- actx->fill += clen; ++ src = req->src; ++ len = req->nbytes; + +- /* +- * If we filled the buffer and still have some +- * more data, submit the buffer. +- */ +- if (len && actx->fill == DCP_BUF_SZ) { +- ret = mxs_dcp_run_sha(req); +- if (ret) +- return ret; +- actx->fill = 0; +- rctx->init = 0; +- } +- } while (len); ++ while (len) { ++ if (actx->fill + len > DCP_BUF_SZ) ++ clen = DCP_BUF_SZ - actx->fill; ++ else ++ clen = len; ++ ++ scatterwalk_map_and_copy(in_buf + actx->fill, src, oft, clen, ++ 0); ++ ++ len -= clen; ++ oft += clen; ++ actx->fill += clen; ++ ++ /* ++ * If we filled the buffer and still have some ++ * more data, submit the buffer. 
++ */ ++ if (len && actx->fill == DCP_BUF_SZ) { ++ ret = mxs_dcp_run_sha(req); ++ if (ret) ++ return ret; ++ actx->fill = 0; ++ rctx->init = 0; ++ } + } + + if (fin) { +diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c +index c64c7da73829..05b528c7ed8f 100644 +--- a/drivers/firmware/arm_sdei.c ++++ b/drivers/firmware/arm_sdei.c +@@ -489,11 +489,6 @@ static int _sdei_event_unregister(struct sdei_event *event) + { + lockdep_assert_held(&sdei_events_lock); + +- spin_lock(&sdei_list_lock); +- event->reregister = false; +- event->reenable = false; +- spin_unlock(&sdei_list_lock); +- + if (event->type == SDEI_EVENT_TYPE_SHARED) + return sdei_api_event_unregister(event->event_num); + +@@ -516,6 +511,11 @@ int sdei_event_unregister(u32 event_num) + break; + } + ++ spin_lock(&sdei_list_lock); ++ event->reregister = false; ++ event->reenable = false; ++ spin_unlock(&sdei_list_lock); ++ + err = _sdei_event_unregister(event); + if (err) + break; +@@ -583,26 +583,15 @@ static int _sdei_event_register(struct sdei_event *event) + + lockdep_assert_held(&sdei_events_lock); + +- spin_lock(&sdei_list_lock); +- event->reregister = true; +- spin_unlock(&sdei_list_lock); +- + if (event->type == SDEI_EVENT_TYPE_SHARED) + return sdei_api_event_register(event->event_num, + sdei_entry_point, + event->registered, + SDEI_EVENT_REGISTER_RM_ANY, 0); + +- + err = sdei_do_cross_call(_local_event_register, event); +- if (err) { +- spin_lock(&sdei_list_lock); +- event->reregister = false; +- event->reenable = false; +- spin_unlock(&sdei_list_lock); +- ++ if (err) + sdei_do_cross_call(_local_event_unregister, event); +- } + + return err; + } +@@ -630,8 +619,17 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg) + break; + } + ++ spin_lock(&sdei_list_lock); ++ event->reregister = true; ++ spin_unlock(&sdei_list_lock); ++ + err = _sdei_event_register(event); + if (err) { ++ spin_lock(&sdei_list_lock); ++ event->reregister = false; ++ event->reenable = 
false; ++ spin_unlock(&sdei_list_lock); ++ + sdei_event_destroy(event); + pr_warn("Failed to register event %u: %d\n", event_num, + err); +diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c +index d54fca902e64..f1e0a2715269 100644 +--- a/drivers/firmware/efi/efi.c ++++ b/drivers/firmware/efi/efi.c +@@ -572,7 +572,7 @@ int __init efi_config_parse_tables(void *config_tables, int count, int sz, + } + } + +- if (efi_enabled(EFI_MEMMAP)) ++ if (!IS_ENABLED(CONFIG_X86_32) && efi_enabled(EFI_MEMMAP)) + efi_memattr_init(); + + efi_tpm_eventlog_init(); +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index 7c3c323773d3..5f508ec321fe 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -2115,9 +2115,9 @@ static bool drm_dp_get_vc_payload_bw(int dp_link_bw, + int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state) + { + int ret = 0; +- int i = 0; + struct drm_dp_mst_branch *mstb = NULL; + ++ mutex_lock(&mgr->payload_lock); + mutex_lock(&mgr->lock); + if (mst_state == mgr->mst_state) + goto out_unlock; +@@ -2176,25 +2176,18 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms + /* this can fail if the device is gone */ + drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0); + ret = 0; +- mutex_lock(&mgr->payload_lock); +- memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload)); ++ memset(mgr->payloads, 0, ++ mgr->max_payloads * sizeof(mgr->payloads[0])); ++ memset(mgr->proposed_vcpis, 0, ++ mgr->max_payloads * sizeof(mgr->proposed_vcpis[0])); + mgr->payload_mask = 0; + set_bit(0, &mgr->payload_mask); +- for (i = 0; i < mgr->max_payloads; i++) { +- struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i]; +- +- if (vcpi) { +- vcpi->vcpi = 0; +- vcpi->num_slots = 0; +- } +- mgr->proposed_vcpis[i] = NULL; +- } + mgr->vcpi_mask = 0; +- mutex_unlock(&mgr->payload_lock); + } + + out_unlock: + 
mutex_unlock(&mgr->lock); ++ mutex_unlock(&mgr->payload_lock); + if (mstb) + drm_dp_put_mst_branch_device(mstb); + return ret; +diff --git a/drivers/gpu/drm/drm_pci.c b/drivers/gpu/drm/drm_pci.c +index 896e42a34895..d89a992829be 100644 +--- a/drivers/gpu/drm/drm_pci.c ++++ b/drivers/gpu/drm/drm_pci.c +@@ -46,8 +46,6 @@ + drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t align) + { + drm_dma_handle_t *dmah; +- unsigned long addr; +- size_t sz; + + /* pci_alloc_consistent only guarantees alignment to the smallest + * PAGE_SIZE order which is greater than or equal to the requested size. +@@ -61,22 +59,13 @@ drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t ali + return NULL; + + dmah->size = size; +- dmah->vaddr = dma_alloc_coherent(&dev->pdev->dev, size, &dmah->busaddr, GFP_KERNEL | __GFP_COMP); ++ dmah->vaddr = dma_alloc_coherent(&dev->pdev->dev, size, &dmah->busaddr, GFP_KERNEL); + + if (dmah->vaddr == NULL) { + kfree(dmah); + return NULL; + } + +- memset(dmah->vaddr, 0, size); +- +- /* XXX - Is virt_to_page() legal for consistent mem? */ +- /* Reserve */ +- for (addr = (unsigned long)dmah->vaddr, sz = size; +- sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { +- SetPageReserved(virt_to_page((void *)addr)); +- } +- + return dmah; + } + +@@ -89,19 +78,9 @@ EXPORT_SYMBOL(drm_pci_alloc); + */ + void __drm_legacy_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah) + { +- unsigned long addr; +- size_t sz; +- +- if (dmah->vaddr) { +- /* XXX - Is virt_to_page() legal for consistent mem? 
*/ +- /* Unreserve */ +- for (addr = (unsigned long)dmah->vaddr, sz = dmah->size; +- sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { +- ClearPageReserved(virt_to_page((void *)addr)); +- } ++ if (dmah->vaddr) + dma_free_coherent(&dev->pdev->dev, dmah->size, dmah->vaddr, + dmah->busaddr); +- } + } + + /** +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c +index 4227a4006c34..3ce77cbad4ae 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c +@@ -4,6 +4,7 @@ + * Copyright (C) 2017 Zodiac Inflight Innovations + */ + ++#include "common.xml.h" + #include "etnaviv_gpu.h" + #include "etnaviv_perfmon.h" + #include "state_hi.xml.h" +@@ -31,17 +32,11 @@ struct etnaviv_pm_domain { + }; + + struct etnaviv_pm_domain_meta { ++ unsigned int feature; + const struct etnaviv_pm_domain *domains; + u32 nr_domains; + }; + +-static u32 simple_reg_read(struct etnaviv_gpu *gpu, +- const struct etnaviv_pm_domain *domain, +- const struct etnaviv_pm_signal *signal) +-{ +- return gpu_read(gpu, signal->data); +-} +- + static u32 perf_reg_read(struct etnaviv_gpu *gpu, + const struct etnaviv_pm_domain *domain, + const struct etnaviv_pm_signal *signal) +@@ -75,6 +70,34 @@ static u32 pipe_reg_read(struct etnaviv_gpu *gpu, + return value; + } + ++static u32 hi_total_cycle_read(struct etnaviv_gpu *gpu, ++ const struct etnaviv_pm_domain *domain, ++ const struct etnaviv_pm_signal *signal) ++{ ++ u32 reg = VIVS_HI_PROFILE_TOTAL_CYCLES; ++ ++ if (gpu->identity.model == chipModel_GC880 || ++ gpu->identity.model == chipModel_GC2000 || ++ gpu->identity.model == chipModel_GC2100) ++ reg = VIVS_MC_PROFILE_CYCLE_COUNTER; ++ ++ return gpu_read(gpu, reg); ++} ++ ++static u32 hi_total_idle_cycle_read(struct etnaviv_gpu *gpu, ++ const struct etnaviv_pm_domain *domain, ++ const struct etnaviv_pm_signal *signal) ++{ ++ u32 reg = VIVS_HI_PROFILE_IDLE_CYCLES; ++ ++ if (gpu->identity.model == chipModel_GC880 || ++ 
gpu->identity.model == chipModel_GC2000 || ++ gpu->identity.model == chipModel_GC2100) ++ reg = VIVS_HI_PROFILE_TOTAL_CYCLES; ++ ++ return gpu_read(gpu, reg); ++} ++ + static const struct etnaviv_pm_domain doms_3d[] = { + { + .name = "HI", +@@ -84,13 +107,13 @@ static const struct etnaviv_pm_domain doms_3d[] = { + .signal = (const struct etnaviv_pm_signal[]) { + { + "TOTAL_CYCLES", +- VIVS_HI_PROFILE_TOTAL_CYCLES, +- &simple_reg_read ++ 0, ++ &hi_total_cycle_read + }, + { + "IDLE_CYCLES", +- VIVS_HI_PROFILE_IDLE_CYCLES, +- &simple_reg_read ++ 0, ++ &hi_total_idle_cycle_read + }, + { + "AXI_CYCLES_READ_REQUEST_STALLED", +@@ -388,36 +411,78 @@ static const struct etnaviv_pm_domain doms_vg[] = { + + static const struct etnaviv_pm_domain_meta doms_meta[] = { + { ++ .feature = chipFeatures_PIPE_3D, + .nr_domains = ARRAY_SIZE(doms_3d), + .domains = &doms_3d[0] + }, + { ++ .feature = chipFeatures_PIPE_2D, + .nr_domains = ARRAY_SIZE(doms_2d), + .domains = &doms_2d[0] + }, + { ++ .feature = chipFeatures_PIPE_VG, + .nr_domains = ARRAY_SIZE(doms_vg), + .domains = &doms_vg[0] + } + }; + ++static unsigned int num_pm_domains(const struct etnaviv_gpu *gpu) ++{ ++ unsigned int num = 0, i; ++ ++ for (i = 0; i < ARRAY_SIZE(doms_meta); i++) { ++ const struct etnaviv_pm_domain_meta *meta = &doms_meta[i]; ++ ++ if (gpu->identity.features & meta->feature) ++ num += meta->nr_domains; ++ } ++ ++ return num; ++} ++ ++static const struct etnaviv_pm_domain *pm_domain(const struct etnaviv_gpu *gpu, ++ unsigned int index) ++{ ++ const struct etnaviv_pm_domain *domain = NULL; ++ unsigned int offset = 0, i; ++ ++ for (i = 0; i < ARRAY_SIZE(doms_meta); i++) { ++ const struct etnaviv_pm_domain_meta *meta = &doms_meta[i]; ++ ++ if (!(gpu->identity.features & meta->feature)) ++ continue; ++ ++ if (meta->nr_domains < (index - offset)) { ++ offset += meta->nr_domains; ++ continue; ++ } ++ ++ domain = meta->domains + (index - offset); ++ } ++ ++ return domain; ++} ++ + int etnaviv_pm_query_dom(struct 
etnaviv_gpu *gpu, + struct drm_etnaviv_pm_domain *domain) + { +- const struct etnaviv_pm_domain_meta *meta = &doms_meta[domain->pipe]; ++ const unsigned int nr_domains = num_pm_domains(gpu); + const struct etnaviv_pm_domain *dom; + +- if (domain->iter >= meta->nr_domains) ++ if (domain->iter >= nr_domains) + return -EINVAL; + +- dom = meta->domains + domain->iter; ++ dom = pm_domain(gpu, domain->iter); ++ if (!dom) ++ return -EINVAL; + + domain->id = domain->iter; + domain->nr_signals = dom->nr_signals; + strncpy(domain->name, dom->name, sizeof(domain->name)); + + domain->iter++; +- if (domain->iter == meta->nr_domains) ++ if (domain->iter == nr_domains) + domain->iter = 0xff; + + return 0; +@@ -426,14 +491,16 @@ int etnaviv_pm_query_dom(struct etnaviv_gpu *gpu, + int etnaviv_pm_query_sig(struct etnaviv_gpu *gpu, + struct drm_etnaviv_pm_signal *signal) + { +- const struct etnaviv_pm_domain_meta *meta = &doms_meta[signal->pipe]; ++ const unsigned int nr_domains = num_pm_domains(gpu); + const struct etnaviv_pm_domain *dom; + const struct etnaviv_pm_signal *sig; + +- if (signal->domain >= meta->nr_domains) ++ if (signal->domain >= nr_domains) + return -EINVAL; + +- dom = meta->domains + signal->domain; ++ dom = pm_domain(gpu, signal->domain); ++ if (!dom) ++ return -EINVAL; + + if (signal->iter >= dom->nr_signals) + return -EINVAL; +diff --git a/drivers/i2c/busses/i2c-st.c b/drivers/i2c/busses/i2c-st.c +index 9e62f893958a..81158ae8bfe3 100644 +--- a/drivers/i2c/busses/i2c-st.c ++++ b/drivers/i2c/busses/i2c-st.c +@@ -437,6 +437,7 @@ static void st_i2c_wr_fill_tx_fifo(struct st_i2c_dev *i2c_dev) + /** + * st_i2c_rd_fill_tx_fifo() - Fill the Tx FIFO in read mode + * @i2c_dev: Controller's private data ++ * @max: Maximum amount of data to fill into the Tx FIFO + * + * This functions fills the Tx FIFO with fixed pattern when + * in read mode to trigger clock. 
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c +index 2db34f7b5ced..f41f3ff689c5 100644 +--- a/drivers/infiniband/hw/mlx5/main.c ++++ b/drivers/infiniband/hw/mlx5/main.c +@@ -1070,12 +1070,10 @@ static int mlx5_ib_query_device(struct ib_device *ibdev, + if (MLX5_CAP_ETH(mdev, tunnel_stateless_gre)) + resp.tunnel_offloads_caps |= + MLX5_IB_TUNNELED_OFFLOADS_GRE; +- if (MLX5_CAP_GEN(mdev, flex_parser_protocols) & +- MLX5_FLEX_PROTO_CW_MPLS_GRE) ++ if (MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_gre)) + resp.tunnel_offloads_caps |= + MLX5_IB_TUNNELED_OFFLOADS_MPLS_GRE; +- if (MLX5_CAP_GEN(mdev, flex_parser_protocols) & +- MLX5_FLEX_PROTO_CW_MPLS_UDP) ++ if (MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_udp)) + resp.tunnel_offloads_caps |= + MLX5_IB_TUNNELED_OFFLOADS_MPLS_UDP; + } +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h +index 136f6e7bf797..0d0f977a2f39 100644 +--- a/drivers/input/serio/i8042-x86ia64io.h ++++ b/drivers/input/serio/i8042-x86ia64io.h +@@ -534,6 +534,17 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = { + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"), + }, + }, ++ { ++ /* ++ * Acer Aspire 5738z ++ * Touchpad stops working in mux mode when dis- + re-enabled ++ * with the touchpad enable/disable toggle hotkey ++ */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"), ++ }, ++ }, + { } + }; + +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c +index f9b73336a39e..fe7d63cdfb1d 100644 +--- a/drivers/irqchip/irq-gic-v3-its.c ++++ b/drivers/irqchip/irq-gic-v3-its.c +@@ -2858,12 +2858,18 @@ static int its_vpe_set_irqchip_state(struct irq_data *d, + return 0; + } + ++static int its_vpe_retrigger(struct irq_data *d) ++{ ++ return !its_vpe_set_irqchip_state(d, IRQCHIP_STATE_PENDING, true); ++} ++ + static struct irq_chip its_vpe_irq_chip = { + .name = "GICv4-vpe", + 
.irq_mask = its_vpe_mask_irq, + .irq_unmask = its_vpe_unmask_irq, + .irq_eoi = irq_chip_eoi_parent, + .irq_set_affinity = its_vpe_set_affinity, ++ .irq_retrigger = its_vpe_retrigger, + .irq_set_irqchip_state = its_vpe_set_irqchip_state, + .irq_set_vcpu_affinity = its_vpe_set_vcpu_affinity, + }; +diff --git a/drivers/irqchip/irq-versatile-fpga.c b/drivers/irqchip/irq-versatile-fpga.c +index 928858dada75..f1386733d3bc 100644 +--- a/drivers/irqchip/irq-versatile-fpga.c ++++ b/drivers/irqchip/irq-versatile-fpga.c +@@ -6,6 +6,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -68,12 +69,16 @@ static void fpga_irq_unmask(struct irq_data *d) + + static void fpga_irq_handle(struct irq_desc *desc) + { ++ struct irq_chip *chip = irq_desc_get_chip(desc); + struct fpga_irq_data *f = irq_desc_get_handler_data(desc); +- u32 status = readl(f->base + IRQ_STATUS); ++ u32 status; ++ ++ chained_irq_enter(chip, desc); + ++ status = readl(f->base + IRQ_STATUS); + if (status == 0) { + do_bad_IRQ(desc); +- return; ++ goto out; + } + + do { +@@ -82,6 +87,9 @@ static void fpga_irq_handle(struct irq_desc *desc) + status &= ~(1 << irq); + generic_handle_irq(irq_find_mapping(f->domain, irq)); + } while (status); ++ ++out: ++ chained_irq_exit(chip, desc); + } + + /* +@@ -204,6 +212,9 @@ int __init fpga_irq_of_init(struct device_node *node, + if (of_property_read_u32(node, "valid-mask", &valid_mask)) + valid_mask = 0; + ++ writel(clear_mask, base + IRQ_ENABLE_CLEAR); ++ writel(clear_mask, base + FIQ_ENABLE_CLEAR); ++ + /* Some chips are cascaded from a parent IRQ */ + parent_irq = irq_of_parse_and_map(node, 0); + if (!parent_irq) { +@@ -213,9 +224,6 @@ int __init fpga_irq_of_init(struct device_node *node, + + fpga_irq_init(base, node->name, 0, parent_irq, valid_mask, node); + +- writel(clear_mask, base + IRQ_ENABLE_CLEAR); +- writel(clear_mask, base + FIQ_ENABLE_CLEAR); +- + /* + * On Versatile AB/PB, some secondary interrupts have a direct + * pass-thru to 
the primary controller for IRQs 20 and 22-31 which need +diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c +index 684af08d0747..0f4a2143bf55 100644 +--- a/drivers/md/dm-verity-fec.c ++++ b/drivers/md/dm-verity-fec.c +@@ -552,6 +552,7 @@ void verity_fec_dtr(struct dm_verity *v) + mempool_exit(&f->rs_pool); + mempool_exit(&f->prealloc_pool); + mempool_exit(&f->extra_pool); ++ mempool_exit(&f->output_pool); + kmem_cache_destroy(f->cache); + + if (f->data_bufio) +diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c +index 4e4a09054f85..a76eda50ad48 100644 +--- a/drivers/md/dm-writecache.c ++++ b/drivers/md/dm-writecache.c +@@ -878,6 +878,7 @@ static int writecache_alloc_entries(struct dm_writecache *wc) + struct wc_entry *e = &wc->entries[b]; + e->index = b; + e->write_in_progress = false; ++ cond_resched(); + } + + return 0; +@@ -932,6 +933,7 @@ static void writecache_resume(struct dm_target *ti) + e->original_sector = le64_to_cpu(wme.original_sector); + e->seq_count = le64_to_cpu(wme.seq_count); + } ++ cond_resched(); + } + #endif + for (b = 0; b < wc->n_blocks; b++) { +@@ -1764,8 +1766,10 @@ static int init_memory(struct dm_writecache *wc) + pmem_assign(sb(wc)->n_blocks, cpu_to_le64(wc->n_blocks)); + pmem_assign(sb(wc)->seq_count, cpu_to_le64(0)); + +- for (b = 0; b < wc->n_blocks; b++) ++ for (b = 0; b < wc->n_blocks; b++) { + write_original_sector_seq_count(wc, &wc->entries[b], -1, -1); ++ cond_resched(); ++ } + + writecache_flush_all_metadata(wc); + writecache_commit_flushed(wc, false); +diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c +index 086a870087cf..53eb21343b11 100644 +--- a/drivers/md/dm-zoned-metadata.c ++++ b/drivers/md/dm-zoned-metadata.c +@@ -1105,7 +1105,6 @@ static int dmz_init_zone(struct dmz_metadata *zmd, struct dm_zone *zone, + + if (blkz->type == BLK_ZONE_TYPE_CONVENTIONAL) { + set_bit(DMZ_RND, &zone->flags); +- zmd->nr_rnd_zones++; + } else if (blkz->type == 
BLK_ZONE_TYPE_SEQWRITE_REQ || + blkz->type == BLK_ZONE_TYPE_SEQWRITE_PREF) { + set_bit(DMZ_SEQ, &zone->flags); +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 62f214d43e15..9426976e0860 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -5874,7 +5874,7 @@ EXPORT_SYMBOL_GPL(md_stop_writes); + static void mddev_detach(struct mddev *mddev) + { + md_bitmap_wait_behind_writes(mddev); +- if (mddev->pers && mddev->pers->quiesce) { ++ if (mddev->pers && mddev->pers->quiesce && !mddev->suspended) { + mddev->pers->quiesce(mddev, 1); + mddev->pers->quiesce(mddev, 0); + } +diff --git a/drivers/media/i2c/ov5695.c b/drivers/media/i2c/ov5695.c +index 5d107c53364d..be242bb8fefc 100644 +--- a/drivers/media/i2c/ov5695.c ++++ b/drivers/media/i2c/ov5695.c +@@ -974,16 +974,9 @@ unlock_and_return: + return ret; + } + +-/* Calculate the delay in us by clock rate and clock cycles */ +-static inline u32 ov5695_cal_delay(u32 cycles) +-{ +- return DIV_ROUND_UP(cycles, OV5695_XVCLK_FREQ / 1000 / 1000); +-} +- + static int __ov5695_power_on(struct ov5695 *ov5695) + { +- int ret; +- u32 delay_us; ++ int i, ret; + struct device *dev = &ov5695->client->dev; + + ret = clk_prepare_enable(ov5695->xvclk); +@@ -994,21 +987,28 @@ static int __ov5695_power_on(struct ov5695 *ov5695) + + gpiod_set_value_cansleep(ov5695->reset_gpio, 1); + +- ret = regulator_bulk_enable(OV5695_NUM_SUPPLIES, ov5695->supplies); +- if (ret < 0) { +- dev_err(dev, "Failed to enable regulators\n"); +- goto disable_clk; ++ /* ++ * The hardware requires the regulators to be powered on in order, ++ * so enable them one by one. 
++ */ ++ for (i = 0; i < OV5695_NUM_SUPPLIES; i++) { ++ ret = regulator_enable(ov5695->supplies[i].consumer); ++ if (ret) { ++ dev_err(dev, "Failed to enable %s: %d\n", ++ ov5695->supplies[i].supply, ret); ++ goto disable_reg_clk; ++ } + } + + gpiod_set_value_cansleep(ov5695->reset_gpio, 0); + +- /* 8192 cycles prior to first SCCB transaction */ +- delay_us = ov5695_cal_delay(8192); +- usleep_range(delay_us, delay_us * 2); ++ usleep_range(1000, 1200); + + return 0; + +-disable_clk: ++disable_reg_clk: ++ for (--i; i >= 0; i--) ++ regulator_disable(ov5695->supplies[i].consumer); + clk_disable_unprepare(ov5695->xvclk); + + return ret; +@@ -1016,9 +1016,22 @@ disable_clk: + + static void __ov5695_power_off(struct ov5695 *ov5695) + { ++ struct device *dev = &ov5695->client->dev; ++ int i, ret; ++ + clk_disable_unprepare(ov5695->xvclk); + gpiod_set_value_cansleep(ov5695->reset_gpio, 1); +- regulator_bulk_disable(OV5695_NUM_SUPPLIES, ov5695->supplies); ++ ++ /* ++ * The hardware requires the regulators to be powered off in order, ++ * so disable them one by one. 
++ */ ++ for (i = OV5695_NUM_SUPPLIES - 1; i >= 0; i--) { ++ ret = regulator_disable(ov5695->supplies[i].consumer); ++ if (ret) ++ dev_err(dev, "Failed to disable %s: %d\n", ++ ov5695->supplies[i].supply, ret); ++ } + } + + static int __maybe_unused ov5695_runtime_resume(struct device *dev) +@@ -1288,7 +1301,7 @@ static int ov5695_probe(struct i2c_client *client, + if (clk_get_rate(ov5695->xvclk) != OV5695_XVCLK_FREQ) + dev_warn(dev, "xvclk mismatched, modes are based on 24MHz\n"); + +- ov5695->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); ++ ov5695->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); + if (IS_ERR(ov5695->reset_gpio)) { + dev_err(dev, "Failed to get reset-gpios\n"); + return -EINVAL; +diff --git a/drivers/media/i2c/video-i2c.c b/drivers/media/i2c/video-i2c.c +index f27d294dcbef..dd50acc085d8 100644 +--- a/drivers/media/i2c/video-i2c.c ++++ b/drivers/media/i2c/video-i2c.c +@@ -105,7 +105,7 @@ static int amg88xx_xfer(struct video_i2c_data *data, char *buf) + return (ret == 2) ? 
0 : -EIO; + } + +-#if IS_ENABLED(CONFIG_HWMON) ++#if IS_REACHABLE(CONFIG_HWMON) + + static const u32 amg88xx_temp_config[] = { + HWMON_T_INPUT, +diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c +index 2293d936e49c..7f515a4b9bd1 100644 +--- a/drivers/media/platform/qcom/venus/hfi_parser.c ++++ b/drivers/media/platform/qcom/venus/hfi_parser.c +@@ -181,6 +181,7 @@ static void parse_codecs(struct venus_core *core, void *data) + if (IS_V1(core)) { + core->dec_codecs &= ~HFI_VIDEO_CODEC_HEVC; + core->dec_codecs &= ~HFI_VIDEO_CODEC_SPARK; ++ core->enc_codecs &= ~HFI_VIDEO_CODEC_HEVC; + } + } + +diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c +index d1febe5baa6d..be3155275a6b 100644 +--- a/drivers/media/platform/ti-vpe/cal.c ++++ b/drivers/media/platform/ti-vpe/cal.c +@@ -541,16 +541,16 @@ static void enable_irqs(struct cal_ctx *ctx) + + static void disable_irqs(struct cal_ctx *ctx) + { ++ u32 val; ++ + /* Disable IRQ_WDMA_END 0/1 */ +- reg_write_field(ctx->dev, +- CAL_HL_IRQENABLE_CLR(2), +- CAL_HL_IRQ_CLEAR, +- CAL_HL_IRQ_MASK(ctx->csi2_port)); ++ val = 0; ++ set_field(&val, CAL_HL_IRQ_CLEAR, CAL_HL_IRQ_MASK(ctx->csi2_port)); ++ reg_write(ctx->dev, CAL_HL_IRQENABLE_CLR(2), val); + /* Disable IRQ_WDMA_START 0/1 */ +- reg_write_field(ctx->dev, +- CAL_HL_IRQENABLE_CLR(3), +- CAL_HL_IRQ_CLEAR, +- CAL_HL_IRQ_MASK(ctx->csi2_port)); ++ val = 0; ++ set_field(&val, CAL_HL_IRQ_CLEAR, CAL_HL_IRQ_MASK(ctx->csi2_port)); ++ reg_write(ctx->dev, CAL_HL_IRQENABLE_CLR(3), val); + /* Todo: Add VC_IRQ and CSI2_COMPLEXIO_IRQ handling */ + reg_write(ctx->dev, CAL_CSI2_VC_IRQENABLE(1), 0); + } +diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c +index 1476465ce803..6ea0dd37b453 100644 +--- a/drivers/mfd/dln2.c ++++ b/drivers/mfd/dln2.c +@@ -93,6 +93,11 @@ struct dln2_mod_rx_slots { + spinlock_t lock; + }; + ++enum dln2_endpoint { ++ DLN2_EP_OUT = 0, ++ DLN2_EP_IN = 1, ++}; ++ + struct dln2_dev 
{ + struct usb_device *usb_dev; + struct usb_interface *interface; +@@ -736,10 +741,10 @@ static int dln2_probe(struct usb_interface *interface, + hostif->desc.bNumEndpoints < 2) + return -ENODEV; + +- epin = &hostif->endpoint[0].desc; +- epout = &hostif->endpoint[1].desc; ++ epout = &hostif->endpoint[DLN2_EP_OUT].desc; + if (!usb_endpoint_is_bulk_out(epout)) + return -ENODEV; ++ epin = &hostif->endpoint[DLN2_EP_IN].desc; + if (!usb_endpoint_is_bulk_in(epin)) + return -ENODEV; + +diff --git a/drivers/misc/echo/echo.c b/drivers/misc/echo/echo.c +index 8a5adc0d2e88..3ebe5d75ad6a 100644 +--- a/drivers/misc/echo/echo.c ++++ b/drivers/misc/echo/echo.c +@@ -381,7 +381,7 @@ int16_t oslec_update(struct oslec_state *ec, int16_t tx, int16_t rx) + */ + ec->factor = 0; + ec->shift = 0; +- if ((ec->nonupdate_dwell == 0)) { ++ if (!ec->nonupdate_dwell) { + int p, logp, shift; + + /* Determine: +diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c +index 48b3ab26b124..ee0c74b02220 100644 +--- a/drivers/mtd/nand/spi/core.c ++++ b/drivers/mtd/nand/spi/core.c +@@ -629,18 +629,18 @@ static int spinand_mtd_write(struct mtd_info *mtd, loff_t to, + static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos) + { + struct spinand_device *spinand = nand_to_spinand(nand); ++ u8 marker[2] = { }; + struct nand_page_io_req req = { + .pos = *pos, +- .ooblen = 2, ++ .ooblen = sizeof(marker), + .ooboffs = 0, +- .oobbuf.in = spinand->oobbuf, ++ .oobbuf.in = marker, + .mode = MTD_OPS_RAW, + }; + +- memset(spinand->oobbuf, 0, 2); + spinand_select_target(spinand, pos->target); + spinand_read_page(spinand, &req, false); +- if (spinand->oobbuf[0] != 0xff || spinand->oobbuf[1] != 0xff) ++ if (marker[0] != 0xff || marker[1] != 0xff) + return true; + + return false; +@@ -664,15 +664,15 @@ static int spinand_mtd_block_isbad(struct mtd_info *mtd, loff_t offs) + static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos) + { + struct spinand_device 
*spinand = nand_to_spinand(nand); ++ u8 marker[2] = { }; + struct nand_page_io_req req = { + .pos = *pos, + .ooboffs = 0, +- .ooblen = 2, +- .oobbuf.out = spinand->oobbuf, ++ .ooblen = sizeof(marker), ++ .oobbuf.out = marker, + }; + int ret; + +- /* Erase block before marking it bad. */ + ret = spinand_select_target(spinand, pos->target); + if (ret) + return ret; +@@ -681,9 +681,6 @@ static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos) + if (ret) + return ret; + +- spinand_erase_op(spinand, pos); +- +- memset(spinand->oobbuf, 0, 2); + return spinand_write_page(spinand, &req); + } + +diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c +index 9f9d6cae39d5..758f2b836328 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c +@@ -246,6 +246,9 @@ static int cxgb4_ptp_fineadjtime(struct adapter *adapter, s64 delta) + FW_PTP_CMD_PORTID_V(0)); + c.retval_len16 = cpu_to_be32(FW_CMD_LEN16_V(sizeof(c) / 16)); + c.u.ts.sc = FW_PTP_SC_ADJ_FTIME; ++ c.u.ts.sign = (delta < 0) ? 
1 : 0; ++ if (delta < 0) ++ delta = -delta; + c.u.ts.tm = cpu_to_be64(delta); + + err = t4_wr_mbox(adapter, adapter->mbox, &c, sizeof(c), NULL); +diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +index 4d09ea786b35..ee715bf785ad 100644 +--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c ++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +@@ -398,7 +398,8 @@ static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq, + + spin_unlock_bh(&cmdq->cmdq_lock); + +- if (!wait_for_completion_timeout(&done, CMDQ_TIMEOUT)) { ++ if (!wait_for_completion_timeout(&done, ++ msecs_to_jiffies(CMDQ_TIMEOUT))) { + spin_lock_bh(&cmdq->cmdq_lock); + + if (cmdq->errcode[curr_prod_idx] == &errcode) +diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c +index 9deec13d98e9..4c91c8ceac5f 100644 +--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c ++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c +@@ -370,50 +370,6 @@ static int wait_for_db_state(struct hinic_hwdev *hwdev) + return -EFAULT; + } + +-static int wait_for_io_stopped(struct hinic_hwdev *hwdev) +-{ +- struct hinic_cmd_io_status cmd_io_status; +- struct hinic_hwif *hwif = hwdev->hwif; +- struct pci_dev *pdev = hwif->pdev; +- struct hinic_pfhwdev *pfhwdev; +- unsigned long end; +- u16 out_size; +- int err; +- +- if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) { +- dev_err(&pdev->dev, "Unsupported PCI Function type\n"); +- return -EINVAL; +- } +- +- pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev); +- +- cmd_io_status.func_idx = HINIC_HWIF_FUNC_IDX(hwif); +- +- end = jiffies + msecs_to_jiffies(IO_STATUS_TIMEOUT); +- do { +- err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM, +- HINIC_COMM_CMD_IO_STATUS_GET, +- &cmd_io_status, sizeof(cmd_io_status), +- &cmd_io_status, &out_size, +- HINIC_MGMT_MSG_SYNC); +- if ((err) || (out_size != sizeof(cmd_io_status))) { +- 
dev_err(&pdev->dev, "Failed to get IO status, ret = %d\n", +- err); +- return err; +- } +- +- if (cmd_io_status.status == IO_STOPPED) { +- dev_info(&pdev->dev, "IO stopped\n"); +- return 0; +- } +- +- msleep(20); +- } while (time_before(jiffies, end)); +- +- dev_err(&pdev->dev, "Wait for IO stopped - Timeout\n"); +- return -ETIMEDOUT; +-} +- + /** + * clear_io_resource - set the IO resources as not active in the NIC + * @hwdev: the NIC HW device +@@ -433,11 +389,8 @@ static int clear_io_resources(struct hinic_hwdev *hwdev) + return -EINVAL; + } + +- err = wait_for_io_stopped(hwdev); +- if (err) { +- dev_err(&pdev->dev, "IO has not stopped yet\n"); +- return err; +- } ++ /* sleep 100ms to wait for firmware stopping I/O */ ++ msleep(100); + + cmd_clear_io_res.func_idx = HINIC_HWIF_FUNC_IDX(hwif); + +diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c +index 278dc13f3dae..9fcf2e5e0003 100644 +--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c ++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c +@@ -52,7 +52,7 @@ + + #define MSG_NOT_RESP 0xFFFF + +-#define MGMT_MSG_TIMEOUT 1000 ++#define MGMT_MSG_TIMEOUT 5000 + + #define mgmt_to_pfhwdev(pf_mgmt) \ + container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt) +@@ -276,7 +276,8 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt, + goto unlock_sync_msg; + } + +- if (!wait_for_completion_timeout(recv_done, MGMT_MSG_TIMEOUT)) { ++ if (!wait_for_completion_timeout(recv_done, ++ msecs_to_jiffies(MGMT_MSG_TIMEOUT))) { + dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id); + err = -ETIMEDOUT; + goto unlock_sync_msg; +diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.h b/drivers/net/ethernet/neterion/vxge/vxge-config.h +index d743a37a3cee..e5dda2c27f18 100644 +--- a/drivers/net/ethernet/neterion/vxge/vxge-config.h ++++ b/drivers/net/ethernet/neterion/vxge/vxge-config.h +@@ -2065,7 +2065,7 @@ vxge_hw_vpath_strip_fcs_check(struct 
__vxge_hw_device *hldev, u64 vpath_mask); + if ((level >= VXGE_ERR && VXGE_COMPONENT_LL & VXGE_DEBUG_ERR_MASK) || \ + (level >= VXGE_TRACE && VXGE_COMPONENT_LL & VXGE_DEBUG_TRACE_MASK))\ + if ((mask & VXGE_DEBUG_MASK) == mask) \ +- printk(fmt "\n", __VA_ARGS__); \ ++ printk(fmt "\n", ##__VA_ARGS__); \ + } while (0) + #else + #define vxge_debug_ll(level, mask, fmt, ...) +diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.h b/drivers/net/ethernet/neterion/vxge/vxge-main.h +index 59a57ff5e96a..9c86f4f9cd42 100644 +--- a/drivers/net/ethernet/neterion/vxge/vxge-main.h ++++ b/drivers/net/ethernet/neterion/vxge/vxge-main.h +@@ -452,49 +452,49 @@ int vxge_fw_upgrade(struct vxgedev *vdev, char *fw_name, int override); + + #if (VXGE_DEBUG_LL_CONFIG & VXGE_DEBUG_MASK) + #define vxge_debug_ll_config(level, fmt, ...) \ +- vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_ll_config(level, fmt, ...) + #endif + + #if (VXGE_DEBUG_INIT & VXGE_DEBUG_MASK) + #define vxge_debug_init(level, fmt, ...) \ +- vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_init(level, fmt, ...) + #endif + + #if (VXGE_DEBUG_TX & VXGE_DEBUG_MASK) + #define vxge_debug_tx(level, fmt, ...) \ +- vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_tx(level, fmt, ...) + #endif + + #if (VXGE_DEBUG_RX & VXGE_DEBUG_MASK) + #define vxge_debug_rx(level, fmt, ...) \ +- vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_rx(level, fmt, ...) + #endif + + #if (VXGE_DEBUG_MEM & VXGE_DEBUG_MASK) + #define vxge_debug_mem(level, fmt, ...) 
\ +- vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_mem(level, fmt, ...) + #endif + + #if (VXGE_DEBUG_ENTRYEXIT & VXGE_DEBUG_MASK) + #define vxge_debug_entryexit(level, fmt, ...) \ +- vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_entryexit(level, fmt, ...) + #endif + + #if (VXGE_DEBUG_INTR & VXGE_DEBUG_MASK) + #define vxge_debug_intr(level, fmt, ...) \ +- vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, __VA_ARGS__) ++ vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, ##__VA_ARGS__) + #else + #define vxge_debug_intr(level, fmt, ...) + #endif +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c +index 07f9067affc6..cda5b0a9e948 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c +@@ -1720,7 +1720,7 @@ static int qlcnic_83xx_get_reset_instruction_template(struct qlcnic_adapter *p_d + + ahw->reset.seq_error = 0; + ahw->reset.buff = kzalloc(QLC_83XX_RESTART_TEMPLATE_SIZE, GFP_KERNEL); +- if (p_dev->ahw->reset.buff == NULL) ++ if (ahw->reset.buff == NULL) + return -ENOMEM; + + p_buff = p_dev->ahw->reset.buff; +diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +index 37786affa975..7389648d0fea 100644 +--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c ++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +@@ -288,7 +288,6 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[], + { + struct rmnet_priv *priv = netdev_priv(dev); + struct net_device *real_dev; +- struct rmnet_endpoint *ep; + struct rmnet_port *port; + u16 mux_id; + +@@ -303,19 +302,27 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[], + + if 
(data[IFLA_RMNET_MUX_ID]) { + mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]); +- if (rmnet_get_endpoint(port, mux_id)) { +- NL_SET_ERR_MSG_MOD(extack, "MUX ID already exists"); +- return -EINVAL; +- } +- ep = rmnet_get_endpoint(port, priv->mux_id); +- if (!ep) +- return -ENODEV; + +- hlist_del_init_rcu(&ep->hlnode); +- hlist_add_head_rcu(&ep->hlnode, &port->muxed_ep[mux_id]); ++ if (mux_id != priv->mux_id) { ++ struct rmnet_endpoint *ep; ++ ++ ep = rmnet_get_endpoint(port, priv->mux_id); ++ if (!ep) ++ return -ENODEV; + +- ep->mux_id = mux_id; +- priv->mux_id = mux_id; ++ if (rmnet_get_endpoint(port, mux_id)) { ++ NL_SET_ERR_MSG_MOD(extack, ++ "MUX ID already exists"); ++ return -EINVAL; ++ } ++ ++ hlist_del_init_rcu(&ep->hlnode); ++ hlist_add_head_rcu(&ep->hlnode, ++ &port->muxed_ep[mux_id]); ++ ++ ep->mux_id = mux_id; ++ priv->mux_id = mux_id; ++ } + } + + if (data[IFLA_RMNET_FLAGS]) { +diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c +index 74f98bbaea88..3e92b88045a5 100644 +--- a/drivers/net/wireless/ath/ath9k/main.c ++++ b/drivers/net/wireless/ath/ath9k/main.c +@@ -1457,6 +1457,9 @@ static int ath9k_config(struct ieee80211_hw *hw, u32 changed) + ath_chanctx_set_channel(sc, ctx, &hw->conf.chandef); + } + ++ if (changed & IEEE80211_CONF_CHANGE_POWER) ++ ath9k_set_txpower(sc, NULL); ++ + mutex_unlock(&sc->mutex); + ath9k_ps_restore(sc); + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index a8132e8d72bb..d5359c7c811a 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -2200,6 +2200,17 @@ static struct nvme_subsystem *__nvme_find_get_subsystem(const char *subsysnqn) + + lockdep_assert_held(&nvme_subsystems_lock); + ++ /* ++ * Fail matches for discovery subsystems. This results ++ * in each discovery controller bound to a unique subsystem. ++ * This avoids issues with validating controller values ++ * that can only be true when there is a single unique subsystem. 
++ * There may be multiple and completely independent entities ++ * that provide discovery controllers. ++ */ ++ if (!strcmp(subsysnqn, NVME_DISC_SUBSYS_NAME)) ++ return NULL; ++ + list_for_each_entry(subsys, &nvme_subsystems, entry) { + if (strcmp(subsys->subnqn, subsysnqn)) + continue; +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 1875f6b8a907..ed43b06353a3 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -342,8 +342,7 @@ nvme_fc_register_localport(struct nvme_fc_port_info *pinfo, + !template->ls_req || !template->fcp_io || + !template->ls_abort || !template->fcp_abort || + !template->max_hw_queues || !template->max_sgl_segments || +- !template->max_dif_sgl_segments || !template->dma_boundary || +- !template->module) { ++ !template->max_dif_sgl_segments || !template->dma_boundary) { + ret = -EINVAL; + goto out_reghost_failed; + } +@@ -1987,7 +1986,6 @@ nvme_fc_ctrl_free(struct kref *ref) + { + struct nvme_fc_ctrl *ctrl = + container_of(ref, struct nvme_fc_ctrl, ref); +- struct nvme_fc_lport *lport = ctrl->lport; + unsigned long flags; + + if (ctrl->ctrl.tagset) { +@@ -2013,7 +2011,6 @@ nvme_fc_ctrl_free(struct kref *ref) + if (ctrl->ctrl.opts) + nvmf_free_options(ctrl->ctrl.opts); + kfree(ctrl); +- module_put(lport->ops->module); + } + + static void +@@ -3055,15 +3052,10 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + goto out_fail; + } + +- if (!try_module_get(lport->ops->module)) { +- ret = -EUNATCH; +- goto out_free_ctrl; +- } +- + idx = ida_simple_get(&nvme_fc_ctrl_cnt, 0, 0, GFP_KERNEL); + if (idx < 0) { + ret = -ENOSPC; +- goto out_mod_put; ++ goto out_free_ctrl; + } + + ctrl->ctrl.opts = opts; +@@ -3205,8 +3197,6 @@ out_free_queues: + out_free_ida: + put_device(ctrl->dev); + ida_simple_remove(&nvme_fc_ctrl_cnt, ctrl->cnum); +-out_mod_put: +- module_put(lport->ops->module); + out_free_ctrl: + kfree(ctrl); + out_fail: +diff --git a/drivers/nvme/target/fcloop.c 
b/drivers/nvme/target/fcloop.c +index f0536d341f2f..291f4121f516 100644 +--- a/drivers/nvme/target/fcloop.c ++++ b/drivers/nvme/target/fcloop.c +@@ -825,7 +825,6 @@ fcloop_targetport_delete(struct nvmet_fc_target_port *targetport) + #define FCLOOP_DMABOUND_4G 0xFFFFFFFF + + static struct nvme_fc_port_template fctemplate = { +- .module = THIS_MODULE, + .localport_delete = fcloop_localport_delete, + .remoteport_delete = fcloop_remoteport_delete, + .create_queue = fcloop_create_queue, +diff --git a/drivers/pci/endpoint/pci-epc-mem.c b/drivers/pci/endpoint/pci-epc-mem.c +index 2bf8bd1f0563..0471643cf536 100644 +--- a/drivers/pci/endpoint/pci-epc-mem.c ++++ b/drivers/pci/endpoint/pci-epc-mem.c +@@ -79,6 +79,7 @@ int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size, + mem->page_size = page_size; + mem->pages = pages; + mem->size = size; ++ mutex_init(&mem->lock); + + epc->mem = mem; + +@@ -122,7 +123,7 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc, + phys_addr_t *phys_addr, size_t size) + { + int pageno; +- void __iomem *virt_addr; ++ void __iomem *virt_addr = NULL; + struct pci_epc_mem *mem = epc->mem; + unsigned int page_shift = ilog2(mem->page_size); + int order; +@@ -130,15 +131,18 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc, + size = ALIGN(size, mem->page_size); + order = pci_epc_mem_get_order(mem, size); + ++ mutex_lock(&mem->lock); + pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order); + if (pageno < 0) +- return NULL; ++ goto ret; + + *phys_addr = mem->phys_base + (pageno << page_shift); + virt_addr = ioremap(*phys_addr, size); + if (!virt_addr) + bitmap_release_region(mem->bitmap, pageno, order); + ++ret: ++ mutex_unlock(&mem->lock); + return virt_addr; + } + EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr); +@@ -164,7 +168,9 @@ void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr, + pageno = (phys_addr - mem->phys_base) >> page_shift; + size = ALIGN(size, mem->page_size); + order = 
pci_epc_mem_get_order(mem, size); ++ mutex_lock(&mem->lock); + bitmap_release_region(mem->bitmap, pageno, order); ++ mutex_unlock(&mem->lock); + } + EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr); + +diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c +index c3e3f5358b3b..07940d1d83b7 100644 +--- a/drivers/pci/hotplug/pciehp_hpc.c ++++ b/drivers/pci/hotplug/pciehp_hpc.c +@@ -627,17 +627,15 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) + if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) { + ret = pciehp_isr(irq, dev_id); + enable_irq(irq); +- if (ret != IRQ_WAKE_THREAD) { +- pci_config_pm_runtime_put(pdev); +- return ret; +- } ++ if (ret != IRQ_WAKE_THREAD) ++ goto out; + } + + synchronize_hardirq(irq); + events = atomic_xchg(&ctrl->pending_events, 0); + if (!events) { +- pci_config_pm_runtime_put(pdev); +- return IRQ_NONE; ++ ret = IRQ_NONE; ++ goto out; + } + + /* Check Attention Button Pressed */ +@@ -666,10 +664,12 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id) + pciehp_handle_presence_or_link_change(slot, events); + up_read(&ctrl->reset_lock); + ++ ret = IRQ_HANDLED; ++out: + pci_config_pm_runtime_put(pdev); + ctrl->ist_running = false; + wake_up(&ctrl->requester); +- return IRQ_HANDLED; ++ return ret; + } + + static int pciehp_poll(void *data) +diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c +index 1117b25fbe0b..af79a7168677 100644 +--- a/drivers/pci/pcie/aspm.c ++++ b/drivers/pci/pcie/aspm.c +@@ -747,9 +747,9 @@ static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state) + + /* Enable what we need to enable */ + pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, +- PCI_L1SS_CAP_L1_PM_SS, val); ++ PCI_L1SS_CTL1_L1SS_MASK, val); + pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1, +- PCI_L1SS_CAP_L1_PM_SS, val); ++ PCI_L1SS_CTL1_L1SS_MASK, val); + } + + static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val) +diff --git a/drivers/pci/quirks.c 
b/drivers/pci/quirks.c +index 419dda6dbd16..9e20ace30b62 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -1947,26 +1947,92 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_80332_1, quirk + /* + * IO-APIC1 on 6300ESB generates boot interrupts, see Intel order no + * 300641-004US, section 5.7.3. ++ * ++ * Core IO on Xeon E5 1600/2600/4600, see Intel order no 326509-003. ++ * Core IO on Xeon E5 v2, see Intel order no 329188-003. ++ * Core IO on Xeon E7 v2, see Intel order no 329595-002. ++ * Core IO on Xeon E5 v3, see Intel order no 330784-003. ++ * Core IO on Xeon E7 v3, see Intel order no 332315-001US. ++ * Core IO on Xeon E5 v4, see Intel order no 333810-002US. ++ * Core IO on Xeon E7 v4, see Intel order no 332315-001US. ++ * Core IO on Xeon D-1500, see Intel order no 332051-001. ++ * Core IO on Xeon Scalable, see Intel order no 610950. + */ +-#define INTEL_6300_IOAPIC_ABAR 0x40 ++#define INTEL_6300_IOAPIC_ABAR 0x40 /* Bus 0, Dev 29, Func 5 */ + #define INTEL_6300_DISABLE_BOOT_IRQ (1<<14) + ++#define INTEL_CIPINTRC_CFG_OFFSET 0x14C /* Bus 0, Dev 5, Func 0 */ ++#define INTEL_CIPINTRC_DIS_INTX_ICH (1<<25) ++ + static void quirk_disable_intel_boot_interrupt(struct pci_dev *dev) + { + u16 pci_config_word; ++ u32 pci_config_dword; + + if (noioapicquirk) + return; + +- pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, &pci_config_word); +- pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ; +- pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, pci_config_word); +- ++ switch (dev->device) { ++ case PCI_DEVICE_ID_INTEL_ESB_10: ++ pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, ++ &pci_config_word); ++ pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ; ++ pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, ++ pci_config_word); ++ break; ++ case 0x3c28: /* Xeon E5 1600/2600/4600 */ ++ case 0x0e28: /* Xeon E5/E7 V2 */ ++ case 0x2f28: /* Xeon E5/E7 V3,V4 */ ++ case 0x6f28: /* Xeon D-1500 */ ++ case 0x2034: /* Xeon Scalable Family */ ++ 
pci_read_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET, ++ &pci_config_dword); ++ pci_config_dword |= INTEL_CIPINTRC_DIS_INTX_ICH; ++ pci_write_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET, ++ pci_config_dword); ++ break; ++ default: ++ return; ++ } + pci_info(dev, "disabled boot interrupts on device [%04x:%04x]\n", + dev->vendor, dev->device); + } +-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); +-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); ++/* ++ * Device 29 Func 5 Device IDs of IO-APIC ++ * containing ABAR—APIC1 Alternate Base Address Register ++ */ ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, ++ quirk_disable_intel_boot_interrupt); ++ ++/* ++ * Device 5 Func 0 Device IDs of Core IO modules/hubs ++ * containing Coherent Interface Protocol Interrupt Control ++ * ++ * Device IDs obtained from volume 2 datasheets of commented ++ * families above. 
++ */ ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x3c28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0e28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2f28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x6f28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2034, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x3c28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x0e28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2f28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x6f28, ++ quirk_disable_intel_boot_interrupt); ++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2034, ++ quirk_disable_intel_boot_interrupt); + + /* Disable boot interrupts on HT-1000 */ + #define BC_HT1000_FEATURE_REG 0x64 +diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c +index 43431816412c..291c0074ad6f 100644 +--- a/drivers/pci/switch/switchtec.c ++++ b/drivers/pci/switch/switchtec.c +@@ -147,7 +147,7 @@ static int mrpc_queue_cmd(struct switchtec_user *stuser) + kref_get(&stuser->kref); + stuser->read_len = sizeof(stuser->data); + stuser_set_state(stuser, MRPC_QUEUED); +- init_completion(&stuser->comp); ++ reinit_completion(&stuser->comp); + list_add_tail(&stuser->list, &stdev->mrpc_queue); + + mrpc_cmd_submit(stdev); +diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c +index 323ff55cc165..080211dd916a 100644 +--- a/drivers/rtc/rtc-omap.c ++++ b/drivers/rtc/rtc-omap.c +@@ -561,9 +561,7 @@ static const struct pinctrl_ops rtc_pinctrl_ops = { + .dt_free_map = pinconf_generic_dt_free_map, + }; + +-enum rtc_pin_config_param { +- PIN_CONFIG_ACTIVE_HIGH = PIN_CONFIG_END + 1, +-}; ++#define PIN_CONFIG_ACTIVE_HIGH (PIN_CONFIG_END 
+ 1) + + static const struct pinconf_generic_params rtc_params[] = { + {"ti,active-high", PIN_CONFIG_ACTIVE_HIGH, 0}, +diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c +index f602b42b8343..7522aa06672d 100644 +--- a/drivers/s390/scsi/zfcp_erp.c ++++ b/drivers/s390/scsi/zfcp_erp.c +@@ -738,7 +738,7 @@ static void zfcp_erp_enqueue_ptp_port(struct zfcp_adapter *adapter) + adapter->peer_d_id); + if (IS_ERR(port)) /* error or port already attached */ + return; +- _zfcp_erp_port_reopen(port, 0, "ereptp1"); ++ zfcp_erp_port_reopen(port, 0, "ereptp1"); + } + + static int zfcp_erp_adapter_strat_fsf_xconf(struct zfcp_erp_action *erp_action) +diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c +index 6c355d87c709..f73726e55e44 100644 +--- a/drivers/scsi/lpfc/lpfc_nvme.c ++++ b/drivers/scsi/lpfc/lpfc_nvme.c +@@ -1903,8 +1903,6 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport, + + /* Declare and initialization an instance of the FC NVME template. 
*/ + static struct nvme_fc_port_template lpfc_nvme_template = { +- .module = THIS_MODULE, +- + /* initiator-based functions */ + .localport_delete = lpfc_nvme_localport_delete, + .remoteport_delete = lpfc_nvme_remoteport_delete, +diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c +index d3c944d99703..5a5e5c3da657 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c ++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c +@@ -9841,8 +9841,8 @@ static void scsih_remove(struct pci_dev *pdev) + + ioc->remove_host = 1; + +- mpt3sas_wait_for_commands_to_complete(ioc); +- _scsih_flush_running_cmds(ioc); ++ if (!pci_device_is_present(pdev)) ++ _scsih_flush_running_cmds(ioc); + + _scsih_fw_event_cleanup_queue(ioc); + +@@ -9919,8 +9919,8 @@ scsih_shutdown(struct pci_dev *pdev) + + ioc->remove_host = 1; + +- mpt3sas_wait_for_commands_to_complete(ioc); +- _scsih_flush_running_cmds(ioc); ++ if (!pci_device_is_present(pdev)) ++ _scsih_flush_running_cmds(ioc); + + _scsih_fw_event_cleanup_queue(ioc); + +diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c +index db367e428095..5590d6e8b576 100644 +--- a/drivers/scsi/qla2xxx/qla_nvme.c ++++ b/drivers/scsi/qla2xxx/qla_nvme.c +@@ -560,7 +560,6 @@ static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport) + } + + static struct nvme_fc_port_template qla_nvme_fc_transport = { +- .module = THIS_MODULE, + .localport_delete = qla_nvme_localport_delete, + .remoteport_delete = qla_nvme_remoteport_delete, + .create_queue = qla_nvme_alloc_queue, +diff --git a/drivers/staging/erofs/utils.c b/drivers/staging/erofs/utils.c +index 2d96820da62e..4de9c39535eb 100644 +--- a/drivers/staging/erofs/utils.c ++++ b/drivers/staging/erofs/utils.c +@@ -309,7 +309,7 @@ unsigned long erofs_shrink_scan(struct shrinker *shrink, + sbi->shrinker_run_no = run_no; + + #ifdef CONFIG_EROFS_FS_ZIP +- freed += erofs_shrink_workstation(sbi, nr, false); ++ freed += erofs_shrink_workstation(sbi, nr - freed, 
false); + #endif + + spin_lock(&erofs_sb_list_lock); +diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c +index 6666d2a52bf5..60d08269ad9a 100644 +--- a/drivers/usb/dwc3/core.c ++++ b/drivers/usb/dwc3/core.c +@@ -981,6 +981,9 @@ static int dwc3_core_init(struct dwc3 *dwc) + if (dwc->dis_tx_ipgap_linecheck_quirk) + reg |= DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS; + ++ if (dwc->parkmode_disable_ss_quirk) ++ reg |= DWC3_GUCTL1_PARKMODE_DISABLE_SS; ++ + dwc3_writel(dwc->regs, DWC3_GUCTL1, reg); + } + +@@ -1287,6 +1290,8 @@ static void dwc3_get_properties(struct dwc3 *dwc) + "snps,dis-del-phy-power-chg-quirk"); + dwc->dis_tx_ipgap_linecheck_quirk = device_property_read_bool(dev, + "snps,dis-tx-ipgap-linecheck-quirk"); ++ dwc->parkmode_disable_ss_quirk = device_property_read_bool(dev, ++ "snps,parkmode-disable-ss-quirk"); + + dwc->tx_de_emphasis_quirk = device_property_read_bool(dev, + "snps,tx_de_emphasis_quirk"); +diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h +index 131028501752..e34308d64619 100644 +--- a/drivers/usb/dwc3/core.h ++++ b/drivers/usb/dwc3/core.h +@@ -242,6 +242,7 @@ + #define DWC3_GUCTL_HSTINAUTORETRY BIT(14) + + /* Global User Control 1 Register */ ++#define DWC3_GUCTL1_PARKMODE_DISABLE_SS BIT(17) + #define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS BIT(28) + #define DWC3_GUCTL1_DEV_L1_EXIT_BY_HW BIT(24) + +@@ -992,6 +993,8 @@ struct dwc3_scratchpad_array { + * change quirk. + * @dis_tx_ipgap_linecheck_quirk: set if we disable u2mac linestate + * check during HS transmit. ++ * @parkmode_disable_ss_quirk: set if we need to disable all SuperSpeed ++ * instances in park mode. 
+ * @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk + * @tx_de_emphasis: Tx de-emphasis value + * 0 - -6dB de-emphasis +@@ -1163,6 +1166,7 @@ struct dwc3 { + unsigned dis_u2_freeclk_exists_quirk:1; + unsigned dis_del_phy_power_chg_quirk:1; + unsigned dis_tx_ipgap_linecheck_quirk:1; ++ unsigned parkmode_disable_ss_quirk:1; + + unsigned tx_de_emphasis_quirk:1; + unsigned tx_de_emphasis:2; +diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c +index 30aefd1adbad..f3436913fd3f 100644 +--- a/drivers/usb/gadget/composite.c ++++ b/drivers/usb/gadget/composite.c +@@ -847,6 +847,11 @@ static int set_config(struct usb_composite_dev *cdev, + else + power = min(power, 900U); + done: ++ if (power <= USB_SELF_POWER_VBUS_MAX_DRAW) ++ usb_gadget_set_selfpowered(gadget); ++ else ++ usb_gadget_clear_selfpowered(gadget); ++ + usb_gadget_vbus_draw(gadget, power); + if (result >= 0 && cdev->delayed_status) + result = USB_GADGET_DELAYED_STATUS; +@@ -2265,6 +2270,7 @@ void composite_suspend(struct usb_gadget *gadget) + + cdev->suspended = 1; + ++ usb_gadget_set_selfpowered(gadget); + usb_gadget_vbus_draw(gadget, 2); + } + +@@ -2293,6 +2299,9 @@ void composite_resume(struct usb_gadget *gadget) + else + maxpower = min(maxpower, 900U); + ++ if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW) ++ usb_gadget_clear_selfpowered(gadget); ++ + usb_gadget_vbus_draw(gadget, maxpower); + } + +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c +index a9239455eb6d..31b3dda3089c 100644 +--- a/drivers/usb/gadget/function/f_fs.c ++++ b/drivers/usb/gadget/function/f_fs.c +@@ -1036,6 +1036,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) + + ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC); + if (unlikely(ret)) { ++ io_data->req = NULL; + usb_ep_free_request(ep->ep, req); + goto error_lock; + } +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index 65cc362717fc..b4177287d7d0 100644 +--- 
a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -1147,8 +1147,10 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) + xhci_dbg(xhci, "Stop HCD\n"); + xhci_halt(xhci); + xhci_zero_64b_regs(xhci); +- xhci_reset(xhci); ++ retval = xhci_reset(xhci); + spin_unlock_irq(&xhci->lock); ++ if (retval) ++ return retval; + xhci_cleanup_msix(xhci); + + xhci_dbg(xhci, "// Disabling event ring interrupts\n"); +diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c +index 02e4e903dfe9..f79c0cb7697a 100644 +--- a/fs/btrfs/async-thread.c ++++ b/fs/btrfs/async-thread.c +@@ -434,3 +434,11 @@ void btrfs_set_work_high_priority(struct btrfs_work *work) + { + set_bit(WORK_HIGH_PRIO_BIT, &work->flags); + } ++ ++void btrfs_flush_workqueue(struct btrfs_workqueue *wq) ++{ ++ if (wq->high) ++ flush_workqueue(wq->high->normal_wq); ++ ++ flush_workqueue(wq->normal->normal_wq); ++} +diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h +index 7861c9feba5f..e87be7c8be58 100644 +--- a/fs/btrfs/async-thread.h ++++ b/fs/btrfs/async-thread.h +@@ -73,5 +73,6 @@ void btrfs_set_work_high_priority(struct btrfs_work *work); + struct btrfs_fs_info *btrfs_work_owner(const struct btrfs_work *work); + struct btrfs_fs_info *btrfs_workqueue_owner(const struct __btrfs_workqueue *wq); + bool btrfs_workqueue_normal_congested(const struct btrfs_workqueue *wq); ++void btrfs_flush_workqueue(struct btrfs_workqueue *wq); + + #endif +diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c +index e9522f2f25cc..7374fb23381c 100644 +--- a/fs/btrfs/delayed-inode.c ++++ b/fs/btrfs/delayed-inode.c +@@ -6,6 +6,7 @@ + + #include + #include ++#include + #include "delayed-inode.h" + #include "disk-io.h" + #include "transaction.h" +@@ -801,11 +802,14 @@ static int btrfs_insert_delayed_item(struct btrfs_trans_handle *trans, + struct btrfs_delayed_item *delayed_item) + { + struct extent_buffer *leaf; ++ unsigned int nofs_flag; + char *ptr; + int ret; + ++ nofs_flag = 
memalloc_nofs_save(); + ret = btrfs_insert_empty_item(trans, root, path, &delayed_item->key, + delayed_item->data_len); ++ memalloc_nofs_restore(nofs_flag); + if (ret < 0 && ret != -EEXIST) + return ret; + +@@ -933,6 +937,7 @@ static int btrfs_delete_delayed_items(struct btrfs_trans_handle *trans, + struct btrfs_delayed_node *node) + { + struct btrfs_delayed_item *curr, *prev; ++ unsigned int nofs_flag; + int ret = 0; + + do_again: +@@ -941,7 +946,9 @@ do_again: + if (!curr) + goto delete_fail; + ++ nofs_flag = memalloc_nofs_save(); + ret = btrfs_search_slot(trans, root, &curr->key, path, -1, 1); ++ memalloc_nofs_restore(nofs_flag); + if (ret < 0) + goto delete_fail; + else if (ret > 0) { +@@ -1008,6 +1015,7 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans, + struct btrfs_key key; + struct btrfs_inode_item *inode_item; + struct extent_buffer *leaf; ++ unsigned int nofs_flag; + int mod; + int ret; + +@@ -1020,7 +1028,9 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans, + else + mod = 1; + ++ nofs_flag = memalloc_nofs_save(); + ret = btrfs_lookup_inode(trans, root, path, &key, mod); ++ memalloc_nofs_restore(nofs_flag); + if (ret > 0) { + btrfs_release_path(path); + return -ENOENT; +@@ -1071,7 +1081,10 @@ search: + + key.type = BTRFS_INODE_EXTREF_KEY; + key.offset = -1; ++ ++ nofs_flag = memalloc_nofs_save(); + ret = btrfs_search_slot(trans, root, &key, path, -1, 1); ++ memalloc_nofs_restore(nofs_flag); + if (ret < 0) + goto err_out; + ASSERT(ret); +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index b5039b16de93..da7a2a530647 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -3007,6 +3007,18 @@ retry_root_backup: + fs_info->generation = generation; + fs_info->last_trans_committed = generation; + ++ /* ++ * If we have a uuid root and we're not being told to rescan we need to ++ * check the generation here so we can set the ++ * BTRFS_FS_UPDATE_UUID_TREE_GEN bit. 
Otherwise we could commit the ++ * transaction during a balance or the log replay without updating the ++ * uuid generation, and then if we crash we would rescan the uuid tree, ++ * even though it was perfectly fine. ++ */ ++ if (fs_info->uuid_root && !btrfs_test_opt(fs_info, RESCAN_UUID_TREE) && ++ fs_info->generation == btrfs_super_uuid_tree_generation(disk_super)) ++ set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags); ++ + ret = btrfs_verify_dev_extents(fs_info); + if (ret) { + btrfs_err(fs_info, +@@ -3237,8 +3249,6 @@ retry_root_backup: + close_ctree(fs_info); + return ret; + } +- } else { +- set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags); + } + set_bit(BTRFS_FS_OPEN, &fs_info->flags); + +@@ -3949,6 +3959,19 @@ void close_ctree(struct btrfs_fs_info *fs_info) + */ + btrfs_delete_unused_bgs(fs_info); + ++ /* ++ * There might be existing delayed inode workers still running ++ * and holding an empty delayed inode item. We must wait for ++ * them to complete first because they can create a transaction. ++ * This happens when someone calls btrfs_balance_delayed_items() ++ * and then a transaction commit runs the same delayed nodes ++ * before any delayed worker has done something with the nodes. ++ * We must wait for any worker here and not at transaction ++ * commit time since that could cause a deadlock. ++ * This is a very rare case. ++ */ ++ btrfs_flush_workqueue(fs_info->delayed_workers); ++ + ret = btrfs_commit_super(fs_info); + if (ret) + btrfs_err(fs_info, "commit super ret %d", ret); +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index c2c93fe9d7fd..dc1841855a69 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -2073,6 +2073,16 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + + btrfs_init_log_ctx(&ctx, inode); + ++ /* ++ * Set the range to full if the NO_HOLES feature is not enabled. ++ * This is to avoid missing file extent items representing holes after ++ * replaying the log. 
++ */ ++ if (!btrfs_fs_incompat(fs_info, NO_HOLES)) { ++ start = 0; ++ end = LLONG_MAX; ++ } ++ + /* + * We write the dirty pages in the range and wait until they complete + * out of the ->i_mutex. If so, we can flush the dirty pages by +@@ -2127,6 +2137,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + */ + ret = start_ordered_ops(inode, start, end); + if (ret) { ++ up_write(&BTRFS_I(inode)->dio_sem); + inode_unlock(inode); + goto out; + } +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c +index 0cd043f03081..cbd40826f5dc 100644 +--- a/fs/btrfs/qgroup.c ++++ b/fs/btrfs/qgroup.c +@@ -1032,6 +1032,7 @@ out_add_root: + ret = qgroup_rescan_init(fs_info, 0, 1); + if (!ret) { + qgroup_rescan_zero_tracking(fs_info); ++ fs_info->qgroup_rescan_running = true; + btrfs_queue_work(fs_info->qgroup_rescan_workers, + &fs_info->qgroup_rescan_work); + } +@@ -2906,7 +2907,6 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid, + sizeof(fs_info->qgroup_rescan_progress)); + fs_info->qgroup_rescan_progress.objectid = progress_objectid; + init_completion(&fs_info->qgroup_rescan_completion); +- fs_info->qgroup_rescan_running = true; + + spin_unlock(&fs_info->qgroup_lock); + mutex_unlock(&fs_info->qgroup_rescan_lock); +@@ -2972,8 +2972,11 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info) + + qgroup_rescan_zero_tracking(fs_info); + ++ mutex_lock(&fs_info->qgroup_rescan_lock); ++ fs_info->qgroup_rescan_running = true; + btrfs_queue_work(fs_info->qgroup_rescan_workers, + &fs_info->qgroup_rescan_work); ++ mutex_unlock(&fs_info->qgroup_rescan_lock); + + return 0; + } +@@ -3009,9 +3012,13 @@ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info, + void + btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info) + { +- if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) ++ if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) { ++ mutex_lock(&fs_info->qgroup_rescan_lock); ++ 
fs_info->qgroup_rescan_running = true; + btrfs_queue_work(fs_info->qgroup_rescan_workers, + &fs_info->qgroup_rescan_work); ++ mutex_unlock(&fs_info->qgroup_rescan_lock); ++ } + } + + /* +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c +index f98913061a40..d1c5cd90b182 100644 +--- a/fs/btrfs/relocation.c ++++ b/fs/btrfs/relocation.c +@@ -1141,7 +1141,7 @@ out: + free_backref_node(cache, lower); + } + +- free_backref_node(cache, node); ++ remove_backref_node(cache, node); + return ERR_PTR(err); + } + ASSERT(!node || !node->detached); +@@ -1253,7 +1253,7 @@ static int __must_check __add_reloc_root(struct btrfs_root *root) + if (!node) + return -ENOMEM; + +- node->bytenr = root->node->start; ++ node->bytenr = root->commit_root->start; + node->data = root; + + spin_lock(&rc->reloc_root_tree.lock); +@@ -1284,10 +1284,11 @@ static void __del_reloc_root(struct btrfs_root *root) + if (rc && root->node) { + spin_lock(&rc->reloc_root_tree.lock); + rb_node = tree_search(&rc->reloc_root_tree.rb_root, +- root->node->start); ++ root->commit_root->start); + if (rb_node) { + node = rb_entry(rb_node, struct mapping_node, rb_node); + rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root); ++ RB_CLEAR_NODE(&node->rb_node); + } + spin_unlock(&rc->reloc_root_tree.lock); + if (!node) +@@ -1305,7 +1306,7 @@ static void __del_reloc_root(struct btrfs_root *root) + * helper to update the 'address of tree root -> reloc tree' + * mapping + */ +-static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr) ++static int __update_reloc_root(struct btrfs_root *root) + { + struct btrfs_fs_info *fs_info = root->fs_info; + struct rb_node *rb_node; +@@ -1314,7 +1315,7 @@ static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr) + + spin_lock(&rc->reloc_root_tree.lock); + rb_node = tree_search(&rc->reloc_root_tree.rb_root, +- root->node->start); ++ root->commit_root->start); + if (rb_node) { + node = rb_entry(rb_node, struct mapping_node, rb_node); + 
rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root); +@@ -1326,7 +1327,7 @@ static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr) + BUG_ON((struct btrfs_root *)node->data != root); + + spin_lock(&rc->reloc_root_tree.lock); +- node->bytenr = new_bytenr; ++ node->bytenr = root->node->start; + rb_node = tree_insert(&rc->reloc_root_tree.rb_root, + node->bytenr, &node->rb_node); + spin_unlock(&rc->reloc_root_tree.lock); +@@ -1471,6 +1472,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans, + } + + if (reloc_root->commit_root != reloc_root->node) { ++ __update_reloc_root(reloc_root); + btrfs_set_root_node(root_item, reloc_root->node); + free_extent_buffer(reloc_root->commit_root); + reloc_root->commit_root = btrfs_root_node(reloc_root); +@@ -2434,7 +2436,21 @@ out: + free_reloc_roots(&reloc_roots); + } + +- BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root)); ++ /* ++ * We used to have ++ * ++ * BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root)); ++ * ++ * here, but it's wrong. If we fail to start the transaction in ++ * prepare_to_merge() we will have only 0 ref reloc roots, none of which ++ * have actually been removed from the reloc_root_tree rb tree. This is ++ * fine because we're bailing here, and we hold a reference on the root ++ * for the list that holds it, so these roots will be cleaned up when we ++ * do the reloc_dirty_list afterwards. Meanwhile the root->reloc_root ++ * will be cleaned up on unmount. ++ * ++ * The remaining nodes will be cleaned up by free_reloc_control. 
++ */ + } + + static void free_block_list(struct rb_root *blocks) +@@ -4585,11 +4601,6 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans, + BUG_ON(rc->stage == UPDATE_DATA_PTRS && + root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID); + +- if (root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) { +- if (buf == root->node) +- __update_reloc_root(root, cow->start); +- } +- + level = btrfs_header_level(buf); + if (btrfs_header_generation(buf) <= + btrfs_root_last_snapshot(&root->root_item)) +diff --git a/fs/cifs/file.c b/fs/cifs/file.c +index ad93b063f866..cfb0d91289ec 100644 +--- a/fs/cifs/file.c ++++ b/fs/cifs/file.c +@@ -3339,7 +3339,7 @@ again: + if (rc == -ENODATA) + rc = 0; + +- ctx->rc = (rc == 0) ? ctx->total_len : rc; ++ ctx->rc = (rc == 0) ? (ssize_t)ctx->total_len : rc; + + mutex_unlock(&ctx->aio_mutex); + +diff --git a/fs/exec.c b/fs/exec.c +index 561ea64829ec..3818813d725d 100644 +--- a/fs/exec.c ++++ b/fs/exec.c +@@ -1378,7 +1378,7 @@ void setup_new_exec(struct linux_binprm * bprm) + + /* An exec changes our domain. 
We are no longer part of the thread + group */ +- current->self_exec_id++; ++ WRITE_ONCE(current->self_exec_id, current->self_exec_id + 1); + flush_signal_handlers(current, 0); + } + EXPORT_SYMBOL(setup_new_exec); +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c +index 23b4b1745a39..56218c79a856 100644 +--- a/fs/ext4/inode.c ++++ b/fs/ext4/inode.c +@@ -5140,7 +5140,7 @@ static int ext4_inode_blocks_set(handle_t *handle, + struct ext4_inode_info *ei) + { + struct inode *inode = &(ei->vfs_inode); +- u64 i_blocks = inode->i_blocks; ++ u64 i_blocks = READ_ONCE(inode->i_blocks); + struct super_block *sb = inode->i_sb; + + if (i_blocks <= ~0U) { +diff --git a/fs/filesystems.c b/fs/filesystems.c +index b03f57b1105b..181200daeeba 100644 +--- a/fs/filesystems.c ++++ b/fs/filesystems.c +@@ -267,7 +267,9 @@ struct file_system_type *get_fs_type(const char *name) + fs = __get_fs_type(name, len); + if (!fs && (request_module("fs-%.*s", len, name) == 0)) { + fs = __get_fs_type(name, len); +- WARN_ONCE(!fs, "request_module fs-%.*s succeeded, but still no fs?\n", len, name); ++ if (!fs) ++ pr_warn_once("request_module fs-%.*s succeeded, but still no fs?\n", ++ len, name); + } + + if (dot && fs && !(fs->fs_flags & FS_HAS_SUBTYPE)) { +diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c +index ccdd8c821abd..f8a5eef3d014 100644 +--- a/fs/gfs2/glock.c ++++ b/fs/gfs2/glock.c +@@ -636,6 +636,9 @@ __acquires(&gl->gl_lockref.lock) + goto out_unlock; + if (nonblock) + goto out_sched; ++ smp_mb(); ++ if (atomic_read(&gl->gl_revokes) != 0) ++ goto out_sched; + set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags); + GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE); + gl->gl_target = gl->gl_demote_state; +diff --git a/fs/hfsplus/attributes.c b/fs/hfsplus/attributes.c +index e6d554476db4..eeebe80c6be4 100644 +--- a/fs/hfsplus/attributes.c ++++ b/fs/hfsplus/attributes.c +@@ -292,6 +292,10 @@ static int __hfsplus_delete_attr(struct inode *inode, u32 cnid, + return -ENOENT; + } + ++ /* Avoid btree 
corruption */ ++ hfs_bnode_read(fd->bnode, fd->search_key, ++ fd->keyoffset, fd->keylength); ++ + err = hfs_brec_remove(fd); + if (err) + return err; +diff --git a/fs/nfs/write.c b/fs/nfs/write.c +index ce1da8cbac00..63d20308a9bb 100644 +--- a/fs/nfs/write.c ++++ b/fs/nfs/write.c +@@ -432,6 +432,7 @@ nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list, + } + + subreq->wb_head = subreq; ++ nfs_release_request(old_head); + + if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags)) { + nfs_release_request(subreq); +diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c +index a342f008e42f..ff0e083ce2a1 100644 +--- a/fs/ocfs2/alloc.c ++++ b/fs/ocfs2/alloc.c +@@ -7403,6 +7403,10 @@ int ocfs2_truncate_inline(struct inode *inode, struct buffer_head *di_bh, + struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data; + struct ocfs2_inline_data *idata = &di->id2.i_data; + ++ /* No need to punch hole beyond i_size. */ ++ if (start >= i_size_read(inode)) ++ return 0; ++ + if (end > i_size_read(inode)) + end = i_size_read(inode); + +diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c +index 6f90d91a8733..6f51c6d7b965 100644 +--- a/fs/pstore/inode.c ++++ b/fs/pstore/inode.c +@@ -99,11 +99,11 @@ static void *pstore_ftrace_seq_next(struct seq_file *s, void *v, loff_t *pos) + struct pstore_private *ps = s->private; + struct pstore_ftrace_seq_data *data = v; + ++ (*pos)++; + data->off += REC_SIZE; + if (data->off + REC_SIZE > ps->total_size) + return NULL; + +- (*pos)++; + return data; + } + +@@ -113,6 +113,9 @@ static int pstore_ftrace_seq_show(struct seq_file *s, void *v) + struct pstore_ftrace_seq_data *data = v; + struct pstore_ftrace_record *rec; + ++ if (!data) ++ return 0; ++ + rec = (struct pstore_ftrace_record *)(ps->record->buf + data->off); + + seq_printf(s, "CPU:%d ts:%llu %08lx %08lx %pf <- %pF\n", +diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c +index 4bae3f4fe829..dcd9c3163587 100644 +--- a/fs/pstore/platform.c ++++ b/fs/pstore/platform.c +@@ 
-802,9 +802,9 @@ static int __init pstore_init(void) + + ret = pstore_init_fs(); + if (ret) +- return ret; ++ free_buf_for_compression(); + +- return 0; ++ return ret; + } + late_initcall(pstore_init); + +diff --git a/include/linux/devfreq_cooling.h b/include/linux/devfreq_cooling.h +index 4635f95000a4..79a6e37a1d6f 100644 +--- a/include/linux/devfreq_cooling.h ++++ b/include/linux/devfreq_cooling.h +@@ -75,7 +75,7 @@ void devfreq_cooling_unregister(struct thermal_cooling_device *dfc); + + #else /* !CONFIG_DEVFREQ_THERMAL */ + +-struct thermal_cooling_device * ++static inline struct thermal_cooling_device * + of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df, + struct devfreq_cooling_power *dfc_power) + { +diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h +index dba15ca8e60b..1dcd9198beb7 100644 +--- a/include/linux/iocontext.h ++++ b/include/linux/iocontext.h +@@ -8,6 +8,7 @@ + + enum { + ICQ_EXITED = 1 << 2, ++ ICQ_DESTROYED = 1 << 3, + }; + + /* +diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h +index 76b76b6aa83d..b87b1569d15b 100644 +--- a/include/linux/mlx5/mlx5_ifc.h ++++ b/include/linux/mlx5/mlx5_ifc.h +@@ -672,7 +672,14 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits { + u8 swp[0x1]; + u8 swp_csum[0x1]; + u8 swp_lso[0x1]; +- u8 reserved_at_23[0xd]; ++ u8 cqe_checksum_full[0x1]; ++ u8 tunnel_stateless_geneve_tx[0x1]; ++ u8 tunnel_stateless_mpls_over_udp[0x1]; ++ u8 tunnel_stateless_mpls_over_gre[0x1]; ++ u8 tunnel_stateless_vxlan_gpe[0x1]; ++ u8 tunnel_stateless_ipv4_over_vxlan[0x1]; ++ u8 tunnel_stateless_ip_over_ip[0x1]; ++ u8 reserved_at_2a[0x6]; + u8 max_vxlan_udp_ports[0x8]; + u8 reserved_at_38[0x6]; + u8 max_geneve_opt_len[0x1]; +diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h +index 2f3ae41c212d..496ff759f84c 100644 +--- a/include/linux/nvme-fc-driver.h ++++ b/include/linux/nvme-fc-driver.h +@@ -282,8 +282,6 @@ struct nvme_fc_remote_port { 
+ * + * Host/Initiator Transport Entrypoints/Parameters: + * +- * @module: The LLDD module using the interface +- * + * @localport_delete: The LLDD initiates deletion of a localport via + * nvme_fc_deregister_localport(). However, the teardown is + * asynchronous. This routine is called upon the completion of the +@@ -397,8 +395,6 @@ struct nvme_fc_remote_port { + * Value is Mandatory. Allowed to be zero. + */ + struct nvme_fc_port_template { +- struct module *module; +- + /* initiator-based functions */ + void (*localport_delete)(struct nvme_fc_local_port *); + void (*remoteport_delete)(struct nvme_fc_remote_port *); +diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h +index 37dab8116901..931fda3e5e0d 100644 +--- a/include/linux/pci-epc.h ++++ b/include/linux/pci-epc.h +@@ -69,6 +69,7 @@ struct pci_epc_ops { + * @bitmap: bitmap to manage the PCI address space + * @pages: number of bits representing the address region + * @page_size: size of each page ++ * @lock: mutex to protect bitmap + */ + struct pci_epc_mem { + phys_addr_t phys_base; +@@ -76,6 +77,8 @@ struct pci_epc_mem { + unsigned long *bitmap; + size_t page_size; + int pages; ++ /* mutex to protect against concurrent access for memory allocation*/ ++ struct mutex lock; + }; + + /** +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 0530de9a4efc..c69f308f3a53 100644 +--- a/include/linux/sched.h ++++ b/include/linux/sched.h +@@ -887,8 +887,8 @@ struct task_struct { + struct seccomp seccomp; + + /* Thread group tracking: */ +- u32 parent_exec_id; +- u32 self_exec_id; ++ u64 parent_exec_id; ++ u64 self_exec_id; + + /* Protection against (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed, mempolicy: */ + spinlock_t alloc_lock; +diff --git a/include/linux/swab.h b/include/linux/swab.h +index e466fd159c85..bcff5149861a 100644 +--- a/include/linux/swab.h ++++ b/include/linux/swab.h +@@ -7,6 +7,7 @@ + # define swab16 __swab16 + # define swab32 __swab32 + # define swab64 __swab64 
++# define swab __swab + # define swahw32 __swahw32 + # define swahb32 __swahb32 + # define swab16p __swab16p +diff --git a/include/uapi/linux/swab.h b/include/uapi/linux/swab.h +index 23cd84868cc3..fa7f97da5b76 100644 +--- a/include/uapi/linux/swab.h ++++ b/include/uapi/linux/swab.h +@@ -4,6 +4,7 @@ + + #include + #include ++#include + #include + + /* +@@ -132,6 +133,15 @@ static inline __attribute_const__ __u32 __fswahb32(__u32 val) + __fswab64(x)) + #endif + ++static __always_inline unsigned long __swab(const unsigned long y) ++{ ++#if BITS_PER_LONG == 64 ++ return __swab64(y); ++#else /* BITS_PER_LONG == 32 */ ++ return __swab32(y); ++#endif ++} ++ + /** + * __swahw32 - return a word-swapped 32-bit value + * @x: value to wordswap +diff --git a/kernel/cpu.c b/kernel/cpu.c +index 2d850eaaf82e..6d6c106a495c 100644 +--- a/kernel/cpu.c ++++ b/kernel/cpu.c +@@ -2070,10 +2070,8 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) + */ + cpuhp_offline_cpu_device(cpu); + } +- if (!ret) { ++ if (!ret) + cpu_smt_control = ctrlval; +- arch_smt_update(); +- } + cpu_maps_update_done(); + return ret; + } +@@ -2084,7 +2082,6 @@ int cpuhp_smt_enable(void) + + cpu_maps_update_begin(); + cpu_smt_control = CPU_SMT_ENABLED; +- arch_smt_update(); + for_each_present_cpu(cpu) { + /* Skip online CPUs and CPUs on offline nodes */ + if (cpu_online(cpu) || !node_online(cpu_to_node(cpu))) +diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c +index e0eda2bd3975..0a76c44eb6b2 100644 +--- a/kernel/irq/irqdomain.c ++++ b/kernel/irq/irqdomain.c +@@ -1255,6 +1255,11 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain, + unsigned int irq_base, + unsigned int nr_irqs, void *arg) + { ++ if (!domain->ops->alloc) { ++ pr_debug("domain->ops->alloc() is NULL\n"); ++ return -ENOSYS; ++ } ++ + return domain->ops->alloc(domain, irq_base, nr_irqs, arg); + } + +@@ -1292,11 +1297,6 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base, + return -EINVAL; + } + +- if 
(!domain->ops->alloc) { +- pr_debug("domain->ops->alloc() is NULL\n"); +- return -ENOSYS; +- } +- + if (realloc && irq_base >= 0) { + virq = irq_base; + } else { +diff --git a/kernel/kmod.c b/kernel/kmod.c +index bc6addd9152b..a2de58de6ab6 100644 +--- a/kernel/kmod.c ++++ b/kernel/kmod.c +@@ -120,7 +120,7 @@ out: + * invoke it. + * + * If module auto-loading support is disabled then this function +- * becomes a no-operation. ++ * simply returns -ENOENT. + */ + int __request_module(bool wait, const char *fmt, ...) + { +@@ -137,7 +137,7 @@ int __request_module(bool wait, const char *fmt, ...) + WARN_ON_ONCE(wait && current_is_async()); + + if (!modprobe_path[0]) +- return 0; ++ return -ENOENT; + + va_start(args, fmt); + ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args); +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c +index 1e272f6a01e7..8a1758b094b7 100644 +--- a/kernel/locking/lockdep.c ++++ b/kernel/locking/lockdep.c +@@ -1260,9 +1260,11 @@ unsigned long lockdep_count_forward_deps(struct lock_class *class) + this.class = class; + + raw_local_irq_save(flags); ++ current->lockdep_recursion = 1; + arch_spin_lock(&lockdep_lock); + ret = __lockdep_count_forward_deps(&this); + arch_spin_unlock(&lockdep_lock); ++ current->lockdep_recursion = 0; + raw_local_irq_restore(flags); + + return ret; +@@ -1287,9 +1289,11 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class) + this.class = class; + + raw_local_irq_save(flags); ++ current->lockdep_recursion = 1; + arch_spin_lock(&lockdep_lock); + ret = __lockdep_count_backward_deps(&this); + arch_spin_unlock(&lockdep_lock); ++ current->lockdep_recursion = 0; + raw_local_irq_restore(flags); + + return ret; +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index 94bec97bd5e2..5f0eb4565957 100644 +--- a/kernel/sched/sched.h ++++ b/kernel/sched/sched.h +@@ -123,7 +123,13 @@ static inline void cpu_load_update_active(struct rq *this_rq) { } + #ifdef CONFIG_64BIT + # define 
NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT) + # define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT) +-# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT) ++# define scale_load_down(w) \ ++({ \ ++ unsigned long __w = (w); \ ++ if (__w) \ ++ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \ ++ __w; \ ++}) + #else + # define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT) + # define scale_load(w) (w) +diff --git a/kernel/signal.c b/kernel/signal.c +index c42eaf39b572..6a5692118139 100644 +--- a/kernel/signal.c ++++ b/kernel/signal.c +@@ -1837,7 +1837,7 @@ bool do_notify_parent(struct task_struct *tsk, int sig) + * This is only possible if parent == real_parent. + * Check if it has changed security domain. + */ +- if (tsk->parent_exec_id != tsk->parent->self_exec_id) ++ if (tsk->parent_exec_id != READ_ONCE(tsk->parent->self_exec_id)) + sig = SIGCHLD; + } + +diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c +index c61b2b0a99e9..65b4e28ff425 100644 +--- a/kernel/trace/trace_kprobe.c ++++ b/kernel/trace/trace_kprobe.c +@@ -975,6 +975,8 @@ static int probes_seq_show(struct seq_file *m, void *v) + int i; + + seq_putc(m, trace_kprobe_is_return(tk) ? 
'r' : 'p'); ++ if (trace_kprobe_is_return(tk) && tk->rp.maxactive) ++ seq_printf(m, "%d", tk->rp.maxactive); + seq_printf(m, ":%s/%s", tk->tp.call.class->system, + trace_event_name(&tk->tp.call)); + +diff --git a/lib/find_bit.c b/lib/find_bit.c +index ee3df93ba69a..8a5492173267 100644 +--- a/lib/find_bit.c ++++ b/lib/find_bit.c +@@ -153,18 +153,6 @@ EXPORT_SYMBOL(find_last_bit); + + #ifdef __BIG_ENDIAN + +-/* include/linux/byteorder does not support "unsigned long" type */ +-static inline unsigned long ext2_swab(const unsigned long y) +-{ +-#if BITS_PER_LONG == 64 +- return (unsigned long) __swab64((u64) y); +-#elif BITS_PER_LONG == 32 +- return (unsigned long) __swab32((u32) y); +-#else +-#error BITS_PER_LONG not defined +-#endif +-} +- + #if !defined(find_next_bit_le) || !defined(find_next_zero_bit_le) + static inline unsigned long _find_next_bit_le(const unsigned long *addr1, + const unsigned long *addr2, unsigned long nbits, +@@ -181,7 +169,7 @@ static inline unsigned long _find_next_bit_le(const unsigned long *addr1, + tmp ^= invert; + + /* Handle 1st word. */ +- tmp &= ext2_swab(BITMAP_FIRST_WORD_MASK(start)); ++ tmp &= swab(BITMAP_FIRST_WORD_MASK(start)); + start = round_down(start, BITS_PER_LONG); + + while (!tmp) { +@@ -195,7 +183,7 @@ static inline unsigned long _find_next_bit_le(const unsigned long *addr1, + tmp ^= invert; + } + +- return min(start + __ffs(ext2_swab(tmp)), nbits); ++ return min(start + __ffs(swab(tmp)), nbits); + } + #endif + +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index e5c610d711f3..57888cedf244 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -4537,11 +4537,11 @@ refill: + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. 
+ */ +- page_ref_add(page, size); ++ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); + + /* reset page count bias and offset to start of new frag */ + nc->pfmemalloc = page_is_pfmemalloc(page); +- nc->pagecnt_bias = size + 1; ++ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + nc->offset = size; + } + +@@ -4557,10 +4557,10 @@ refill: + size = nc->size; + #endif + /* OK, page count is 0, we can safely set it */ +- set_page_count(page, size + 1); ++ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); + + /* reset page count bias and offset to start of new frag */ +- nc->pagecnt_bias = size + 1; ++ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; + offset = size - fragsz; + } + +diff --git a/mm/slub.c b/mm/slub.c +index 9b7b989273d4..d8116a43a287 100644 +--- a/mm/slub.c ++++ b/mm/slub.c +@@ -249,7 +249,7 @@ static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr, + unsigned long ptr_addr) + { + #ifdef CONFIG_SLAB_FREELIST_HARDENED +- return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr); ++ return (void *)((unsigned long)ptr ^ s->random ^ swab(ptr_addr)); + #else + return ptr; + #endif +diff --git a/security/keys/key.c b/security/keys/key.c +index 749a5cf27a19..d5fa8c4fc554 100644 +--- a/security/keys/key.c ++++ b/security/keys/key.c +@@ -383,7 +383,7 @@ int key_payload_reserve(struct key *key, size_t datalen) + spin_lock(&key->user->lock); + + if (delta > 0 && +- (key->user->qnbytes + delta >= maxbytes || ++ (key->user->qnbytes + delta > maxbytes || + key->user->qnbytes + delta < key->user->qnbytes)) { + ret = -EDQUOT; + } +diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c +index ca31af186abd..e00e20204de0 100644 +--- a/security/keys/keyctl.c ++++ b/security/keys/keyctl.c +@@ -882,8 +882,8 @@ long keyctl_chown_key(key_serial_t id, uid_t user, gid_t group) + key_quota_root_maxbytes : key_quota_maxbytes; + + spin_lock(&newowner->lock); +- if (newowner->qnkeys + 1 >= maxkeys || +- newowner->qnbytes + key->quotalen >= maxbytes || ++ if 
(newowner->qnkeys + 1 > maxkeys || ++ newowner->qnbytes + key->quotalen > maxbytes || + newowner->qnbytes + key->quotalen < + newowner->qnbytes) + goto quota_overrun; +diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c +index 732bbede7ebf..8539047145de 100644 +--- a/sound/core/oss/pcm_plugin.c ++++ b/sound/core/oss/pcm_plugin.c +@@ -196,7 +196,9 @@ int snd_pcm_plugin_free(struct snd_pcm_plugin *plugin) + return 0; + } + +-snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_pcm_uframes_t drv_frames) ++static snd_pcm_sframes_t plug_client_size(struct snd_pcm_substream *plug, ++ snd_pcm_uframes_t drv_frames, ++ bool check_size) + { + struct snd_pcm_plugin *plugin, *plugin_prev, *plugin_next; + int stream; +@@ -209,7 +211,7 @@ snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_p + if (stream == SNDRV_PCM_STREAM_PLAYBACK) { + plugin = snd_pcm_plug_last(plug); + while (plugin && drv_frames > 0) { +- if (drv_frames > plugin->buf_frames) ++ if (check_size && drv_frames > plugin->buf_frames) + drv_frames = plugin->buf_frames; + plugin_prev = plugin->prev; + if (plugin->src_frames) +@@ -222,7 +224,7 @@ snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_p + plugin_next = plugin->next; + if (plugin->dst_frames) + drv_frames = plugin->dst_frames(plugin, drv_frames); +- if (drv_frames > plugin->buf_frames) ++ if (check_size && drv_frames > plugin->buf_frames) + drv_frames = plugin->buf_frames; + plugin = plugin_next; + } +@@ -231,7 +233,9 @@ snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_p + return drv_frames; + } + +-snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pcm_uframes_t clt_frames) ++static snd_pcm_sframes_t plug_slave_size(struct snd_pcm_substream *plug, ++ snd_pcm_uframes_t clt_frames, ++ bool check_size) + { + struct snd_pcm_plugin *plugin, *plugin_prev, *plugin_next; + snd_pcm_sframes_t frames; +@@ 
-252,14 +256,14 @@ snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pc + if (frames < 0) + return frames; + } +- if (frames > plugin->buf_frames) ++ if (check_size && frames > plugin->buf_frames) + frames = plugin->buf_frames; + plugin = plugin_next; + } + } else if (stream == SNDRV_PCM_STREAM_CAPTURE) { + plugin = snd_pcm_plug_last(plug); + while (plugin) { +- if (frames > plugin->buf_frames) ++ if (check_size && frames > plugin->buf_frames) + frames = plugin->buf_frames; + plugin_prev = plugin->prev; + if (plugin->src_frames) { +@@ -274,6 +278,18 @@ snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pc + return frames; + } + ++snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, ++ snd_pcm_uframes_t drv_frames) ++{ ++ return plug_client_size(plug, drv_frames, false); ++} ++ ++snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, ++ snd_pcm_uframes_t clt_frames) ++{ ++ return plug_slave_size(plug, clt_frames, false); ++} ++ + static int snd_pcm_plug_formats(const struct snd_mask *mask, + snd_pcm_format_t format) + { +@@ -630,7 +646,7 @@ snd_pcm_sframes_t snd_pcm_plug_write_transfer(struct snd_pcm_substream *plug, st + src_channels = dst_channels; + plugin = next; + } +- return snd_pcm_plug_client_size(plug, frames); ++ return plug_client_size(plug, frames, true); + } + + snd_pcm_sframes_t snd_pcm_plug_read_transfer(struct snd_pcm_substream *plug, struct snd_pcm_plugin_channel *dst_channels_final, snd_pcm_uframes_t size) +@@ -640,7 +656,7 @@ snd_pcm_sframes_t snd_pcm_plug_read_transfer(struct snd_pcm_substream *plug, str + snd_pcm_sframes_t frames = size; + int err; + +- frames = snd_pcm_plug_slave_size(plug, frames); ++ frames = plug_slave_size(plug, frames, true); + if (frames < 0) + return frames; + +diff --git a/sound/pci/hda/hda_beep.c b/sound/pci/hda/hda_beep.c +index 066b5b59c4d7..0224011a240f 100644 +--- a/sound/pci/hda/hda_beep.c ++++ 
b/sound/pci/hda/hda_beep.c +@@ -297,8 +297,12 @@ int snd_hda_mixer_amp_switch_get_beep(struct snd_kcontrol *kcontrol, + { + struct hda_codec *codec = snd_kcontrol_chip(kcontrol); + struct hda_beep *beep = codec->beep; ++ int chs = get_amp_channels(kcontrol); ++ + if (beep && (!beep->enabled || !ctl_has_mute(kcontrol))) { +- ucontrol->value.integer.value[0] = ++ if (chs & 1) ++ ucontrol->value.integer.value[0] = beep->enabled; ++ if (chs & 2) + ucontrol->value.integer.value[1] = beep->enabled; + return 0; + } +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 3ee5b7b9b595..a2eeb08fa61d 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2219,6 +2219,17 @@ static const struct hdac_io_ops pci_hda_io_ops = { + .dma_free_pages = dma_free_pages, + }; + ++/* Blacklist for skipping the whole probe: ++ * some HD-audio PCI entries are exposed without any codecs, and such devices ++ * should be ignored from the beginning. ++ */ ++static const struct snd_pci_quirk driver_blacklist[] = { ++ SND_PCI_QUIRK(0x1043, 0x874f, "ASUS ROG Zenith II / Strix", 0), ++ SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0), ++ SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0), ++ {} ++}; ++ + static const struct hda_controller_ops pci_hda_ops = { + .disable_msi_reset_irq = disable_msi_reset_irq, + .substream_alloc_pages = substream_alloc_pages, +@@ -2238,6 +2249,11 @@ static int azx_probe(struct pci_dev *pci, + bool schedule_probe; + int err; + ++ if (snd_pci_quirk_lookup(pci, driver_blacklist)) { ++ dev_info(&pci->dev, "Skipping the blacklisted device\n"); ++ return -ENODEV; ++ } ++ + if (dev >= SNDRV_CARDS) + return -ENODEV; + if (!enable[dev]) { +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 23aab2fdac46..ea439bee8e6f 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -379,7 +379,9 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) + case 0x10ec0215: + case 
0x10ec0233: + case 0x10ec0235: ++ case 0x10ec0236: + case 0x10ec0255: ++ case 0x10ec0256: + case 0x10ec0257: + case 0x10ec0282: + case 0x10ec0283: +@@ -391,11 +393,6 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) + case 0x10ec0300: + alc_update_coef_idx(codec, 0x10, 1<<9, 0); + break; +- case 0x10ec0236: +- case 0x10ec0256: +- alc_write_coef_idx(codec, 0x36, 0x5757); +- alc_update_coef_idx(codec, 0x10, 1<<9, 0); +- break; + case 0x10ec0275: + alc_update_coef_idx(codec, 0xe, 0, 1<<0); + break; +@@ -2444,6 +2441,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = { + SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), + SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950), + SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950), ++ SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950), + SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950), + SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950), + SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD), +@@ -3249,7 +3247,13 @@ static void alc256_init(struct hda_codec *codec) + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ + alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */ + alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15); +- alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/ ++ /* ++ * Expose headphone mic (or possibly Line In on some machines) instead ++ * of PC Beep on 1Ah, and disable 1Ah loopback for all outputs. See ++ * Documentation/sound/hd-audio/realtek-pc-beep.rst for details of ++ * this register. 
++ */ ++ alc_write_coef_idx(codec, 0x36, 0x5757); + } + + static void alc256_shutup(struct hda_codec *codec) +@@ -5269,17 +5273,6 @@ static void alc271_hp_gate_mic_jack(struct hda_codec *codec, + } + } + +-static void alc256_fixup_dell_xps_13_headphone_noise2(struct hda_codec *codec, +- const struct hda_fixup *fix, +- int action) +-{ +- if (action != HDA_FIXUP_ACT_PRE_PROBE) +- return; +- +- snd_hda_codec_amp_stereo(codec, 0x1a, HDA_INPUT, 0, HDA_AMP_VOLMASK, 1); +- snd_hda_override_wcaps(codec, 0x1a, get_wcaps(codec, 0x1a) & ~AC_WCAP_IN_AMP); +-} +- + static void alc269_fixup_limit_int_mic_boost(struct hda_codec *codec, + const struct hda_fixup *fix, + int action) +@@ -5671,8 +5664,6 @@ enum { + ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, + ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE, + ALC275_FIXUP_DELL_XPS, +- ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE, +- ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2, + ALC293_FIXUP_LENOVO_SPK_NOISE, + ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, + ALC255_FIXUP_DELL_SPK_NOISE, +@@ -6384,23 +6375,6 @@ static const struct hda_fixup alc269_fixups[] = { + {} + } + }, +- [ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE] = { +- .type = HDA_FIXUP_VERBS, +- .v.verbs = (const struct hda_verb[]) { +- /* Disable pass-through path for FRONT 14h */ +- {0x20, AC_VERB_SET_COEF_INDEX, 0x36}, +- {0x20, AC_VERB_SET_PROC_COEF, 0x1737}, +- {} +- }, +- .chained = true, +- .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE +- }, +- [ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2] = { +- .type = HDA_FIXUP_FUNC, +- .v.func = alc256_fixup_dell_xps_13_headphone_noise2, +- .chained = true, +- .chain_id = ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE +- }, + [ALC293_FIXUP_LENOVO_SPK_NOISE] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc_fixup_disable_aamix, +@@ -6868,17 +6842,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1028, 0x06de, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), + SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), + 
SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), +- SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2), + SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), + SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE), + SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP), +- SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2), + SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME), + SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME), + SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3), + SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), + SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE), +- SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2), + SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), + SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), + SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC), +@@ -7229,7 +7200,6 @@ static const struct hda_model_fixup alc269_fixup_models[] = { + {.id = ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "alc298-dell1"}, + {.id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE, .name = "alc298-dell-aio"}, + {.id = ALC275_FIXUP_DELL_XPS, .name = "alc275-dell-xps"}, +- {.id = ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE, .name = "alc256-dell-xps13"}, + {.id = ALC293_FIXUP_LENOVO_SPK_NOISE, .name = "lenovo-spk-noise"}, + {.id = ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, .name = "lenovo-hotkey"}, + {.id = ALC255_FIXUP_DELL_SPK_NOISE, .name = "dell-spk-noise"}, +diff --git a/sound/pci/ice1712/prodigy_hifi.c b/sound/pci/ice1712/prodigy_hifi.c +index c97b5528e4b8..317bbb725b29 100644 
+--- a/sound/pci/ice1712/prodigy_hifi.c ++++ b/sound/pci/ice1712/prodigy_hifi.c +@@ -550,7 +550,7 @@ static int wm_adc_mux_enum_get(struct snd_kcontrol *kcontrol, + struct snd_ice1712 *ice = snd_kcontrol_chip(kcontrol); + + mutex_lock(&ice->gpio_mutex); +- ucontrol->value.integer.value[0] = wm_get(ice, WM_ADC_MUX) & 0x1f; ++ ucontrol->value.enumerated.item[0] = wm_get(ice, WM_ADC_MUX) & 0x1f; + mutex_unlock(&ice->gpio_mutex); + return 0; + } +@@ -564,7 +564,7 @@ static int wm_adc_mux_enum_put(struct snd_kcontrol *kcontrol, + + mutex_lock(&ice->gpio_mutex); + oval = wm_get(ice, WM_ADC_MUX); +- nval = (oval & 0xe0) | ucontrol->value.integer.value[0]; ++ nval = (oval & 0xe0) | ucontrol->value.enumerated.item[0]; + if (nval != oval) { + wm_put(ice, WM_ADC_MUX, nval); + change = 1; +diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c +index db5b005f4b1e..d61e954417d0 100644 +--- a/sound/soc/soc-dapm.c ++++ b/sound/soc/soc-dapm.c +@@ -792,7 +792,13 @@ static void dapm_set_mixer_path_status(struct snd_soc_dapm_path *p, int i, + val = max - val; + p->connect = !!val; + } else { +- p->connect = 0; ++ /* since a virtual mixer has no backing registers to ++ * decide which path to connect, it will try to match ++ * with initial state. This is to ensure ++ * that the default mixer choice will be ++ * correctly powered up during initialization. 
++ */ + p->connect = invert; + } + } + +diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c +index f4dc3d445aae..95fc24580f85 100644 +--- a/sound/soc/soc-ops.c ++++ b/sound/soc/soc-ops.c +@@ -832,7 +832,7 @@ int snd_soc_get_xr_sx(struct snd_kcontrol *kcontrol, + unsigned int regbase = mc->regbase; + unsigned int regcount = mc->regcount; + unsigned int regwshift = component->val_bytes * BITS_PER_BYTE; +- unsigned int regwmask = (1<<regwshift)-1; ++ unsigned int regwmask = (1UL<<regwshift)-1; + unsigned int invert = mc->invert; + unsigned long mask = (1UL<<mc->nbits)-1; + long min = mc->min; +@@ -881,7 +881,7 @@ int snd_soc_put_xr_sx(struct snd_kcontrol *kcontrol, + unsigned int regbase = mc->regbase; + unsigned int regcount = mc->regcount; + unsigned int regwshift = component->val_bytes * BITS_PER_BYTE; +- unsigned int regwmask = (1<<regwshift)-1; ++ unsigned int regwmask = (1UL<<regwshift)-1; + unsigned int invert = mc->invert; + unsigned long mask = (1UL<<mc->nbits)-1; + long max = mc->max; +diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c +index 356d4e754561..a0d1ce0edaf9 100644 +--- a/sound/soc/soc-pcm.c ++++ b/sound/soc/soc-pcm.c +@@ -2266,7 +2266,8 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream, + switch (cmd) { + case SNDRV_PCM_TRIGGER_START: + if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) && +- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP)) ++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) && ++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) + continue; + + ret = dpcm_do_trigger(dpcm, be_substream, cmd); +@@ -2296,7 +2297,8 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream, + be->dpcm[stream].state = SND_SOC_DPCM_STATE_START; + break; + case SNDRV_PCM_TRIGGER_STOP: +- if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) ++ if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) && ++ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) + continue; + + if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream)) +diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c +index 30fc45aa1869..756dd2303106 100644 +--- a/sound/soc/soc-topology.c ++++ 
b/sound/soc/soc-topology.c +@@ -364,7 +364,7 @@ static int soc_tplg_add_kcontrol(struct soc_tplg *tplg, + struct snd_soc_component *comp = tplg->comp; + + return soc_tplg_add_dcontrol(comp->card->snd_card, +- comp->dev, k, NULL, comp, kcontrol); ++ comp->dev, k, comp->name_prefix, comp, kcontrol); + } + + /* remove a mixer kcontrol */ +diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c +index 71069e110897..f6e2cc66153a 100644 +--- a/sound/usb/mixer_maps.c ++++ b/sound/usb/mixer_maps.c +@@ -363,6 +363,14 @@ static const struct usbmix_name_map dell_alc4020_map[] = { + { 0 } + }; + ++/* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX ++ * response for Input Gain Pad (id=19, control=12). Skip it. ++ */ ++static const struct usbmix_name_map asus_rog_map[] = { ++ { 19, NULL, 12 }, /* FU, Input Gain Pad */ ++ {} ++}; ++ + /* + * Control map entries + */ +@@ -482,6 +490,26 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = { + .id = USB_ID(0x05a7, 0x1020), + .map = bose_companion5_map, + }, ++ { /* Gigabyte TRX40 Aorus Pro WiFi */ ++ .id = USB_ID(0x0414, 0xa002), ++ .map = asus_rog_map, ++ }, ++ { /* ASUS ROG Zenith II */ ++ .id = USB_ID(0x0b05, 0x1916), ++ .map = asus_rog_map, ++ }, ++ { /* ASUS ROG Strix */ ++ .id = USB_ID(0x0b05, 0x1917), ++ .map = asus_rog_map, ++ }, ++ { /* MSI TRX40 Creator */ ++ .id = USB_ID(0x0db0, 0x0d64), ++ .map = asus_rog_map, ++ }, ++ { /* MSI TRX40 */ ++ .id = USB_ID(0x0db0, 0x543d), ++ .map = asus_rog_map, ++ }, + { 0 } /* terminator */ + }; + +diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile +index 6a73c06e069c..3dbf7e8b07a5 100644 +--- a/tools/gpio/Makefile ++++ b/tools/gpio/Makefile +@@ -35,7 +35,7 @@ $(OUTPUT)include/linux/gpio.h: ../../include/uapi/linux/gpio.h + + prepare: $(OUTPUT)include/linux/gpio.h + +-GPIO_UTILS_IN := $(output)gpio-utils-in.o ++GPIO_UTILS_IN := $(OUTPUT)gpio-utils-in.o + $(GPIO_UTILS_IN): prepare FORCE + $(Q)$(MAKE) $(build)=gpio-utils + +diff --git 
a/tools/perf/Makefile.config b/tools/perf/Makefile.config +index 510caedd7319..ae0c5bee8014 100644 +--- a/tools/perf/Makefile.config ++++ b/tools/perf/Makefile.config +@@ -205,8 +205,17 @@ strip-libs = $(filter-out -l%,$(1)) + + PYTHON_CONFIG_SQ := $(call shell-sq,$(PYTHON_CONFIG)) + ++# Python 3.8 changed the output of `python-config --ldflags` to not include the ++# '-lpythonX.Y' flag unless '--embed' is also passed. The feature check for ++# libpython fails if that flag is not included in LDFLAGS ++ifeq ($(shell $(PYTHON_CONFIG_SQ) --ldflags --embed 2>&1 1>/dev/null; echo $$?), 0) ++ PYTHON_CONFIG_LDFLAGS := --ldflags --embed ++else ++ PYTHON_CONFIG_LDFLAGS := --ldflags ++endif ++ + ifdef PYTHON_CONFIG +- PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) --ldflags 2>/dev/null) ++ PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) $(PYTHON_CONFIG_LDFLAGS) 2>/dev/null) + PYTHON_EMBED_LDFLAGS := $(call strip-libs,$(PYTHON_EMBED_LDOPTS)) + PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil + PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null) +diff --git a/tools/testing/selftests/vm/mlock2-tests.c b/tools/testing/selftests/vm/mlock2-tests.c +index 637b6d0ac0d0..11b2301f3aa3 100644 +--- a/tools/testing/selftests/vm/mlock2-tests.c ++++ b/tools/testing/selftests/vm/mlock2-tests.c +@@ -67,59 +67,6 @@ out: + return ret; + } + +-static uint64_t get_pageflags(unsigned long addr) +-{ +- FILE *file; +- uint64_t pfn; +- unsigned long offset; +- +- file = fopen("/proc/self/pagemap", "r"); +- if (!file) { +- perror("fopen pagemap"); +- _exit(1); +- } +- +- offset = addr / getpagesize() * sizeof(pfn); +- +- if (fseek(file, offset, SEEK_SET)) { +- perror("fseek pagemap"); +- _exit(1); +- } +- +- if (fread(&pfn, sizeof(pfn), 1, file) != 1) { +- perror("fread pagemap"); +- _exit(1); +- } +- +- fclose(file); +- return pfn; +-} +- +-static uint64_t get_kpageflags(unsigned long pfn) +-{ +- uint64_t flags; +- FILE *file; +- +- file = 
fopen("/proc/kpageflags", "r"); +- if (!file) { +- perror("fopen kpageflags"); +- _exit(1); +- } +- +- if (fseek(file, pfn * sizeof(flags), SEEK_SET)) { +- perror("fseek kpageflags"); +- _exit(1); +- } +- +- if (fread(&flags, sizeof(flags), 1, file) != 1) { +- perror("fread kpageflags"); +- _exit(1); +- } +- +- fclose(file); +- return flags; +-} +- + #define VMFLAGS "VmFlags:" + + static bool is_vmflag_set(unsigned long addr, const char *vmflag) +@@ -159,19 +106,13 @@ out: + #define RSS "Rss:" + #define LOCKED "lo" + +-static bool is_vma_lock_on_fault(unsigned long addr) ++static unsigned long get_value_for_name(unsigned long addr, const char *name) + { +- bool ret = false; +- bool locked; +- FILE *smaps = NULL; +- unsigned long vma_size, vma_rss; + char *line = NULL; +- char *value; + size_t size = 0; +- +- locked = is_vmflag_set(addr, LOCKED); +- if (!locked) +- goto out; ++ char *value_ptr; ++ FILE *smaps = NULL; ++ unsigned long value = -1UL; + + smaps = seek_to_smaps_entry(addr); + if (!smaps) { +@@ -180,112 +121,70 @@ static bool is_vma_lock_on_fault(unsigned long addr) + } + + while (getline(&line, &size, smaps) > 0) { +- if (!strstr(line, SIZE)) { ++ if (!strstr(line, name)) { + free(line); + line = NULL; + size = 0; + continue; + } + +- value = line + strlen(SIZE); +- if (sscanf(value, "%lu kB", &vma_size) < 1) { ++ value_ptr = line + strlen(name); ++ if (sscanf(value_ptr, "%lu kB", &value) < 1) { + printf("Unable to parse smaps entry for Size\n"); + goto out; + } + break; + } + +- while (getline(&line, &size, smaps) > 0) { +- if (!strstr(line, RSS)) { +- free(line); +- line = NULL; +- size = 0; +- continue; +- } +- +- value = line + strlen(RSS); +- if (sscanf(value, "%lu kB", &vma_rss) < 1) { +- printf("Unable to parse smaps entry for Rss\n"); +- goto out; +- } +- break; +- } +- +- ret = locked && (vma_rss < vma_size); + out: +- free(line); + if (smaps) + fclose(smaps); +- return ret; ++ free(line); ++ return value; + } + +-#define PRESENT_BIT 
0x8000000000000000ULL +-#define PFN_MASK 0x007FFFFFFFFFFFFFULL +-#define UNEVICTABLE_BIT (1UL << 18) +- +-static int lock_check(char *map) ++static bool is_vma_lock_on_fault(unsigned long addr) + { +- unsigned long page_size = getpagesize(); +- uint64_t page1_flags, page2_flags; ++ bool locked; ++ unsigned long vma_size, vma_rss; + +- page1_flags = get_pageflags((unsigned long)map); +- page2_flags = get_pageflags((unsigned long)map + page_size); ++ locked = is_vmflag_set(addr, LOCKED); ++ if (!locked) ++ return false; + +- /* Both pages should be present */ +- if (((page1_flags & PRESENT_BIT) == 0) || +- ((page2_flags & PRESENT_BIT) == 0)) { +- printf("Failed to make both pages present\n"); +- return 1; +- } ++ vma_size = get_value_for_name(addr, SIZE); ++ vma_rss = get_value_for_name(addr, RSS); + +- page1_flags = get_kpageflags(page1_flags & PFN_MASK); +- page2_flags = get_kpageflags(page2_flags & PFN_MASK); ++ /* only one page is faulted in */ ++ return (vma_rss < vma_size); ++} + +- /* Both pages should be unevictable */ +- if (((page1_flags & UNEVICTABLE_BIT) == 0) || +- ((page2_flags & UNEVICTABLE_BIT) == 0)) { +- printf("Failed to make both pages unevictable\n"); +- return 1; +- } ++#define PRESENT_BIT 0x8000000000000000ULL ++#define PFN_MASK 0x007FFFFFFFFFFFFFULL ++#define UNEVICTABLE_BIT (1UL << 18) + +- if (!is_vmflag_set((unsigned long)map, LOCKED)) { +- printf("VMA flag %s is missing on page 1\n", LOCKED); +- return 1; +- } ++static int lock_check(unsigned long addr) ++{ ++ bool locked; ++ unsigned long vma_size, vma_rss; + +- if (!is_vmflag_set((unsigned long)map + page_size, LOCKED)) { +- printf("VMA flag %s is missing on page 2\n", LOCKED); +- return 1; +- } ++ locked = is_vmflag_set(addr, LOCKED); ++ if (!locked) ++ return false; + +- return 0; ++ vma_size = get_value_for_name(addr, SIZE); ++ vma_rss = get_value_for_name(addr, RSS); ++ ++ return (vma_rss == vma_size); + } + + static int unlock_lock_check(char *map) + { +- unsigned long page_size = 
getpagesize(); +- uint64_t page1_flags, page2_flags; +- +- page1_flags = get_pageflags((unsigned long)map); +- page2_flags = get_pageflags((unsigned long)map + page_size); +- page1_flags = get_kpageflags(page1_flags & PFN_MASK); +- page2_flags = get_kpageflags(page2_flags & PFN_MASK); +- +- if ((page1_flags & UNEVICTABLE_BIT) || (page2_flags & UNEVICTABLE_BIT)) { +- printf("A page is still marked unevictable after unlock\n"); +- return 1; +- } +- + if (is_vmflag_set((unsigned long)map, LOCKED)) { + printf("VMA flag %s is present on page 1 after unlock\n", LOCKED); + return 1; + } + +- if (is_vmflag_set((unsigned long)map + page_size, LOCKED)) { +- printf("VMA flag %s is present on page 2 after unlock\n", LOCKED); +- return 1; +- } +- + return 0; + } + +@@ -311,7 +210,7 @@ static int test_mlock_lock() + goto unmap; + } + +- if (lock_check(map)) ++ if (!lock_check((unsigned long)map)) + goto unmap; + + /* Now unlock and recheck attributes */ +@@ -330,64 +229,18 @@ out: + + static int onfault_check(char *map) + { +- unsigned long page_size = getpagesize(); +- uint64_t page1_flags, page2_flags; +- +- page1_flags = get_pageflags((unsigned long)map); +- page2_flags = get_pageflags((unsigned long)map + page_size); +- +- /* Neither page should be present */ +- if ((page1_flags & PRESENT_BIT) || (page2_flags & PRESENT_BIT)) { +- printf("Pages were made present by MLOCK_ONFAULT\n"); +- return 1; +- } +- + *map = 'a'; +- page1_flags = get_pageflags((unsigned long)map); +- page2_flags = get_pageflags((unsigned long)map + page_size); +- +- /* Only page 1 should be present */ +- if ((page1_flags & PRESENT_BIT) == 0) { +- printf("Page 1 is not present after fault\n"); +- return 1; +- } else if (page2_flags & PRESENT_BIT) { +- printf("Page 2 was made present\n"); +- return 1; +- } +- +- page1_flags = get_kpageflags(page1_flags & PFN_MASK); +- +- /* Page 1 should be unevictable */ +- if ((page1_flags & UNEVICTABLE_BIT) == 0) { +- printf("Failed to make faulted page unevictable\n"); 
+- return 1; +- } +- + if (!is_vma_lock_on_fault((unsigned long)map)) { + printf("VMA is not marked for lock on fault\n"); + return 1; + } + +- if (!is_vma_lock_on_fault((unsigned long)map + page_size)) { +- printf("VMA is not marked for lock on fault\n"); +- return 1; +- } +- + return 0; + } + + static int unlock_onfault_check(char *map) + { + unsigned long page_size = getpagesize(); +- uint64_t page1_flags; +- +- page1_flags = get_pageflags((unsigned long)map); +- page1_flags = get_kpageflags(page1_flags & PFN_MASK); +- +- if (page1_flags & UNEVICTABLE_BIT) { +- printf("Page 1 is still marked unevictable after unlock\n"); +- return 1; +- } + + if (is_vma_lock_on_fault((unsigned long)map) || + is_vma_lock_on_fault((unsigned long)map + page_size)) { +@@ -445,7 +298,6 @@ static int test_lock_onfault_of_present() + char *map; + int ret = 1; + unsigned long page_size = getpagesize(); +- uint64_t page1_flags, page2_flags; + + map = mmap(NULL, 2 * page_size, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); +@@ -465,17 +317,6 @@ static int test_lock_onfault_of_present() + goto unmap; + } + +- page1_flags = get_pageflags((unsigned long)map); +- page2_flags = get_pageflags((unsigned long)map + page_size); +- page1_flags = get_kpageflags(page1_flags & PFN_MASK); +- page2_flags = get_kpageflags(page2_flags & PFN_MASK); +- +- /* Page 1 should be unevictable */ +- if ((page1_flags & UNEVICTABLE_BIT) == 0) { +- printf("Failed to make present page unevictable\n"); +- goto unmap; +- } +- + if (!is_vma_lock_on_fault((unsigned long)map) || + !is_vma_lock_on_fault((unsigned long)map + page_size)) { + printf("VMA with present pages is not marked lock on fault\n"); +@@ -507,7 +348,7 @@ static int test_munlockall() + goto out; + } + +- if (lock_check(map)) ++ if (!lock_check((unsigned long)map)) + goto unmap; + + if (munlockall()) { +@@ -549,7 +390,7 @@ static int test_munlockall() + goto out; + } + +- if (lock_check(map)) ++ if (!lock_check((unsigned long)map)) + goto 
unmap; + + if (munlockall()) { +diff --git a/tools/testing/selftests/x86/ptrace_syscall.c b/tools/testing/selftests/x86/ptrace_syscall.c +index 6f22238f3217..12aaa063196e 100644 +--- a/tools/testing/selftests/x86/ptrace_syscall.c ++++ b/tools/testing/selftests/x86/ptrace_syscall.c +@@ -414,8 +414,12 @@ int main() + + #if defined(__i386__) && (!defined(__GLIBC__) || __GLIBC__ > 2 || __GLIBC_MINOR__ >= 16) + vsyscall32 = (void *)getauxval(AT_SYSINFO); +- printf("[RUN]\tCheck AT_SYSINFO return regs\n"); +- test_sys32_regs(do_full_vsyscall32); ++ if (vsyscall32) { ++ printf("[RUN]\tCheck AT_SYSINFO return regs\n"); ++ test_sys32_regs(do_full_vsyscall32); ++ } else { ++ printf("[SKIP]\tAT_SYSINFO is not available\n"); ++ } + #endif + + test_ptrace_syscall_restart();