From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (2048 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 85E0D159C9B
	for <gentoo-commits@lists.gentoo.org>; Wed, 14 Aug 2024 14:11:22 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id C5186E29CD;
	Wed, 14 Aug 2024 14:11:21 +0000 (UTC)
Received: from smtp.gentoo.org (mail.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 75350E29CD
	for <gentoo-commits@lists.gentoo.org>; Wed, 14 Aug 2024 14:11:21 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 1FAAB343087
	for <gentoo-commits@lists.gentoo.org>; Wed, 14 Aug 2024 14:11:20 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id AD653E6F
	for <gentoo-commits@lists.gentoo.org>; Wed, 14 Aug 2024 14:11:18 +0000 (UTC)
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1723644658.7e54b6e2477ea34e0449f6e35c418e158a256d8d.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1104_linux-6.1.105.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 7e54b6e2477ea34e0449f6e35c418e158a256d8d
X-VCS-Branch: 6.1
Date: Wed, 14 Aug 2024 14:11:18 +0000 (UTC)
Precedence: bulk
List-Post: <mailto:gentoo-commits@lists.gentoo.org>
List-Help: <mailto:gentoo-commits+help@lists.gentoo.org>
List-Unsubscribe: <mailto:gentoo-commits+unsubscribe@lists.gentoo.org>
List-Subscribe: <mailto:gentoo-commits+subscribe@lists.gentoo.org>
List-Id: Gentoo Linux mail <gentoo-commits.gentoo.org>
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: bb1454ea-a3c5-4500-85fd-2090fbbe41ca
X-Archives-Hash: e5a0f61a41fa4f866e15956390afd95f

commit:     7e54b6e2477ea34e0449f6e35c418e158a256d8d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 14 14:10:58 2024 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 14 14:10:58 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7e54b6e2

Linux patch 6.1.105

Signed-off-by: Mike Pagano <mpagano@gentoo.org>

 0000_README              |    4 +
 1104_linux-6.1.105.patch | 5325 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5329 insertions(+)

diff --git a/0000_README b/0000_README
index e41d3db8..e2c1d859 100644
--- a/0000_README
+++ b/0000_README
@@ -459,6 +459,10 @@ Patch:  1103_linux-6.1.104.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.1.104
 
+Patch:  1104_linux-6.1.105.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.1.105
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1104_linux-6.1.105.patch b/1104_linux-6.1.105.patch
new file mode 100644
index 00000000..f65bfdcc
--- /dev/null
+++ b/1104_linux-6.1.105.patch
@@ -0,0 +1,5325 @@
+diff --git a/Documentation/admin-guide/cifs/usage.rst b/Documentation/admin-guide/cifs/usage.rst
+index a50047cf95ca2..1cd09795f1438 100644
+--- a/Documentation/admin-guide/cifs/usage.rst
++++ b/Documentation/admin-guide/cifs/usage.rst
+@@ -741,7 +741,7 @@ SecurityFlags		Flags which control security negotiation and
+ 			may use NTLMSSP				0x00080
+ 			must use NTLMSSP			0x80080
+ 			seal (packet encryption)		0x00040
+-			must seal (not implemented yet)		0x40040
++			must seal				0x40040
+ 
+ cifsFYI			If set to non-zero value, additional debug information
+ 			will be logged to the system error log. This field
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index e6f0570cf4900..8df4c1c5c6197 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -637,12 +637,6 @@
+ 			loops can be debugged more effectively on production
+ 			systems.
+ 
+-	clocksource.max_cswd_read_retries= [KNL]
+-			Number of clocksource_watchdog() retries due to
+-			external delays before the clock will be marked
+-			unstable. Defaults to two retries, that is,
+-			three attempts to read the clock under test.
+-
+ 	clocksource.verify_n_cpus= [KNL]
+ 			Limit the number of CPUs checked for clocksources
+ 			marked with CLOCK_SOURCE_VERIFY_PERCPU that
+@@ -4556,11 +4550,9 @@
+ 
+ 	profile=	[KNL] Enable kernel profiling via /proc/profile
+ 			Format: [<profiletype>,]<number>
+-			Param: <profiletype>: "schedule", "sleep", or "kvm"
++			Param: <profiletype>: "schedule" or "kvm"
+ 			[defaults to kernel profiling]
+ 			Param: "schedule" - profile schedule points.
+-			Param: "sleep" - profile D-state sleeping (millisecs).
+-			Requires CONFIG_SCHEDSTATS
+ 			Param: "kvm" - profile VM exits.
+ 			Param: <number> - step/bucket size as a power of 2 for
+ 				statistical time based profiling.
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index 27135b9c07acb..6451e9198fef7 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -109,8 +109,16 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A76      | #1463225        | ARM64_ERRATUM_1463225       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A76      | #3324349        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A77      | #1508412        | ARM64_ERRATUM_1508412       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A77      | #3324348        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A78      | #3324344        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A78C     | #3324346,3324347| ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A510     | #2051678        | ARM64_ERRATUM_2051678       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A510     | #2077057        | ARM64_ERRATUM_2077057       |
+@@ -125,22 +133,50 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-A710     | #2224489        | ARM64_ERRATUM_2224489       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A710     | #3324338        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A720     | #3456091        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-A725     | #3456106        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X1       | #3324344        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X1C      | #3324346        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-X2       | #2119858        | ARM64_ERRATUM_2119858       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Cortex-X2       | #2224489        | ARM64_ERRATUM_2224489       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X2       | #3324338        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X3       | #3324335        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X4       | #3194386        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Cortex-X925     | #3324334        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1349291        | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N1     | #1542419        | ARM64_ERRATUM_1542419       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N1     | #3324349        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #2139208        | ARM64_ERRATUM_2139208       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #2067961        | ARM64_ERRATUM_2067961       |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | Neoverse-N2     | #2253138        | ARM64_ERRATUM_2253138       |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-N2     | #3324339        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V1     | #3324341        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V2     | #3324336        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM            | Neoverse-V3     | #3312417        | ARM64_ERRATUM_3194386       |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | MMU-500         | #841119,826419  | N/A                         |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM            | MMU-600         | #1076982,1209401| N/A                         |
+diff --git a/Makefile b/Makefile
+index 0dd963d6d8d26..08ca316cb46dc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 1
+-SUBLEVEL = 104
++SUBLEVEL = 105
+ EXTRAVERSION =
+ NAME = Curry Ramen
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 044b98a62f7bb..2ef939075039d 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1000,6 +1000,44 @@ config ARM64_ERRATUM_2966298
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_3194386
++	bool "Cortex-*/Neoverse-*: workaround for MSR SSBS not self-synchronizing"
++	default y
++	help
++	  This option adds the workaround for the following errata:
++
++	  * ARM Cortex-A76 erratum 3324349
++	  * ARM Cortex-A77 erratum 3324348
++	  * ARM Cortex-A78 erratum 3324344
++	  * ARM Cortex-A78C erratum 3324346
++	  * ARM Cortex-A78C erratum 3324347
++	  * ARM Cortex-A710 erratum 3324338
++	  * ARM Cortex-A720 erratum 3456091
++	  * ARM Cortex-A725 erratum 3456106
++	  * ARM Cortex-X1 erratum 3324344
++	  * ARM Cortex-X1C erratum 3324346
++	  * ARM Cortex-X2 erratum 3324338
++	  * ARM Cortex-X3 erratum 3324335
++	  * ARM Cortex-X4 erratum 3194386
++	  * ARM Cortex-X925 erratum 3324334
++	  * ARM Neoverse-N1 erratum 3324349
++	  * ARM Neoverse-N2 erratum 3324339
++	  * ARM Neoverse-V1 erratum 3324341
++	  * ARM Neoverse-V2 erratum 3324336
++	  * ARM Neoverse-V3 erratum 3312417
++
++	  On affected cores "MSR SSBS, #0" instructions may not affect
++	  subsequent speculative instructions, which may permit unexpected
++	  speculative store bypassing.
++
++	  Work around this problem by placing a Speculation Barrier (SB) or
++	  Instruction Synchronization Barrier (ISB) after kernel changes to
++	  SSBS. The presence of the SSBS special-purpose register is hidden
++	  from hwcaps and EL0 reads of ID_AA64PFR1_EL1, such that userspace
++	  will use the PR_SPEC_STORE_BYPASS prctl to change SSBS.
++
++	  If unsure, say Y.
++ + config CAVIUM_ERRATUM_22375 + bool "Cavium erratum 22375, 24313" + default y +diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h +index 2cfc4245d2e2d..e22ec44d50f5f 100644 +--- a/arch/arm64/include/asm/barrier.h ++++ b/arch/arm64/include/asm/barrier.h +@@ -38,6 +38,10 @@ + */ + #define dgh() asm volatile("hint #6" : : : "memory") + ++#define spec_bar() asm volatile(ALTERNATIVE("dsb nsh\nisb\n", \ ++ SB_BARRIER_INSN"nop\n", \ ++ ARM64_HAS_SB)) ++ + #ifdef CONFIG_ARM64_PSEUDO_NMI + #define pmr_sync() \ + do { \ +diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h +index af3a678a76b3a..a0a028a6b9670 100644 +--- a/arch/arm64/include/asm/cputype.h ++++ b/arch/arm64/include/asm/cputype.h +@@ -85,6 +85,14 @@ + #define ARM_CPU_PART_CORTEX_X2 0xD48 + #define ARM_CPU_PART_NEOVERSE_N2 0xD49 + #define ARM_CPU_PART_CORTEX_A78C 0xD4B ++#define ARM_CPU_PART_CORTEX_X1C 0xD4C ++#define ARM_CPU_PART_CORTEX_X3 0xD4E ++#define ARM_CPU_PART_NEOVERSE_V2 0xD4F ++#define ARM_CPU_PART_CORTEX_A720 0xD81 ++#define ARM_CPU_PART_CORTEX_X4 0xD82 ++#define ARM_CPU_PART_NEOVERSE_V3 0xD84 ++#define ARM_CPU_PART_CORTEX_X925 0xD85 ++#define ARM_CPU_PART_CORTEX_A725 0xD87 + + #define APM_CPU_PART_XGENE 0x000 + #define APM_CPU_VAR_POTENZA 0x00 +@@ -151,6 +159,14 @@ + #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2) + #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) + #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C) ++#define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C) ++#define MIDR_CORTEX_X3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X3) ++#define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2) ++#define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720) ++#define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4) ++#define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3) ++#define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925) ++#define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725) + #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) + #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) + #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX) +diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c +index 74584597bfb82..7640031e1b845 100644 +--- a/arch/arm64/kernel/cpu_errata.c ++++ b/arch/arm64/kernel/cpu_errata.c +@@ -435,6 +435,30 @@ static struct midr_range broken_aarch32_aes[] = { + }; + #endif /* CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE */ + ++#ifdef CONFIG_ARM64_ERRATUM_3194386 ++static const struct midr_range erratum_spec_ssbs_list[] = { ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A76), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A77), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A720), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A725), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1C), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X2), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X3), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X4), ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X925), ++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1), ++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), ++ 
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1), ++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2), ++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V3), ++ {} ++}; ++#endif ++ + const struct arm64_cpu_capabilities arm64_errata[] = { + #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE + { +@@ -726,6 +750,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = { + .cpu_enable = cpu_clear_bf16_from_user_emulation, + }, + #endif ++#ifdef CONFIG_ARM64_ERRATUM_3194386 ++ { ++ .desc = "SSBS not fully self-synchronizing", ++ .capability = ARM64_WORKAROUND_SPECULATIVE_SSBS, ++ ERRATA_MIDR_RANGE_LIST(erratum_spec_ssbs_list), ++ }, ++#endif + #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD + { + .desc = "ARM erratum 2966298", +diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c +index 770a31c6ed81b..840cc48b5147b 100644 +--- a/arch/arm64/kernel/cpufeature.c ++++ b/arch/arm64/kernel/cpufeature.c +@@ -2084,6 +2084,17 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap) + } + #endif /* CONFIG_ARM64_MTE */ + ++static void user_feature_fixup(void) ++{ ++ if (cpus_have_cap(ARM64_WORKAROUND_SPECULATIVE_SSBS)) { ++ struct arm64_ftr_reg *regp; ++ ++ regp = get_arm64_ftr_reg(SYS_ID_AA64PFR1_EL1); ++ if (regp) ++ regp->user_mask &= ~ID_AA64PFR1_EL1_SSBS_MASK; ++ } ++} ++ + static void elf_hwcap_fixup(void) + { + #ifdef CONFIG_ARM64_ERRATUM_1742098 +@@ -3284,6 +3295,7 @@ void __init setup_cpu_features(void) + u32 cwg; + + setup_system_capabilities(); ++ user_feature_fixup(); + setup_elf_hwcaps(arm64_elf_hwcaps); + + if (system_supports_32bit_el0()) { +diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c +index bfce41c2a53b3..2df5e43ae4d14 100644 +--- a/arch/arm64/kernel/proton-pack.c ++++ b/arch/arm64/kernel/proton-pack.c +@@ -570,6 +570,18 @@ static enum mitigation_state spectre_v4_enable_hw_mitigation(void) + + /* SCTLR_EL1.DSSBS was initialised to 0 during boot */ + set_pstate_ssbs(0); ++ ++ /* ++ * SSBS is self-synchronizing and is intended to affect subsequent ++ * speculative instructions, but some CPUs can speculate with a stale ++ * value of SSBS. ++ * ++ * Mitigate this with an unconditional speculation barrier, as CPUs ++ * could mis-speculate branches and bypass a conditional barrier. 
++ */ ++ if (IS_ENABLED(CONFIG_ARM64_ERRATUM_3194386)) ++ spec_bar(); ++ + return SPECTRE_MITIGATED; + } + +diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps +index 2dc7ddee5f044..f5c9a4f5f7226 100644 +--- a/arch/arm64/tools/cpucaps ++++ b/arch/arm64/tools/cpucaps +@@ -86,4 +86,5 @@ WORKAROUND_NVIDIA_CARMEL_CNP + WORKAROUND_QCOM_FALKOR_E1003 + WORKAROUND_REPEAT_TLBI + WORKAROUND_SPECULATIVE_AT ++WORKAROUND_SPECULATIVE_SSBS + WORKAROUND_SPECULATIVE_UNPRIV_LOAD +diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c +index 2746cac9d8a94..dbebb646c9fc6 100644 +--- a/arch/x86/kernel/cpu/mtrr/mtrr.c ++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c +@@ -816,7 +816,7 @@ void mtrr_save_state(void) + { + int first_cpu; + +- if (!mtrr_enabled()) ++ if (!mtrr_enabled() || !mtrr_state.have_fixed) + return; + + first_cpu = cpumask_first(cpu_online_mask); +diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c +index 78414c6d1b5ed..7b804c34c0201 100644 +--- a/arch/x86/mm/pti.c ++++ b/arch/x86/mm/pti.c +@@ -374,14 +374,14 @@ pti_clone_pgtable(unsigned long start, unsigned long end, + */ + *target_pmd = *pmd; + +- addr += PMD_SIZE; ++ addr = round_up(addr + 1, PMD_SIZE); + + } else if (level == PTI_CLONE_PTE) { + + /* Walk the page-table down to the pte level */ + pte = pte_offset_kernel(pmd, addr); + if (pte_none(*pte)) { +- addr += PAGE_SIZE; ++ addr = round_up(addr + 1, PAGE_SIZE); + continue; + } + +@@ -401,7 +401,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end, + /* Clone the PTE */ + *target_pte = *pte; + +- addr += PAGE_SIZE; ++ addr = round_up(addr + 1, PAGE_SIZE); + + } else { + BUG(); +@@ -496,7 +496,7 @@ static void pti_clone_entry_text(void) + { + pti_clone_pgtable((unsigned long) __entry_text_start, + (unsigned long) __entry_text_end, +- PTI_CLONE_PMD); ++ PTI_LEVEL_KERNEL_IMAGE); + } + + /* +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 3afa5c8d165b1..daf0e4f3444e7 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -439,6 +439,7 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data, + + static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data) + { ++ void (*limit_depth)(blk_opf_t, struct blk_mq_alloc_data *) = NULL; + struct request_queue *q = data->q; + u64 alloc_time_ns = 0; + struct request *rq; +@@ -465,7 +466,7 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data) + !blk_op_is_passthrough(data->cmd_flags) && + e->type->ops.limit_depth && + !(data->flags & BLK_MQ_REQ_RESERVED)) +- e->type->ops.limit_depth(data->cmd_flags, data); ++ limit_depth = e->type->ops.limit_depth; + } + + retry: +@@ -477,6 +478,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data) + if (data->flags & BLK_MQ_REQ_RESERVED) + data->rq_flags |= RQF_RESV; + ++ if (limit_depth) ++ limit_depth(data->cmd_flags, data); ++ + /* + * Try batched alloc if we want more than 1 tag. + */ +diff --git a/block/mq-deadline.c b/block/mq-deadline.c +index f10c2a0d18d41..ff029e1acc4d5 100644 +--- a/block/mq-deadline.c ++++ b/block/mq-deadline.c +@@ -597,6 +597,20 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx) + return rq; + } + ++/* ++ * 'depth' is a number in the range 1..INT_MAX representing a number of ++ * requests. Scale it with a factor (1 << bt->sb.shift) / q->nr_requests since ++ * 1..(1 << bt->sb.shift) is the range expected by sbitmap_get_shallow(). ++ * Values larger than q->nr_requests have the same effect as q->nr_requests. 
++ */ ++static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth) ++{ ++ struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags; ++ const unsigned int nrr = hctx->queue->nr_requests; ++ ++ return ((qdepth << bt->sb.shift) + nrr - 1) / nrr; ++} ++ + /* + * Called by __blk_mq_alloc_request(). The shallow_depth value set by this + * function is used by __blk_mq_get_tag(). +@@ -613,7 +627,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data) + * Throttle asynchronous requests and writes such that these requests + * do not block the allocation of synchronous requests. + */ +- data->shallow_depth = dd->async_depth; ++ data->shallow_depth = dd_to_word_depth(data->hctx, dd->async_depth); + } + + /* Called by blk_mq_update_nr_requests(). */ +@@ -623,9 +637,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx) + struct deadline_data *dd = q->elevator->elevator_data; + struct blk_mq_tags *tags = hctx->sched_tags; + +- dd->async_depth = max(1UL, 3 * q->nr_requests / 4); ++ dd->async_depth = q->nr_requests; + +- sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth); ++ sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1); + } + + /* Called by blk_mq_init_hctx() and blk_mq_init_sched(). */ +diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c +index 084f156bdfbc4..088740fdea355 100644 +--- a/drivers/acpi/battery.c ++++ b/drivers/acpi/battery.c +@@ -667,12 +667,18 @@ static ssize_t acpi_battery_alarm_store(struct device *dev, + return count; + } + +-static const struct device_attribute alarm_attr = { ++static struct device_attribute alarm_attr = { + .attr = {.name = "alarm", .mode = 0644}, + .show = acpi_battery_alarm_show, + .store = acpi_battery_alarm_store, + }; + ++static struct attribute *acpi_battery_attrs[] = { ++ &alarm_attr.attr, ++ NULL ++}; ++ATTRIBUTE_GROUPS(acpi_battery); ++ + /* + * The Battery Hooking API + * +@@ -809,7 +815,10 @@ static void __exit battery_hook_exit(void) + + static int sysfs_add_battery(struct acpi_battery *battery) + { +- struct power_supply_config psy_cfg = { .drv_data = battery, }; ++ struct power_supply_config psy_cfg = { ++ .drv_data = battery, ++ .attr_grp = acpi_battery_groups, ++ }; + bool full_cap_broken = false; + + if (!ACPI_BATTERY_CAPACITY_VALID(battery->full_charge_capacity) && +@@ -854,7 +863,7 @@ static int sysfs_add_battery(struct acpi_battery *battery) + return result; + } + battery_hook_add_battery(battery); +- return device_create_file(&battery->bat->dev, &alarm_attr); ++ return 0; + } + + static void sysfs_remove_battery(struct acpi_battery *battery) +@@ -865,7 +874,6 @@ static void sysfs_remove_battery(struct acpi_battery *battery) + return; + } + battery_hook_remove_battery(battery); +- device_remove_file(&battery->bat->dev, &alarm_attr); + power_supply_unregister(battery->bat); + battery->bat = NULL; + mutex_unlock(&battery->sysfs_lock); +diff --git a/drivers/acpi/sbs.c b/drivers/acpi/sbs.c +index e6a01a8df1b81..7c0eba1a37d87 100644 +--- a/drivers/acpi/sbs.c ++++ b/drivers/acpi/sbs.c +@@ -77,7 +77,6 @@ struct acpi_battery { + u16 spec; + u8 id; + u8 present:1; +- u8 have_sysfs_alarm:1; + }; + + #define to_acpi_battery(x) power_supply_get_drvdata(x) +@@ -462,12 +461,18 @@ static ssize_t acpi_battery_alarm_store(struct device *dev, + return count; + } + +-static const struct device_attribute alarm_attr = { ++static struct device_attribute alarm_attr = { + .attr = {.name = "alarm", .mode = 0644}, + .show = acpi_battery_alarm_show, + .store = acpi_battery_alarm_store, + }; 
+ ++static struct attribute *acpi_battery_attrs[] = { ++ &alarm_attr.attr, ++ NULL ++}; ++ATTRIBUTE_GROUPS(acpi_battery); ++ + /* -------------------------------------------------------------------------- + Driver Interface + -------------------------------------------------------------------------- */ +@@ -509,7 +514,10 @@ static int acpi_battery_read(struct acpi_battery *battery) + static int acpi_battery_add(struct acpi_sbs *sbs, int id) + { + struct acpi_battery *battery = &sbs->battery[id]; +- struct power_supply_config psy_cfg = { .drv_data = battery, }; ++ struct power_supply_config psy_cfg = { ++ .drv_data = battery, ++ .attr_grp = acpi_battery_groups, ++ }; + int result; + + battery->id = id; +@@ -539,10 +547,6 @@ static int acpi_battery_add(struct acpi_sbs *sbs, int id) + goto end; + } + +- result = device_create_file(&battery->bat->dev, &alarm_attr); +- if (result) +- goto end; +- battery->have_sysfs_alarm = 1; + end: + pr_info("%s [%s]: Battery Slot [%s] (battery %s)\n", + ACPI_SBS_DEVICE_NAME, acpi_device_bid(sbs->device), +@@ -554,11 +558,8 @@ static void acpi_battery_remove(struct acpi_sbs *sbs, int id) + { + struct acpi_battery *battery = &sbs->battery[id]; + +- if (battery->bat) { +- if (battery->have_sysfs_alarm) +- device_remove_file(&battery->bat->dev, &alarm_attr); ++ if (battery->bat) + power_supply_unregister(battery->bat); +- } + } + + static int acpi_charger_add(struct acpi_sbs *sbs) +diff --git a/drivers/base/core.c b/drivers/base/core.c +index 30204e62497c2..1de19cecaa622 100644 +--- a/drivers/base/core.c ++++ b/drivers/base/core.c +@@ -25,6 +25,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -2558,6 +2559,7 @@ static const char *dev_uevent_name(struct kobject *kobj) + static int dev_uevent(struct kobject *kobj, struct kobj_uevent_env *env) + { + struct device *dev = kobj_to_dev(kobj); ++ struct device_driver *driver; + int retval = 0; + + /* add device node properties if present */ +@@ -2586,8 +2588,12 @@ static int dev_uevent(struct kobject *kobj, struct kobj_uevent_env *env) + if (dev->type && dev->type->name) + add_uevent_var(env, "DEVTYPE=%s", dev->type->name); + +- if (dev->driver) +- add_uevent_var(env, "DRIVER=%s", dev->driver->name); ++ /* Synchronize with module_remove_driver() */ ++ rcu_read_lock(); ++ driver = READ_ONCE(dev->driver); ++ if (driver) ++ add_uevent_var(env, "DRIVER=%s", driver->name); ++ rcu_read_unlock(); + + /* Add common DT information about the device */ + of_device_uevent(dev, env); +@@ -2657,11 +2663,8 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr, + if (!env) + return -ENOMEM; + +- /* Synchronize with really_probe() */ +- device_lock(dev); + /* let the kset specific function add its keys */ + retval = kset->uevent_ops->uevent(&dev->kobj, env); +- device_unlock(dev); + if (retval) + goto out; + +diff --git a/drivers/base/module.c b/drivers/base/module.c +index 46ad4d636731d..851cc5367c04c 100644 +--- a/drivers/base/module.c ++++ b/drivers/base/module.c +@@ -7,6 +7,7 @@ + #include + #include + #include ++#include + #include "base.h" + + static char *make_driver_name(struct device_driver *drv) +@@ -77,6 +78,9 @@ void module_remove_driver(struct device_driver *drv) + if (!drv) + return; + ++ /* Synchronize with dev_uevent() */ ++ synchronize_rcu(); ++ + sysfs_remove_link(&drv->p->kobj, "module"); + + if (drv->owner) +diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c +index caa4ce28cf9e5..d01080b21da6e 100644 +--- 
a/drivers/bus/mhi/host/pci_generic.c ++++ b/drivers/bus/mhi/host/pci_generic.c +@@ -553,6 +553,9 @@ static const struct pci_device_id mhi_pci_id_table[] = { + /* Telit FN990 */ + { PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2010), + .driver_data = (kernel_ulong_t) &mhi_telit_fn990_info }, ++ /* Telit FE990 */ ++ { PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2015), ++ .driver_data = (kernel_ulong_t) &mhi_telit_fn990_info }, + { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308), + .driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info }, + { PCI_DEVICE(0x1eac, 0x1001), /* EM120R-GL (sdx24) */ +diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c +index 7b952aa52c0b9..7a2b83157bf5f 100644 +--- a/drivers/clocksource/sh_cmt.c ++++ b/drivers/clocksource/sh_cmt.c +@@ -529,6 +529,7 @@ static void sh_cmt_set_next(struct sh_cmt_channel *ch, unsigned long delta) + static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id) + { + struct sh_cmt_channel *ch = dev_id; ++ unsigned long flags; + + /* clear flags */ + sh_cmt_write_cmcsr(ch, sh_cmt_read_cmcsr(ch) & +@@ -559,6 +560,8 @@ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id) + + ch->flags &= ~FLAG_SKIPEVENT; + ++ raw_spin_lock_irqsave(&ch->lock, flags); ++ + if (ch->flags & FLAG_REPROGRAM) { + ch->flags &= ~FLAG_REPROGRAM; + sh_cmt_clock_event_program_verify(ch, 1); +@@ -571,6 +574,8 @@ static irqreturn_t sh_cmt_interrupt(int irq, void *dev_id) + + ch->flags &= ~FLAG_IRQCONTEXT; + ++ raw_spin_unlock_irqrestore(&ch->lock, flags); ++ + return IRQ_HANDLED; + } + +@@ -781,12 +786,18 @@ static int sh_cmt_clock_event_next(unsigned long delta, + struct clock_event_device *ced) + { + struct sh_cmt_channel *ch = ced_to_sh_cmt(ced); ++ unsigned long flags; + + BUG_ON(!clockevent_state_oneshot(ced)); ++ ++ raw_spin_lock_irqsave(&ch->lock, flags); ++ + if (likely(ch->flags & FLAG_IRQCONTEXT)) + ch->next_match_value = delta - 1; + else +- sh_cmt_set_next(ch, delta - 1); ++ __sh_cmt_set_next(ch, delta - 1); ++ ++ raw_spin_unlock_irqrestore(&ch->lock, flags); + + return 0; + } +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +index d4faa489bd5fa..4d1c2eb63090f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +@@ -3631,6 +3631,7 @@ int amdgpu_device_init(struct amdgpu_device *adev, + mutex_init(&adev->grbm_idx_mutex); + mutex_init(&adev->mn_lock); + mutex_init(&adev->virt.vf_errors.lock); ++ mutex_init(&adev->virt.rlcg_reg_lock); + hash_init(adev->mn_hash); + mutex_init(&adev->psp.mutex); + mutex_init(&adev->notifier_lock); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c +index ee83d282b49a8..4b7b3278a05f1 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c +@@ -1679,12 +1679,15 @@ static void amdgpu_ras_interrupt_process_handler(struct work_struct *work) + int amdgpu_ras_interrupt_dispatch(struct amdgpu_device *adev, + struct ras_dispatch_if *info) + { +- struct ras_manager *obj = amdgpu_ras_find_obj(adev, &info->head); +- struct ras_ih_data *data = &obj->ih_data; ++ struct ras_manager *obj; ++ struct ras_ih_data *data; + ++ obj = amdgpu_ras_find_obj(adev, &info->head); + if (!obj) + return -EINVAL; + ++ data = &obj->ih_data; ++ + if (data->inuse == 0) + return 0; + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c +index 81549f1edfe01..5ee9211c503c4 100644 +--- 
a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c +@@ -956,6 +956,9 @@ static u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v + scratch_reg1 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg1; + scratch_reg2 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg2; + scratch_reg3 = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->scratch_reg3; ++ ++ mutex_lock(&adev->virt.rlcg_reg_lock); ++ + if (reg_access_ctrl->spare_int) + spare_int = (void __iomem *)adev->rmmio + 4 * reg_access_ctrl->spare_int; + +@@ -1009,6 +1012,9 @@ static u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v + } + + ret = readl(scratch_reg0); ++ ++ mutex_unlock(&adev->virt.rlcg_reg_lock); ++ + return ret; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h +index 2b9d806e23afb..dc6aaa4d67be7 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h +@@ -260,6 +260,8 @@ struct amdgpu_virt { + + /* the ucode id to signal the autoload */ + uint32_t autoload_ucode_id; ++ ++ struct mutex rlcg_reg_lock; + }; + + struct amdgpu_video_codec_info; +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 31bae620aeffc..6189685af1fda 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -2636,7 +2636,8 @@ static int dm_suspend(void *handle) + + dm->cached_dc_state = dc_copy_state(dm->dc->current_state); + +- dm_gpureset_toggle_interrupts(adev, dm->cached_dc_state, false); ++ if (dm->cached_dc_state) ++ dm_gpureset_toggle_interrupts(adev, dm->cached_dc_state, false); + + amdgpu_dm_commit_zero_streams(dm->dc); + +@@ -6388,7 +6389,8 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector) + aconnector->dc_sink = aconnector->dc_link->local_sink ? 
+ aconnector->dc_link->local_sink : + aconnector->dc_em_sink; +- dc_sink_retain(aconnector->dc_sink); ++ if (aconnector->dc_sink) ++ dc_sink_retain(aconnector->dc_sink); + } + } + +@@ -7121,7 +7123,8 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector) + drm_add_modes_noedid(connector, 640, 480); + } else { + amdgpu_dm_connector_ddc_get_modes(connector, edid); +- amdgpu_dm_connector_add_common_modes(encoder, connector); ++ if (encoder) ++ amdgpu_dm_connector_add_common_modes(encoder, connector); + amdgpu_dm_connector_add_freesync_modes(connector, edid); + } + amdgpu_dm_fbc_init(connector); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +index a9ddff774a978..b41a188007b8c 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +@@ -1255,6 +1255,9 @@ static bool is_dsc_need_re_compute( + } + } + ++ if (new_stream_on_link_num == 0) ++ return false; ++ + /* check current_state if there stream on link but it is not in + * new request state + */ +diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c +index 179e1c593a53f..f3668911a88fd 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c +@@ -928,7 +928,7 @@ static int pp_dpm_switch_power_profile(void *handle, + enum PP_SMC_POWER_PROFILE type, bool en) + { + struct pp_hwmgr *hwmgr = handle; +- long workload; ++ long workload[1]; + uint32_t index; + + if (!hwmgr || !hwmgr->pm_en) +@@ -946,12 +946,12 @@ static int pp_dpm_switch_power_profile(void *handle, + hwmgr->workload_mask &= ~(1 << hwmgr->workload_prority[type]); + index = fls(hwmgr->workload_mask); + index = index > 0 && index <= Workload_Policy_Max ? index - 1 : 0; +- workload = hwmgr->workload_setting[index]; ++ workload[0] = hwmgr->workload_setting[index]; + } else { + hwmgr->workload_mask |= (1 << hwmgr->workload_prority[type]); + index = fls(hwmgr->workload_mask); + index = index <= Workload_Policy_Max ? index - 1 : 0; +- workload = hwmgr->workload_setting[index]; ++ workload[0] = hwmgr->workload_setting[index]; + } + + if (type == PP_SMC_POWER_PROFILE_COMPUTE && +@@ -961,7 +961,7 @@ static int pp_dpm_switch_power_profile(void *handle, + } + + if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL) +- hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, &workload, 0); ++ hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, workload, 0); + + return 0; + } +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c +index 1d829402cd2e2..f4bd8e9357e22 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/pp_psm.c +@@ -269,7 +269,7 @@ int psm_adjust_power_state_dynamic(struct pp_hwmgr *hwmgr, bool skip_display_set + struct pp_power_state *new_ps) + { + uint32_t index; +- long workload; ++ long workload[1]; + + if (hwmgr->not_vf) { + if (!skip_display_settings) +@@ -294,10 +294,10 @@ int psm_adjust_power_state_dynamic(struct pp_hwmgr *hwmgr, bool skip_display_set + if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL) { + index = fls(hwmgr->workload_mask); + index = index > 0 && index <= Workload_Policy_Max ? 
index - 1 : 0; +- workload = hwmgr->workload_setting[index]; ++ workload[0] = hwmgr->workload_setting[index]; + +- if (hwmgr->power_profile_mode != workload && hwmgr->hwmgr_func->set_power_profile_mode) +- hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, &workload, 0); ++ if (hwmgr->power_profile_mode != workload[0] && hwmgr->hwmgr_func->set_power_profile_mode) ++ hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, workload, 0); + } + + return 0; +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c +index 5e9410117712c..750b7527bdf83 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c +@@ -2970,6 +2970,7 @@ static int smu7_update_edc_leakage_table(struct pp_hwmgr *hwmgr) + + static int smu7_hwmgr_backend_init(struct pp_hwmgr *hwmgr) + { ++ struct amdgpu_device *adev = hwmgr->adev; + struct smu7_hwmgr *data; + int result = 0; + +@@ -3006,40 +3007,37 @@ static int smu7_hwmgr_backend_init(struct pp_hwmgr *hwmgr) + /* Initalize Dynamic State Adjustment Rule Settings */ + result = phm_initializa_dynamic_state_adjustment_rule_settings(hwmgr); + +- if (0 == result) { +- struct amdgpu_device *adev = hwmgr->adev; ++ if (result) ++ goto fail; + +- data->is_tlu_enabled = false; ++ data->is_tlu_enabled = false; + +- hwmgr->platform_descriptor.hardwareActivityPerformanceLevels = ++ hwmgr->platform_descriptor.hardwareActivityPerformanceLevels = + SMU7_MAX_HARDWARE_POWERLEVELS; +- hwmgr->platform_descriptor.hardwarePerformanceLevels = 2; +- hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50; ++ hwmgr->platform_descriptor.hardwarePerformanceLevels = 2; ++ hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50; + +- data->pcie_gen_cap = adev->pm.pcie_gen_mask; +- if (data->pcie_gen_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) +- data->pcie_spc_cap = 20; +- else +- data->pcie_spc_cap = 16; +- data->pcie_lane_cap = adev->pm.pcie_mlw_mask; +- +- hwmgr->platform_descriptor.vbiosInterruptId = 0x20000400; /* IRQ_SOURCE1_SW_INT */ +-/* The true clock step depends on the frequency, typically 4.5 or 9 MHz. Here we use 5. */ +- hwmgr->platform_descriptor.clockStep.engineClock = 500; +- hwmgr->platform_descriptor.clockStep.memoryClock = 500; +- smu7_thermal_parameter_init(hwmgr); +- } else { +- /* Ignore return value in here, we are cleaning up a mess. */ +- smu7_hwmgr_backend_fini(hwmgr); +- } ++ data->pcie_gen_cap = adev->pm.pcie_gen_mask; ++ if (data->pcie_gen_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) ++ data->pcie_spc_cap = 20; ++ else ++ data->pcie_spc_cap = 16; ++ data->pcie_lane_cap = adev->pm.pcie_mlw_mask; ++ ++ hwmgr->platform_descriptor.vbiosInterruptId = 0x20000400; /* IRQ_SOURCE1_SW_INT */ ++ /* The true clock step depends on the frequency, typically 4.5 or 9 MHz. Here we use 5. 
*/ ++ hwmgr->platform_descriptor.clockStep.engineClock = 500; ++ hwmgr->platform_descriptor.clockStep.memoryClock = 500; ++ smu7_thermal_parameter_init(hwmgr); + + result = smu7_update_edc_leakage_table(hwmgr); +- if (result) { +- smu7_hwmgr_backend_fini(hwmgr); +- return result; +- } ++ if (result) ++ goto fail; + + return 0; ++fail: ++ smu7_hwmgr_backend_fini(hwmgr); ++ return result; + } + + static int smu7_force_dpm_highest(struct pp_hwmgr *hwmgr) +@@ -3329,8 +3327,7 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr, + const struct pp_power_state *current_ps) + { + struct amdgpu_device *adev = hwmgr->adev; +- struct smu7_power_state *smu7_ps = +- cast_phw_smu7_power_state(&request_ps->hardware); ++ struct smu7_power_state *smu7_ps; + uint32_t sclk; + uint32_t mclk; + struct PP_Clocks minimum_clocks = {0}; +@@ -3347,6 +3344,10 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr, + uint32_t latency; + bool latency_allowed = false; + ++ smu7_ps = cast_phw_smu7_power_state(&request_ps->hardware); ++ if (!smu7_ps) ++ return -EINVAL; ++ + data->battery_state = (PP_StateUILabel_Battery == + request_ps->classification.ui_label); + data->mclk_ignore_signal = false; +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c +index b015a601b385a..eb744401e0567 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c +@@ -1065,16 +1065,18 @@ static int smu8_apply_state_adjust_rules(struct pp_hwmgr *hwmgr, + struct pp_power_state *prequest_ps, + const struct pp_power_state *pcurrent_ps) + { +- struct smu8_power_state *smu8_ps = +- cast_smu8_power_state(&prequest_ps->hardware); +- +- const struct smu8_power_state *smu8_current_ps = +- cast_const_smu8_power_state(&pcurrent_ps->hardware); +- ++ struct smu8_power_state *smu8_ps; ++ const struct smu8_power_state *smu8_current_ps; + struct smu8_hwmgr *data = hwmgr->backend; + struct PP_Clocks clocks = {0, 0, 0, 0}; + bool force_high; + ++ smu8_ps = cast_smu8_power_state(&prequest_ps->hardware); ++ smu8_current_ps = cast_const_smu8_power_state(&pcurrent_ps->hardware); ++ ++ if (!smu8_ps || !smu8_current_ps) ++ return -EINVAL; ++ + smu8_ps->need_dfs_bypass = true; + + data->battery_state = (PP_StateUILabel_Battery == prequest_ps->classification.ui_label); +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c +index d8cd23438b762..f8333410cc3e4 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c +@@ -3263,8 +3263,7 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr, + const struct pp_power_state *current_ps) + { + struct amdgpu_device *adev = hwmgr->adev; +- struct vega10_power_state *vega10_ps = +- cast_phw_vega10_power_state(&request_ps->hardware); ++ struct vega10_power_state *vega10_ps; + uint32_t sclk; + uint32_t mclk; + struct PP_Clocks minimum_clocks = {0}; +@@ -3282,6 +3281,10 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr, + uint32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0; + uint32_t latency; + ++ vega10_ps = cast_phw_vega10_power_state(&request_ps->hardware); ++ if (!vega10_ps) ++ return -EINVAL; ++ + data->battery_state = (PP_StateUILabel_Battery == + request_ps->classification.ui_label); + +@@ -3419,13 +3422,17 @@ static int vega10_find_dpm_states_clocks_in_dpm_table(struct pp_hwmgr *hwmgr, co + 
const struct vega10_power_state *vega10_ps = + cast_const_phw_vega10_power_state(states->pnew_state); + struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table); +- uint32_t sclk = vega10_ps->performance_levels +- [vega10_ps->performance_level_count - 1].gfx_clock; + struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table); +- uint32_t mclk = vega10_ps->performance_levels +- [vega10_ps->performance_level_count - 1].mem_clock; ++ uint32_t sclk, mclk; + uint32_t i; + ++ if (vega10_ps == NULL) ++ return -EINVAL; ++ sclk = vega10_ps->performance_levels ++ [vega10_ps->performance_level_count - 1].gfx_clock; ++ mclk = vega10_ps->performance_levels ++ [vega10_ps->performance_level_count - 1].mem_clock; ++ + for (i = 0; i < sclk_table->count; i++) { + if (sclk == sclk_table->dpm_levels[i].value) + break; +@@ -3732,6 +3739,9 @@ static int vega10_generate_dpm_level_enable_mask( + cast_const_phw_vega10_power_state(states->pnew_state); + int i; + ++ if (vega10_ps == NULL) ++ return -EINVAL; ++ + PP_ASSERT_WITH_CODE(!vega10_trim_dpm_states(hwmgr, vega10_ps), + "Attempt to Trim DPM States Failed!", + return -1); +@@ -4999,6 +5009,8 @@ static int vega10_check_states_equal(struct pp_hwmgr *hwmgr, + + vega10_psa = cast_const_phw_vega10_power_state(pstate1); + vega10_psb = cast_const_phw_vega10_power_state(pstate2); ++ if (vega10_psa == NULL || vega10_psb == NULL) ++ return -EINVAL; + + /* If the two states don't even have the same number of performance levels + * they cannot be the same state. +@@ -5132,6 +5144,8 @@ static int vega10_set_sclk_od(struct pp_hwmgr *hwmgr, uint32_t value) + return -EINVAL; + + vega10_ps = cast_phw_vega10_power_state(&ps->hardware); ++ if (vega10_ps == NULL) ++ return -EINVAL; + + vega10_ps->performance_levels + [vega10_ps->performance_level_count - 1].gfx_clock = +@@ -5183,6 +5197,8 @@ static int vega10_set_mclk_od(struct pp_hwmgr *hwmgr, uint32_t value) + return -EINVAL; + + vega10_ps = cast_phw_vega10_power_state(&ps->hardware); ++ if (vega10_ps == NULL) ++ return -EINVAL; + + vega10_ps->performance_levels + [vega10_ps->performance_level_count - 1].mem_clock = +@@ -5424,6 +5440,9 @@ static void vega10_odn_update_power_state(struct pp_hwmgr *hwmgr) + return; + + vega10_ps = cast_phw_vega10_power_state(&ps->hardware); ++ if (vega10_ps == NULL) ++ return; ++ + max_level = vega10_ps->performance_level_count - 1; + + if (vega10_ps->performance_levels[max_level].gfx_clock != +@@ -5446,6 +5465,9 @@ static void vega10_odn_update_power_state(struct pp_hwmgr *hwmgr) + + ps = (struct pp_power_state *)((unsigned long)(hwmgr->ps) + hwmgr->ps_size * (hwmgr->num_ps - 1)); + vega10_ps = cast_phw_vega10_power_state(&ps->hardware); ++ if (vega10_ps == NULL) ++ return; ++ + max_level = vega10_ps->performance_level_count - 1; + + if (vega10_ps->performance_levels[max_level].gfx_clock != +@@ -5636,6 +5658,8 @@ static int vega10_get_performance_level(struct pp_hwmgr *hwmgr, const struct pp_ + return -EINVAL; + + vega10_ps = cast_const_phw_vega10_power_state(state); ++ if (vega10_ps == NULL) ++ return -EINVAL; + + i = index > vega10_ps->performance_level_count - 1 ? 
+ vega10_ps->performance_level_count - 1 : index; +diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c +index 1d0693dad8185..91f0646eb3ee0 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c +@@ -1834,7 +1834,7 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu, + { + int ret = 0; + int index = 0; +- long workload; ++ long workload[1]; + struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm); + + if (!skip_display_settings) { +@@ -1874,10 +1874,10 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu, + smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) { + index = fls(smu->workload_mask); + index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; +- workload = smu->workload_setting[index]; ++ workload[0] = smu->workload_setting[index]; + +- if (smu->power_profile_mode != workload) +- smu_bump_power_profile_mode(smu, &workload, 0); ++ if (smu->power_profile_mode != workload[0]) ++ smu_bump_power_profile_mode(smu, workload, 0); + } + + return ret; +@@ -1927,7 +1927,7 @@ static int smu_switch_power_profile(void *handle, + { + struct smu_context *smu = handle; + struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm); +- long workload; ++ long workload[1]; + uint32_t index; + + if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled) +@@ -1940,17 +1940,17 @@ static int smu_switch_power_profile(void *handle, + smu->workload_mask &= ~(1 << smu->workload_prority[type]); + index = fls(smu->workload_mask); + index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; +- workload = smu->workload_setting[index]; ++ workload[0] = smu->workload_setting[index]; + } else { + smu->workload_mask |= (1 << smu->workload_prority[type]); + index = fls(smu->workload_mask); + index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; +- workload = smu->workload_setting[index]; ++ workload[0] = smu->workload_setting[index]; + } + + if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL && + smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) +- smu_bump_power_profile_mode(smu, &workload, 0); ++ smu_bump_power_profile_mode(smu, workload, 0); + + return 0; + } +diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c +index 6a4f20fccf841..7b0bc9704eacb 100644 +--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c ++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_reg.c +@@ -1027,7 +1027,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp, + u32 status_reg; + u8 *buffer = msg->buffer; + unsigned int i; +- int num_transferred = 0; + int ret; + + /* Buffer size of AUX CH is 16 bytes */ +@@ -1079,7 +1078,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp, + reg = buffer[i]; + writel(reg, dp->reg_base + ANALOGIX_DP_BUF_DATA_0 + + 4 * i); +- num_transferred++; + } + } + +@@ -1127,7 +1125,6 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp, + reg = readl(dp->reg_base + ANALOGIX_DP_BUF_DATA_0 + + 4 * i); + buffer[i] = (unsigned char)reg; +- num_transferred++; + } + } + +@@ -1144,7 +1141,7 @@ ssize_t analogix_dp_transfer(struct analogix_dp_device *dp, + (msg->request & ~DP_AUX_I2C_MOT) == DP_AUX_NATIVE_READ) + msg->reply = DP_AUX_NATIVE_REPLY_ACK; + +- return num_transferred > 0 ? 
num_transferred : -EBUSY; ++ return msg->size; + + aux_error: + /* if aux err happen, reset aux */ +diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c +index 805f8455b8d64..4204d1f930137 100644 +--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c +@@ -4024,6 +4024,7 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { + const struct drm_dp_connection_status_notify *conn_stat = + &up_req->msg.u.conn_stat; ++ bool handle_csn; + + drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", + conn_stat->port_number, +@@ -4032,6 +4033,16 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + conn_stat->message_capability_status, + conn_stat->input_port, + conn_stat->peer_device_type); ++ ++ mutex_lock(&mgr->probe_lock); ++ handle_csn = mgr->mst_primary->link_address_sent; ++ mutex_unlock(&mgr->probe_lock); ++ ++ if (!handle_csn) { ++ drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it."); ++ kfree(up_req); ++ goto out; ++ } + } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) { + const struct drm_dp_resource_status_notify *res_stat = + &up_req->msg.u.resource_stat; +diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c +index 9a65806047b5e..718acff90e2d3 100644 +--- a/drivers/gpu/drm/drm_client_modeset.c ++++ b/drivers/gpu/drm/drm_client_modeset.c +@@ -873,6 +873,11 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width, + + kfree(modeset->mode); + modeset->mode = drm_mode_duplicate(dev, mode); ++ if (!modeset->mode) { ++ ret = -ENOMEM; ++ break; ++ } ++ + drm_connector_get(connector); + modeset->connectors[modeset->num_connectors++] = connector; + modeset->x = offset->x; +diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c +index 39cab4a55f572..53ac94bdf475a 100644 +--- a/drivers/gpu/drm/lima/lima_drv.c ++++ b/drivers/gpu/drm/lima/lima_drv.c +@@ -489,3 +489,4 @@ module_platform_driver(lima_platform_driver); + MODULE_AUTHOR("Lima Project Developers"); + MODULE_DESCRIPTION("Lima DRM Driver"); + MODULE_LICENSE("GPL v2"); ++MODULE_SOFTDEP("pre: governor_simpleondemand"); +diff --git a/drivers/gpu/drm/mgag200/mgag200_i2c.c b/drivers/gpu/drm/mgag200/mgag200_i2c.c +index 0c48bdf3e7f80..f5c5d06d0d4bb 100644 +--- a/drivers/gpu/drm/mgag200/mgag200_i2c.c ++++ b/drivers/gpu/drm/mgag200/mgag200_i2c.c +@@ -31,6 +31,8 @@ + #include + #include + ++#include ++ + #include "mgag200_drv.h" + + static int mga_i2c_read_gpio(struct mga_device *mdev) +@@ -86,7 +88,7 @@ static int mga_gpio_getscl(void *data) + return (mga_i2c_read_gpio(mdev) & i2c->clock) ? 
1 : 0; + } + +-static void mgag200_i2c_release(void *res) ++static void mgag200_i2c_release(struct drm_device *dev, void *res) + { + struct mga_i2c_chan *i2c = res; + +@@ -115,7 +117,7 @@ int mgag200_i2c_init(struct mga_device *mdev, struct mga_i2c_chan *i2c) + i2c->adapter.algo_data = &i2c->bit; + + i2c->bit.udelay = 10; +- i2c->bit.timeout = 2; ++ i2c->bit.timeout = usecs_to_jiffies(2200); + i2c->bit.data = i2c; + i2c->bit.setsda = mga_gpio_setsda; + i2c->bit.setscl = mga_gpio_setscl; +@@ -126,5 +128,5 @@ int mgag200_i2c_init(struct mga_device *mdev, struct mga_i2c_chan *i2c) + if (ret) + return ret; + +- return devm_add_action_or_reset(dev->dev, mgag200_i2c_release, i2c); ++ return drmm_add_action_or_reset(dev, mgag200_i2c_release, i2c); + } +diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c +index 75b9c3f26bba6..cc957655cec24 100644 +--- a/drivers/i2c/busses/i2c-qcom-geni.c ++++ b/drivers/i2c/busses/i2c-qcom-geni.c +@@ -88,6 +88,7 @@ struct geni_i2c_dev { + int cur_wr; + int cur_rd; + spinlock_t lock; ++ struct clk *core_clk; + u32 clk_freq_out; + const struct geni_i2c_clk_fld *clk_fld; + int suspended; +@@ -100,6 +101,13 @@ struct geni_i2c_dev { + bool abort_done; + }; + ++struct geni_i2c_desc { ++ bool has_core_clk; ++ char *icc_ddr; ++ bool no_dma_support; ++ unsigned int tx_fifo_depth; ++}; ++ + struct geni_i2c_err_log { + int err; + const char *msg; +@@ -763,6 +771,7 @@ static int geni_i2c_probe(struct platform_device *pdev) + u32 proto, tx_depth, fifo_disable; + int ret; + struct device *dev = &pdev->dev; ++ const struct geni_i2c_desc *desc = NULL; + + gi2c = devm_kzalloc(dev, sizeof(*gi2c), GFP_KERNEL); + if (!gi2c) +@@ -775,6 +784,14 @@ static int geni_i2c_probe(struct platform_device *pdev) + if (IS_ERR(gi2c->se.base)) + return PTR_ERR(gi2c->se.base); + ++ desc = device_get_match_data(&pdev->dev); ++ ++ if (desc && desc->has_core_clk) { ++ gi2c->core_clk = devm_clk_get(dev, "core"); ++ if (IS_ERR(gi2c->core_clk)) ++ return PTR_ERR(gi2c->core_clk); ++ } ++ + gi2c->se.clk = devm_clk_get(dev, "se"); + if (IS_ERR(gi2c->se.clk) && !has_acpi_companion(dev)) + return PTR_ERR(gi2c->se.clk); +@@ -818,7 +835,7 @@ static int geni_i2c_probe(struct platform_device *pdev) + gi2c->adap.dev.of_node = dev->of_node; + strscpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name)); + +- ret = geni_icc_get(&gi2c->se, "qup-memory"); ++ ret = geni_icc_get(&gi2c->se, desc ? 
desc->icc_ddr : "qup-memory"); + if (ret) + return ret; + /* +@@ -828,36 +845,62 @@ static int geni_i2c_probe(struct platform_device *pdev) + */ + gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW; + gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW; +- gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out); ++ if (!desc || desc->icc_ddr) ++ gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out); + + ret = geni_icc_set_bw(&gi2c->se); + if (ret) + return ret; + ++ ret = clk_prepare_enable(gi2c->core_clk); ++ if (ret) ++ return ret; ++ + ret = geni_se_resources_on(&gi2c->se); + if (ret) { + dev_err(dev, "Error turning on resources %d\n", ret); ++ clk_disable_unprepare(gi2c->core_clk); + return ret; + } + proto = geni_se_read_proto(&gi2c->se); + if (proto != GENI_SE_I2C) { + dev_err(dev, "Invalid proto %d\n", proto); + geni_se_resources_off(&gi2c->se); ++ clk_disable_unprepare(gi2c->core_clk); + return -ENXIO; + } + +- fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; ++ if (desc && desc->no_dma_support) ++ fifo_disable = false; ++ else ++ fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE; ++ + if (fifo_disable) { + /* FIFO is disabled, so we can only use GPI DMA */ + gi2c->gpi_mode = true; + ret = setup_gpi_dma(gi2c); +- if (ret) ++ if (ret) { ++ geni_se_resources_off(&gi2c->se); ++ clk_disable_unprepare(gi2c->core_clk); + return dev_err_probe(dev, ret, "Failed to setup GPI DMA mode\n"); ++ } + + dev_dbg(dev, "Using GPI DMA mode for I2C\n"); + } else { + gi2c->gpi_mode = false; + tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se); ++ ++ /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */ ++ if (!tx_depth && desc) ++ tx_depth = desc->tx_fifo_depth; ++ ++ if (!tx_depth) { ++ dev_err(dev, "Invalid TX FIFO depth\n"); ++ geni_se_resources_off(&gi2c->se); ++ clk_disable_unprepare(gi2c->core_clk); ++ return -EINVAL; ++ } ++ + gi2c->tx_wm = tx_depth - 1; + geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth); + geni_se_config_packing(&gi2c->se, BITS_PER_BYTE, +@@ -866,6 +909,7 @@ static int geni_i2c_probe(struct platform_device *pdev) + dev_dbg(dev, "i2c fifo/se-dma mode. 
fifo depth:%d\n", tx_depth); + } + ++ clk_disable_unprepare(gi2c->core_clk); + ret = geni_se_resources_off(&gi2c->se); + if (ret) { + dev_err(dev, "Error turning off resources %d\n", ret); +@@ -931,6 +975,8 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev) + gi2c->suspended = 1; + } + ++ clk_disable_unprepare(gi2c->core_clk); ++ + return geni_icc_disable(&gi2c->se); + } + +@@ -943,10 +989,17 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev) + if (ret) + return ret; + +- ret = geni_se_resources_on(&gi2c->se); ++ ret = clk_prepare_enable(gi2c->core_clk); + if (ret) + return ret; + ++ ret = geni_se_resources_on(&gi2c->se); ++ if (ret) { ++ clk_disable_unprepare(gi2c->core_clk); ++ geni_icc_disable(&gi2c->se); ++ return ret; ++ } ++ + enable_irq(gi2c->irq); + gi2c->suspended = 0; + return 0; +diff --git a/drivers/i2c/i2c-smbus.c b/drivers/i2c/i2c-smbus.c +index 07c92c8495a3c..8d0b520eb8e8d 100644 +--- a/drivers/i2c/i2c-smbus.c ++++ b/drivers/i2c/i2c-smbus.c +@@ -34,6 +34,7 @@ static int smbus_do_alert(struct device *dev, void *addrp) + struct i2c_client *client = i2c_verify_client(dev); + struct alert_data *data = addrp; + struct i2c_driver *driver; ++ int ret; + + if (!client || client->addr != data->addr) + return 0; +@@ -47,16 +48,47 @@ static int smbus_do_alert(struct device *dev, void *addrp) + device_lock(dev); + if (client->dev.driver) { + driver = to_i2c_driver(client->dev.driver); +- if (driver->alert) ++ if (driver->alert) { ++ /* Stop iterating after we find the device */ + driver->alert(client, data->type, data->data); +- else ++ ret = -EBUSY; ++ } else { + dev_warn(&client->dev, "no driver alert()!\n"); +- } else ++ ret = -EOPNOTSUPP; ++ } ++ } else { + dev_dbg(&client->dev, "alert with no driver\n"); ++ ret = -ENODEV; ++ } ++ device_unlock(dev); ++ ++ return ret; ++} ++ ++/* Same as above, but call back all drivers with alert handler */ ++ ++static int smbus_do_alert_force(struct device *dev, void *addrp) ++{ ++ struct i2c_client *client = i2c_verify_client(dev); ++ struct alert_data *data = addrp; ++ struct i2c_driver *driver; ++ ++ if (!client || (client->flags & I2C_CLIENT_TEN)) ++ return 0; ++ ++ /* ++ * Drivers should either disable alerts, or provide at least ++ * a minimal handler. Lock so the driver won't change. ++ */ ++ device_lock(dev); ++ if (client->dev.driver) { ++ driver = to_i2c_driver(client->dev.driver); ++ if (driver->alert) ++ driver->alert(client, data->type, data->data); ++ } + device_unlock(dev); + +- /* Stop iterating after we find the device */ +- return -EBUSY; ++ return 0; + } + + /* +@@ -67,6 +99,7 @@ static irqreturn_t smbus_alert(int irq, void *d) + { + struct i2c_smbus_alert *alert = d; + struct i2c_client *ara; ++ unsigned short prev_addr = I2C_CLIENT_END; /* Not a valid address */ + + ara = alert->ara; + +@@ -94,8 +127,25 @@ static irqreturn_t smbus_alert(int irq, void *d) + data.addr, data.data); + + /* Notify driver for the device which issued the alert */ +- device_for_each_child(&ara->adapter->dev, &data, +- smbus_do_alert); ++ status = device_for_each_child(&ara->adapter->dev, &data, ++ smbus_do_alert); ++ /* ++ * If we read the same address more than once, and the alert ++ * was not handled by a driver, it won't do any good to repeat ++ * the loop because it will never terminate. Try again, this ++ * time calling the alert handlers of all devices connected to ++ * the bus, and abort the loop afterwards. If this helps, we ++ * are all set. 
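The stuck-alert heuristic in this i2c-smbus hunk is easier to follow outside the driver. A condensed sketch of the loop logic, where read_ara(), notify_driver() and notify_all_drivers() are hypothetical stand-ins for the real Alert Response Address transfer and device iteration:

	unsigned short prev_addr = 0xffff;	/* no valid 7-bit address */

	for (;;) {
		unsigned short addr = read_ara();	/* who is alerting? */
		int status;

		if (addr == 0xffff)
			break;				/* no pending alert */

		status = notify_driver(addr);		/* -EBUSY: handled */

		/*
		 * The same unhandled address twice in a row means the loop
		 * would never terminate: force-call every alert handler
		 * once, then stop.
		 */
		if (addr == prev_addr && status != -EBUSY) {
			notify_all_drivers(addr);
			break;
		}
		prev_addr = addr;
	}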
If it doesn't, there is nothing else we can do, ++ * so we might as well abort the loop. ++ * Note: This assumes that a driver with alert handler handles ++ * the alert properly and clears it if necessary. ++ */ ++ if (data.addr == prev_addr && status != -EBUSY) { ++ device_for_each_child(&ara->adapter->dev, &data, ++ smbus_do_alert_force); ++ break; ++ } ++ prev_addr = data.addr; + } + + return IRQ_HANDLED; +diff --git a/drivers/irqchip/irq-loongarch-cpu.c b/drivers/irqchip/irq-loongarch-cpu.c +index fdec3e9cfacfb..9c6f774e49351 100644 +--- a/drivers/irqchip/irq-loongarch-cpu.c ++++ b/drivers/irqchip/irq-loongarch-cpu.c +@@ -18,11 +18,13 @@ struct fwnode_handle *cpuintc_handle; + + static u32 lpic_gsi_to_irq(u32 gsi) + { ++ int irq = 0; ++ + /* Only pch irqdomain transferring is required for LoongArch. */ + if (gsi >= GSI_MIN_PCH_IRQ && gsi <= GSI_MAX_PCH_IRQ) +- return acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH); ++ irq = acpi_register_gsi(NULL, gsi, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_HIGH); + +- return 0; ++ return (irq > 0) ? irq : 0; + } + + static struct fwnode_handle *lpic_get_gsi_domain_id(u32 gsi) +diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c +index f3faf5c997706..23117a30c6275 100644 +--- a/drivers/irqchip/irq-mbigen.c ++++ b/drivers/irqchip/irq-mbigen.c +@@ -64,6 +64,20 @@ struct mbigen_device { + void __iomem *base; + }; + ++static inline unsigned int get_mbigen_node_offset(unsigned int nid) ++{ ++ unsigned int offset = nid * MBIGEN_NODE_OFFSET; ++ ++ /* ++ * To avoid touched clear register in unexpected way, we need to directly ++ * skip clear register when access to more than 10 mbigen nodes. ++ */ ++ if (nid >= (REG_MBIGEN_CLEAR_OFFSET / MBIGEN_NODE_OFFSET)) ++ offset += MBIGEN_NODE_OFFSET; ++ ++ return offset; ++} ++ + static inline unsigned int get_mbigen_vec_reg(irq_hw_number_t hwirq) + { + unsigned int nid, pin; +@@ -72,8 +86,7 @@ static inline unsigned int get_mbigen_vec_reg(irq_hw_number_t hwirq) + nid = hwirq / IRQS_PER_MBIGEN_NODE + 1; + pin = hwirq % IRQS_PER_MBIGEN_NODE; + +- return pin * 4 + nid * MBIGEN_NODE_OFFSET +- + REG_MBIGEN_VEC_OFFSET; ++ return pin * 4 + get_mbigen_node_offset(nid) + REG_MBIGEN_VEC_OFFSET; + } + + static inline void get_mbigen_type_reg(irq_hw_number_t hwirq, +@@ -88,8 +101,7 @@ static inline void get_mbigen_type_reg(irq_hw_number_t hwirq, + *mask = 1 << (irq_ofst % 32); + ofst = irq_ofst / 32 * 4; + +- *addr = ofst + nid * MBIGEN_NODE_OFFSET +- + REG_MBIGEN_TYPE_OFFSET; ++ *addr = ofst + get_mbigen_node_offset(nid) + REG_MBIGEN_TYPE_OFFSET; + } + + static inline void get_mbigen_clear_reg(irq_hw_number_t hwirq, +diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c +index 7da18ef952119..37e4aa091ac82 100644 +--- a/drivers/irqchip/irq-meson-gpio.c ++++ b/drivers/irqchip/irq-meson-gpio.c +@@ -168,7 +168,7 @@ struct meson_gpio_irq_controller { + void __iomem *base; + u32 channel_irqs[MAX_NUM_CHANNEL]; + DECLARE_BITMAP(channel_map, MAX_NUM_CHANNEL); +- spinlock_t lock; ++ raw_spinlock_t lock; + }; + + static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl, +@@ -177,14 +177,14 @@ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl, + unsigned long flags; + u32 tmp; + +- spin_lock_irqsave(&ctl->lock, flags); ++ raw_spin_lock_irqsave(&ctl->lock, flags); + + tmp = readl_relaxed(ctl->base + reg); + tmp &= ~mask; + tmp |= val; + writel_relaxed(tmp, ctl->base + reg); + +- spin_unlock_irqrestore(&ctl->lock, flags); ++ 
raw_spin_unlock_irqrestore(&ctl->lock, flags); + } + + static void meson_gpio_irq_init_dummy(struct meson_gpio_irq_controller *ctl) +@@ -234,12 +234,12 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl, + unsigned long flags; + unsigned int idx; + +- spin_lock_irqsave(&ctl->lock, flags); ++ raw_spin_lock_irqsave(&ctl->lock, flags); + + /* Find a free channel */ + idx = find_first_zero_bit(ctl->channel_map, ctl->params->nr_channels); + if (idx >= ctl->params->nr_channels) { +- spin_unlock_irqrestore(&ctl->lock, flags); ++ raw_spin_unlock_irqrestore(&ctl->lock, flags); + pr_err("No channel available\n"); + return -ENOSPC; + } +@@ -247,7 +247,7 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl, + /* Mark the channel as used */ + set_bit(idx, ctl->channel_map); + +- spin_unlock_irqrestore(&ctl->lock, flags); ++ raw_spin_unlock_irqrestore(&ctl->lock, flags); + + /* + * Setup the mux of the channel to route the signal of the pad +@@ -557,7 +557,7 @@ static int meson_gpio_irq_of_init(struct device_node *node, struct device_node * + if (!ctl) + return -ENOMEM; + +- spin_lock_init(&ctl->lock); ++ raw_spin_lock_init(&ctl->lock); + + ctl->base = of_iomap(node, 0); + if (!ctl->base) { +diff --git a/drivers/irqchip/irq-xilinx-intc.c b/drivers/irqchip/irq-xilinx-intc.c +index 238d3d3449496..7e08714d507f4 100644 +--- a/drivers/irqchip/irq-xilinx-intc.c ++++ b/drivers/irqchip/irq-xilinx-intc.c +@@ -189,7 +189,7 @@ static int __init xilinx_intc_of_init(struct device_node *intc, + irqc->intr_mask = 0; + } + +- if (irqc->intr_mask >> irqc->nr_irq) ++ if ((u64)irqc->intr_mask >> irqc->nr_irq) + pr_warn("irq-xilinx: mismatch in kind-of-intr param\n"); + + pr_info("irq-xilinx: %pOF: num_irq=%d, edge=0x%x\n", +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 7dc1c42accccd..b87c6ef0da8ab 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -489,7 +489,6 @@ void mddev_suspend(struct mddev *mddev) + clear_bit_unlock(MD_ALLOW_SB_UPDATE, &mddev->flags); + wait_event(mddev->sb_wait, !test_bit(MD_UPDATING_SB, &mddev->flags)); + +- del_timer_sync(&mddev->safemode_timer); + /* restrict memory reclaim I/O during raid array is suspend */ + mddev->noio_flag = memalloc_noio_save(); + } +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index ed99b449d8fd4..4315dabd32023 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -6316,7 +6316,9 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk + safepos = conf->reshape_safe; + sector_div(safepos, data_disks); + if (mddev->reshape_backwards) { +- BUG_ON(writepos < reshape_sectors); ++ if (WARN_ON(writepos < reshape_sectors)) ++ return MaxSector; ++ + writepos -= reshape_sectors; + readpos += reshape_sectors; + safepos += reshape_sectors; +@@ -6334,14 +6336,18 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk + * to set 'stripe_addr' which is where we will write to. 
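The BUG_ON conversions in this raid5 hunk trade a guaranteed crash for a failed request. Reduced to its shape (a sketch, not the raid5 code itself; MaxSector is md's sentinel for aborting the request):

	static sector_t reshape_step(sector_t writepos, sector_t reshape_sectors)
	{
		/*
		 * Old behaviour: BUG_ON(writepos < reshape_sectors) took the
		 * whole machine down on inconsistent reshape state. New
		 * behaviour: log a warning with a backtrace, abort this
		 * reshape request, and keep the rest of the system running.
		 */
		if (WARN_ON(writepos < reshape_sectors))
			return MaxSector;

		return writepos - reshape_sectors;
	}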
+ */ + if (mddev->reshape_backwards) { +- BUG_ON(conf->reshape_progress == 0); ++ if (WARN_ON(conf->reshape_progress == 0)) ++ return MaxSector; ++ + stripe_addr = writepos; +- BUG_ON((mddev->dev_sectors & +- ~((sector_t)reshape_sectors - 1)) +- - reshape_sectors - stripe_addr +- != sector_nr); ++ if (WARN_ON((mddev->dev_sectors & ++ ~((sector_t)reshape_sectors - 1)) - ++ reshape_sectors - stripe_addr != sector_nr)) ++ return MaxSector; + } else { +- BUG_ON(writepos != sector_nr + reshape_sectors); ++ if (WARN_ON(writepos != sector_nr + reshape_sectors)) ++ return MaxSector; ++ + stripe_addr = sector_nr; + } + +diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c +index dc35a87e628ec..2bfab4467b81c 100644 +--- a/drivers/media/platform/amphion/vdec.c ++++ b/drivers/media/platform/amphion/vdec.c +@@ -145,7 +145,6 @@ static int vdec_op_s_ctrl(struct v4l2_ctrl *ctrl) + struct vdec_t *vdec = inst->priv; + int ret = 0; + +- vpu_inst_lock(inst); + switch (ctrl->id) { + case V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY_ENABLE: + vdec->params.display_delay_enable = ctrl->val; +@@ -157,7 +156,6 @@ static int vdec_op_s_ctrl(struct v4l2_ctrl *ctrl) + ret = -EINVAL; + break; + } +- vpu_inst_unlock(inst); + + return ret; + } +diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c +index 1df2b35c1a240..c9cfef16c5b92 100644 +--- a/drivers/media/platform/amphion/venc.c ++++ b/drivers/media/platform/amphion/venc.c +@@ -528,7 +528,6 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl) + struct venc_t *venc = inst->priv; + int ret = 0; + +- vpu_inst_lock(inst); + switch (ctrl->id) { + case V4L2_CID_MPEG_VIDEO_H264_PROFILE: + venc->params.profile = ctrl->val; +@@ -589,7 +588,6 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl) + ret = -EINVAL; + break; + } +- vpu_inst_unlock(inst); + + return ret; + } +diff --git a/drivers/media/tuners/xc2028.c b/drivers/media/tuners/xc2028.c +index 5a967edceca93..352b8a3679b72 100644 +--- a/drivers/media/tuners/xc2028.c ++++ b/drivers/media/tuners/xc2028.c +@@ -1361,9 +1361,16 @@ static void load_firmware_cb(const struct firmware *fw, + void *context) + { + struct dvb_frontend *fe = context; +- struct xc2028_data *priv = fe->tuner_priv; ++ struct xc2028_data *priv; + int rc; + ++ if (!fe) { ++ pr_warn("xc2028: No frontend in %s\n", __func__); ++ return; ++ } ++ ++ priv = fe->tuner_priv; ++ + tuner_dbg("request_firmware_nowait(): %s\n", fw ? "OK" : "error"); + if (!fw) { + tuner_err("Could not load firmware %s.\n", priv->fname); +diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c +index a5ad3ff8bdbb9..aa0a879a9c64a 100644 +--- a/drivers/media/usb/uvc/uvc_video.c ++++ b/drivers/media/usb/uvc/uvc_video.c +@@ -212,13 +212,13 @@ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream, + * Compute a bandwidth estimation by multiplying the frame + * size by the number of video frames per second, divide the + * result by the number of USB frames (or micro-frames for +- * high-speed devices) per second and add the UVC header size +- * (assumed to be 12 bytes long). ++ * high- and super-speed devices) per second and add the UVC ++ * header size (assumed to be 12 bytes long). 
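Worked through with concrete numbers, the estimate described in this comment behaves as follows. A standalone userspace sketch (the mode values are illustrative, not taken from the driver):

	#include <stdio.h>

	static unsigned int estimate_bandwidth(unsigned int width, unsigned int height,
					       unsigned int bpp, unsigned int interval,
					       int high_speed_or_faster)
	{
		unsigned int bw;

		bw = width * height / 8 * bpp;	/* bytes per video frame */
		bw *= 10000000 / interval + 1;	/* frames/s, interval in 100ns units */
		bw /= 1000;			/* full-speed USB frames run at 1 kHz */
		if (high_speed_or_faster)
			bw /= 8;		/* micro-frames run at 8 kHz */
		return bw + 12;			/* assumed UVC header size */
	}

	int main(void)
	{
		/* 640x480, 16 bits per pixel, 30 fps (interval = 333333) */
		printf("%u bytes per micro-frame\n",
		       estimate_bandwidth(640, 480, 16, 333333, 1));	/* 2392 */
		return 0;
	}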
+ */ + bandwidth = frame->wWidth * frame->wHeight / 8 * format->bpp; + bandwidth *= 10000000 / interval + 1; + bandwidth /= 1000; +- if (stream->dev->udev->speed == USB_SPEED_HIGH) ++ if (stream->dev->udev->speed >= USB_SPEED_HIGH) + bandwidth /= 8; + bandwidth += 12; + +@@ -476,6 +476,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf, + ktime_t time; + u16 host_sof; + u16 dev_sof; ++ u32 dev_stc; + + switch (data[1] & (UVC_STREAM_PTS | UVC_STREAM_SCR)) { + case UVC_STREAM_PTS | UVC_STREAM_SCR: +@@ -522,6 +523,34 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf, + if (dev_sof == stream->clock.last_sof) + return; + ++ dev_stc = get_unaligned_le32(&data[header_size - 6]); ++ ++ /* ++ * STC (Source Time Clock) is the clock used by the camera. The UVC 1.5 ++ * standard states that it "must be captured when the first video data ++ * of a video frame is put on the USB bus". This is generally understood ++ * as requiring devices to clear the payload header's SCR bit before ++ * the first packet containing video data. ++ * ++ * Most vendors follow that interpretation, but some (namely SunplusIT ++ * on some devices) always set the `UVC_STREAM_SCR` bit, fill the SCR ++ * field with 0's, and expect that the driver only processes the SCR if ++ * there is data in the packet. ++ * ++ * Ignore all the hardware timestamp information if we haven't received ++ * any data for this frame yet, the packet contains no data, and both ++ * STC and SOF are zero. This heuristic should be safe with compliant ++ * devices, as in the very ++ * unlikely case where a UVC 1.1 device would send timing information ++ * only before the first packet containing data, and both STC and SOF ++ * happen to be zero for a particular frame, we would only miss one ++ * clock sample from many and the clock recovery algorithm wouldn't ++ * suffer from this condition. 
++ */ ++ if (buf && buf->bytesused == 0 && len == header_size && ++ dev_stc == 0 && dev_sof == 0) ++ return; ++ + stream->clock.last_sof = dev_sof; + + host_sof = usb_get_current_frame_number(stream->dev->udev); +@@ -560,7 +589,7 @@ uvc_video_clock_decode(struct uvc_streaming *stream, struct uvc_buffer *buf, + spin_lock_irqsave(&stream->clock.lock, flags); + + sample = &stream->clock.samples[stream->clock.head]; +- sample->dev_stc = get_unaligned_le32(&data[header_size - 6]); ++ sample->dev_stc = dev_stc; + sample->dev_sof = dev_sof; + sample->host_sof = host_sof; + sample->host_time = time; +diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c +index bf3f0f150199d..4d0246a0779a6 100644 +--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c ++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c +@@ -475,6 +475,8 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv) + clear_bit(MCP251XFD_FLAGS_FD_MODE, priv->flags); + } + ++ tx_ring->obj_num_shift_to_u8 = BITS_PER_TYPE(tx_ring->obj_num) - ++ ilog2(tx_ring->obj_num); + tx_ring->obj_size = tx_obj_size; + + rem = priv->rx_obj_num; +diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c +index 237617b0c125f..902eb767426d1 100644 +--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c ++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c +@@ -2,7 +2,7 @@ + // + // mcp251xfd - Microchip MCP251xFD Family CAN controller driver + // +-// Copyright (c) 2019, 2020, 2021 Pengutronix, ++// Copyright (c) 2019, 2020, 2021, 2023 Pengutronix, + // Marc Kleine-Budde + // + // Based on: +@@ -16,6 +16,11 @@ + + #include "mcp251xfd.h" + ++static inline bool mcp251xfd_tx_fifo_sta_full(u32 fifo_sta) ++{ ++ return !(fifo_sta & MCP251XFD_REG_FIFOSTA_TFNRFNIF); ++} ++ + static inline int + mcp251xfd_tef_tail_get_from_chip(const struct mcp251xfd_priv *priv, + u8 *tef_tail) +@@ -55,56 +60,39 @@ static int mcp251xfd_check_tef_tail(const struct mcp251xfd_priv *priv) + return 0; + } + +-static int +-mcp251xfd_handle_tefif_recover(const struct mcp251xfd_priv *priv, const u32 seq) +-{ +- const struct mcp251xfd_tx_ring *tx_ring = priv->tx; +- u32 tef_sta; +- int err; +- +- err = regmap_read(priv->map_reg, MCP251XFD_REG_TEFSTA, &tef_sta); +- if (err) +- return err; +- +- if (tef_sta & MCP251XFD_REG_TEFSTA_TEFOVIF) { +- netdev_err(priv->ndev, +- "Transmit Event FIFO buffer overflow.\n"); +- return -ENOBUFS; +- } +- +- netdev_info(priv->ndev, +- "Transmit Event FIFO buffer %s. (seq=0x%08x, tef_tail=0x%08x, tef_head=0x%08x, tx_head=0x%08x).\n", +- tef_sta & MCP251XFD_REG_TEFSTA_TEFFIF ? +- "full" : tef_sta & MCP251XFD_REG_TEFSTA_TEFNEIF ? +- "not empty" : "empty", +- seq, priv->tef->tail, priv->tef->head, tx_ring->head); +- +- /* The Sequence Number in the TEF doesn't match our tef_tail. */ +- return -EAGAIN; +-} +- + static int + mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv, + const struct mcp251xfd_hw_tef_obj *hw_tef_obj, + unsigned int *frame_len_ptr) + { + struct net_device_stats *stats = &priv->ndev->stats; ++ u32 seq, tef_tail_masked, tef_tail; + struct sk_buff *skb; +- u32 seq, seq_masked, tef_tail_masked, tef_tail; + +- seq = FIELD_GET(MCP251XFD_OBJ_FLAGS_SEQ_MCP2518FD_MASK, ++ /* Use the MCP2517FD mask on the MCP2518FD, too. We only ++ * compare 7 bits, this is enough to detect old TEF objects. ++ */ ++ seq = FIELD_GET(MCP251XFD_OBJ_FLAGS_SEQ_MCP2517FD_MASK, + hw_tef_obj->flags); +- +- /* Use the MCP2517FD mask on the MCP2518FD, too. 
We only +- * compare 7 bits, this should be enough to detect +- * net-yet-completed, i.e. old TEF objects. +- */ +- seq_masked = seq & +- field_mask(MCP251XFD_OBJ_FLAGS_SEQ_MCP2517FD_MASK); + tef_tail_masked = priv->tef->tail & + field_mask(MCP251XFD_OBJ_FLAGS_SEQ_MCP2517FD_MASK); +- if (seq_masked != tef_tail_masked) +- return mcp251xfd_handle_tefif_recover(priv, seq); ++ ++ /* According to mcp2518fd erratum DS80000789E 6. the FIFOCI ++ * bits of a FIFOSTA register, here the TX FIFO tail index ++ * might be corrupted and we might process past the TEF FIFO's ++ * head into old CAN frames. ++ * ++ * Compare the sequence number of the currently processed CAN ++ * frame with the expected sequence number. Abort with ++ * -EBADMSG if an old CAN frame is detected. ++ */ ++ if (seq != tef_tail_masked) { ++ netdev_dbg(priv->ndev, "%s: chip=0x%02x ring=0x%02x\n", __func__, ++ seq, tef_tail_masked); ++ stats->tx_fifo_errors++; ++ ++ return -EBADMSG; ++ } + + tef_tail = mcp251xfd_get_tef_tail(priv); + skb = priv->can.echo_skb[tef_tail]; +@@ -120,28 +108,44 @@ mcp251xfd_handle_tefif_one(struct mcp251xfd_priv *priv, + return 0; + } + +-static int mcp251xfd_tef_ring_update(struct mcp251xfd_priv *priv) ++static int ++mcp251xfd_get_tef_len(struct mcp251xfd_priv *priv, u8 *len_p) + { + const struct mcp251xfd_tx_ring *tx_ring = priv->tx; +- unsigned int new_head; +- u8 chip_tx_tail; ++ const u8 shift = tx_ring->obj_num_shift_to_u8; ++ u8 chip_tx_tail, tail, len; ++ u32 fifo_sta; + int err; + +- err = mcp251xfd_tx_tail_get_from_chip(priv, &chip_tx_tail); ++ err = regmap_read(priv->map_reg, MCP251XFD_REG_FIFOSTA(priv->tx->fifo_nr), ++ &fifo_sta); + if (err) + return err; + +- /* chip_tx_tail, is the next TX-Object send by the HW. +- * The new TEF head must be >= the old head, ... ++ if (mcp251xfd_tx_fifo_sta_full(fifo_sta)) { ++ *len_p = tx_ring->obj_num; ++ return 0; ++ } ++ ++ chip_tx_tail = FIELD_GET(MCP251XFD_REG_FIFOSTA_FIFOCI_MASK, fifo_sta); ++ ++ err = mcp251xfd_check_tef_tail(priv); ++ if (err) ++ return err; ++ tail = mcp251xfd_get_tef_tail(priv); ++ ++ /* First shift to full u8. The subtraction works on signed ++ * values, that keeps the difference steady around the u8 ++ * overflow. The right shift acts on len, which is an u8. + */ +- new_head = round_down(priv->tef->head, tx_ring->obj_num) + chip_tx_tail; +- if (new_head <= priv->tef->head) +- new_head += tx_ring->obj_num; ++ BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(chip_tx_tail)); ++ BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(tail)); ++ BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(len)); + +- /* ... but it cannot exceed the TX head. 
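The "shift to full u8" computation described above (and carried out just below) is clearest with numbers. A self-contained sketch, assuming a 32-object ring, so the precomputed shift is 8 - ilog2(32) = 3:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		const unsigned int shift = 3;	/* 8 - ilog2(32) */
		uint8_t tail = 30, head = 6;	/* head has wrapped past 31 */
		uint8_t len;

		/*
		 * Scale both indices to the full u8 range so the subtraction
		 * wraps at 256 exactly where the ring wraps at 32, then
		 * scale the difference back down.
		 */
		len = (uint8_t)((head << shift) - (tail << shift));
		len >>= shift;

		printf("%u\n", len);	/* 8: index 30 -> 6 is 8 steps mod 32 */
		return 0;
	}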
*/ +- priv->tef->head = min(new_head, tx_ring->head); ++ len = (chip_tx_tail << shift) - (tail << shift); ++ *len_p = len >> shift; + +- return mcp251xfd_check_tef_tail(priv); ++ return 0; + } + + static inline int +@@ -182,13 +186,12 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv) + u8 tef_tail, len, l; + int err, i; + +- err = mcp251xfd_tef_ring_update(priv); ++ err = mcp251xfd_get_tef_len(priv, &len); + if (err) + return err; + + tef_tail = mcp251xfd_get_tef_tail(priv); +- len = mcp251xfd_get_tef_len(priv); +- l = mcp251xfd_get_tef_linear_len(priv); ++ l = mcp251xfd_get_tef_linear_len(priv, len); + err = mcp251xfd_tef_obj_read(priv, hw_tef_obj, tef_tail, l); + if (err) + return err; +@@ -203,12 +206,12 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv) + unsigned int frame_len = 0; + + err = mcp251xfd_handle_tefif_one(priv, &hw_tef_obj[i], &frame_len); +- /* -EAGAIN means the Sequence Number in the TEF +- * doesn't match our tef_tail. This can happen if we +- * read the TEF objects too early. Leave loop let the +- * interrupt handler call us again. ++ /* -EBADMSG means we're affected by mcp2518fd erratum ++ * DS80000789E 6., i.e. the Sequence Number in the TEF ++ * doesn't match our tef_tail. Don't process any ++ * further and mark processed frames as good. + */ +- if (err == -EAGAIN) ++ if (err == -EBADMSG) + goto out_netif_wake_queue; + if (err) + return err; +@@ -223,6 +226,8 @@ int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv) + struct mcp251xfd_tx_ring *tx_ring = priv->tx; + int offset; + ++ ring->head += len; ++ + /* Increment the TEF FIFO tail pointer 'len' times in + * a single SPI message. + * +diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h +index b98ded7098a5a..78d12dda08a05 100644 +--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h ++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h +@@ -519,6 +519,7 @@ struct mcp251xfd_tef_ring { + + /* u8 obj_num equals tx_ring->obj_num */ + /* u8 obj_size equals sizeof(struct mcp251xfd_hw_tef_obj) */ ++ /* u8 obj_num_shift_to_u8 equals tx_ring->obj_num_shift_to_u8 */ + + union mcp251xfd_write_reg_buf irq_enable_buf; + struct spi_transfer irq_enable_xfer; +@@ -537,6 +538,7 @@ struct mcp251xfd_tx_ring { + u8 nr; + u8 fifo_nr; + u8 obj_num; ++ u8 obj_num_shift_to_u8; + u8 obj_size; + + struct mcp251xfd_tx_obj obj[MCP251XFD_TX_OBJ_NUM_MAX]; +@@ -843,17 +845,8 @@ static inline u8 mcp251xfd_get_tef_tail(const struct mcp251xfd_priv *priv) + return priv->tef->tail & (priv->tx->obj_num - 1); + } + +-static inline u8 mcp251xfd_get_tef_len(const struct mcp251xfd_priv *priv) ++static inline u8 mcp251xfd_get_tef_linear_len(const struct mcp251xfd_priv *priv, u8 len) + { +- return priv->tef->head - priv->tef->tail; +-} +- +-static inline u8 mcp251xfd_get_tef_linear_len(const struct mcp251xfd_priv *priv) +-{ +- u8 len; +- +- len = mcp251xfd_get_tef_len(priv); +- + return min_t(u8, len, priv->tx->obj_num - mcp251xfd_get_tef_tail(priv)); + } + +diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c +index cd1f240c90f39..257df16768750 100644 +--- a/drivers/net/dsa/bcm_sf2.c ++++ b/drivers/net/dsa/bcm_sf2.c +@@ -678,8 +678,10 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds) + of_remove_property(child, prop); + + phydev = of_phy_find_device(child); +- if (phydev) ++ if (phydev) { + phy_device_remove(phydev); ++ phy_device_free(phydev); ++ } + } + + err = mdiobus_register(priv->slave_mii_bus); +diff --git a/drivers/net/ethernet/freescale/fec_ptp.c 
b/drivers/net/ethernet/freescale/fec_ptp.c +index e0393dc159fc7..37d83ff5b30be 100644 +--- a/drivers/net/ethernet/freescale/fec_ptp.c ++++ b/drivers/net/ethernet/freescale/fec_ptp.c +@@ -635,6 +635,9 @@ void fec_ptp_stop(struct platform_device *pdev) + struct net_device *ndev = platform_get_drvdata(pdev); + struct fec_enet_private *fep = netdev_priv(ndev); + ++ if (fep->pps_enable) ++ fec_ptp_enable_pps(fep, 0); ++ + cancel_delayed_work_sync(&fep->time_keep); + if (fep->ptp_clock) + ptp_clock_unregister(fep->ptp_clock); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +index 56d1bd22c7c66..97d1cfec4ec03 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +@@ -2146,6 +2146,9 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq + if (likely(wi->consumed_strides < rq->mpwqe.num_strides)) + return; + ++ if (unlikely(!cstrides)) ++ return; ++ + wq = &rq->mpwqe.wq; + wqe = mlx5_wq_ll_get_wqe(wq, wqe_id); + mlx5e_free_rx_mpwqe(rq, wi, true); +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 46e0e1f1c20e0..ee0ea3d0f50ee 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -200,6 +200,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb) + break; + default: + /* not ip - do not know what to do */ ++ kfree_skb(skbn); + goto skip; + } + +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index 27446fa847526..6f648b58cbd4d 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -873,9 +873,9 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req, + struct nvme_command *cmnd) + { + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); ++ struct bio_vec bv = rq_integrity_vec(req); + +- iod->meta_dma = dma_map_bvec(dev->dev, rq_integrity_vec(req), +- rq_dma_dir(req), 0); ++ iod->meta_dma = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0); + if (dma_mapping_error(dev->dev, iod->meta_dma)) + return BLK_STS_IOERR; + cmnd->rw.metadata = cpu_to_le64(iod->meta_dma); +@@ -1016,7 +1016,7 @@ static __always_inline void nvme_pci_unmap_rq(struct request *req) + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); + + dma_unmap_page(dev->dev, iod->meta_dma, +- rq_integrity_vec(req)->bv_len, rq_dma_dir(req)); ++ rq_integrity_vec(req).bv_len, rq_dma_dir(req)); + } + + if (blk_rq_nr_phys_segments(req)) +diff --git a/drivers/platform/x86/intel/ifs/ifs.h b/drivers/platform/x86/intel/ifs/ifs.h +index 73c8e91cf144d..1902a1722df1e 100644 +--- a/drivers/platform/x86/intel/ifs/ifs.h ++++ b/drivers/platform/x86/intel/ifs/ifs.h +@@ -157,9 +157,17 @@ union ifs_chunks_auth_status { + union ifs_scan { + u64 data; + struct { +- u32 start :8; +- u32 stop :8; +- u32 rsvd :16; ++ union { ++ struct { ++ u8 start; ++ u8 stop; ++ u16 rsvd; ++ } gen0; ++ struct { ++ u16 start; ++ u16 stop; ++ } gen2; ++ }; + u32 delay :31; + u32 sigmce :1; + }; +@@ -169,9 +177,17 @@ union ifs_scan { + union ifs_status { + u64 data; + struct { +- u32 chunk_num :8; +- u32 chunk_stop_index :8; +- u32 rsvd1 :16; ++ union { ++ struct { ++ u8 chunk_num; ++ u8 chunk_stop_index; ++ u16 rsvd1; ++ } gen0; ++ struct { ++ u16 chunk_num; ++ u16 chunk_stop_index; ++ } gen2; ++ }; + u32 error_code :8; + u32 rsvd2 :22; + u32 control_error :1; +diff --git a/drivers/platform/x86/intel/ifs/runtest.c b/drivers/platform/x86/intel/ifs/runtest.c +index b2ca2bb4501f6..aeb6a3b7a8fd9 100644 +--- 
a/drivers/platform/x86/intel/ifs/runtest.c ++++ b/drivers/platform/x86/intel/ifs/runtest.c +@@ -165,25 +165,35 @@ static int doscan(void *data) + */ + static void ifs_test_core(int cpu, struct device *dev) + { ++ union ifs_status status = {}; + union ifs_scan activate; +- union ifs_status status; + unsigned long timeout; + struct ifs_data *ifsd; ++ int to_start, to_stop; ++ int status_chunk; + u64 msrvals[2]; + int retries; + + ifsd = ifs_get_data(dev); + +- activate.rsvd = 0; ++ activate.gen0.rsvd = 0; + activate.delay = IFS_THREAD_WAIT; + activate.sigmce = 0; +- activate.start = 0; +- activate.stop = ifsd->valid_chunks - 1; ++ to_start = 0; ++ to_stop = ifsd->valid_chunks - 1; ++ ++ if (ifsd->generation) { ++ activate.gen2.start = to_start; ++ activate.gen2.stop = to_stop; ++ } else { ++ activate.gen0.start = to_start; ++ activate.gen0.stop = to_stop; ++ } + + timeout = jiffies + HZ / 2; + retries = MAX_IFS_RETRIES; + +- while (activate.start <= activate.stop) { ++ while (to_start <= to_stop) { + if (time_after(jiffies, timeout)) { + status.error_code = IFS_SW_TIMEOUT; + break; +@@ -194,13 +204,14 @@ static void ifs_test_core(int cpu, struct device *dev) + + status.data = msrvals[1]; + +- trace_ifs_status(cpu, activate, status); ++ trace_ifs_status(cpu, to_start, to_stop, status.data); + + /* Some cases can be retried, give up for others */ + if (!can_restart(status)) + break; + +- if (status.chunk_num == activate.start) { ++ status_chunk = ifsd->generation ? status.gen2.chunk_num : status.gen0.chunk_num; ++ if (status_chunk == to_start) { + /* Check for forward progress */ + if (--retries == 0) { + if (status.error_code == IFS_NO_ERROR) +@@ -209,7 +220,11 @@ static void ifs_test_core(int cpu, struct device *dev) + } + } else { + retries = MAX_IFS_RETRIES; +- activate.start = status.chunk_num; ++ if (ifsd->generation) ++ activate.gen2.start = status_chunk; ++ else ++ activate.gen0.start = status_chunk; ++ to_start = status_chunk; + } + } + +diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c +index 15219ed43ce95..d3a93f1afef82 100644 +--- a/drivers/power/supply/axp288_charger.c ++++ b/drivers/power/supply/axp288_charger.c +@@ -178,18 +178,18 @@ static inline int axp288_charger_set_cv(struct axp288_chrg_info *info, int cv) + u8 reg_val; + int ret; + +- if (cv <= CV_4100MV) { +- reg_val = CHRG_CCCV_CV_4100MV; +- cv = CV_4100MV; +- } else if (cv <= CV_4150MV) { +- reg_val = CHRG_CCCV_CV_4150MV; +- cv = CV_4150MV; +- } else if (cv <= CV_4200MV) { ++ if (cv >= CV_4350MV) { ++ reg_val = CHRG_CCCV_CV_4350MV; ++ cv = CV_4350MV; ++ } else if (cv >= CV_4200MV) { + reg_val = CHRG_CCCV_CV_4200MV; + cv = CV_4200MV; ++ } else if (cv >= CV_4150MV) { ++ reg_val = CHRG_CCCV_CV_4150MV; ++ cv = CV_4150MV; + } else { +- reg_val = CHRG_CCCV_CV_4350MV; +- cv = CV_4350MV; ++ reg_val = CHRG_CCCV_CV_4100MV; ++ cv = CV_4100MV; + } + + reg_val = reg_val << CHRG_CCCV_CV_BIT_POS; +@@ -337,8 +337,8 @@ static int axp288_charger_usb_set_property(struct power_supply *psy, + } + break; + case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE: +- scaled_val = min(val->intval, info->max_cv); +- scaled_val = DIV_ROUND_CLOSEST(scaled_val, 1000); ++ scaled_val = DIV_ROUND_CLOSEST(val->intval, 1000); ++ scaled_val = min(scaled_val, info->max_cv); + ret = axp288_charger_set_cv(info, scaled_val); + if (ret < 0) { + dev_warn(&info->pdev->dev, "set charge voltage failed\n"); +diff --git a/drivers/s390/char/sclp_sd.c b/drivers/s390/char/sclp_sd.c +index f9e164be7568f..944e75beb160c 100644 +--- 
a/drivers/s390/char/sclp_sd.c ++++ b/drivers/s390/char/sclp_sd.c +@@ -320,8 +320,14 @@ static int sclp_sd_store_data(struct sclp_sd_data *result, u8 di) + &esize); + if (rc) { + /* Cancel running request if interrupted */ +- if (rc == -ERESTARTSYS) +- sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL); ++ if (rc == -ERESTARTSYS) { ++ if (sclp_sd_sync(page, SD_EQ_HALT, di, 0, 0, NULL, NULL)) { ++ pr_warn("Could not stop Store Data request - leaking at least %zu bytes\n", ++ (size_t)dsize * PAGE_SIZE); ++ data = NULL; ++ asce = 0; ++ } ++ } + vfree(data); + goto out; + } +diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c +index 7a6b006e70c88..7bd24f71cc385 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr_os.c ++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c +@@ -3389,6 +3389,17 @@ static int mpi3mr_prepare_sg_scmd(struct mpi3mr_ioc *mrioc, + scmd->sc_data_direction); + priv->meta_sg_valid = 1; /* To unmap meta sg DMA */ + } else { ++ /* ++ * Some firmware versions byte-swap the REPORT ZONES command ++ * reply from ATA-ZAC devices by directly accessing in the host ++ * buffer. This does not respect the default command DMA ++ * direction and causes IOMMU page faults on some architectures ++ * with an IOMMU enforcing write mappings (e.g. AMD hosts). ++ * Avoid such issue by making the REPORT ZONES buffer mapping ++ * bi-directional. ++ */ ++ if (scmd->cmnd[0] == ZBC_IN && scmd->cmnd[1] == ZI_REPORT_ZONES) ++ scmd->sc_data_direction = DMA_BIDIRECTIONAL; + sg_scmd = scsi_sglist(scmd); + sges_left = scsi_dma_map(scmd); + } +diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c +index 421a03dbbeb73..03fcaf7359391 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_base.c ++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c +@@ -2672,6 +2672,22 @@ _base_build_zero_len_sge_ieee(struct MPT3SAS_ADAPTER *ioc, void *paddr) + _base_add_sg_single_ieee(paddr, sgl_flags, 0, 0, -1); + } + ++static inline int _base_scsi_dma_map(struct scsi_cmnd *cmd) ++{ ++ /* ++ * Some firmware versions byte-swap the REPORT ZONES command reply from ++ * ATA-ZAC devices by directly accessing in the host buffer. This does ++ * not respect the default command DMA direction and causes IOMMU page ++ * faults on some architectures with an IOMMU enforcing write mappings ++ * (e.g. AMD hosts). Avoid such issue by making the report zones buffer ++ * mapping bi-directional. ++ */ ++ if (cmd->cmnd[0] == ZBC_IN && cmd->cmnd[1] == ZI_REPORT_ZONES) ++ cmd->sc_data_direction = DMA_BIDIRECTIONAL; ++ ++ return scsi_dma_map(cmd); ++} ++ + /** + * _base_build_sg_scmd - main sg creation routine + * pcie_device is unused here! 
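The mpi3mr and mpt3sas hunks above encode the same rule: under an IOMMU that enforces mapping permissions, DMA_FROM_DEVICE grants the device write-only access to the buffer, so firmware that reads the reply back (to byte-swap it in place) faults. A hedged sketch of the general pattern, where dev, buf, len and device_reads_reply_back are placeholders rather than driver state:

	enum dma_data_direction dir = DMA_FROM_DEVICE;	/* device writes the reply */
	dma_addr_t addr;

	/*
	 * If the device may also read the buffer back through the IOMMU,
	 * a FROM_DEVICE mapping faults on those reads, so map it
	 * bi-directionally instead.
	 */
	if (device_reads_reply_back)
		dir = DMA_BIDIRECTIONAL;

	addr = dma_map_single(dev, buf, len, dir);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;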
+@@ -2718,7 +2734,7 @@ _base_build_sg_scmd(struct MPT3SAS_ADAPTER *ioc, + sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT; + + sg_scmd = scsi_sglist(scmd); +- sges_left = scsi_dma_map(scmd); ++ sges_left = _base_scsi_dma_map(scmd); + if (sges_left < 0) + return -ENOMEM; + +@@ -2862,7 +2878,7 @@ _base_build_sg_scmd_ieee(struct MPT3SAS_ADAPTER *ioc, + } + + sg_scmd = scsi_sglist(scmd); +- sges_left = scsi_dma_map(scmd); ++ sges_left = _base_scsi_dma_map(scmd); + if (sges_left < 0) + return -ENOMEM; + +diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c +index 9e324d72596af..dd2381ac27f67 100644 +--- a/drivers/spi/spi-fsl-lpspi.c ++++ b/drivers/spi/spi-fsl-lpspi.c +@@ -297,7 +297,7 @@ static void fsl_lpspi_set_watermark(struct fsl_lpspi_data *fsl_lpspi) + static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi) + { + struct lpspi_config config = fsl_lpspi->config; +- unsigned int perclk_rate, scldiv; ++ unsigned int perclk_rate, scldiv, div; + u8 prescale; + + perclk_rate = clk_get_rate(fsl_lpspi->clk_per); +@@ -308,8 +308,10 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi) + return -EINVAL; + } + ++ div = DIV_ROUND_UP(perclk_rate, config.speed_hz); ++ + for (prescale = 0; prescale < 8; prescale++) { +- scldiv = perclk_rate / config.speed_hz / (1 << prescale) - 2; ++ scldiv = div / (1 << prescale) - 2; + if (scldiv < 256) { + fsl_lpspi->config.prescale = prescale; + break; +diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c +index 00612efc2277f..477c3578e7d9e 100644 +--- a/drivers/spi/spidev.c ++++ b/drivers/spi/spidev.c +@@ -692,6 +692,7 @@ static const struct file_operations spidev_fops = { + static struct class *spidev_class; + + static const struct spi_device_id spidev_spi_ids[] = { ++ { .name = "bh2228fv" }, + { .name = "dh2228fv" }, + { .name = "ltc2488" }, + { .name = "sx1301" }, +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c +index fe3f1d655dfe2..58e857fb8deeb 100644 +--- a/drivers/tty/serial/serial_core.c ++++ b/drivers/tty/serial/serial_core.c +@@ -846,6 +846,14 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port, + new_flags = (__force upf_t)new_info->flags; + old_custom_divisor = uport->custom_divisor; + ++ if (!(uport->flags & UPF_FIXED_PORT)) { ++ unsigned int uartclk = new_info->baud_base * 16; ++ /* check needs to be done here before other settings made */ ++ if (uartclk == 0) { ++ retval = -EINVAL; ++ goto exit; ++ } ++ } + if (!capable(CAP_SYS_ADMIN)) { + retval = -EPERM; + if (change_irq || change_port || +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index 5922cb5a1de0d..bfed5d36fa2e5 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -3909,11 +3909,16 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba) + min_sleep_time_us = + MIN_DELAY_BEFORE_DME_CMDS_US - delta; + else +- return; /* no more delay required */ ++ min_sleep_time_us = 0; /* no more delay required */ + } + +- /* allow sleep for extra 50us if needed */ +- usleep_range(min_sleep_time_us, min_sleep_time_us + 50); ++ if (min_sleep_time_us > 0) { ++ /* allow sleep for extra 50us if needed */ ++ usleep_range(min_sleep_time_us, min_sleep_time_us + 50); ++ } ++ ++ /* update the last_dme_cmd_tstamp */ ++ hba->last_dme_cmd_tstamp = ktime_get(); + } + + /** +diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c +index ec1dceb087293..0be0966973c7f 100644 +--- a/drivers/usb/gadget/function/u_audio.c ++++ 
b/drivers/usb/gadget/function/u_audio.c +@@ -592,16 +592,25 @@ int u_audio_start_capture(struct g_audio *audio_dev) + struct usb_ep *ep, *ep_fback; + struct uac_rtd_params *prm; + struct uac_params *params = &audio_dev->params; +- int req_len, i; ++ int req_len, i, ret; + + prm = &uac->c_prm; + dev_dbg(dev, "start capture with rate %d\n", prm->srate); + ep = audio_dev->out_ep; +- config_ep_by_speed(gadget, &audio_dev->func, ep); ++ ret = config_ep_by_speed(gadget, &audio_dev->func, ep); ++ if (ret < 0) { ++ dev_err(dev, "config_ep_by_speed for out_ep failed (%d)\n", ret); ++ return ret; ++ } ++ + req_len = ep->maxpacket; + + prm->ep_enabled = true; +- usb_ep_enable(ep); ++ ret = usb_ep_enable(ep); ++ if (ret < 0) { ++ dev_err(dev, "usb_ep_enable failed for out_ep (%d)\n", ret); ++ return ret; ++ } + + for (i = 0; i < params->req_number; i++) { + if (!prm->reqs[i]) { +@@ -629,9 +638,18 @@ int u_audio_start_capture(struct g_audio *audio_dev) + return 0; + + /* Setup feedback endpoint */ +- config_ep_by_speed(gadget, &audio_dev->func, ep_fback); ++ ret = config_ep_by_speed(gadget, &audio_dev->func, ep_fback); ++ if (ret < 0) { ++ dev_err(dev, "config_ep_by_speed in_ep_fback failed (%d)\n", ret); ++ return ret; // TODO: Clean up out_ep ++ } ++ + prm->fb_ep_enabled = true; +- usb_ep_enable(ep_fback); ++ ret = usb_ep_enable(ep_fback); ++ if (ret < 0) { ++ dev_err(dev, "usb_ep_enable failed for in_ep_fback (%d)\n", ret); ++ return ret; // TODO: Clean up out_ep ++ } + req_len = ep_fback->maxpacket; + + req_fback = usb_ep_alloc_request(ep_fback, GFP_ATOMIC); +@@ -687,13 +705,17 @@ int u_audio_start_playback(struct g_audio *audio_dev) + struct uac_params *params = &audio_dev->params; + unsigned int factor; + const struct usb_endpoint_descriptor *ep_desc; +- int req_len, i; ++ int req_len, i, ret; + unsigned int p_pktsize; + + prm = &uac->p_prm; + dev_dbg(dev, "start playback with rate %d\n", prm->srate); + ep = audio_dev->in_ep; +- config_ep_by_speed(gadget, &audio_dev->func, ep); ++ ret = config_ep_by_speed(gadget, &audio_dev->func, ep); ++ if (ret < 0) { ++ dev_err(dev, "config_ep_by_speed for in_ep failed (%d)\n", ret); ++ return ret; ++ } + + ep_desc = ep->desc; + /* +@@ -720,7 +742,11 @@ int u_audio_start_playback(struct g_audio *audio_dev) + uac->p_residue_mil = 0; + + prm->ep_enabled = true; +- usb_ep_enable(ep); ++ ret = usb_ep_enable(ep); ++ if (ret < 0) { ++ dev_err(dev, "usb_ep_enable failed for in_ep (%d)\n", ret); ++ return ret; ++ } + + for (i = 0; i < params->req_number; i++) { + if (!prm->reqs[i]) { +diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c +index 3c51355ccc94d..1f143c6ba86bd 100644 +--- a/drivers/usb/gadget/function/u_serial.c ++++ b/drivers/usb/gadget/function/u_serial.c +@@ -1436,6 +1436,7 @@ void gserial_suspend(struct gserial *gser) + spin_lock(&port->port_lock); + spin_unlock(&serial_port_lock); + port->suspended = true; ++ port->start_delayed = true; + spin_unlock_irqrestore(&port->port_lock, flags); + } + EXPORT_SYMBOL_GPL(gserial_suspend); +diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c +index 82a10774a7ebc..9dc31a416ec2e 100644 +--- a/drivers/usb/gadget/udc/core.c ++++ b/drivers/usb/gadget/udc/core.c +@@ -118,12 +118,10 @@ int usb_ep_enable(struct usb_ep *ep) + goto out; + + /* UDC drivers can't handle endpoints with maxpacket size 0 */ +- if (usb_endpoint_maxp(ep->desc) == 0) { +- /* +- * We should log an error message here, but we can't call +- * dev_err() because there's no way to find the 
gadget +- * given only ep. +- */ ++ if (!ep->desc || usb_endpoint_maxp(ep->desc) == 0) { ++ WARN_ONCE(1, "%s: ep%d (%s) has %s\n", __func__, ep->address, ep->name, ++ (!ep->desc) ? "NULL descriptor" : "maxpacket 0"); ++ + ret = -EINVAL; + goto out; + } +diff --git a/drivers/usb/serial/usb_debug.c b/drivers/usb/serial/usb_debug.c +index aaf4813e4971e..406cb326e8124 100644 +--- a/drivers/usb/serial/usb_debug.c ++++ b/drivers/usb/serial/usb_debug.c +@@ -69,6 +69,11 @@ static void usb_debug_process_read_urb(struct urb *urb) + usb_serial_generic_process_read_urb(urb); + } + ++static void usb_debug_init_termios(struct tty_struct *tty) ++{ ++ tty->termios.c_lflag &= ~(ECHO | ECHONL); ++} ++ + static struct usb_serial_driver debug_device = { + .driver = { + .owner = THIS_MODULE, +@@ -78,6 +83,7 @@ static struct usb_serial_driver debug_device = { + .num_ports = 1, + .bulk_out_size = USB_DEBUG_MAX_PACKET_SIZE, + .break_ctl = usb_debug_break_ctl, ++ .init_termios = usb_debug_init_termios, + .process_read_urb = usb_debug_process_read_urb, + }; + +@@ -89,6 +95,7 @@ static struct usb_serial_driver dbc_device = { + .id_table = dbc_id_table, + .num_ports = 1, + .break_ctl = usb_debug_break_ctl, ++ .init_termios = usb_debug_init_termios, + .process_read_urb = usb_debug_process_read_urb, + }; + +diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c +index 233265550fc63..6b98f5ab6dfed 100644 +--- a/drivers/usb/usbip/vhci_hcd.c ++++ b/drivers/usb/usbip/vhci_hcd.c +@@ -745,6 +745,7 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag + * + */ + if (usb_pipedevice(urb->pipe) == 0) { ++ struct usb_device *old; + __u8 type = usb_pipetype(urb->pipe); + struct usb_ctrlrequest *ctrlreq = + (struct usb_ctrlrequest *) urb->setup_packet; +@@ -755,14 +756,15 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag + goto no_need_xmit; + } + ++ old = vdev->udev; + switch (ctrlreq->bRequest) { + case USB_REQ_SET_ADDRESS: + /* set_address may come when a device is reset */ + dev_info(dev, "SetAddress Request (%d) to port %d\n", + ctrlreq->wValue, vdev->rhport); + +- usb_put_dev(vdev->udev); + vdev->udev = usb_get_dev(urb->dev); ++ usb_put_dev(old); + + spin_lock(&vdev->ud.lock); + vdev->ud.status = VDEV_ST_USED; +@@ -781,8 +783,8 @@ static int vhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flag + usbip_dbg_vhci_hc( + "Not yet?:Get_Descriptor to device 0 (get max pipe size)\n"); + +- usb_put_dev(vdev->udev); + vdev->udev = usb_get_dev(urb->dev); ++ usb_put_dev(old); + goto out; + + default: +@@ -1067,6 +1069,7 @@ static void vhci_shutdown_connection(struct usbip_device *ud) + static void vhci_device_reset(struct usbip_device *ud) + { + struct vhci_device *vdev = container_of(ud, struct vhci_device, ud); ++ struct usb_device *old = vdev->udev; + unsigned long flags; + + spin_lock_irqsave(&ud->lock, flags); +@@ -1074,8 +1077,8 @@ static void vhci_device_reset(struct usbip_device *ud) + vdev->speed = 0; + vdev->devid = 0; + +- usb_put_dev(vdev->udev); + vdev->udev = NULL; ++ usb_put_dev(old); + + if (ud->tcp_socket) { + sockfd_put(ud->tcp_socket); +diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c +index c8374527a27d9..55f88eeb78a72 100644 +--- a/drivers/vhost/vdpa.c ++++ b/drivers/vhost/vdpa.c +@@ -1294,13 +1294,7 @@ static vm_fault_t vhost_vdpa_fault(struct vm_fault *vmf) + + notify = ops->get_vq_notification(vdpa, index); + +- vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); +- if (remap_pfn_range(vma, vmf->address 
& PAGE_MASK, +- PFN_DOWN(notify.addr), PAGE_SIZE, +- vma->vm_page_prot)) +- return VM_FAULT_SIGBUS; +- +- return VM_FAULT_NOPAGE; ++ return vmf_insert_pfn(vma, vmf->address & PAGE_MASK, PFN_DOWN(notify.addr)); + } + + static const struct vm_operations_struct vhost_vdpa_vm_ops = { +diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h +index cca1acf2e0371..853b1f96b1fdc 100644 +--- a/fs/btrfs/ctree.h ++++ b/fs/btrfs/ctree.h +@@ -1553,6 +1553,7 @@ struct btrfs_drop_extents_args { + struct btrfs_file_private { + void *filldir_buf; + u64 last_index; ++ bool fsync_skip_inode_lock; + }; + + +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index 1783a0fbf1665..e23d178f97782 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -1526,21 +1526,37 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from) + * So here we disable page faults in the iov_iter and then retry if we + * got -EFAULT, faulting in the pages before the retry. + */ ++again: + from->nofault = true; + dio = btrfs_dio_write(iocb, from, written); + from->nofault = false; + +- /* +- * iomap_dio_complete() will call btrfs_sync_file() if we have a dsync +- * iocb, and that needs to lock the inode. So unlock it before calling +- * iomap_dio_complete() to avoid a deadlock. +- */ +- btrfs_inode_unlock(inode, ilock_flags); +- +- if (IS_ERR_OR_NULL(dio)) ++ if (IS_ERR_OR_NULL(dio)) { + err = PTR_ERR_OR_ZERO(dio); +- else ++ } else { ++ struct btrfs_file_private stack_private = { 0 }; ++ struct btrfs_file_private *private; ++ const bool have_private = (file->private_data != NULL); ++ ++ if (!have_private) ++ file->private_data = &stack_private; ++ ++ /* ++ * If we have a synchronous write, we must make sure the fsync ++ * triggered by the iomap_dio_complete() call below doesn't ++ * deadlock on the inode lock - we are already holding it and we ++ * can't call it after unlocking because we may need to complete ++ * partial writes due to the input buffer (or parts of it) not ++ * being already faulted in. ++ */ ++ private = file->private_data; ++ private->fsync_skip_inode_lock = true; + err = iomap_dio_complete(dio); ++ private->fsync_skip_inode_lock = false; ++ ++ if (!have_private) ++ file->private_data = NULL; ++ } + + /* No increment (+=) because iomap returns a cumulative value. */ + if (err > 0) +@@ -1567,10 +1583,12 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from) + } else { + fault_in_iov_iter_readable(from, left); + prev_left = left; +- goto relock; ++ goto again; + } + } + ++ btrfs_inode_unlock(inode, ilock_flags); ++ + /* + * If 'err' is -ENOTBLK or we have not written all data, then it means + * we must fallback to buffered IO. +@@ -1777,6 +1795,7 @@ static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx) + */ + int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + { ++ struct btrfs_file_private *private = file->private_data; + struct dentry *dentry = file_dentry(file); + struct inode *inode = d_inode(dentry); + struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); +@@ -1786,6 +1805,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + int ret = 0, err; + u64 len; + bool full_sync; ++ const bool skip_ilock = (private ? 
private->fsync_skip_inode_lock : false); + + trace_btrfs_sync_file(file, datasync); + +@@ -1813,7 +1833,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + if (ret) + goto out; + +- btrfs_inode_lock(inode, BTRFS_ILOCK_MMAP); ++ if (skip_ilock) ++ down_write(&BTRFS_I(inode)->i_mmap_lock); ++ else ++ btrfs_inode_lock(inode, BTRFS_ILOCK_MMAP); + + atomic_inc(&root->log_batch); + +@@ -1837,7 +1860,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + */ + ret = start_ordered_ops(inode, start, end); + if (ret) { +- btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); ++ if (skip_ilock) ++ up_write(&BTRFS_I(inode)->i_mmap_lock); ++ else ++ btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); + goto out; + } + +@@ -1940,7 +1966,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + * file again, but that will end up using the synchronization + * inside btrfs_sync_log to keep things safe. + */ +- btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); ++ if (skip_ilock) ++ up_write(&BTRFS_I(inode)->i_mmap_lock); ++ else ++ btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); + + if (ret == BTRFS_NO_LOG_SYNC) { + ret = btrfs_end_transaction(trans); +@@ -2008,7 +2037,10 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) + + out_release_extents: + btrfs_release_log_ctx_extents(&ctx); +- btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); ++ if (skip_ilock) ++ up_write(&BTRFS_I(inode)->i_mmap_lock); ++ else ++ btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); + goto out; + } + +diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c +index 76d52d682b3b0..21d262da386d5 100644 +--- a/fs/btrfs/free-space-cache.c ++++ b/fs/btrfs/free-space-cache.c +@@ -865,6 +865,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode, + spin_unlock(&ctl->tree_lock); + btrfs_err(fs_info, + "Duplicate entries in free space cache, dumping"); ++ kmem_cache_free(btrfs_free_space_bitmap_cachep, e->bitmap); + kmem_cache_free(btrfs_free_space_cachep, e); + goto free_cache; + } +diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c +index 228eeb04d03d3..144b7bd05346e 100644 +--- a/fs/btrfs/print-tree.c ++++ b/fs/btrfs/print-tree.c +@@ -9,7 +9,7 @@ + + struct root_name_map { + u64 id; +- char name[16]; ++ const char *name; + }; + + static const struct root_name_map root_map[] = { +diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c +index 3a91be1d9bbe7..ee9d2faa5218f 100644 +--- a/fs/ext4/inline.c ++++ b/fs/ext4/inline.c +@@ -1439,7 +1439,11 @@ int ext4_inlinedir_to_tree(struct file *dir_file, + hinfo->hash = EXT4_DIRENT_HASH(de); + hinfo->minor_hash = EXT4_DIRENT_MINOR_HASH(de); + } else { +- ext4fs_dirhash(dir, de->name, de->name_len, hinfo); ++ err = ext4fs_dirhash(dir, de->name, de->name_len, hinfo); ++ if (err) { ++ ret = err; ++ goto out; ++ } + } + if ((hinfo->hash < start_hash) || + ((hinfo->hash == start_hash) && +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c +index 71ce3ed5ab6ba..004ad321a45d6 100644 +--- a/fs/ext4/mballoc.c ++++ b/fs/ext4/mballoc.c +@@ -2226,8 +2226,7 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac, + if (max >= ac->ac_g_ex.fe_len && ac->ac_g_ex.fe_len == sbi->s_stripe) { + ext4_fsblk_t start; + +- start = ext4_group_first_block_no(ac->ac_sb, e4b->bd_group) + +- ex.fe_start; ++ start = ext4_grp_offs_to_block(ac->ac_sb, &ex); + /* use do_div to get remainder (would be 64-bit modulo) */ + if (do_div(start, sbi->s_stripe) == 0) { + ac->ac_found++; +diff --git 
a/fs/jbd2/journal.c b/fs/jbd2/journal.c +index b136b46b63bc9..c8d59f7c47453 100644 +--- a/fs/jbd2/journal.c ++++ b/fs/jbd2/journal.c +@@ -409,6 +409,7 @@ int jbd2_journal_write_metadata_buffer(transaction_t *transaction, + tmp = jbd2_alloc(bh_in->b_size, GFP_NOFS); + if (!tmp) { + brelse(new_bh); ++ free_buffer_head(new_bh); + return -ENOMEM; + } + spin_lock(&jh_in->b_state_lock); +diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c +index a2afdf9c5f80b..0c0a7fcc4cebc 100644 +--- a/fs/smb/client/cifs_debug.c ++++ b/fs/smb/client/cifs_debug.c +@@ -960,7 +960,7 @@ static int cifs_security_flags_proc_open(struct inode *inode, struct file *file) + static void + cifs_security_flags_handle_must_flags(unsigned int *flags) + { +- unsigned int signflags = *flags & CIFSSEC_MUST_SIGN; ++ unsigned int signflags = *flags & (CIFSSEC_MUST_SIGN | CIFSSEC_MUST_SEAL); + + if ((*flags & CIFSSEC_MUST_KRB5) == CIFSSEC_MUST_KRB5) + *flags = CIFSSEC_MUST_KRB5; +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index 1564febd1439f..7b702b3cf7b87 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -1820,7 +1820,7 @@ static inline bool is_retryable_error(int error) + #define CIFSSEC_MAY_SIGN 0x00001 + #define CIFSSEC_MAY_NTLMV2 0x00004 + #define CIFSSEC_MAY_KRB5 0x00008 +-#define CIFSSEC_MAY_SEAL 0x00040 /* not supported yet */ ++#define CIFSSEC_MAY_SEAL 0x00040 + #define CIFSSEC_MAY_NTLMSSP 0x00080 /* raw ntlmssp with ntlmv2 */ + + #define CIFSSEC_MUST_SIGN 0x01001 +@@ -1830,11 +1830,11 @@ require use of the stronger protocol */ + #define CIFSSEC_MUST_NTLMV2 0x04004 + #define CIFSSEC_MUST_KRB5 0x08008 + #ifdef CONFIG_CIFS_UPCALL +-#define CIFSSEC_MASK 0x8F08F /* flags supported if no weak allowed */ ++#define CIFSSEC_MASK 0xCF0CF /* flags supported if no weak allowed */ + #else +-#define CIFSSEC_MASK 0x87087 /* flags supported if no weak allowed */ ++#define CIFSSEC_MASK 0xC70C7 /* flags supported if no weak allowed */ + #endif /* UPCALL */ +-#define CIFSSEC_MUST_SEAL 0x40040 /* not supported yet */ ++#define CIFSSEC_MUST_SEAL 0x40040 + #define CIFSSEC_MUST_NTLMSSP 0x80080 /* raw ntlmssp with ntlmv2 */ + + #define CIFSSEC_DEF (CIFSSEC_MAY_SIGN | CIFSSEC_MAY_NTLMV2 | CIFSSEC_MAY_NTLMSSP | CIFSSEC_MAY_SEAL) +diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c +index e15bf116c7558..fad4b5dcfbd5a 100644 +--- a/fs/smb/client/smb2pdu.c ++++ b/fs/smb/client/smb2pdu.c +@@ -80,6 +80,9 @@ int smb3_encryption_required(const struct cifs_tcon *tcon) + if (tcon->seal && + (tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION)) + return 1; ++ if (((global_secflags & CIFSSEC_MUST_SEAL) == CIFSSEC_MUST_SEAL) && ++ (tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION)) ++ return 1; + return 0; + } + +diff --git a/fs/udf/balloc.c b/fs/udf/balloc.c +index c4c18eeacb60c..aa73ab1b50a52 100644 +--- a/fs/udf/balloc.c ++++ b/fs/udf/balloc.c +@@ -22,6 +22,7 @@ + #include "udfdecl.h" + + #include ++#include + + #include "udf_i.h" + #include "udf_sb.h" +@@ -144,7 +145,6 @@ static void udf_bitmap_free_blocks(struct super_block *sb, + { + struct udf_sb_info *sbi = UDF_SB(sb); + struct buffer_head *bh = NULL; +- struct udf_part_map *partmap; + unsigned long block; + unsigned long block_group; + unsigned long bit; +@@ -153,19 +153,9 @@ static void udf_bitmap_free_blocks(struct super_block *sb, + unsigned long overflow; + + mutex_lock(&sbi->s_alloc_mutex); +- partmap = &sbi->s_partmaps[bloc->partitionReferenceNum]; +- if (bloc->logicalBlockNum + count < count || 
+- (bloc->logicalBlockNum + count) > partmap->s_partition_len) { +- udf_debug("%u < %d || %u + %u > %u\n", +- bloc->logicalBlockNum, 0, +- bloc->logicalBlockNum, count, +- partmap->s_partition_len); +- goto error_return; +- } +- ++ /* We make sure this cannot overflow when mounting the filesystem */ + block = bloc->logicalBlockNum + offset + + (sizeof(struct spaceBitmapDesc) << 3); +- + do { + overflow = 0; + block_group = block >> (sb->s_blocksize_bits + 3); +@@ -395,7 +385,6 @@ static void udf_table_free_blocks(struct super_block *sb, + uint32_t count) + { + struct udf_sb_info *sbi = UDF_SB(sb); +- struct udf_part_map *partmap; + uint32_t start, end; + uint32_t elen; + struct kernel_lb_addr eloc; +@@ -404,16 +393,6 @@ static void udf_table_free_blocks(struct super_block *sb, + struct udf_inode_info *iinfo; + + mutex_lock(&sbi->s_alloc_mutex); +- partmap = &sbi->s_partmaps[bloc->partitionReferenceNum]; +- if (bloc->logicalBlockNum + count < count || +- (bloc->logicalBlockNum + count) > partmap->s_partition_len) { +- udf_debug("%u < %d || %u + %u > %u\n", +- bloc->logicalBlockNum, 0, +- bloc->logicalBlockNum, count, +- partmap->s_partition_len); +- goto error_return; +- } +- + iinfo = UDF_I(table); + udf_add_free_space(sb, sbi->s_partition, count); + +@@ -688,6 +667,17 @@ void udf_free_blocks(struct super_block *sb, struct inode *inode, + { + uint16_t partition = bloc->partitionReferenceNum; + struct udf_part_map *map = &UDF_SB(sb)->s_partmaps[partition]; ++ uint32_t blk; ++ ++ if (check_add_overflow(bloc->logicalBlockNum, offset, &blk) || ++ check_add_overflow(blk, count, &blk) || ++ bloc->logicalBlockNum + count > map->s_partition_len) { ++ udf_debug("Invalid request to free blocks: (%d, %u), off %u, " ++ "len %u, partition len %u\n", ++ partition, bloc->logicalBlockNum, offset, count, ++ map->s_partition_len); ++ return; ++ } + + if (map->s_partition_flags & UDF_PART_FLAG_UNALLOC_BITMAP) { + udf_bitmap_free_blocks(sb, map->s_uspace.s_bitmap, +diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c +index 322eb2ee6c550..05e48523ea400 100644 +--- a/fs/xfs/xfs_log_recover.c ++++ b/fs/xfs/xfs_log_recover.c +@@ -2960,7 +2960,7 @@ xlog_do_recovery_pass( + int error = 0, h_size, h_len; + int error2 = 0; + int bblks, split_bblks; +- int hblks, split_hblks, wrapped_hblks; ++ int hblks = 1, split_hblks, wrapped_hblks; + int i; + struct hlist_head rhash[XLOG_RHASH_SIZE]; + LIST_HEAD (buffer_list); +@@ -3016,14 +3016,22 @@ xlog_do_recovery_pass( + if (error) + goto bread_err1; + +- hblks = xlog_logrec_hblks(log, rhead); +- if (hblks != 1) { +- kmem_free(hbp); +- hbp = xlog_alloc_buffer(log, hblks); ++ /* ++ * This open codes xlog_logrec_hblks so that we can reuse the ++ * fixed up h_size value calculated above. Without that we'd ++ * still allocate the buffer based on the incorrect on-disk ++ * size. 
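The reallocation that follows is driven by simple header math: a version-2 log record header covers at most XLOG_HEADER_CYCLE_SIZE (32k) per header block, so for example h_size = 65536 needs DIV_ROUND_UP(65536, 32768) = 2 header blocks, and the single-block buffer allocated up front must be replaced. A sketch of the equivalent helper logic (paraphrasing xlog_logrec_hblks with the fixed-up h_size):

	static int logrec_hblks(unsigned int h_size, int is_version2)
	{
		if (is_version2 && h_size > XLOG_HEADER_CYCLE_SIZE)
			return DIV_ROUND_UP(h_size, XLOG_HEADER_CYCLE_SIZE);
		return 1;	/* v1 logs always use a single header block */
	}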
++ */ ++ if (h_size > XLOG_HEADER_CYCLE_SIZE && ++ (rhead->h_version & cpu_to_be32(XLOG_VERSION_2))) { ++ hblks = DIV_ROUND_UP(h_size, XLOG_HEADER_CYCLE_SIZE); ++ if (hblks > 1) { ++ kmem_free(hbp); ++ hbp = xlog_alloc_buffer(log, hblks); ++ } + } + } else { + ASSERT(log->l_sectBBsize == 1); +- hblks = 1; + hbp = xlog_alloc_buffer(log, 1); + h_size = XLOG_BIG_RECORD_BSIZE; + } +diff --git a/include/linux/blk-integrity.h b/include/linux/blk-integrity.h +index 378b2459efe2d..f7cc8080672cc 100644 +--- a/include/linux/blk-integrity.h ++++ b/include/linux/blk-integrity.h +@@ -105,14 +105,13 @@ static inline bool blk_integrity_rq(struct request *rq) + } + + /* +- * Return the first bvec that contains integrity data. Only drivers that are +- * limited to a single integrity segment should use this helper. ++ * Return the current bvec that contains the integrity data. bip_iter may be ++ * advanced to iterate over the integrity data. + */ +-static inline struct bio_vec *rq_integrity_vec(struct request *rq) ++static inline struct bio_vec rq_integrity_vec(struct request *rq) + { +- if (WARN_ON_ONCE(queue_max_integrity_segments(rq->q) > 1)) +- return NULL; +- return rq->bio->bi_integrity->bip_vec; ++ return mp_bvec_iter_bvec(rq->bio->bi_integrity->bip_vec, ++ rq->bio->bi_integrity->bip_iter); + } + #else /* CONFIG_BLK_DEV_INTEGRITY */ + static inline int blk_rq_count_integrity_sg(struct request_queue *q, +@@ -176,9 +175,10 @@ static inline int blk_integrity_rq(struct request *rq) + return 0; + } + +-static inline struct bio_vec *rq_integrity_vec(struct request *rq) ++static inline struct bio_vec rq_integrity_vec(struct request *rq) + { +- return NULL; ++ /* the optimizer will remove all calls to this function */ ++ return (struct bio_vec){ }; + } + #endif /* CONFIG_BLK_DEV_INTEGRITY */ + #endif /* _LINUX_BLK_INTEGRITY_H */ +diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h +index 1d42d4b173271..0ad8b550bb4b4 100644 +--- a/include/linux/clocksource.h ++++ b/include/linux/clocksource.h +@@ -291,7 +291,19 @@ static inline void timer_probe(void) {} + #define TIMER_ACPI_DECLARE(name, table_id, fn) \ + ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn) + +-extern ulong max_cswd_read_retries; ++static inline unsigned int clocksource_get_max_watchdog_retry(void) ++{ ++ /* ++ * When the system is in the boot phase or under heavy workload, there ++ * can be random big latencies during the clocksource/watchdog ++ * read, so allow retries to filter the noise latency. As the ++ * latency's frequency and maximum value go up with the number of ++ * CPUs, scale the number of retries with the number of online ++ * CPUs.
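++ * For example, this gives (ilog2(1) / 2) + 1 = 1 retry on a single ++ * CPU, (ilog2(16) / 2) + 1 = 3 retries with 16 CPUs online, and ++ * (ilog2(1024) / 2) + 1 = 6 retries with 1024 CPUs online.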
++ */ ++ return (ilog2(num_online_cpus()) / 2) + 1; ++} ++ + void clocksource_verify_percpu(struct clocksource *cs); + + #endif /* _LINUX_CLOCKSOURCE_H */ +diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h +index 2c1371320c295..f680897794fa2 100644 +--- a/include/linux/pci_ids.h ++++ b/include/linux/pci_ids.h +@@ -2109,6 +2109,8 @@ + + #define PCI_VENDOR_ID_CHELSIO 0x1425 + ++#define PCI_VENDOR_ID_EDIMAX 0x1432 ++ + #define PCI_VENDOR_ID_ADLINK 0x144a + + #define PCI_VENDOR_ID_SAMSUNG 0x144d +diff --git a/include/linux/profile.h b/include/linux/profile.h +index 11db1ec516e27..12da750a88a04 100644 +--- a/include/linux/profile.h ++++ b/include/linux/profile.h +@@ -11,7 +11,6 @@ + + #define CPU_PROFILING 1 + #define SCHED_PROFILING 2 +-#define SLEEP_PROFILING 3 + #define KVM_PROFILING 4 + + struct proc_dir_entry; +diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h +index c8b5e9781d01a..f70624ec4188f 100644 +--- a/include/linux/trace_events.h ++++ b/include/linux/trace_events.h +@@ -847,7 +847,6 @@ do { \ + struct perf_event; + + DECLARE_PER_CPU(struct pt_regs, perf_trace_regs); +-DECLARE_PER_CPU(int, bpf_kprobe_override); + + extern int perf_trace_init(struct perf_event *event); + extern void perf_trace_destroy(struct perf_event *event); +diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h +index 035d61d50a989..4cd0839c86c92 100644 +--- a/include/net/ip6_route.h ++++ b/include/net/ip6_route.h +@@ -132,18 +132,26 @@ void rt6_age_exceptions(struct fib6_info *f6i, struct fib6_gc_args *gc_args, + + static inline int ip6_route_get_saddr(struct net *net, struct fib6_info *f6i, + const struct in6_addr *daddr, +- unsigned int prefs, ++ unsigned int prefs, int l3mdev_index, + struct in6_addr *saddr) + { ++ struct net_device *l3mdev; ++ struct net_device *dev; ++ bool same_vrf; + int err = 0; + +- if (f6i && f6i->fib6_prefsrc.plen) { ++ rcu_read_lock(); ++ ++ l3mdev = dev_get_by_index_rcu(net, l3mdev_index); ++ if (!f6i || !f6i->fib6_prefsrc.plen || l3mdev) ++ dev = f6i ? fib6_info_nh_dev(f6i) : NULL; ++ same_vrf = !l3mdev || l3mdev_master_dev_rcu(dev) == l3mdev; ++ if (f6i && f6i->fib6_prefsrc.plen && same_vrf) + *saddr = f6i->fib6_prefsrc.addr; +- } else { +- struct net_device *dev = f6i ? fib6_info_nh_dev(f6i) : NULL; ++ else ++ err = ipv6_dev_get_saddr(net, same_vrf ? 
dev : l3mdev, daddr, prefs, saddr); + +- err = ipv6_dev_get_saddr(net, dev, daddr, prefs, saddr); +- } ++ rcu_read_unlock(); + + return err; + } +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h +index 9a80d0251d8f3..e365302fed95d 100644 +--- a/include/net/netfilter/nf_tables.h ++++ b/include/net/netfilter/nf_tables.h +@@ -387,7 +387,7 @@ static inline void *nft_expr_priv(const struct nft_expr *expr) + return (void *)expr->data; + } + +-int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src); ++int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src, gfp_t gfp); + void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr); + int nft_expr_dump(struct sk_buff *skb, unsigned int attr, + const struct nft_expr *expr); +@@ -889,7 +889,7 @@ struct nft_expr_ops { + struct nft_regs *regs, + const struct nft_pktinfo *pkt); + int (*clone)(struct nft_expr *dst, +- const struct nft_expr *src); ++ const struct nft_expr *src, gfp_t gfp); + unsigned int size; + + int (*init)(const struct nft_ctx *ctx, +diff --git a/include/trace/events/intel_ifs.h b/include/trace/events/intel_ifs.h +index d7353024016cc..af0af3f1d9b7c 100644 +--- a/include/trace/events/intel_ifs.h ++++ b/include/trace/events/intel_ifs.h +@@ -10,25 +10,25 @@ + + TRACE_EVENT(ifs_status, + +- TP_PROTO(int cpu, union ifs_scan activate, union ifs_status status), ++ TP_PROTO(int cpu, int start, int stop, u64 status), + +- TP_ARGS(cpu, activate, status), ++ TP_ARGS(cpu, start, stop, status), + + TP_STRUCT__entry( + __field( u64, status ) + __field( int, cpu ) +- __field( u8, start ) +- __field( u8, stop ) ++ __field( u16, start ) ++ __field( u16, stop ) + ), + + TP_fast_assign( + __entry->cpu = cpu; +- __entry->start = activate.start; +- __entry->stop = activate.stop; +- __entry->status = status.data; ++ __entry->start = start; ++ __entry->stop = stop; ++ __entry->status = status; + ), + +- TP_printk("cpu: %d, start: %.2x, stop: %.2x, status: %llx", ++ TP_printk("cpu: %d, start: %.4x, stop: %.4x, status: %.16llx", + __entry->cpu, + __entry->start, + __entry->stop, +diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c +index fd09962744014..5b173b79051cd 100644 +--- a/kernel/irq/irqdesc.c ++++ b/kernel/irq/irqdesc.c +@@ -493,6 +493,7 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node, + flags = IRQD_AFFINITY_MANAGED | + IRQD_MANAGED_SHUTDOWN; + } ++ flags |= IRQD_AFFINITY_SET; + mask = &affinity->mask; + node = cpu_to_node(cpumask_first(mask)); + affinity++; +diff --git a/kernel/jump_label.c b/kernel/jump_label.c +index eec802175ccc6..1ed269b2c4035 100644 +--- a/kernel/jump_label.c ++++ b/kernel/jump_label.c +@@ -231,7 +231,7 @@ void static_key_disable_cpuslocked(struct static_key *key) + } + + jump_label_lock(); +- if (atomic_cmpxchg(&key->enabled, 1, 0)) ++ if (atomic_cmpxchg(&key->enabled, 1, 0) == 1) + jump_label_update(key); + jump_label_unlock(); + } +@@ -284,7 +284,7 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key) + return; + + guard(mutex)(&jump_label_mutex); +- if (atomic_cmpxchg(&key->enabled, 1, 0)) ++ if (atomic_cmpxchg(&key->enabled, 1, 0) == 1) + jump_label_update(key); + else + WARN_ON_ONCE(!static_key_slow_try_dec(key)); +diff --git a/kernel/kcov.c b/kernel/kcov.c +index fe3308dfd6a73..9413f27294ca1 100644 +--- a/kernel/kcov.c ++++ b/kernel/kcov.c +@@ -161,6 +161,15 @@ static void kcov_remote_area_put(struct kcov_remote_area *area, + kmsan_unpoison_memory(&area->list, sizeof(area->list)); + } + ++/* ++ * Unlike 
in_serving_softirq(), this function returns false when called during ++ * a hardirq or an NMI that happened in the softirq context. ++ */ ++static inline bool in_softirq_really(void) ++{ ++ return in_serving_softirq() && !in_hardirq() && !in_nmi(); ++} ++ + static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) + { + unsigned int mode; +@@ -170,7 +179,7 @@ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_stru + * so we ignore code executed in interrupts, unless we are in a remote + * coverage collection section in a softirq. + */ +- if (!in_task() && !(in_serving_softirq() && t->kcov_softirq)) ++ if (!in_task() && !(in_softirq_really() && t->kcov_softirq)) + return false; + mode = READ_ONCE(t->kcov_mode); + /* +@@ -847,7 +856,7 @@ void kcov_remote_start(u64 handle) + + if (WARN_ON(!kcov_check_handle(handle, true, true, true))) + return; +- if (!in_task() && !in_serving_softirq()) ++ if (!in_task() && !in_softirq_really()) + return; + + local_lock_irqsave(&kcov_percpu_data.lock, flags); +@@ -989,7 +998,7 @@ void kcov_remote_stop(void) + int sequence; + unsigned long flags; + +- if (!in_task() && !in_serving_softirq()) ++ if (!in_task() && !in_softirq_really()) + return; + + local_lock_irqsave(&kcov_percpu_data.lock, flags); +diff --git a/kernel/kprobes.c b/kernel/kprobes.c +index 5b5ee060a2db5..4c4fc4d309b8b 100644 +--- a/kernel/kprobes.c ++++ b/kernel/kprobes.c +@@ -1552,8 +1552,8 @@ static bool is_cfi_preamble_symbol(unsigned long addr) + if (lookup_symbol_name(addr, symbuf)) + return false; + +- return str_has_prefix("__cfi_", symbuf) || +- str_has_prefix("__pfx_", symbuf); ++ return str_has_prefix(symbuf, "__cfi_") || ++ str_has_prefix(symbuf, "__pfx_"); + } + + static int check_kprobe_address_safe(struct kprobe *p, +diff --git a/kernel/padata.c b/kernel/padata.c +index 0261bced7eb6e..11270ffca54e0 100644 +--- a/kernel/padata.c ++++ b/kernel/padata.c +@@ -508,6 +508,13 @@ void __init padata_do_multithreaded(struct padata_mt_job *job) + ps.chunk_size = max(ps.chunk_size, job->min_chunk); + ps.chunk_size = roundup(ps.chunk_size, job->align); + ++ /* ++ * chunk_size can be 0 if the caller sets min_chunk to 0. 
So force it ++ * to at least 1 to prevent divide-by-0 panic in padata_mt_helper(). ++ */ ++ if (!ps.chunk_size) ++ ps.chunk_size = 1U; ++ + list_for_each_entry(pw, &works, pw_list) + queue_work(system_unbound_wq, &pw->pw_work); + +diff --git a/kernel/profile.c b/kernel/profile.c +index 8a77769bc4b4c..984f819b701c9 100644 +--- a/kernel/profile.c ++++ b/kernel/profile.c +@@ -57,20 +57,11 @@ static DEFINE_MUTEX(profile_flip_mutex); + int profile_setup(char *str) + { + static const char schedstr[] = "schedule"; +- static const char sleepstr[] = "sleep"; + static const char kvmstr[] = "kvm"; + const char *select = NULL; + int par; + +- if (!strncmp(str, sleepstr, strlen(sleepstr))) { +-#ifdef CONFIG_SCHEDSTATS +- force_schedstat_enabled(); +- prof_on = SLEEP_PROFILING; +- select = sleepstr; +-#else +- pr_warn("kernel sleep profiling requires CONFIG_SCHEDSTATS\n"); +-#endif /* CONFIG_SCHEDSTATS */ +- } else if (!strncmp(str, schedstr, strlen(schedstr))) { ++ if (!strncmp(str, schedstr, strlen(schedstr))) { + prof_on = SCHED_PROFILING; + select = schedstr; + } else if (!strncmp(str, kvmstr, strlen(kvmstr))) { +diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c +index 8c45df910763a..c14517912cfaa 100644 +--- a/kernel/rcu/rcutorture.c ++++ b/kernel/rcu/rcutorture.c +@@ -2547,7 +2547,7 @@ static void rcu_torture_fwd_cb_cr(struct rcu_head *rhp) + spin_lock_irqsave(&rfp->rcu_fwd_lock, flags); + rfcpp = rfp->rcu_fwd_cb_tail; + rfp->rcu_fwd_cb_tail = &rfcp->rfc_next; +- WRITE_ONCE(*rfcpp, rfcp); ++ smp_store_release(rfcpp, rfcp); + WRITE_ONCE(rfp->n_launders_cb, rfp->n_launders_cb + 1); + i = ((jiffies - rfp->rcu_fwd_startat) / (HZ / FWD_CBS_HIST_DIV)); + if (i >= ARRAY_SIZE(rfp->n_launders_hist)) +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 61f9503a5fe9c..cd6144cea5a1a 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -4391,11 +4391,15 @@ void rcutree_migrate_callbacks(int cpu) + struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu); + bool needwake; + +- if (rcu_rdp_is_offloaded(rdp) || +- rcu_segcblist_empty(&rdp->cblist)) +- return; /* No callbacks to migrate. */ ++ if (rcu_rdp_is_offloaded(rdp)) ++ return; + + raw_spin_lock_irqsave(&rcu_state.barrier_lock, flags); ++ if (rcu_segcblist_empty(&rdp->cblist)) { ++ raw_spin_unlock_irqrestore(&rcu_state.barrier_lock, flags); ++ return; /* No callbacks to migrate. */ ++ } ++ + WARN_ON_ONCE(rcu_rdp_cpu_online(rdp)); + rcu_barrier_entrain(rdp); + my_rdp = this_cpu_ptr(&rcu_data); +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 4a96bf1d2f37c..8388575759378 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -9398,6 +9398,22 @@ static int cpuset_cpu_inactive(unsigned int cpu) + return 0; + } + ++static inline void sched_smt_present_inc(int cpu) ++{ ++#ifdef CONFIG_SCHED_SMT ++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) ++ static_branch_inc_cpuslocked(&sched_smt_present); ++#endif ++} ++ ++static inline void sched_smt_present_dec(int cpu) ++{ ++#ifdef CONFIG_SCHED_SMT ++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) ++ static_branch_dec_cpuslocked(&sched_smt_present); ++#endif ++} ++ + int sched_cpu_activate(unsigned int cpu) + { + struct rq *rq = cpu_rq(cpu); +@@ -9409,13 +9425,10 @@ int sched_cpu_activate(unsigned int cpu) + */ + balance_push_set(cpu, false); + +-#ifdef CONFIG_SCHED_SMT + /* + * When going up, increment the number of cores with SMT present.
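+ * (The cpumask_weight() == 2 check in the helpers above matches only + * when a core's online sibling count crosses two, so each SMT core is + * counted exactly once.)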
+ */ +- if (cpumask_weight(cpu_smt_mask(cpu)) == 2) +- static_branch_inc_cpuslocked(&sched_smt_present); +-#endif ++ sched_smt_present_inc(cpu); + set_cpu_active(cpu, true); + + if (sched_smp_initialized) { +@@ -9485,13 +9498,12 @@ int sched_cpu_deactivate(unsigned int cpu) + } + rq_unlock_irqrestore(rq, &rf); + +-#ifdef CONFIG_SCHED_SMT + /* + * When going down, decrement the number of cores with SMT present. + */ +- if (cpumask_weight(cpu_smt_mask(cpu)) == 2) +- static_branch_dec_cpuslocked(&sched_smt_present); ++ sched_smt_present_dec(cpu); + ++#ifdef CONFIG_SCHED_SMT + sched_core_cpu_deactivate(cpu); + #endif + +@@ -9501,6 +9513,7 @@ int sched_cpu_deactivate(unsigned int cpu) + sched_update_numa(cpu, false); + ret = cpuset_cpu_inactive(cpu); + if (ret) { ++ sched_smt_present_inc(cpu); + balance_push_set(cpu, false); + set_cpu_active(cpu, true); + sched_update_numa(cpu, true); +diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c +index 95fc778537434..b55b84f3dd542 100644 +--- a/kernel/sched/cputime.c ++++ b/kernel/sched/cputime.c +@@ -591,6 +591,12 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev, + } + + stime = mul_u64_u64_div_u64(stime, rtime, stime + utime); ++ /* ++ * Because mul_u64_u64_div_u64() can approximate on some ++ * architectures, enforce the constraint that: a*b/(b+c) <= a. ++ */ ++ if (unlikely(stime > rtime)) ++ stime = rtime; + + update: + /* +diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c +index 857f837f52cbe..966f4eacfe51d 100644 +--- a/kernel/sched/stats.c ++++ b/kernel/sched/stats.c +@@ -92,16 +92,6 @@ void __update_stats_enqueue_sleeper(struct rq *rq, struct task_struct *p, + + trace_sched_stat_blocked(p, delta); + +- /* +- * Blocking time is in units of nanosecs, so shift by +- * 20 to get a milliseconds-range estimation of the +- * amount of time that the task spent sleeping: +- */ +- if (unlikely(prof_on == SLEEP_PROFILING)) { +- profile_hits(SLEEP_PROFILING, +- (void *)get_wchan(p), +- delta >> 20); +- } + account_scheduler_latency(p, delta >> 10, 0); + } + } +diff --git a/kernel/time/clocksource-wdtest.c b/kernel/time/clocksource-wdtest.c +index df922f49d171b..d06185e054ea2 100644 +--- a/kernel/time/clocksource-wdtest.c ++++ b/kernel/time/clocksource-wdtest.c +@@ -104,8 +104,8 @@ static void wdtest_ktime_clocksource_reset(void) + static int wdtest_func(void *arg) + { + unsigned long j1, j2; ++ int i, max_retries; + char *s; +- int i; + + schedule_timeout_uninterruptible(holdoff * HZ); + +@@ -139,18 +139,19 @@ static int wdtest_func(void *arg) + WARN_ON_ONCE(time_before(j2, j1 + NSEC_PER_USEC)); + + /* Verify tsc-like stability with various numbers of errors injected.
*/ +- for (i = 0; i <= max_cswd_read_retries + 1; i++) { +- if (i <= 1 && i < max_cswd_read_retries) ++ max_retries = clocksource_get_max_watchdog_retry(); ++ for (i = 0; i <= max_retries + 1; i++) { ++ if (i <= 1 && i < max_retries) + s = ""; +- else if (i <= max_cswd_read_retries) ++ else if (i <= max_retries) + s = ", expect message"; + else + s = ", expect clock skew"; +- pr_info("--- Watchdog with %dx error injection, %lu retries%s.\n", i, max_cswd_read_retries, s); ++ pr_info("--- Watchdog with %dx error injection, %d retries%s.\n", i, max_retries, s); + WRITE_ONCE(wdtest_ktime_read_ndelays, i); + schedule_timeout_uninterruptible(2 * HZ); + WARN_ON_ONCE(READ_ONCE(wdtest_ktime_read_ndelays)); +- WARN_ON_ONCE((i <= max_cswd_read_retries) != ++ WARN_ON_ONCE((i <= max_retries) != + !(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE)); + wdtest_ktime_clocksource_reset(); + } +diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c +index cc6db3bce1b2f..cd9a59011dee9 100644 +--- a/kernel/time/clocksource.c ++++ b/kernel/time/clocksource.c +@@ -207,9 +207,6 @@ void clocksource_mark_unstable(struct clocksource *cs) + spin_unlock_irqrestore(&watchdog_lock, flags); + } + +-ulong max_cswd_read_retries = 2; +-module_param(max_cswd_read_retries, ulong, 0644); +-EXPORT_SYMBOL_GPL(max_cswd_read_retries); + static int verify_n_cpus = 8; + module_param(verify_n_cpus, int, 0644); + +@@ -221,11 +218,12 @@ enum wd_read_status { + + static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow) + { +- unsigned int nretries; ++ unsigned int nretries, max_retries; + u64 wd_end, wd_end2, wd_delta; + int64_t wd_delay, wd_seq_delay; + +- for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) { ++ max_retries = clocksource_get_max_watchdog_retry(); ++ for (nretries = 0; nretries <= max_retries; nretries++) { + local_irq_disable(); + *wdnow = watchdog->read(watchdog); + *csnow = cs->read(cs); +@@ -237,7 +235,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, + wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult, + watchdog->shift); + if (wd_delay <= WATCHDOG_MAX_SKEW) { +- if (nretries > 1 || nretries >= max_cswd_read_retries) { ++ if (nretries > 1 && nretries >= max_retries) { + pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n", + smp_processor_id(), watchdog->name, nretries); + } +diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c +index 406dccb79c2b6..8d2dd214ec682 100644 +--- a/kernel/time/ntp.c ++++ b/kernel/time/ntp.c +@@ -727,17 +727,16 @@ static inline void process_adjtimex_modes(const struct __kernel_timex *txc, + } + + if (txc->modes & ADJ_MAXERROR) +- time_maxerror = txc->maxerror; ++ time_maxerror = clamp(txc->maxerror, 0, NTP_PHASE_LIMIT); + + if (txc->modes & ADJ_ESTERROR) +- time_esterror = txc->esterror; ++ time_esterror = clamp(txc->esterror, 0, NTP_PHASE_LIMIT); + + if (txc->modes & ADJ_TIMECONST) { +- time_constant = txc->constant; ++ time_constant = clamp(txc->constant, 0, MAXTC); + if (!(time_status & STA_NANO)) + time_constant += 4; +- time_constant = min(time_constant, (long)MAXTC); +- time_constant = max(time_constant, 0l); ++ time_constant = clamp(time_constant, 0, MAXTC); + } + + if (txc->modes & ADJ_TAI && +diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c +index ba551ec546f52..13a71a894cc16 100644 +--- a/kernel/time/tick-broadcast.c ++++ b/kernel/time/tick-broadcast.c +@@ -1137,7 +1137,6 @@ void tick_broadcast_switch_to_oneshot(void) + 
#ifdef CONFIG_HOTPLUG_CPU + void hotplug_cpu__broadcast_tick_pull(int deadcpu) + { +- struct tick_device *td = this_cpu_ptr(&tick_cpu_device); + struct clock_event_device *bc; + unsigned long flags; + +@@ -1163,6 +1162,8 @@ void hotplug_cpu__broadcast_tick_pull(int deadcpu) + * device to avoid the starvation. + */ + if (tick_check_broadcast_expired()) { ++ struct tick_device *td = this_cpu_ptr(&tick_cpu_device); ++ + cpumask_clear_cpu(smp_processor_id(), tick_broadcast_force_mask); + tick_program_event(td->evtdev->next_event, 1); + } +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c +index b158cbef4d8dc..8ac43afc11f96 100644 +--- a/kernel/time/timekeeping.c ++++ b/kernel/time/timekeeping.c +@@ -2476,7 +2476,7 @@ int do_adjtimex(struct __kernel_timex *txc) + clock_set |= timekeeping_advance(TK_ADV_FREQ); + + if (clock_set) +- clock_was_set(CLOCK_REALTIME); ++ clock_was_set(CLOCK_SET_WALL); + + ntp_notify_cmos_timer(); + +diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c +index a4dcf0f243521..3a56e7c8aa4f6 100644 +--- a/kernel/trace/tracing_map.c ++++ b/kernel/trace/tracing_map.c +@@ -454,7 +454,7 @@ static struct tracing_map_elt *get_free_elt(struct tracing_map *map) + struct tracing_map_elt *elt = NULL; + int idx; + +- idx = atomic_inc_return(&map->next_elt); ++ idx = atomic_fetch_add_unless(&map->next_elt, 1, map->max_elts); + if (idx < map->max_elts) { + elt = *(TRACING_MAP_ELT(map->elts, idx)); + if (map->ops && map->ops->elt_init) +@@ -699,7 +699,7 @@ void tracing_map_clear(struct tracing_map *map) + { + unsigned int i; + +- atomic_set(&map->next_elt, -1); ++ atomic_set(&map->next_elt, 0); + atomic64_set(&map->hits, 0); + atomic64_set(&map->drops, 0); + +@@ -783,7 +783,7 @@ struct tracing_map *tracing_map_create(unsigned int map_bits, + + map->map_bits = map_bits; + map->max_elts = (1 << map_bits); +- atomic_set(&map->next_elt, -1); ++ atomic_set(&map->next_elt, 0); + + map->map_size = (1 << (map_bits + 1)); + map->ops = ops; +diff --git a/mm/huge_memory.c b/mm/huge_memory.c +index 1b7f5950d6037..f97b221fb6567 100644 +--- a/mm/huge_memory.c ++++ b/mm/huge_memory.c +@@ -608,7 +608,7 @@ static unsigned long __thp_get_unmapped_area(struct file *filp, + loff_t off_align = round_up(off, size); + unsigned long len_pad, ret; + +- if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall()) ++ if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall()) + return 0; + + if (off_end <= off_align || (off_end - off_align) < size) +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 05b8797163b2b..14b9494c58ede 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -1785,13 +1785,6 @@ static void __update_and_free_page(struct hstate *h, struct page *page) + return; + } + +- /* +- * Move PageHWPoison flag from head page to the raw error pages, +- * which makes any healthy subpages reusable. +- */ +- if (unlikely(PageHWPoison(page))) +- hugetlb_clear_page_hwpoison(page); +- + /* + * If vmemmap pages were allocated above, then we need to clear the + * hugetlb destructor under the hugetlb lock. +@@ -1802,6 +1795,13 @@ static void __update_and_free_page(struct hstate *h, struct page *page) + spin_unlock_irq(&hugetlb_lock); + } + ++ /* ++ * Move PageHWPoison flag from head page to the raw error pages, ++ * which makes any healthy subpages reusable. 
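++ * (This must follow the vmemmap and destructor handling above so the ++ * flag transfer operates on fully restored struct pages.)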
++ */ ++ if (unlikely(PageHWPoison(page))) ++ hugetlb_clear_page_hwpoison(page); ++ + for (i = 0; i < pages_per_huge_page(h); i++) { + subpage = nth_page(page, i); + subpage->flags &= ~(1 << PG_locked | 1 << PG_error | +diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c +index 320fc1e6dff2a..3d6a22812b498 100644 +--- a/net/bluetooth/hci_sync.c ++++ b/net/bluetooth/hci_sync.c +@@ -2880,6 +2880,20 @@ static int hci_passive_scan_sync(struct hci_dev *hdev) + } else if (hci_is_adv_monitoring(hdev)) { + window = hdev->le_scan_window_adv_monitor; + interval = hdev->le_scan_int_adv_monitor; ++ ++ /* Disable duplicates filter when scanning for advertisement ++ * monitor for the following reasons. ++ * ++ * For HW pattern filtering (ex. MSFT), Realtek and Qualcomm ++ * controllers ignore RSSI_Sampling_Period when the duplicates ++ * filter is enabled. ++ * ++ * For SW pattern filtering, when we're not doing interleaved ++ * scanning, it is necessary to disable duplicates filter, ++ * otherwise hosts can only receive one advertisement and it's ++ * impossible to know if a peer is still in range. ++ */ ++ filter_dups = LE_SCAN_FILTER_DUP_DISABLE; + } else { + window = hdev->le_scan_window; + interval = hdev->le_scan_interval; +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index 98dabbbe42938..209c6d458d336 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -7811,6 +7811,7 @@ static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm, + bt_cb(skb)->l2cap.psm = psm; + + if (!chan->ops->recv(chan, skb)) { ++ l2cap_chan_unlock(chan); + l2cap_chan_put(chan); + return; + } +diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c +index 9765f9f9bf7ff..3cd2b648408d6 100644 +--- a/net/bridge/br_multicast.c ++++ b/net/bridge/br_multicast.c +@@ -1890,16 +1890,14 @@ void br_multicast_del_port(struct net_bridge_port *port) + { + struct net_bridge *br = port->br; + struct net_bridge_port_group *pg; +- HLIST_HEAD(deleted_head); + struct hlist_node *n; + + /* Take care of the remaining groups, only perm ones should be left */ + spin_lock_bh(&br->multicast_lock); + hlist_for_each_entry_safe(pg, n, &port->mglist, mglist) + br_multicast_find_del_pg(br, pg); +- hlist_move_list(&br->mcast_gc_list, &deleted_head); + spin_unlock_bh(&br->multicast_lock); +- br_multicast_gc(&deleted_head); ++ flush_work(&br->mcast_gc_work); + br_multicast_port_ctx_deinit(&port->multicast_ctx); + free_percpu(port->mcast_stats); + } +diff --git a/net/core/link_watch.c b/net/core/link_watch.c +index 13513efcfbfe8..cdabced98b116 100644 +--- a/net/core/link_watch.c ++++ b/net/core/link_watch.c +@@ -139,9 +139,9 @@ static void linkwatch_schedule_work(int urgent) + * override the existing timer. + */ + if (test_bit(LW_URGENT, &linkwatch_flags)) +- mod_delayed_work(system_wq, &linkwatch_work, 0); ++ mod_delayed_work(system_unbound_wq, &linkwatch_work, 0); + else +- schedule_delayed_work(&linkwatch_work, delay); ++ queue_delayed_work(system_unbound_wq, &linkwatch_work, delay); + } + + +diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c +index fb26401950e7e..df79044fbf3c4 100644 +--- a/net/ipv6/ip6_output.c ++++ b/net/ipv6/ip6_output.c +@@ -1120,6 +1120,7 @@ static int ip6_dst_lookup_tail(struct net *net, const struct sock *sk, + from = rt ? rcu_dereference(rt->from) : NULL; + err = ip6_route_get_saddr(net, from, &fl6->daddr, + sk ? 
inet6_sk(sk)->srcprefs : 0, ++ fl6->flowi6_l3mdev, + &fl6->saddr); + rcu_read_unlock(); + +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index 151414e9f7fe4..8c1d9e6124363 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -5681,7 +5681,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb, + goto nla_put_failure; + } else if (dest) { + struct in6_addr saddr_buf; +- if (ip6_route_get_saddr(net, rt, dest, 0, &saddr_buf) == 0 && ++ if (ip6_route_get_saddr(net, rt, dest, 0, 0, &saddr_buf) == 0 && + nla_put_in6_addr(skb, RTA_PREFSRC, &saddr_buf)) + goto nla_put_failure; + } +diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c +index 8d21ff25f1602..70da78ab95202 100644 +--- a/net/l2tp/l2tp_core.c ++++ b/net/l2tp/l2tp_core.c +@@ -88,6 +88,11 @@ + /* Default trace flags */ + #define L2TP_DEFAULT_DEBUG_FLAGS 0 + ++#define L2TP_DEPTH_NESTING 2 ++#if L2TP_DEPTH_NESTING == SINGLE_DEPTH_NESTING ++#error "L2TP requires its own lockdep subclass" ++#endif ++ + /* Private data stored for received packets in the skb. + */ + struct l2tp_skb_cb { +@@ -1041,7 +1046,13 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns + IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED); + nf_reset_ct(skb); + +- bh_lock_sock_nested(sk); ++ /* L2TP uses its own lockdep subclass to avoid lockdep splats caused by ++ * nested socket calls on the same lockdep socket class. This can ++ * happen when data from a user socket is routed over l2tp, which uses ++ * another userspace socket. ++ */ ++ spin_lock_nested(&sk->sk_lock.slock, L2TP_DEPTH_NESTING); ++ + if (sock_owned_by_user(sk)) { + kfree_skb(skb); + ret = NET_XMIT_DROP; +@@ -1093,7 +1104,7 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns + ret = l2tp_xmit_queue(tunnel, skb, &inet->cork.fl); + + out_unlock: +- bh_unlock_sock(sk); ++ spin_unlock(&sk->sk_lock.slock); + + return ret; + } +diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c +index 0dac2863c6e1e..1f3161a38b9d2 100644 +--- a/net/mptcp/mib.c ++++ b/net/mptcp/mib.c +@@ -19,7 +19,9 @@ static const struct snmp_mib mptcp_snmp_list[] = { + SNMP_MIB_ITEM("MPTCPRetrans", MPTCP_MIB_RETRANSSEGS), + SNMP_MIB_ITEM("MPJoinNoTokenFound", MPTCP_MIB_JOINNOTOKEN), + SNMP_MIB_ITEM("MPJoinSynRx", MPTCP_MIB_JOINSYNRX), ++ SNMP_MIB_ITEM("MPJoinSynBackupRx", MPTCP_MIB_JOINSYNBACKUPRX), + SNMP_MIB_ITEM("MPJoinSynAckRx", MPTCP_MIB_JOINSYNACKRX), ++ SNMP_MIB_ITEM("MPJoinSynAckBackupRx", MPTCP_MIB_JOINSYNACKBACKUPRX), + SNMP_MIB_ITEM("MPJoinSynAckHMacFailure", MPTCP_MIB_JOINSYNACKMAC), + SNMP_MIB_ITEM("MPJoinAckRx", MPTCP_MIB_JOINACKRX), + SNMP_MIB_ITEM("MPJoinAckHMacFailure", MPTCP_MIB_JOINACKMAC), +diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h +index 2be3596374f4e..a7b94f5c5d277 100644 +--- a/net/mptcp/mib.h ++++ b/net/mptcp/mib.h +@@ -12,7 +12,9 @@ enum linux_mptcp_mib_field { + MPTCP_MIB_RETRANSSEGS, /* Segments retransmitted at the MPTCP-level */ + MPTCP_MIB_JOINNOTOKEN, /* Received MP_JOIN but the token was not found */ + MPTCP_MIB_JOINSYNRX, /* Received a SYN + MP_JOIN */ ++ MPTCP_MIB_JOINSYNBACKUPRX, /* Received a SYN + MP_JOIN + backup flag */ + MPTCP_MIB_JOINSYNACKRX, /* Received a SYN/ACK + MP_JOIN */ ++ MPTCP_MIB_JOINSYNACKBACKUPRX, /* Received a SYN/ACK + MP_JOIN + backup flag */ + MPTCP_MIB_JOINSYNACKMAC, /* HMAC was wrong on SYN/ACK + MP_JOIN */ + MPTCP_MIB_JOINACKRX, /* Received an ACK + MP_JOIN */ + MPTCP_MIB_JOINACKMAC, /* HMAC was wrong on ACK + MP_JOIN */ +diff --git a/net/mptcp/pm.c 
b/net/mptcp/pm.c +index 10c288a0cb0c2..a27ee627addef 100644 +--- a/net/mptcp/pm.c ++++ b/net/mptcp/pm.c +@@ -416,6 +416,18 @@ int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc) + return mptcp_pm_nl_get_local_id(msk, skc); + } + ++bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc) ++{ ++ struct mptcp_addr_info skc_local; ++ ++ mptcp_local_address((struct sock_common *)skc, &skc_local); ++ ++ if (mptcp_pm_is_userspace(msk)) ++ return mptcp_userspace_pm_is_backup(msk, &skc_local); ++ ++ return mptcp_pm_nl_is_backup(msk, &skc_local); ++} ++ + void mptcp_pm_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk) + { + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); +diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c +index e9dff63825817..2b5a5680f09ac 100644 +--- a/net/mptcp/pm_netlink.c ++++ b/net/mptcp/pm_netlink.c +@@ -86,8 +86,7 @@ bool mptcp_addresses_equal(const struct mptcp_addr_info *a, + return a->port == b->port; + } + +-static void local_address(const struct sock_common *skc, +- struct mptcp_addr_info *addr) ++void mptcp_local_address(const struct sock_common *skc, struct mptcp_addr_info *addr) + { + addr->family = skc->skc_family; + addr->port = htons(skc->skc_num); +@@ -122,7 +121,7 @@ static bool lookup_subflow_by_saddr(const struct list_head *list, + list_for_each_entry(subflow, list, node) { + skc = (struct sock_common *)mptcp_subflow_tcp_sock(subflow); + +- local_address(skc, &cur); ++ mptcp_local_address(skc, &cur); + if (mptcp_addresses_equal(&cur, saddr, saddr->port)) + return true; + } +@@ -274,7 +273,7 @@ bool mptcp_pm_sport_in_anno_list(struct mptcp_sock *msk, const struct sock *sk) + struct mptcp_addr_info saddr; + bool ret = false; + +- local_address((struct sock_common *)sk, &saddr); ++ mptcp_local_address((struct sock_common *)sk, &saddr); + + spin_lock_bh(&msk->pm.lock); + list_for_each_entry(entry, &msk->pm.anno_list, list) { +@@ -545,7 +544,7 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk) + struct mptcp_addr_info mpc_addr; + bool backup = false; + +- local_address((struct sock_common *)msk->first, &mpc_addr); ++ mptcp_local_address((struct sock_common *)msk->first, &mpc_addr); + rcu_read_lock(); + entry = __lookup_addr(pernet, &mpc_addr, false); + if (entry) { +@@ -753,7 +752,7 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, + struct sock *ssk = mptcp_subflow_tcp_sock(subflow); + struct mptcp_addr_info local, remote; + +- local_address((struct sock_common *)ssk, &local); ++ mptcp_local_address((struct sock_common *)ssk, &local); + if (!mptcp_addresses_equal(&local, addr, addr->port)) + continue; + +@@ -1072,8 +1071,8 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc) + /* The 0 ID mapping is defined by the first subflow, copied into the msk + * addr + */ +- local_address((struct sock_common *)msk, &msk_local); +- local_address((struct sock_common *)skc, &skc_local); ++ mptcp_local_address((struct sock_common *)msk, &msk_local); ++ mptcp_local_address((struct sock_common *)skc, &skc_local); + if (mptcp_addresses_equal(&msk_local, &skc_local, false)) + return 0; + +@@ -1111,6 +1110,24 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct sock_common *skc) + return ret; + } + ++bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc) ++{ ++ struct pm_nl_pernet *pernet = pm_nl_get_pernet_from_msk(msk); ++ struct mptcp_pm_addr_entry *entry; ++ bool backup = false; ++ ++ rcu_read_lock(); 
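++ /* Walk the pernet address list under RCU and report whether the ++ * matching endpoint carries MPTCP_PM_ADDR_FLAG_BACKUP. ++ */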
++ list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) { ++ if (mptcp_addresses_equal(&entry->addr, skc, entry->addr.port)) { ++ backup = !!(entry->flags & MPTCP_PM_ADDR_FLAG_BACKUP); ++ break; ++ } ++ } ++ rcu_read_unlock(); ++ ++ return backup; ++} ++ + #define MPTCP_PM_CMD_GRP_OFFSET 0 + #define MPTCP_PM_EV_GRP_OFFSET 1 + +@@ -1343,8 +1360,8 @@ static int mptcp_nl_cmd_add_addr(struct sk_buff *skb, struct genl_info *info) + if (ret < 0) + return ret; + +- if (addr.addr.port && !(addr.flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) { +- GENL_SET_ERR_MSG(info, "flags must have signal when using port"); ++ if (addr.addr.port && !address_use_port(&addr)) { ++ GENL_SET_ERR_MSG(info, "flags must have signal and not subflow when using port"); + return -EINVAL; + } + +@@ -1507,7 +1524,7 @@ static int mptcp_nl_remove_id_zero_address(struct net *net, + if (list_empty(&msk->conn_list) || mptcp_pm_is_userspace(msk)) + goto next; + +- local_address((struct sock_common *)msk, &msk_local); ++ mptcp_local_address((struct sock_common *)msk, &msk_local); + if (!mptcp_addresses_equal(&msk_local, addr, addr->port)) + goto next; + +diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c +index 414ed70e7ba2e..278ba5955dfd1 100644 +--- a/net/mptcp/pm_userspace.c ++++ b/net/mptcp/pm_userspace.c +@@ -159,6 +159,24 @@ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk, + return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry, true); + } + ++bool mptcp_userspace_pm_is_backup(struct mptcp_sock *msk, ++ struct mptcp_addr_info *skc) ++{ ++ struct mptcp_pm_addr_entry *entry; ++ bool backup = false; ++ ++ spin_lock_bh(&msk->pm.lock); ++ list_for_each_entry(entry, &msk->pm.userspace_pm_local_addr_list, list) { ++ if (mptcp_addresses_equal(&entry->addr, skc, false)) { ++ backup = !!(entry->flags & MPTCP_PM_ADDR_FLAG_BACKUP); ++ break; ++ } ++ } ++ spin_unlock_bh(&msk->pm.lock); ++ ++ return backup; ++} ++ + int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info) + { + struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN]; +diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h +index a2b85eebb620b..9e582725ccb41 100644 +--- a/net/mptcp/protocol.h ++++ b/net/mptcp/protocol.h +@@ -618,6 +618,7 @@ void __mptcp_unaccepted_force_close(struct sock *sk); + + bool mptcp_addresses_equal(const struct mptcp_addr_info *a, + const struct mptcp_addr_info *b, bool use_port); ++void mptcp_local_address(const struct sock_common *skc, struct mptcp_addr_info *addr); + + /* called with sk socket lock held */ + int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc, +@@ -912,6 +913,9 @@ bool mptcp_pm_rm_addr_signal(struct mptcp_sock *msk, unsigned int remaining, + struct mptcp_rm_list *rm_list); + int mptcp_pm_get_local_id(struct mptcp_sock *msk, struct sock_common *skc); + int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc); ++bool mptcp_pm_is_backup(struct mptcp_sock *msk, struct sock_common *skc); ++bool mptcp_pm_nl_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc); ++bool mptcp_userspace_pm_is_backup(struct mptcp_sock *msk, struct mptcp_addr_info *skc); + + static inline u8 subflow_get_local_id(const struct mptcp_subflow_context *subflow) + { +diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c +index 96bdd4119578f..b47673b370279 100644 +--- a/net/mptcp/subflow.c ++++ b/net/mptcp/subflow.c +@@ -99,6 +99,7 @@ static struct mptcp_sock *subflow_token_join_request(struct request_sock *req) + return NULL; + } + 
subflow_req->local_id = local_id; ++ subflow_req->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)req); + + return msk; + } +@@ -165,6 +166,9 @@ static int subflow_check_req(struct request_sock *req, + return 0; + } else if (opt_mp_join) { + SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNRX); ++ ++ if (mp_opt.backup) ++ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNBACKUPRX); + } + + if (opt_mp_capable && listener->request_mptcp) { +@@ -469,6 +473,9 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb) + subflow->mp_join = 1; + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX); + ++ if (subflow->backup) ++ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKBACKUPRX); ++ + if (subflow_use_different_dport(mptcp_sk(parent), sk)) { + pr_debug("synack inet_dport=%d %d", + ntohs(inet_sk(sk)->inet_dport), +@@ -507,6 +514,8 @@ static int subflow_chk_local_id(struct sock *sk) + return err; + + subflow_set_local_id(subflow, err); ++ subflow->request_bkup = mptcp_pm_is_backup(msk, (struct sock_common *)sk); ++ + return 0; + } + +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index d18b698139caf..10180d280e792 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -3119,18 +3119,17 @@ static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx, + return ERR_PTR(err); + } + +-int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src) ++int nft_expr_clone(struct nft_expr *dst, struct nft_expr *src, gfp_t gfp) + { + int err; + +- if (src->ops->clone) { +- dst->ops = src->ops; +- err = src->ops->clone(dst, src); +- if (err < 0) +- return err; +- } else { +- memcpy(dst, src, src->ops->size); +- } ++ if (WARN_ON_ONCE(!src->ops->clone)) ++ return -EINVAL; ++ ++ dst->ops = src->ops; ++ err = src->ops->clone(dst, src, gfp); ++ if (err < 0) ++ return err; + + __module_get(src->ops->type->owner); + +@@ -3516,6 +3515,15 @@ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *r + nf_tables_rule_destroy(ctx, rule); + } + ++/** nft_chain_validate - loop detection and hook validation ++ * ++ * @ctx: context containing call depth and base chain ++ * @chain: chain to validate ++ * ++ * Walk through the rules of the given chain and chase all jumps/gotos ++ * and set lookups until either the jump limit is hit or all reachable ++ * chains have been validated. ++ */ + int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain) + { + struct nft_expr *expr, *last; +@@ -3534,6 +3542,9 @@ int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain) + if (!expr->ops->validate) + continue; + ++ /* This may call nft_chain_validate() recursively, ++ * callers that do so must increment ctx->level. 
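++ * The ctx->level depth limit bounds that recursion, which is what ++ * allows nft_chain_validate() to stand in for the dedicated loop ++ * walker removed further below.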
++ */ + err = expr->ops->validate(ctx, expr, &data); + if (err < 0) + return err; +@@ -6060,7 +6071,7 @@ int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set, + if (!expr) + goto err_expr; + +- err = nft_expr_clone(expr, set->exprs[i]); ++ err = nft_expr_clone(expr, set->exprs[i], GFP_KERNEL_ACCOUNT); + if (err < 0) { + kfree(expr); + goto err_expr; +@@ -6099,7 +6110,7 @@ static int nft_set_elem_expr_setup(struct nft_ctx *ctx, + + for (i = 0; i < num_exprs; i++) { + expr = nft_setelem_expr_at(elem_expr, elem_expr->size); +- err = nft_expr_clone(expr, expr_array[i]); ++ err = nft_expr_clone(expr, expr_array[i], GFP_KERNEL_ACCOUNT); + if (err < 0) + goto err_elem_expr_setup; + +@@ -10191,143 +10202,6 @@ int nft_chain_validate_hooks(const struct nft_chain *chain, + } + EXPORT_SYMBOL_GPL(nft_chain_validate_hooks); + +-/* +- * Loop detection - walk through the ruleset beginning at the destination chain +- * of a new jump until either the source chain is reached (loop) or all +- * reachable chains have been traversed. +- * +- * The loop check is performed whenever a new jump verdict is added to an +- * expression or verdict map or a verdict map is bound to a new chain. +- */ +- +-static int nf_tables_check_loops(const struct nft_ctx *ctx, +- const struct nft_chain *chain); +- +-static int nft_check_loops(const struct nft_ctx *ctx, +- const struct nft_set_ext *ext) +-{ +- const struct nft_data *data; +- int ret; +- +- data = nft_set_ext_data(ext); +- switch (data->verdict.code) { +- case NFT_JUMP: +- case NFT_GOTO: +- ret = nf_tables_check_loops(ctx, data->verdict.chain); +- break; +- default: +- ret = 0; +- break; +- } +- +- return ret; +-} +- +-static int nf_tables_loop_check_setelem(const struct nft_ctx *ctx, +- struct nft_set *set, +- const struct nft_set_iter *iter, +- struct nft_set_elem *elem) +-{ +- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); +- +- if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) && +- *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END) +- return 0; +- +- return nft_check_loops(ctx, ext); +-} +- +-static int nft_set_catchall_loops(const struct nft_ctx *ctx, +- struct nft_set *set) +-{ +- u8 genmask = nft_genmask_next(ctx->net); +- struct nft_set_elem_catchall *catchall; +- struct nft_set_ext *ext; +- int ret = 0; +- +- list_for_each_entry_rcu(catchall, &set->catchall_list, list) { +- ext = nft_set_elem_ext(set, catchall->elem); +- if (!nft_set_elem_active(ext, genmask)) +- continue; +- +- ret = nft_check_loops(ctx, ext); +- if (ret < 0) +- return ret; +- } +- +- return ret; +-} +- +-static int nf_tables_check_loops(const struct nft_ctx *ctx, +- const struct nft_chain *chain) +-{ +- const struct nft_rule *rule; +- const struct nft_expr *expr, *last; +- struct nft_set *set; +- struct nft_set_binding *binding; +- struct nft_set_iter iter; +- +- if (ctx->chain == chain) +- return -ELOOP; +- +- list_for_each_entry(rule, &chain->rules, list) { +- nft_rule_for_each_expr(expr, last, rule) { +- struct nft_immediate_expr *priv; +- const struct nft_data *data; +- int err; +- +- if (strcmp(expr->ops->type->name, "immediate")) +- continue; +- +- priv = nft_expr_priv(expr); +- if (priv->dreg != NFT_REG_VERDICT) +- continue; +- +- data = &priv->data; +- switch (data->verdict.code) { +- case NFT_JUMP: +- case NFT_GOTO: +- err = nf_tables_check_loops(ctx, +- data->verdict.chain); +- if (err < 0) +- return err; +- break; +- default: +- break; +- } +- } +- } +- +- list_for_each_entry(set, &ctx->table->sets, list) { +- if 
(!nft_is_active_next(ctx->net, set)) +- continue; +- if (!(set->flags & NFT_SET_MAP) || +- set->dtype != NFT_DATA_VERDICT) +- continue; +- +- list_for_each_entry(binding, &set->bindings, list) { +- if (!(binding->flags & NFT_SET_MAP) || +- binding->chain != chain) +- continue; +- +- iter.genmask = nft_genmask_next(ctx->net); +- iter.skip = 0; +- iter.count = 0; +- iter.err = 0; +- iter.fn = nf_tables_loop_check_setelem; +- +- set->ops->walk(ctx, set, &iter); +- if (!iter.err) +- iter.err = nft_set_catchall_loops(ctx, set); +- +- if (iter.err < 0) +- return iter.err; +- } +- } +- +- return 0; +-} +- + /** + * nft_parse_u32_check - fetch u32 attribute and check for maximum value + * +@@ -10440,7 +10314,7 @@ static int nft_validate_register_store(const struct nft_ctx *ctx, + if (data != NULL && + (data->verdict.code == NFT_GOTO || + data->verdict.code == NFT_JUMP)) { +- err = nf_tables_check_loops(ctx, data->verdict.chain); ++ err = nft_chain_validate(ctx, data->verdict.chain); + if (err < 0) + return err; + } +diff --git a/net/netfilter/nft_connlimit.c b/net/netfilter/nft_connlimit.c +index d657f999a11b6..793994622b87d 100644 +--- a/net/netfilter/nft_connlimit.c ++++ b/net/netfilter/nft_connlimit.c +@@ -209,12 +209,12 @@ static void nft_connlimit_destroy(const struct nft_ctx *ctx, + nft_connlimit_do_destroy(ctx, priv); + } + +-static int nft_connlimit_clone(struct nft_expr *dst, const struct nft_expr *src) ++static int nft_connlimit_clone(struct nft_expr *dst, const struct nft_expr *src, gfp_t gfp) + { + struct nft_connlimit *priv_dst = nft_expr_priv(dst); + struct nft_connlimit *priv_src = nft_expr_priv(src); + +- priv_dst->list = kmalloc(sizeof(*priv_dst->list), GFP_ATOMIC); ++ priv_dst->list = kmalloc(sizeof(*priv_dst->list), gfp); + if (!priv_dst->list) + return -ENOMEM; + +diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c +index f4d3573e8782d..b5fe7fe4b60db 100644 +--- a/net/netfilter/nft_counter.c ++++ b/net/netfilter/nft_counter.c +@@ -225,7 +225,7 @@ static void nft_counter_destroy(const struct nft_ctx *ctx, + nft_counter_do_destroy(priv); + } + +-static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src) ++static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src, gfp_t gfp) + { + struct nft_counter_percpu_priv *priv = nft_expr_priv(src); + struct nft_counter_percpu_priv *priv_clone = nft_expr_priv(dst); +@@ -235,7 +235,7 @@ static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src) + + nft_counter_fetch(priv, &total); + +- cpu_stats = alloc_percpu_gfp(struct nft_counter, GFP_ATOMIC); ++ cpu_stats = alloc_percpu_gfp(struct nft_counter, gfp); + if (cpu_stats == NULL) + return -ENOMEM; + +diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c +index a470e5f612843..953aba871f45c 100644 +--- a/net/netfilter/nft_dynset.c ++++ b/net/netfilter/nft_dynset.c +@@ -35,7 +35,7 @@ static int nft_dynset_expr_setup(const struct nft_dynset *priv, + + for (i = 0; i < priv->num_exprs; i++) { + expr = nft_setelem_expr_at(elem_expr, elem_expr->size); +- if (nft_expr_clone(expr, priv->expr_array[i]) < 0) ++ if (nft_expr_clone(expr, priv->expr_array[i], GFP_ATOMIC) < 0) + return -1; + + elem_expr->size += priv->expr_array[i]->ops->size; +diff --git a/net/netfilter/nft_last.c b/net/netfilter/nft_last.c +index eaa54964cf23c..2ff387b649cfc 100644 +--- a/net/netfilter/nft_last.c ++++ b/net/netfilter/nft_last.c +@@ -101,12 +101,12 @@ static void nft_last_destroy(const struct nft_ctx *ctx, + kfree(priv->last); + } + 
+-static int nft_last_clone(struct nft_expr *dst, const struct nft_expr *src) ++static int nft_last_clone(struct nft_expr *dst, const struct nft_expr *src, gfp_t gfp) + { + struct nft_last_priv *priv_dst = nft_expr_priv(dst); + struct nft_last_priv *priv_src = nft_expr_priv(src); + +- priv_dst->last = kzalloc(sizeof(*priv_dst->last), GFP_ATOMIC); ++ priv_dst->last = kzalloc(sizeof(*priv_dst->last), gfp); + if (!priv_dst->last) + return -ENOMEM; + +diff --git a/net/netfilter/nft_limit.c b/net/netfilter/nft_limit.c +index 36ded7d43262c..a263596da99ac 100644 +--- a/net/netfilter/nft_limit.c ++++ b/net/netfilter/nft_limit.c +@@ -150,7 +150,7 @@ static void nft_limit_destroy(const struct nft_ctx *ctx, + } + + static int nft_limit_clone(struct nft_limit_priv *priv_dst, +- const struct nft_limit_priv *priv_src) ++ const struct nft_limit_priv *priv_src, gfp_t gfp) + { + priv_dst->tokens_max = priv_src->tokens_max; + priv_dst->rate = priv_src->rate; +@@ -158,7 +158,7 @@ static int nft_limit_clone(struct nft_limit_priv *priv_dst, + priv_dst->burst = priv_src->burst; + priv_dst->invert = priv_src->invert; + +- priv_dst->limit = kmalloc(sizeof(*priv_dst->limit), GFP_ATOMIC); ++ priv_dst->limit = kmalloc(sizeof(*priv_dst->limit), gfp); + if (!priv_dst->limit) + return -ENOMEM; + +@@ -222,14 +222,15 @@ static void nft_limit_pkts_destroy(const struct nft_ctx *ctx, + nft_limit_destroy(ctx, &priv->limit); + } + +-static int nft_limit_pkts_clone(struct nft_expr *dst, const struct nft_expr *src) ++static int nft_limit_pkts_clone(struct nft_expr *dst, const struct nft_expr *src, ++ gfp_t gfp) + { + struct nft_limit_priv_pkts *priv_dst = nft_expr_priv(dst); + struct nft_limit_priv_pkts *priv_src = nft_expr_priv(src); + + priv_dst->cost = priv_src->cost; + +- return nft_limit_clone(&priv_dst->limit, &priv_src->limit); ++ return nft_limit_clone(&priv_dst->limit, &priv_src->limit, gfp); + } + + static struct nft_expr_type nft_limit_type; +@@ -280,12 +281,13 @@ static void nft_limit_bytes_destroy(const struct nft_ctx *ctx, + nft_limit_destroy(ctx, priv); + } + +-static int nft_limit_bytes_clone(struct nft_expr *dst, const struct nft_expr *src) ++static int nft_limit_bytes_clone(struct nft_expr *dst, const struct nft_expr *src, ++ gfp_t gfp) + { + struct nft_limit_priv *priv_dst = nft_expr_priv(dst); + struct nft_limit_priv *priv_src = nft_expr_priv(src); + +- return nft_limit_clone(priv_dst, priv_src); ++ return nft_limit_clone(priv_dst, priv_src, gfp); + } + + static const struct nft_expr_ops nft_limit_bytes_ops = { +diff --git a/net/netfilter/nft_quota.c b/net/netfilter/nft_quota.c +index 410a5fcf88309..ef8e7cdbd0e6a 100644 +--- a/net/netfilter/nft_quota.c ++++ b/net/netfilter/nft_quota.c +@@ -232,7 +232,7 @@ static void nft_quota_destroy(const struct nft_ctx *ctx, + return nft_quota_do_destroy(ctx, priv); + } + +-static int nft_quota_clone(struct nft_expr *dst, const struct nft_expr *src) ++static int nft_quota_clone(struct nft_expr *dst, const struct nft_expr *src, gfp_t gfp) + { + struct nft_quota *priv_dst = nft_expr_priv(dst); + struct nft_quota *priv_src = nft_expr_priv(src); +@@ -240,7 +240,7 @@ static int nft_quota_clone(struct nft_expr *dst, const struct nft_expr *src) + priv_dst->quota = priv_src->quota; + priv_dst->flags = priv_src->flags; + +- priv_dst->consumed = kmalloc(sizeof(*priv_dst->consumed), GFP_ATOMIC); ++ priv_dst->consumed = kmalloc(sizeof(*priv_dst->consumed), gfp); + if (!priv_dst->consumed) + return -ENOMEM; + +diff --git a/net/sctp/input.c b/net/sctp/input.c +index 
4f43afa8678f9..4ee9374dcfb92 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -748,15 +748,19 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 	struct sock *sk = ep->base.sk;
+ 	struct net *net = sock_net(sk);
+ 	struct sctp_hashbucket *head;
++	int err = 0;
+ 
+ 	ep->hashent = sctp_ep_hashfn(net, ep->base.bind_addr.port);
+ 	head = &sctp_ep_hashtable[ep->hashent];
+ 
++	write_lock(&head->lock);
+ 	if (sk->sk_reuseport) {
+ 		bool any = sctp_is_ep_boundall(sk);
+ 		struct sctp_endpoint *ep2;
+ 		struct list_head *list;
+-		int cnt = 0, err = 1;
++		int cnt = 0;
++
++		err = 1;
+ 
+ 		list_for_each(list, &ep->base.bind_addr.address_list)
+ 			cnt++;
+@@ -774,24 +778,24 @@ static int __sctp_hash_endpoint(struct sctp_endpoint *ep)
+ 			if (!err) {
+ 				err = reuseport_add_sock(sk, sk2, any);
+ 				if (err)
+-					return err;
++					goto out;
+ 				break;
+ 			} else if (err < 0) {
+-				return err;
++				goto out;
+ 			}
+ 		}
+ 
+ 		if (err) {
+ 			err = reuseport_alloc(sk, any);
+ 			if (err)
+-				return err;
++				goto out;
+ 		}
+ 	}
+ 
+-	write_lock(&head->lock);
+ 	hlist_add_head(&ep->node, &head->chain);
++out:
+ 	write_unlock(&head->lock);
+-	return 0;
++	return err;
+ }
+ 
+ /* Add an endpoint to the hash. Local BH-safe. */
+@@ -816,10 +820,9 @@ static void __sctp_unhash_endpoint(struct sctp_endpoint *ep)
+ 
+ 	head = &sctp_ep_hashtable[ep->hashent];
+ 
++	write_lock(&head->lock);
+ 	if (rcu_access_pointer(sk->sk_reuseport_cb))
+ 		reuseport_detach_sock(sk);
+-
+-	write_lock(&head->lock);
+ 	hlist_del_init(&ep->node);
+ 	write_unlock(&head->lock);
+ }
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 6debf4fd42d4e..cef623ea15060 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -369,8 +369,10 @@ static void rpc_make_runnable(struct workqueue_struct *wq,
+ 	if (RPC_IS_ASYNC(task)) {
+ 		INIT_WORK(&task->u.tk_work, rpc_async_schedule);
+ 		queue_work(wq, &task->u.tk_work);
+-	} else
++	} else {
++		smp_mb__after_atomic();
+ 		wake_up_bit(&task->tk_runstate, RPC_TASK_QUEUED);
++	}
+ }
+ 
+ /*
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index db71f35b67b86..7d59f9a6c9046 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1461,6 +1461,7 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	struct unix_sock *u = unix_sk(sk), *newu, *otheru;
+ 	struct net *net = sock_net(sk);
+ 	struct sk_buff *skb = NULL;
++	unsigned char state;
+ 	long timeo;
+ 	int err;
+ 
+@@ -1505,7 +1506,6 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto out;
+ 	}
+ 
+-	/* Latch state of peer */
+ 	unix_state_lock(other);
+ 
+ 	/* Apparently VFS overslept socket death. Retry. */
+@@ -1535,37 +1535,21 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto restart;
+ 	}
+ 
+-	/* Latch our state.
+-
+-	   It is tricky place. We need to grab our state lock and cannot
+-	   drop lock on peer. It is dangerous because deadlock is
+-	   possible. Connect to self case and simultaneous
+-	   attempt to connect are eliminated by checking socket
+-	   state. other is TCP_LISTEN, if sk is TCP_LISTEN we
+-	   check this before attempt to grab lock.
+-
+-	   Well, and we have to recheck the state after socket locked.
++	/* self connect and simultaneous connect are eliminated
++	 * by rejecting TCP_LISTEN socket to avoid deadlock.
+ 	 */
+-	switch (READ_ONCE(sk->sk_state)) {
+-	case TCP_CLOSE:
+-		/* This is ok... continue with connect */
+-		break;
+-	case TCP_ESTABLISHED:
+-		/* Socket is already connected */
+-		err = -EISCONN;
+-		goto out_unlock;
+-	default:
+-		err = -EINVAL;
++	state = READ_ONCE(sk->sk_state);
++	if (unlikely(state != TCP_CLOSE)) {
++		err = state == TCP_ESTABLISHED ? -EISCONN : -EINVAL;
+ 		goto out_unlock;
+ 	}
+ 
+ 	unix_state_lock_nested(sk, U_LOCK_SECOND);
+ 
+-	if (sk->sk_state != TCP_CLOSE) {
++	if (unlikely(sk->sk_state != TCP_CLOSE)) {
++		err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EINVAL;
+ 		unix_state_unlock(sk);
+-		unix_state_unlock(other);
+-		sock_put(other);
+-		goto restart;
++		goto out_unlock;
+ 	}
+ 
+ 	err = security_unix_stream_connect(sk, other, newsk);
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index a00df7b89ca86..214eee6105c7f 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3346,6 +3346,33 @@ static int __nl80211_set_channel(struct cfg80211_registered_device *rdev,
+ 	if (chandef.chan != cur_chan)
+ 		return -EBUSY;
+ 
++	/* only allow this for regular channel widths */
++	switch (wdev->links[link_id].ap.chandef.width) {
++	case NL80211_CHAN_WIDTH_20_NOHT:
++	case NL80211_CHAN_WIDTH_20:
++	case NL80211_CHAN_WIDTH_40:
++	case NL80211_CHAN_WIDTH_80:
++	case NL80211_CHAN_WIDTH_80P80:
++	case NL80211_CHAN_WIDTH_160:
++	case NL80211_CHAN_WIDTH_320:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	switch (chandef.width) {
++	case NL80211_CHAN_WIDTH_20_NOHT:
++	case NL80211_CHAN_WIDTH_20:
++	case NL80211_CHAN_WIDTH_40:
++	case NL80211_CHAN_WIDTH_80:
++	case NL80211_CHAN_WIDTH_80P80:
++	case NL80211_CHAN_WIDTH_160:
++	case NL80211_CHAN_WIDTH_320:
++		break;
++	default:
++		return -EINVAL;
++	}
++
+ 	result = rdev_set_ap_chanwidth(rdev, dev, link_id,
+ 				       &chandef);
+ 	if (result)
+@@ -4394,10 +4421,7 @@ static void get_key_callback(void *c, struct key_params *params)
+ 	struct nlattr *key;
+ 	struct get_key_cookie *cookie = c;
+ 
+-	if ((params->key &&
+-	     nla_put(cookie->msg, NL80211_ATTR_KEY_DATA,
+-		     params->key_len, params->key)) ||
+-	    (params->seq &&
++	if ((params->seq &&
+ 	     nla_put(cookie->msg, NL80211_ATTR_KEY_SEQ,
+ 		     params->seq_len, params->seq)) ||
+ 	    (params->cipher &&
+@@ -4409,10 +4433,7 @@ static void get_key_callback(void *c, struct key_params *params)
+ 	if (!key)
+ 		goto nla_put_failure;
+ 
+-	if ((params->key &&
+-	     nla_put(cookie->msg, NL80211_KEY_DATA,
+-		     params->key_len, params->key)) ||
+-	    (params->seq &&
++	if ((params->seq &&
+ 	     nla_put(cookie->msg, NL80211_KEY_SEQ,
+ 		     params->seq_len, params->seq)) ||
+ 	    (params->cipher &&
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index f460ac80c8e49..0ffacc779cd66 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1989,6 +1989,8 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ }
+ 
+ static const struct snd_pci_quirk force_connect_list[] = {
++	SND_PCI_QUIRK(0x103c, 0x83e2, "HP EliteDesk 800 G4", 1),
++	SND_PCI_QUIRK(0x103c, 0x83ef, "HP MP9 G4 Retail System AMS", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x8711, "HP", 1),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index edfcd38175d23..93d65a1acc475 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10173,6 +10173,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x3038, "Intel NUC 13", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index c438fccac8e98..b4a40a880035c 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -381,6 +381,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8A43"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8A44"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/wcd938x-sdw.c b/sound/soc/codecs/wcd938x-sdw.c
+index 5b5b7c267a616..061c77d0cd45b 100644
+--- a/sound/soc/codecs/wcd938x-sdw.c
++++ b/sound/soc/codecs/wcd938x-sdw.c
+@@ -1252,12 +1252,12 @@ static int wcd9380_probe(struct sdw_slave *pdev,
+ 	pdev->prop.lane_control_support = true;
+ 	pdev->prop.simple_clk_stop_capable = true;
+ 	if (wcd->is_tx) {
+-		pdev->prop.source_ports = GENMASK(WCD938X_MAX_SWR_PORTS, 0);
++		pdev->prop.source_ports = GENMASK(WCD938X_MAX_SWR_PORTS - 1, 0);
+ 		pdev->prop.src_dpn_prop = wcd938x_dpn_prop;
+ 		wcd->ch_info = &wcd938x_sdw_tx_ch_info[0];
+ 		pdev->prop.wake_capable = true;
+ 	} else {
+-		pdev->prop.sink_ports = GENMASK(WCD938X_MAX_SWR_PORTS, 0);
++		pdev->prop.sink_ports = GENMASK(WCD938X_MAX_SWR_PORTS - 1, 0);
+ 		pdev->prop.sink_dpn_prop = wcd938x_dpn_prop;
+ 		wcd->ch_info = &wcd938x_sdw_rx_ch_info[0];
+ 	}
+diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c
+index 264ec05a3c675..054da9d2776cd 100644
+--- a/sound/soc/codecs/wsa881x.c
++++ b/sound/soc/codecs/wsa881x.c
+@@ -1131,7 +1131,7 @@ static int wsa881x_probe(struct sdw_slave *pdev,
+ 	wsa881x->sconfig.frame_rate = 48000;
+ 	wsa881x->sconfig.direction = SDW_DATA_DIR_RX;
+ 	wsa881x->sconfig.type = SDW_STREAM_PDM;
+-	pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS, 0);
++	pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS - 1, 0);
+ 	pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop;
+ 	pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
+ 	gpiod_direction_output(wsa881x->sd_n, 1);
+diff --git a/sound/soc/codecs/wsa883x.c b/sound/soc/codecs/wsa883x.c
+index cd96c35a150c8..f4b81ebab3537 100644
+--- a/sound/soc/codecs/wsa883x.c
++++ b/sound/soc/codecs/wsa883x.c
+@@ -1410,7 +1410,15 @@ static int wsa883x_probe(struct sdw_slave *pdev,
+ 	wsa883x->sconfig.direction = SDW_DATA_DIR_RX;
+ 	wsa883x->sconfig.type = SDW_STREAM_PDM;
+ 
+-	pdev->prop.sink_ports = GENMASK(WSA883X_MAX_SWR_PORTS, 0);
++	/**
++	 * Port map index starts with 0, however the data port for this codec
++	 * are from index 1
++	 */
++	if (of_property_read_u32_array(dev->of_node, "qcom,port-mapping", &pdev->m_port_map[1],
++				       WSA883X_MAX_SWR_PORTS))
++		dev_dbg(dev, "Static Port mapping not specified\n");
++
++	pdev->prop.sink_ports = GENMASK(WSA883X_MAX_SWR_PORTS - 1, 0);
+ 	pdev->prop.simple_clk_stop_capable = true;
+ 	pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop;
+ 	pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY;
+diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c
+index 94b169a5493b5..5218e40aeb1bb 100644
+--- a/sound/soc/meson/axg-fifo.c
++++ b/sound/soc/meson/axg-fifo.c
+@@ -207,25 +207,18 @@ static irqreturn_t axg_fifo_pcm_irq_block(int irq, void *dev_id)
+ 	status = FIELD_GET(STATUS1_INT_STS, status);
+ 	axg_fifo_ack_irq(fifo, status);
+ 
+-	/* Use the thread to call period elapsed on nonatomic links */
+-	if (status & FIFO_INT_COUNT_REPEAT)
+-		return IRQ_WAKE_THREAD;
++	if (status & ~FIFO_INT_COUNT_REPEAT)
++		dev_dbg(axg_fifo_dev(ss), "unexpected irq - STS 0x%02x\n",
++			status);
+ 
+-	dev_dbg(axg_fifo_dev(ss), "unexpected irq - STS 0x%02x\n",
+-		status);
++	if (status & FIFO_INT_COUNT_REPEAT) {
++		snd_pcm_period_elapsed(ss);
++		return IRQ_HANDLED;
++	}
+ 
+ 	return IRQ_NONE;
+ }
+ 
+-static irqreturn_t axg_fifo_pcm_irq_block_thread(int irq, void *dev_id)
+-{
+-	struct snd_pcm_substream *ss = dev_id;
+-
+-	snd_pcm_period_elapsed(ss);
+-
+-	return IRQ_HANDLED;
+-}
+-
+ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ 		      struct snd_pcm_substream *ss)
+ {
+@@ -251,8 +244,9 @@ int axg_fifo_pcm_open(struct snd_soc_component *component,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = request_threaded_irq(fifo->irq, axg_fifo_pcm_irq_block,
+-				   axg_fifo_pcm_irq_block_thread,
++	/* Use the threaded irq handler only with non-atomic links */
++	ret = request_threaded_irq(fifo->irq, NULL,
++				   axg_fifo_pcm_irq_block,
+ 				   IRQF_ONESHOT, dev_name(dev), ss);
+ 	if (ret)
+ 		return ret;
+diff --git a/sound/soc/sof/mediatek/mt8195/mt8195.c b/sound/soc/sof/mediatek/mt8195/mt8195.c
+index 3c81e84fcecfa..53cadbe8a05cc 100644
+--- a/sound/soc/sof/mediatek/mt8195/mt8195.c
++++ b/sound/soc/sof/mediatek/mt8195/mt8195.c
+@@ -662,7 +662,7 @@ static struct snd_sof_dsp_ops sof_mt8195_ops = {
+ static struct snd_sof_of_mach sof_mt8195_machs[] = {
+ 	{
+ 		.compatible = "google,tomato",
+-		.sof_tplg_filename = "sof-mt8195-mt6359-rt1019-rt5682-dts.tplg"
++		.sof_tplg_filename = "sof-mt8195-mt6359-rt1019-rt5682.tplg"
+ 	}, {
+ 		.compatible = "mediatek,mt8195",
+ 		.sof_tplg_filename = "sof-mt8195.tplg"
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index f4437015d43a7..9df49a880b750 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -286,12 +286,14 @@ static void line6_data_received(struct urb *urb)
+ {
+ 	struct usb_line6 *line6 = (struct usb_line6 *)urb->context;
+ 	struct midi_buffer *mb = &line6->line6midi->midibuf_in;
++	unsigned long flags;
+ 	int done;
+ 
+ 	if (urb->status == -ESHUTDOWN)
+ 		return;
+ 
+ 	if (line6->properties->capabilities & LINE6_CAP_CONTROL_MIDI) {
++		spin_lock_irqsave(&line6->line6midi->lock, flags);
+ 		done =
+ 			line6_midibuf_write(mb, urb->transfer_buffer, urb->actual_length);
+ 
+@@ -300,12 +302,15 @@ static void line6_data_received(struct urb *urb)
+ 			dev_dbg(line6->ifcdev, "%d %d buffer overflow - message skipped\n",
+ 				done, urb->actual_length);
+ 		}
++		spin_unlock_irqrestore(&line6->line6midi->lock, flags);
+ 
+ 		for (;;) {
++			spin_lock_irqsave(&line6->line6midi->lock, flags);
+ 			done =
+ 				line6_midibuf_read(mb, line6->buffer_message,
+ 						   LINE6_MIDI_MESSAGE_MAXLEN,
+ 						   LINE6_MIDIBUF_READ_RX);
++			spin_unlock_irqrestore(&line6->line6midi->lock, flags);
+ 
+ 			if (done <= 0)
+ 				break;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 5d72dc8441cbb..af1b8cf5a9883 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2594,6 +2594,10 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ 	}
+ },
+ 
++/* Stanton ScratchAmp */
++{ USB_DEVICE(0x103d, 0x0100) },
++{ USB_DEVICE(0x103d, 0x0101) },
++
+ /* Novation EMS devices */
+ {
+ 	USB_DEVICE_VENDOR_SPEC(0x1235, 0x0001),
+diff --git a/tools/arch/arm64/include/asm/cputype.h b/tools/arch/arm64/include/asm/cputype.h
+index abc418650fec0..63e90432f8c9f 100644
+--- a/tools/arch/arm64/include/asm/cputype.h
++++ b/tools/arch/arm64/include/asm/cputype.h
+@@ -83,6 +83,9 @@
+ #define ARM_CPU_PART_CORTEX_X2		0xD48
+ #define ARM_CPU_PART_NEOVERSE_N2	0xD49
+ #define ARM_CPU_PART_CORTEX_A78C	0xD4B
++#define ARM_CPU_PART_NEOVERSE_V2	0xD4F
++#define ARM_CPU_PART_CORTEX_X4		0xD82
++#define ARM_CPU_PART_NEOVERSE_V3	0xD84
+ 
+ #define APM_CPU_PART_POTENZA		0x000
+ 
+@@ -145,6 +148,9 @@
+ #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
+ #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
+ #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
++#define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2)
++#define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)
++#define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
+ #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 21817eb510396..7e0b846e17eef 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -1707,10 +1707,6 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
+ 	}
+ 
+ 	if (pinmaps) {
+-		err = create_and_mount_bpffs_dir(pinmaps);
+-		if (err)
+-			goto err_unpin;
+-
+ 		err = bpf_object__pin_maps(obj, pinmaps);
+ 		if (err) {
+ 			p_err("failed to pin all maps");
+diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+index d63a20fbed339..210b806351bcf 100644
+--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
++++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+@@ -152,7 +152,8 @@ static void test_send_signal_tracepoint(bool signal_thread)
+ static void test_send_signal_perf(bool signal_thread)
+ {
+ 	struct perf_event_attr attr = {
+-		.sample_period = 1,
++		.freq = 1,
++		.sample_freq = 1000,
+ 		.type = PERF_TYPE_SOFTWARE,
+ 		.config = PERF_COUNT_SW_CPU_CLOCK,
+ 	};
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index 51f68bb6bdb8a..a283107646540 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -1800,6 +1800,8 @@ chk_prio_nr()
+ {
+ 	local mp_prio_nr_tx=$1
+ 	local mp_prio_nr_rx=$2
++	local mpj_syn=$3
++	local mpj_syn_ack=$4
+ 	local count
+ 	local dump_stats
+ 
+@@ -1827,6 +1829,30 @@ chk_prio_nr()
+ 		echo "[ ok ]"
+ 	fi
+ 
++	printf "%-${nr_blank}s %s" " " "bkp syn"
++	count=$(get_counter ${ns1} "MPTcpExtMPJoinSynBackupRx")
++	if [ -z "$count" ]; then
++		echo -n "[skip]"
++	elif [ "$count" != "$mpj_syn" ]; then
++		echo "[fail] got $count JOIN[s] syn with Backup expected $mpj_syn"
++		fail_test
++		dump_stats=1
++	else
++		echo -n "[ ok ]"
++	fi
++
++	echo -n " - synack "
++	count=$(get_counter ${ns2} "MPTcpExtMPJoinSynAckBackupRx")
++	if [ -z "$count" ]; then
++		echo "[skip]"
++	elif [ "$count" != "$mpj_syn_ack" ]; then
++		echo "[fail] got $count JOIN[s] synack with Backup expected $mpj_syn_ack"
++		fail_test
++		dump_stats=1
++	else
++		echo "[ ok ]"
++	fi
++
+ 	[ "${dump_stats}" = 1 ] && dump_stats
+ }
+ 
+@@ -2633,11 +2659,23 @@ backup_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow,backup
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup
+ 		chk_join_nr 1 1 1
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 1 0
+ 	fi
+ 
+ 	# single address, backup
+ 	if reset "single address, backup" &&
++	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
++		pm_nl_set_limits $ns1 0 1
++		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,backup
++		pm_nl_set_limits $ns2 1 1
++		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup
++		chk_join_nr 1 1 1
++		chk_add_nr 1 1
++		chk_prio_nr 1 0 0 1
++	fi
++
++	# single address, switch to backup
++	if reset "single address, switch to backup" &&
+ 	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+ 		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
+@@ -2645,19 +2683,19 @@ backup_tests()
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ 		chk_join_nr 1 1 1
+ 		chk_add_nr 1 1
+-		chk_prio_nr 1 1
++		chk_prio_nr 1 1 0 0
+ 	fi
+ 
+ 	# single address with port, backup
+ 	if reset "single address with port, backup" &&
+ 	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+ 		pm_nl_set_limits $ns1 0 1
+-		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal port 10100
++		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal,backup port 10100
+ 		pm_nl_set_limits $ns2 1 1
+-		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
++		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup
+ 		chk_join_nr 1 1 1
+ 		chk_add_nr 1 1
+-		chk_prio_nr 1 1
++		chk_prio_nr 1 0 0 1
+ 	fi
+ 
+ 	if reset "mpc backup" &&
+@@ -2665,16 +2703,25 @@ backup_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+ 		chk_join_nr 0 0 0
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 0 0
+ 	fi
+ 
+ 	if reset "mpc backup both sides" &&
+ 	   continue_if mptcp_lib_kallsyms_doesnt_have "T mptcp_subflow_send_ack$"; then
+-		pm_nl_add_endpoint $ns1 10.0.1.1 flags subflow,backup
++		pm_nl_set_limits $ns1 0 2
++		pm_nl_set_limits $ns2 1 2
++		pm_nl_add_endpoint $ns1 10.0.1.1 flags signal,backup
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,backup
++
++		# 10.0.2.2 (non-backup) -> 10.0.1.1 (backup)
++		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow
++		# 10.0.1.2 (backup) -> 10.0.2.1 (non-backup)
++		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
++		ip -net "$ns2" route add 10.0.2.1 via 10.0.1.1 dev ns2eth1 # force this path
++
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
+-		chk_join_nr 0 0 0
+-		chk_prio_nr 1 1
++		chk_join_nr 2 2 2
++		chk_prio_nr 1 1 1 1
+ 	fi
+ 
+ 	if reset "mpc switch to backup" &&
+@@ -2682,7 +2729,7 @@ backup_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ 		chk_join_nr 0 0 0
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 0 0
+ 	fi
+ 
+ 	if reset "mpc switch to backup both sides" &&
+@@ -2691,7 +2738,7 @@ backup_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ 		chk_join_nr 0 0 0
+-		chk_prio_nr 1 1
++		chk_prio_nr 1 1 0 0
+ 	fi
+ }
+ 
+@@ -3022,7 +3069,7 @@ fullmesh_tests()
+ 		pm_nl_set_limits $ns2 4 4
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 1 slow backup,fullmesh
+ 		chk_join_nr 2 2 2
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 1 0
+ 		chk_rm_nr 0 1
+ 	fi
+ 
+@@ -3034,7 +3081,7 @@ fullmesh_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.2.2 flags subflow,backup,fullmesh
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow nobackup,nofullmesh
+ 		chk_join_nr 2 2 2
+-		chk_prio_nr 0 1
++		chk_prio_nr 0 1 1 0
+ 		chk_rm_nr 0 1
+ 	fi
+ }
+@@ -3140,7 +3187,7 @@ userspace_tests()
+ 		pm_nl_add_endpoint $ns2 10.0.3.2 flags subflow
+ 		run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow backup
+ 		chk_join_nr 1 1 0
+-		chk_prio_nr 0 0
++		chk_prio_nr 0 0 0 0
+ 	fi
+ 
+ 	# userspace pm type prevents rm_addr
+diff --git a/tools/testing/selftests/rcutorture/bin/torture.sh b/tools/testing/selftests/rcutorture/bin/torture.sh
+index d477618e7261d..483f22f5abd7c 100755
+--- a/tools/testing/selftests/rcutorture/bin/torture.sh
++++ b/tools/testing/selftests/rcutorture/bin/torture.sh
+@@ -405,16 +405,16 @@ fi
+ 
+ if test "$do_clocksourcewd" = "yes"
+ then
+-	torture_bootargs="rcupdate.rcu_cpu_stall_suppress_at_boot=1 torture.disable_onoff_at_boot rcupdate.rcu_task_stall_timeout=30000"
++	torture_bootargs="rcupdate.rcu_cpu_stall_suppress_at_boot=1 torture.disable_onoff_at_boot rcupdate.rcu_task_stall_timeout=30000 tsc=watchdog"
+ 	torture_set "clocksourcewd-1" tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 45s --configs TREE03 --kconfig "CONFIG_TEST_CLOCKSOURCE_WATCHDOG=y" --trust-make
+ 
+-	torture_bootargs="rcupdate.rcu_cpu_stall_suppress_at_boot=1 torture.disable_onoff_at_boot rcupdate.rcu_task_stall_timeout=30000 clocksource.max_cswd_read_retries=1"
++	torture_bootargs="rcupdate.rcu_cpu_stall_suppress_at_boot=1 torture.disable_onoff_at_boot rcupdate.rcu_task_stall_timeout=30000 tsc=watchdog"
+ 	torture_set "clocksourcewd-2" tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 45s --configs TREE03 --kconfig "CONFIG_TEST_CLOCKSOURCE_WATCHDOG=y" --trust-make
+ 
+ 	# In case our work is already done...
+ 	if test "$do_rcutorture" != "yes"
+ 	then
+-		torture_bootargs="rcupdate.rcu_cpu_stall_suppress_at_boot=1 torture.disable_onoff_at_boot rcupdate.rcu_task_stall_timeout=30000"
++		torture_bootargs="rcupdate.rcu_cpu_stall_suppress_at_boot=1 torture.disable_onoff_at_boot rcupdate.rcu_task_stall_timeout=30000 tsc=watchdog"
+ 		torture_set "clocksourcewd-3" tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 45s --configs TREE03 --trust-make
+ 	fi
+ fi
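
The recurring GENMASK change in the wcd938x/wsa881x/wsa883x hunks above is an off-by-one fix: GENMASK(h, l) sets bits l through h inclusive, so GENMASK(n, 0) sets n + 1 bits and advertised one more SoundWire port than the codec has. A minimal standalone sketch of the arithmetic follows (not part of the patch; GENMASK is reimplemented here after the kernel's include/linux/bits.h, simplified to the unsigned long case, and the port count is a placeholder rather than the driver's real WSA881X_MAX_SWR_PORTS value):

#include <stdio.h>

/* Simplified userspace rendition of the kernel's GENMASK(h, l):
 * a mask with bits l..h (inclusive) set.
 */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define GENMASK(h, l) \
	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

#define MAX_SWR_PORTS	8	/* placeholder port count for illustration */

int main(void)
{
	/* The old expression sets MAX_SWR_PORTS + 1 bits... */
	printf("GENMASK(MAX_SWR_PORTS, 0)     = %#lx\n",
	       GENMASK(MAX_SWR_PORTS, 0));
	/* ...the fixed expression sets exactly MAX_SWR_PORTS bits. */
	printf("GENMASK(MAX_SWR_PORTS - 1, 0) = %#lx\n",
	       GENMASK(MAX_SWR_PORTS - 1, 0));
	return 0;
}

With a port count of 8 this prints 0x1ff (nine bits) for the old expression and 0xff (eight bits) for the fixed one, which is why each probe function now passes MAX_SWR_PORTS - 1 as the high bit.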