From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 9A7E81584AD for ; Fri, 25 Apr 2025 11:47:50 +0000 (UTC) Received: from lists.gentoo.org (bobolink.gentoo.org [140.211.166.189]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) (Authenticated sender: relay-lists.gentoo.org@gentoo.org) by smtp.gentoo.org (Postfix) with ESMTPSA id 82AF3342FF1 for ; Fri, 25 Apr 2025 11:47:50 +0000 (UTC) Received: from bobolink.gentoo.org (localhost [127.0.0.1]) by bobolink.gentoo.org (Postfix) with ESMTP id 0F1871104B7; Fri, 25 Apr 2025 11:47:49 +0000 (UTC) Received: from smtp.gentoo.org (mail.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by bobolink.gentoo.org (Postfix) with ESMTPS id 030F81104B7 for ; Fri, 25 Apr 2025 11:47:48 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 03B2D340BEA for ; Fri, 25 Apr 2025 11:47:48 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 6FC4D133B for ; Fri, 25 Apr 2025 11:47:46 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1745581655.a8bb1202a1b4d7cdd7d8e5bf773b6e5b93796b80.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:6.12 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1024_linux-6.12.25.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: a8bb1202a1b4d7cdd7d8e5bf773b6e5b93796b80 X-VCS-Branch: 6.12 Date: Fri, 25 Apr 2025 11:47:46 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 336c023a-ae5d-42f6-a64c-057c44186da6 X-Archives-Hash: 27bd96bb5200d0879c8607ad280117f2 commit: a8bb1202a1b4d7cdd7d8e5bf773b6e5b93796b80 Author: Mike Pagano gentoo org> AuthorDate: Fri Apr 25 11:47:35 2025 +0000 Commit: Mike Pagano gentoo org> CommitDate: Fri Apr 25 11:47:35 2025 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8bb1202 Linux patch 6.12.25 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1024_linux-6.12.25.patch | 8312 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 8316 insertions(+) diff --git a/0000_README b/0000_README index b04d2cdd..e07d8d2e 100644 --- a/0000_README +++ b/0000_README @@ -139,6 +139,10 @@ Patch: 1023_linux-6.12.24.patch From: https://www.kernel.org Desc: Linux 6.12.24 +Patch: 1024_linux-6.12.25.patch +From: https://www.kernel.org +Desc: Linux 6.12.25 + Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch From: https://git.kernel.org/ 
Desc: fortify: Hide run-time copy size from value range tracking diff --git a/1024_linux-6.12.25.patch b/1024_linux-6.12.25.patch new file mode 100644 index 00000000..0bba333c --- /dev/null +++ b/1024_linux-6.12.25.patch @@ -0,0 +1,8312 @@ +diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst +index b57776a68f156d..15bcd1b4003a73 100644 +--- a/Documentation/arch/arm64/booting.rst ++++ b/Documentation/arch/arm64/booting.rst +@@ -285,6 +285,12 @@ Before jumping into the kernel, the following conditions must be met: + + - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1. + ++ For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present: ++ ++ - If EL3 is present and the kernel is entered at EL2: ++ ++ - SCR_EL3.FGTEn2 (bit 59) must be initialised to 0b1. ++ + For CPUs with support for HCRX_EL2 (FEAT_HCX) present: + + - If EL3 is present and the kernel is entered at EL2: +@@ -379,6 +385,22 @@ Before jumping into the kernel, the following conditions must be met: + + - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1. + ++ For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9): ++ ++ - If EL3 is present: ++ ++ - MDCR_EL3.EnPM2 (bit 7) must be initialised to 0b1. ++ ++ - If the kernel is entered at EL1 and EL2 is present: ++ ++ - HDFGRTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1. ++ - HDFGRTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1. ++ - HDFGRTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1. ++ ++ - HDFGWTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1. ++ - HDFGWTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1. ++ - HDFGWTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1. ++ + For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS): + + - If the kernel is entered at EL1 and EL2 is present: +diff --git a/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml b/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml +index 31295be910130c..234089b5954ddb 100644 +--- a/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml ++++ b/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml +@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# + title: Freescale Layerscape Reset Registers Module + + maintainers: +- - Frank Li ++ - Frank Li + + description: + Reset Module includes chip reset, service processor control and Reset Control +diff --git a/Documentation/netlink/specs/ovs_vport.yaml b/Documentation/netlink/specs/ovs_vport.yaml +index 86ba9ac2a52103..b538bb99ee9b5f 100644 +--- a/Documentation/netlink/specs/ovs_vport.yaml ++++ b/Documentation/netlink/specs/ovs_vport.yaml +@@ -123,12 +123,12 @@ attribute-sets: + + operations: + name-prefix: ovs-vport-cmd- ++ fixed-header: ovs-header + list: + - + name: new + doc: Create a new OVS vport + attribute-set: vport +- fixed-header: ovs-header + do: + request: + attributes: +@@ -141,7 +141,6 @@ operations: + name: del + doc: Delete existing OVS vport from a data path + attribute-set: vport +- fixed-header: ovs-header + do: + request: + attributes: +@@ -152,7 +151,6 @@ operations: + name: get + doc: Get / dump OVS vport configuration and state + attribute-set: vport +- fixed-header: ovs-header + do: &vport-get-op + request: + attributes: +diff --git a/Documentation/netlink/specs/rt_link.yaml b/Documentation/netlink/specs/rt_link.yaml +index 0c4d5d40cae905..a048fc30389d68 100644 +--- a/Documentation/netlink/specs/rt_link.yaml ++++ b/Documentation/netlink/specs/rt_link.yaml +@@ -1094,11 
+1094,10 @@ attribute-sets: + - + name: prop-list + type: nest +- nested-attributes: link-attrs ++ nested-attributes: prop-list-link-attrs + - + name: alt-ifname + type: string +- multi-attr: true + - + name: perm-address + type: binary +@@ -1137,6 +1136,13 @@ attribute-sets: + name: dpll-pin + type: nest + nested-attributes: link-dpll-pin-attrs ++ - ++ name: prop-list-link-attrs ++ subset-of: link-attrs ++ attributes: ++ - ++ name: alt-ifname ++ multi-attr: true + - + name: af-spec-attrs + attributes: +@@ -2071,9 +2077,10 @@ attribute-sets: + type: u32 + - + name: mctp-attrs ++ name-prefix: ifla-mctp- + attributes: + - +- name: mctp-net ++ name: net + type: u32 + - + name: stats-attrs +@@ -2319,7 +2326,6 @@ operations: + - min-mtu + - max-mtu + - prop-list +- - alt-ifname + - perm-address + - proto-down-reason + - parent-dev-name +diff --git a/Documentation/wmi/devices/msi-wmi-platform.rst b/Documentation/wmi/devices/msi-wmi-platform.rst +index 31a13694289238..73197b31926a57 100644 +--- a/Documentation/wmi/devices/msi-wmi-platform.rst ++++ b/Documentation/wmi/devices/msi-wmi-platform.rst +@@ -138,6 +138,10 @@ input data, the meaning of which depends on the subfeature being accessed. + The output buffer contains a single byte which signals success or failure (``0x00`` on failure) + and 31 bytes of output data, the meaning if which depends on the subfeature being accessed. + ++.. note:: ++ The ACPI control method responsible for handling the WMI method calls is not thread-safe. ++ This is a firmware bug that needs to be handled inside the driver itself. ++ + WMI method Get_EC() + ------------------- + +diff --git a/Makefile b/Makefile +index e1fa425089c220..93f4ba25a45336 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 12 +-SUBLEVEL = 24 ++SUBLEVEL = 25 + EXTRAVERSION = + NAME = Baby Opossum Posse + +@@ -455,7 +455,6 @@ export rust_common_flags := --edition=2021 \ + -Wclippy::ignored_unit_patterns \ + -Wclippy::mut_mut \ + -Wclippy::needless_bitwise_bool \ +- -Wclippy::needless_continue \ + -Aclippy::needless_lifetimes \ + -Wclippy::no_mangle_with_rust_abi \ + -Wclippy::undocumented_unsafe_blocks \ +@@ -1016,6 +1015,9 @@ endif ++# Ensure compilers do not transform certain loops into calls to wcslen() ++KBUILD_CFLAGS += -fno-builtin-wcslen ++ + # change __FILE__ to the relative path from the srctree + KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=) + +diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h +index e0ffdf13a18b3f..bdbe9e08664a69 100644 +--- a/arch/arm64/include/asm/el2_setup.h ++++ b/arch/arm64/include/asm/el2_setup.h +@@ -215,6 +215,30 @@ + .Lskip_fgt_\@: + .endm + ++.macro __init_el2_fgt2 ++ mrs x1, id_aa64mmfr0_el1 ++ ubfx x1, x1, #ID_AA64MMFR0_EL1_FGT_SHIFT, #4 ++ cmp x1, #ID_AA64MMFR0_EL1_FGT_FGT2 ++ b.lt .Lskip_fgt2_\@ ++ ++ mov x0, xzr ++ mrs x1, id_aa64dfr0_el1 ++ ubfx x1, x1, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4 ++ cmp x1, #ID_AA64DFR0_EL1_PMUVer_V3P9 ++ b.lt .Lskip_pmuv3p9_\@ ++ ++ orr x0, x0, #HDFGRTR2_EL2_nPMICNTR_EL0 ++ orr x0, x0, #HDFGRTR2_EL2_nPMICFILTR_EL0 ++ orr x0, x0, #HDFGRTR2_EL2_nPMUACR_EL1 ++.Lskip_pmuv3p9_\@: ++ msr_s SYS_HDFGRTR2_EL2, x0 ++ msr_s SYS_HDFGWTR2_EL2, x0 ++ msr_s SYS_HFGRTR2_EL2, xzr ++ msr_s SYS_HFGWTR2_EL2, xzr ++ msr_s SYS_HFGITR2_EL2, xzr ++.Lskip_fgt2_\@: ++.endm ++ +.macro 
__init_el2_nvhe_prepare_eret + mov x0, #INIT_PSTATE_EL1 + msr spsr_el2, x0 +@@ -240,6 +264,7 @@ + __init_el2_nvhe_idregs + __init_el2_cptr + __init_el2_fgt ++ __init_el2_fgt2 + .endm + + #ifndef __KVM_NVHE_HYPERVISOR__ +diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg +index 8d637ac4b7c6b9..362bcfa0aed18f 100644 +--- a/arch/arm64/tools/sysreg ++++ b/arch/arm64/tools/sysreg +@@ -1238,6 +1238,7 @@ UnsignedEnum 11:8 PMUVer + 0b0110 V3P5 + 0b0111 V3P7 + 0b1000 V3P8 ++ 0b1001 V3P9 + 0b1111 IMP_DEF + EndEnum + UnsignedEnum 7:4 TraceVer +@@ -1556,6 +1557,7 @@ EndEnum + UnsignedEnum 59:56 FGT + 0b0000 NI + 0b0001 IMP ++ 0b0010 FGT2 + EndEnum + Res0 55:48 + UnsignedEnum 47:44 EXS +@@ -1617,6 +1619,7 @@ Enum 3:0 PARANGE + 0b0100 44 + 0b0101 48 + 0b0110 52 ++ 0b0111 56 + EndEnum + EndSysreg + +@@ -2463,6 +2466,101 @@ Field 1 ICIALLU + Field 0 ICIALLUIS + EndSysreg + ++Sysreg HDFGRTR2_EL2 3 4 3 1 0 ++Res0 63:25 ++Field 24 nPMBMAR_EL1 ++Field 23 nMDSTEPOP_EL1 ++Field 22 nTRBMPAM_EL1 ++Res0 21 ++Field 20 nTRCITECR_EL1 ++Field 19 nPMSDSFR_EL1 ++Field 18 nSPMDEVAFF_EL1 ++Field 17 nSPMID ++Field 16 nSPMSCR_EL1 ++Field 15 nSPMACCESSR_EL1 ++Field 14 nSPMCR_EL0 ++Field 13 nSPMOVS ++Field 12 nSPMINTEN ++Field 11 nSPMCNTEN ++Field 10 nSPMSELR_EL0 ++Field 9 nSPMEVTYPERn_EL0 ++Field 8 nSPMEVCNTRn_EL0 ++Field 7 nPMSSCR_EL1 ++Field 6 nPMSSDATA ++Field 5 nMDSELR_EL1 ++Field 4 nPMUACR_EL1 ++Field 3 nPMICFILTR_EL0 ++Field 2 nPMICNTR_EL0 ++Field 1 nPMIAR_EL1 ++Field 0 nPMECR_EL1 ++EndSysreg ++ ++Sysreg HDFGWTR2_EL2 3 4 3 1 1 ++Res0 63:25 ++Field 24 nPMBMAR_EL1 ++Field 23 nMDSTEPOP_EL1 ++Field 22 nTRBMPAM_EL1 ++Field 21 nPMZR_EL0 ++Field 20 nTRCITECR_EL1 ++Field 19 nPMSDSFR_EL1 ++Res0 18:17 ++Field 16 nSPMSCR_EL1 ++Field 15 nSPMACCESSR_EL1 ++Field 14 nSPMCR_EL0 ++Field 13 nSPMOVS ++Field 12 nSPMINTEN ++Field 11 nSPMCNTEN ++Field 10 nSPMSELR_EL0 ++Field 9 nSPMEVTYPERn_EL0 ++Field 8 nSPMEVCNTRn_EL0 ++Field 7 nPMSSCR_EL1 ++Res0 6 ++Field 5 nMDSELR_EL1 ++Field 4 nPMUACR_EL1 ++Field 3 nPMICFILTR_EL0 ++Field 2 nPMICNTR_EL0 ++Field 1 nPMIAR_EL1 ++Field 0 nPMECR_EL1 ++EndSysreg ++ ++Sysreg HFGRTR2_EL2 3 4 3 1 2 ++Res0 63:15 ++Field 14 nACTLRALIAS_EL1 ++Field 13 nACTLRMASK_EL1 ++Field 12 nTCR2ALIAS_EL1 ++Field 11 nTCRALIAS_EL1 ++Field 10 nSCTLRALIAS2_EL1 ++Field 9 nSCTLRALIAS_EL1 ++Field 8 nCPACRALIAS_EL1 ++Field 7 nTCR2MASK_EL1 ++Field 6 nTCRMASK_EL1 ++Field 5 nSCTLR2MASK_EL1 ++Field 4 nSCTLRMASK_EL1 ++Field 3 nCPACRMASK_EL1 ++Field 2 nRCWSMASK_EL1 ++Field 1 nERXGSR_EL1 ++Field 0 nPFAR_EL1 ++EndSysreg ++ ++Sysreg HFGWTR2_EL2 3 4 3 1 3 ++Res0 63:15 ++Field 14 nACTLRALIAS_EL1 ++Field 13 nACTLRMASK_EL1 ++Field 12 nTCR2ALIAS_EL1 ++Field 11 nTCRALIAS_EL1 ++Field 10 nSCTLRALIAS2_EL1 ++Field 9 nSCTLRALIAS_EL1 ++Field 8 nCPACRALIAS_EL1 ++Field 7 nTCR2MASK_EL1 ++Field 6 nTCRMASK_EL1 ++Field 5 nSCTLR2MASK_EL1 ++Field 4 nSCTLRMASK_EL1 ++Field 3 nCPACRMASK_EL1 ++Field 2 nRCWSMASK_EL1 ++Res0 1 ++Field 0 nPFAR_EL1 ++EndSysreg ++ + Sysreg HDFGRTR_EL2 3 4 3 1 4 + Field 63 PMBIDR_EL1 + Field 62 nPMSNEVFR_EL1 +@@ -2635,6 +2733,12 @@ Field 1 AMEVCNTR00_EL0 + Field 0 AMCNTEN0 + EndSysreg + ++Sysreg HFGITR2_EL2 3 4 3 1 7 ++Res0 63:2 ++Field 1 nDCCIVAPS ++Field 0 TSBCSYNC ++EndSysreg ++ + Sysreg ZCR_EL2 3 4 1 2 0 + Fields ZCR_ELx + EndSysreg +diff --git a/arch/loongarch/kernel/acpi.c b/arch/loongarch/kernel/acpi.c +index 382a09a7152c30..1120ac2824f6e8 100644 +--- a/arch/loongarch/kernel/acpi.c ++++ b/arch/loongarch/kernel/acpi.c +@@ -249,18 +249,6 @@ static __init int setup_node(int pxm) + return acpi_map_pxm_to_node(pxm); + } + +-/* 
+- * Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for +- * I/O localities since SRAT does not list them. I/O localities are +- * not supported at this point. +- */ +-unsigned int numa_distance_cnt; +- +-static inline unsigned int get_numa_distances_cnt(struct acpi_table_slit *slit) +-{ +- return slit->locality_count; +-} +- + void __init numa_set_distance(int from, int to, int distance) + { + if ((u8)distance != distance || (from == to && distance != LOCAL_DISTANCE)) { +diff --git a/arch/mips/dec/prom/init.c b/arch/mips/dec/prom/init.c +index cb12eb211a49e0..8d74d7d6c05b47 100644 +--- a/arch/mips/dec/prom/init.c ++++ b/arch/mips/dec/prom/init.c +@@ -42,7 +42,7 @@ int (*__pmax_close)(int); + * Detect which PROM the DECSTATION has, and set the callback vectors + * appropriately. + */ +-void __init which_prom(s32 magic, s32 *prom_vec) ++static void __init which_prom(s32 magic, s32 *prom_vec) + { + /* + * No sign of the REX PROM's magic number means we assume a non-REX +diff --git a/arch/mips/include/asm/ds1287.h b/arch/mips/include/asm/ds1287.h +index 46cfb01f9a14e7..51cb61fd4c0330 100644 +--- a/arch/mips/include/asm/ds1287.h ++++ b/arch/mips/include/asm/ds1287.h +@@ -8,7 +8,7 @@ + #define __ASM_DS1287_H + + extern int ds1287_timer_state(void); +-extern void ds1287_set_base_clock(unsigned int clock); ++extern int ds1287_set_base_clock(unsigned int hz); + extern int ds1287_clockevent_init(int irq); + + #endif +diff --git a/arch/mips/kernel/cevt-ds1287.c b/arch/mips/kernel/cevt-ds1287.c +index 9a47fbcd4638a6..de64d6bb7ba36c 100644 +--- a/arch/mips/kernel/cevt-ds1287.c ++++ b/arch/mips/kernel/cevt-ds1287.c +@@ -10,6 +10,7 @@ + #include + #include + ++#include + #include + + int ds1287_timer_state(void) +diff --git a/arch/riscv/include/asm/kgdb.h b/arch/riscv/include/asm/kgdb.h +index 46677daf708bd0..cc11c4544cffd1 100644 +--- a/arch/riscv/include/asm/kgdb.h ++++ b/arch/riscv/include/asm/kgdb.h +@@ -19,16 +19,9 @@ + + #ifndef __ASSEMBLY__ + ++void arch_kgdb_breakpoint(void); + extern unsigned long kgdb_compiled_break; + +-static inline void arch_kgdb_breakpoint(void) +-{ +- asm(".global kgdb_compiled_break\n" +- ".option norvc\n" +- "kgdb_compiled_break: ebreak\n" +- ".option rvc\n"); +-} +- + #endif /* !__ASSEMBLY__ */ + + #define DBG_REG_ZERO "zero" +diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h +index 121fff429dce66..eceabf59ae482a 100644 +--- a/arch/riscv/include/asm/syscall.h ++++ b/arch/riscv/include/asm/syscall.h +@@ -62,8 +62,11 @@ static inline void syscall_get_arguments(struct task_struct *task, + unsigned long *args) + { + args[0] = regs->orig_a0; +- args++; +- memcpy(args, ®s->a1, 5 * sizeof(args[0])); ++ args[1] = regs->a1; ++ args[2] = regs->a2; ++ args[3] = regs->a3; ++ args[4] = regs->a4; ++ args[5] = regs->a5; + } + + static inline int syscall_get_arch(struct task_struct *task) +diff --git a/arch/riscv/kernel/kgdb.c b/arch/riscv/kernel/kgdb.c +index 2e0266ae6bd728..9f3db3503dabd6 100644 +--- a/arch/riscv/kernel/kgdb.c ++++ b/arch/riscv/kernel/kgdb.c +@@ -254,6 +254,12 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc) + regs->epc = pc; + } + ++noinline void arch_kgdb_breakpoint(void) ++{ ++ asm(".global kgdb_compiled_break\n" ++ "kgdb_compiled_break: ebreak\n"); ++} ++ + void kgdb_arch_handle_qxfer_pkt(char *remcom_in_buffer, + char *remcom_out_buffer) + { +diff --git a/arch/riscv/kernel/module-sections.c b/arch/riscv/kernel/module-sections.c +index e264e59e596e80..91d0b355ceeff6 100644 +--- 
a/arch/riscv/kernel/module-sections.c ++++ b/arch/riscv/kernel/module-sections.c +@@ -73,16 +73,17 @@ static bool duplicate_rela(const Elf_Rela *rela, int idx) + static void count_max_entries(Elf_Rela *relas, int num, + unsigned int *plts, unsigned int *gots) + { +- unsigned int type, i; +- +- for (i = 0; i < num; i++) { +- type = ELF_RISCV_R_TYPE(relas[i].r_info); +- if (type == R_RISCV_CALL_PLT) { ++ for (int i = 0; i < num; i++) { ++ switch (ELF_R_TYPE(relas[i].r_info)) { ++ case R_RISCV_CALL_PLT: ++ case R_RISCV_PLT32: + if (!duplicate_rela(relas, i)) + (*plts)++; +- } else if (type == R_RISCV_GOT_HI20) { ++ break; ++ case R_RISCV_GOT_HI20: + if (!duplicate_rela(relas, i)) + (*gots)++; ++ break; + } + } + } +diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c +index 47d0ebeec93c23..7f6147c18033b2 100644 +--- a/arch/riscv/kernel/module.c ++++ b/arch/riscv/kernel/module.c +@@ -648,7 +648,7 @@ process_accumulated_relocations(struct module *me, + kfree(bucket_iter); + } + +- kfree(*relocation_hashtable); ++ kvfree(*relocation_hashtable); + } + + static int add_relocation_to_accumulate(struct module *me, int type, +@@ -752,9 +752,10 @@ initialize_relocation_hashtable(unsigned int num_relocations, + + hashtable_size <<= should_double_size; + +- *relocation_hashtable = kmalloc_array(hashtable_size, +- sizeof(**relocation_hashtable), +- GFP_KERNEL); ++ /* Number of relocations may be large, so kvmalloc it */ ++ *relocation_hashtable = kvmalloc_array(hashtable_size, ++ sizeof(**relocation_hashtable), ++ GFP_KERNEL); + if (!*relocation_hashtable) + return 0; + +@@ -859,7 +860,7 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab, + } + + j++; +- if (j > sechdrs[relsec].sh_size / sizeof(*rel)) ++ if (j == num_relocations) + j = 0; + + } while (j_idx != j); +diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c +index 7934613a98c883..194bda6d74ce72 100644 +--- a/arch/riscv/kernel/setup.c ++++ b/arch/riscv/kernel/setup.c +@@ -66,6 +66,9 @@ static struct resource bss_res = { .name = "Kernel bss", }; + static struct resource elfcorehdr_res = { .name = "ELF Core hdr", }; + #endif + ++static int num_standard_resources; ++static struct resource *standard_resources; ++ + static int __init add_resource(struct resource *parent, + struct resource *res) + { +@@ -139,7 +142,7 @@ static void __init init_resources(void) + struct resource *res = NULL; + struct resource *mem_res = NULL; + size_t mem_res_sz = 0; +- int num_resources = 0, res_idx = 0; ++ int num_resources = 0, res_idx = 0, non_resv_res = 0; + int ret = 0; + + /* + 1 as memblock_alloc() might increase memblock.reserved.cnt */ +@@ -195,6 +198,7 @@ static void __init init_resources(void) + /* Add /memory regions to the resource tree */ + for_each_mem_region(region) { + res = &mem_res[res_idx--]; ++ non_resv_res++; + + if (unlikely(memblock_is_nomap(region))) { + res->name = "Reserved"; +@@ -212,6 +216,9 @@ static void __init init_resources(void) + goto error; + } + ++ num_standard_resources = non_resv_res; ++ standard_resources = &mem_res[res_idx + 1]; ++ + /* Clean-up any unused pre-allocated resources */ + if (res_idx >= 0) + memblock_free(mem_res, (res_idx + 1) * sizeof(*mem_res)); +@@ -223,6 +230,33 @@ static void __init init_resources(void) + memblock_free(mem_res, mem_res_sz); + } + ++static int __init reserve_memblock_reserved_regions(void) ++{ ++ u64 i, j; ++ ++ for (i = 0; i < num_standard_resources; i++) { ++ struct resource *mem = &standard_resources[i]; ++ phys_addr_t r_start, r_end, mem_size = 
resource_size(mem); ++ ++ if (!memblock_is_region_reserved(mem->start, mem_size)) ++ continue; ++ ++ for_each_reserved_mem_range(j, &r_start, &r_end) { ++ resource_size_t start, end; ++ ++ start = max(PFN_PHYS(PFN_DOWN(r_start)), mem->start); ++ end = min(PFN_PHYS(PFN_UP(r_end)) - 1, mem->end); ++ ++ if (start > mem->end || end < mem->start) ++ continue; ++ ++ reserve_region_with_split(mem, start, end, "Reserved"); ++ } ++ } ++ ++ return 0; ++} ++arch_initcall(reserve_memblock_reserved_regions); + + static void __init parse_dtb(void) + { +diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c +index dbba332e4a12d7..f676156d9f3db4 100644 +--- a/arch/x86/boot/compressed/mem.c ++++ b/arch/x86/boot/compressed/mem.c +@@ -34,11 +34,14 @@ static bool early_is_tdx_guest(void) + + void arch_accept_memory(phys_addr_t start, phys_addr_t end) + { ++ static bool sevsnp; ++ + /* Platform-specific memory-acceptance call goes here */ + if (early_is_tdx_guest()) { + if (!tdx_accept_memory(start, end)) + panic("TDX: Failed to accept memory\n"); +- } else if (sev_snp_enabled()) { ++ } else if (sevsnp || (sev_get_status() & MSR_AMD64_SEV_SNP_ENABLED)) { ++ sevsnp = true; + snp_accept_memory(start, end); + } else { + error("Cannot accept memory: unknown platform\n"); +diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c +index cd44e120fe5377..f49f7eef1dba07 100644 +--- a/arch/x86/boot/compressed/sev.c ++++ b/arch/x86/boot/compressed/sev.c +@@ -164,10 +164,7 @@ bool sev_snp_enabled(void) + + static void __page_state_change(unsigned long paddr, enum psc_op op) + { +- u64 val; +- +- if (!sev_snp_enabled()) +- return; ++ u64 val, msr; + + /* + * If private -> shared then invalidate the page before requesting the +@@ -176,6 +173,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op) + if (op == SNP_PAGE_STATE_SHARED) + pvalidate_4k_page(paddr, paddr, false); + ++ /* Save the current GHCB MSR value */ ++ msr = sev_es_rd_ghcb_msr(); ++ + /* Issue VMGEXIT to change the page state in RMP table. */ + sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op)); + VMGEXIT(); +@@ -185,6 +185,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op) + if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val)) + sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC); + ++ /* Restore the GHCB MSR value */ ++ sev_es_wr_ghcb_msr(msr); ++ + /* + * Now that page state is changed in the RMP table, validate it so that it is + * consistent with the RMP entry. 
+@@ -195,11 +198,17 @@ static void __page_state_change(unsigned long paddr, enum psc_op op) + + void snp_set_page_private(unsigned long paddr) + { ++ if (!sev_snp_enabled()) ++ return; ++ + __page_state_change(paddr, SNP_PAGE_STATE_PRIVATE); + } + + void snp_set_page_shared(unsigned long paddr) + { ++ if (!sev_snp_enabled()) ++ return; ++ + __page_state_change(paddr, SNP_PAGE_STATE_SHARED); + } + +@@ -223,56 +232,10 @@ static bool early_setup_ghcb(void) + return true; + } + +-static phys_addr_t __snp_accept_memory(struct snp_psc_desc *desc, +- phys_addr_t pa, phys_addr_t pa_end) +-{ +- struct psc_hdr *hdr; +- struct psc_entry *e; +- unsigned int i; +- +- hdr = &desc->hdr; +- memset(hdr, 0, sizeof(*hdr)); +- +- e = desc->entries; +- +- i = 0; +- while (pa < pa_end && i < VMGEXIT_PSC_MAX_ENTRY) { +- hdr->end_entry = i; +- +- e->gfn = pa >> PAGE_SHIFT; +- e->operation = SNP_PAGE_STATE_PRIVATE; +- if (IS_ALIGNED(pa, PMD_SIZE) && (pa_end - pa) >= PMD_SIZE) { +- e->pagesize = RMP_PG_SIZE_2M; +- pa += PMD_SIZE; +- } else { +- e->pagesize = RMP_PG_SIZE_4K; +- pa += PAGE_SIZE; +- } +- +- e++; +- i++; +- } +- +- if (vmgexit_psc(boot_ghcb, desc)) +- sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC); +- +- pvalidate_pages(desc); +- +- return pa; +-} +- + void snp_accept_memory(phys_addr_t start, phys_addr_t end) + { +- struct snp_psc_desc desc = {}; +- unsigned int i; +- phys_addr_t pa; +- +- if (!boot_ghcb && !early_setup_ghcb()) +- sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC); +- +- pa = start; +- while (pa < end) +- pa = __snp_accept_memory(&desc, pa, end); ++ for (phys_addr_t pa = start; pa < end; pa += PAGE_SIZE) ++ __page_state_change(pa, SNP_PAGE_STATE_PRIVATE); + } + + void sev_es_shutdown_ghcb(void) +diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h +index fc725a981b093b..4e463f33186df4 100644 +--- a/arch/x86/boot/compressed/sev.h ++++ b/arch/x86/boot/compressed/sev.h +@@ -12,11 +12,13 @@ + + bool sev_snp_enabled(void); + void snp_accept_memory(phys_addr_t start, phys_addr_t end); ++u64 sev_get_status(void); + + #else + + static inline bool sev_snp_enabled(void) { return false; } + static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { } ++static inline u64 sev_get_status(void) { return 0; } + + #endif + +diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c +index 1617aa3efd68b1..1b82bcc6fa5564 100644 +--- a/arch/x86/events/intel/ds.c ++++ b/arch/x86/events/intel/ds.c +@@ -1317,8 +1317,10 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event) + * + precise_ip < 2 for the non event IP + * + For RTM TSX weight we need GPRs for the abort code. 
+ */ +- gprs = (sample_type & PERF_SAMPLE_REGS_INTR) && +- (attr->sample_regs_intr & PEBS_GP_REGS); ++ gprs = ((sample_type & PERF_SAMPLE_REGS_INTR) && ++ (attr->sample_regs_intr & PEBS_GP_REGS)) || ++ ((sample_type & PERF_SAMPLE_REGS_USER) && ++ (attr->sample_regs_user & PEBS_GP_REGS)); + + tsx_weight = (sample_type & PERF_SAMPLE_WEIGHT_TYPE) && + ((attr->config & INTEL_ARCH_EVENT_MASK) == +@@ -1970,7 +1972,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event, + regs->flags &= ~PERF_EFLAGS_EXACT; + } + +- if (sample_type & PERF_SAMPLE_REGS_INTR) ++ if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER)) + adaptive_pebs_save_regs(regs, gprs); + } + +diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c +index ca98744343b89e..543609d1231efc 100644 +--- a/arch/x86/events/intel/uncore_snbep.c ++++ b/arch/x86/events/intel/uncore_snbep.c +@@ -4891,28 +4891,28 @@ static struct uncore_event_desc snr_uncore_iio_freerunning_events[] = { + INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"), + /* Free-Running IIO BANDWIDTH IN Counters */ + INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"), + INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"), ++ INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.0517578125e-5"), + INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"), + { /* end: all zeroes */ }, + }; +@@ -5485,37 +5485,6 @@ static struct freerunning_counters icx_iio_freerunning[] = { + [ICX_IIO_MSR_BW_IN] = { 0xaa0, 0x1, 0x10, 8, 48, icx_iio_bw_freerunning_box_offsets }, + }; + +-static struct uncore_event_desc icx_uncore_iio_freerunning_events[] = { +- /* Free-Running IIO CLOCKS Counter */ +- INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"), +- /* Free-Running IIO BANDWIDTH IN Counters */ +- 
INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"), +- { /* end: all zeroes */ }, +-}; +- + static struct intel_uncore_type icx_uncore_iio_free_running = { + .name = "iio_free_running", + .num_counters = 9, +@@ -5523,7 +5492,7 @@ static struct intel_uncore_type icx_uncore_iio_free_running = { + .num_freerunning_types = ICX_IIO_FREERUNNING_TYPE_MAX, + .freerunning = icx_iio_freerunning, + .ops = &skx_uncore_iio_freerunning_ops, +- .event_descs = icx_uncore_iio_freerunning_events, ++ .event_descs = snr_uncore_iio_freerunning_events, + .format_group = &skx_uncore_iio_freerunning_format_group, + }; + +@@ -6320,69 +6289,13 @@ static struct freerunning_counters spr_iio_freerunning[] = { + [SPR_IIO_MSR_BW_OUT] = { 0x3808, 0x1, 0x10, 8, 48 }, + }; + +-static struct uncore_event_desc spr_uncore_iio_freerunning_events[] = { +- /* Free-Running IIO CLOCKS Counter */ +- INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"), +- /* Free-Running IIO BANDWIDTH IN Counters */ +- INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6, 
"event=0xff,umask=0x26"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"), +- /* Free-Running IIO BANDWIDTH OUT Counters */ +- INTEL_UNCORE_EVENT_DESC(bw_out_port0, "event=0xff,umask=0x30"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port1, "event=0xff,umask=0x31"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port2, "event=0xff,umask=0x32"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port3, "event=0xff,umask=0x33"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port4, "event=0xff,umask=0x34"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port4.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port4.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port5, "event=0xff,umask=0x35"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port5.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port5.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port6, "event=0xff,umask=0x36"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port6.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port6.unit, "MiB"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port7, "event=0xff,umask=0x37"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port7.scale, "3.814697266e-6"), +- INTEL_UNCORE_EVENT_DESC(bw_out_port7.unit, "MiB"), +- { /* end: all zeroes */ }, +-}; +- + static struct intel_uncore_type spr_uncore_iio_free_running = { + .name = "iio_free_running", + .num_counters = 17, + .num_freerunning_types = SPR_IIO_FREERUNNING_TYPE_MAX, + .freerunning = spr_iio_freerunning, + .ops = &skx_uncore_iio_freerunning_ops, +- .event_descs = spr_uncore_iio_freerunning_events, ++ .event_descs = snr_uncore_iio_freerunning_events, + .format_group = &skx_uncore_iio_freerunning_format_group, + }; + +diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c +index 425bed00b2e071..e432910859cb1a 100644 +--- a/arch/x86/kernel/cpu/amd.c ++++ b/arch/x86/kernel/cpu/amd.c +@@ -862,6 +862,16 @@ static void init_amd_zen1(struct cpuinfo_x86 *c) + + pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n"); + setup_force_cpu_bug(X86_BUG_DIV0); ++ ++ /* ++ * Turn off the Instructions Retired free counter on machines that are ++ * susceptible to erratum #1054 "Instructions Retired Performance ++ * Counter May Be Inaccurate". ++ */ ++ if (c->x86_model < 0x30) { ++ msr_clear_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT); ++ clear_cpu_cap(c, X86_FEATURE_IRPERF); ++ } + } + + static bool cpu_has_zenbleed_microcode(void) +@@ -1045,13 +1055,8 @@ static void init_amd(struct cpuinfo_x86 *c) + if (!cpu_feature_enabled(X86_FEATURE_XENPV)) + set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS); + +- /* +- * Turn on the Instructions Retired free counter on machines not +- * susceptible to erratum #1054 "Instructions Retired Performance +- * Counter May Be Inaccurate". 
+- */ +- if (cpu_has(c, X86_FEATURE_IRPERF) && +- (boot_cpu_has(X86_FEATURE_ZEN1) && c->x86_model > 0x2f)) ++ /* Enable the Instructions Retired free counter */ ++ if (cpu_has(c, X86_FEATURE_IRPERF)) + msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT); + + check_null_seg_clears_base(c); +diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c +index 5cd735728fa028..093d3ca43c4674 100644 +--- a/arch/x86/kernel/cpu/microcode/amd.c ++++ b/arch/x86/kernel/cpu/microcode/amd.c +@@ -199,6 +199,12 @@ static bool need_sha_check(u32 cur_rev) + case 0xa70c0: return cur_rev <= 0xa70C009; break; + case 0xaa001: return cur_rev <= 0xaa00116; break; + case 0xaa002: return cur_rev <= 0xaa00218; break; ++ case 0xb0021: return cur_rev <= 0xb002146; break; ++ case 0xb1010: return cur_rev <= 0xb101046; break; ++ case 0xb2040: return cur_rev <= 0xb204031; break; ++ case 0xb4040: return cur_rev <= 0xb404031; break; ++ case 0xb6000: return cur_rev <= 0xb600031; break; ++ case 0xb7000: return cur_rev <= 0xb700031; break; + default: break; + } + +@@ -214,8 +220,7 @@ static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsi + struct sha256_state s; + int i; + +- if (x86_family(bsp_cpuid_1_eax) < 0x17 || +- x86_family(bsp_cpuid_1_eax) > 0x19) ++ if (x86_family(bsp_cpuid_1_eax) < 0x17) + return true; + + if (!need_sha_check(cur_rev)) +diff --git a/arch/x86/xen/multicalls.c b/arch/x86/xen/multicalls.c +index 10c660fae8b300..7237d56a9d3f01 100644 +--- a/arch/x86/xen/multicalls.c ++++ b/arch/x86/xen/multicalls.c +@@ -54,14 +54,20 @@ struct mc_debug_data { + + static DEFINE_PER_CPU(struct mc_buffer, mc_buffer); + static struct mc_debug_data mc_debug_data_early __initdata; +-static DEFINE_PER_CPU(struct mc_debug_data *, mc_debug_data) = +- &mc_debug_data_early; + static struct mc_debug_data __percpu *mc_debug_data_ptr; + DEFINE_PER_CPU(unsigned long, xen_mc_irq_flags); + + static struct static_key mc_debug __ro_after_init; + static bool mc_debug_enabled __initdata; + ++static struct mc_debug_data * __ref get_mc_debug(void) ++{ ++ if (!mc_debug_data_ptr) ++ return &mc_debug_data_early; ++ ++ return this_cpu_ptr(mc_debug_data_ptr); ++} ++ + static int __init xen_parse_mc_debug(char *arg) + { + mc_debug_enabled = true; +@@ -71,20 +77,16 @@ static int __init xen_parse_mc_debug(char *arg) + } + early_param("xen_mc_debug", xen_parse_mc_debug); + +-void mc_percpu_init(unsigned int cpu) +-{ +- per_cpu(mc_debug_data, cpu) = per_cpu_ptr(mc_debug_data_ptr, cpu); +-} +- + static int __init mc_debug_enable(void) + { + unsigned long flags; ++ struct mc_debug_data __percpu *mcdb; + + if (!mc_debug_enabled) + return 0; + +- mc_debug_data_ptr = alloc_percpu(struct mc_debug_data); +- if (!mc_debug_data_ptr) { ++ mcdb = alloc_percpu(struct mc_debug_data); ++ if (!mcdb) { + pr_err("xen_mc_debug inactive\n"); + static_key_slow_dec(&mc_debug); + return -ENOMEM; +@@ -93,7 +95,7 @@ static int __init mc_debug_enable(void) + /* Be careful when switching to percpu debug data. 
*/ + local_irq_save(flags); + xen_mc_flush(); +- mc_percpu_init(0); ++ mc_debug_data_ptr = mcdb; + local_irq_restore(flags); + + pr_info("xen_mc_debug active\n"); +@@ -155,7 +157,7 @@ void xen_mc_flush(void) + trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx); + + if (static_key_false(&mc_debug)) { +- mcdb = __this_cpu_read(mc_debug_data); ++ mcdb = get_mc_debug(); + memcpy(mcdb->entries, b->entries, + b->mcidx * sizeof(struct multicall_entry)); + } +@@ -235,7 +237,7 @@ struct multicall_space __xen_mc_entry(size_t args) + + ret.mc = &b->entries[b->mcidx]; + if (static_key_false(&mc_debug)) { +- struct mc_debug_data *mcdb = __this_cpu_read(mc_debug_data); ++ struct mc_debug_data *mcdb = get_mc_debug(); + + mcdb->caller[b->mcidx] = __builtin_return_address(0); + mcdb->argsz[b->mcidx] = args; +diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c +index 6863d3da7decfc..7ea57f728b89db 100644 +--- a/arch/x86/xen/smp_pv.c ++++ b/arch/x86/xen/smp_pv.c +@@ -305,7 +305,6 @@ static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle) + return rc; + + xen_pmu_init(cpu); +- mc_percpu_init(cpu); + + /* + * Why is this a BUG? If the hypercall fails then everything can be +diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h +index 63c13a2ccf556a..25e318ef27d6b0 100644 +--- a/arch/x86/xen/xen-ops.h ++++ b/arch/x86/xen/xen-ops.h +@@ -261,9 +261,6 @@ void xen_mc_callback(void (*fn)(void *), void *data); + */ + struct multicall_space xen_mc_extend_args(unsigned long op, size_t arg_size); + +-/* Do percpu data initialization for multicalls. */ +-void mc_percpu_init(unsigned int cpu); +- + extern bool is_xen_pmu; + + irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id); +diff --git a/block/bio-integrity.c b/block/bio-integrity.c +index 9aed61fcd0bf94..456026c4a3c962 100644 +--- a/block/bio-integrity.c ++++ b/block/bio-integrity.c +@@ -104,16 +104,12 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio, + } + EXPORT_SYMBOL(bio_integrity_alloc); + +-static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs, +- bool dirty) ++static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs) + { + int i; + +- for (i = 0; i < nr_vecs; i++) { +- if (dirty && !PageCompound(bv[i].bv_page)) +- set_page_dirty_lock(bv[i].bv_page); ++ for (i = 0; i < nr_vecs; i++) + unpin_user_page(bv[i].bv_page); +- } + } + + static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip) +@@ -129,7 +125,7 @@ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip) + ret = copy_to_iter(bvec_virt(bounce_bvec), bytes, &orig_iter); + WARN_ON_ONCE(ret != bytes); + +- bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs, true); ++ bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs); + } + + /** +@@ -149,8 +145,7 @@ void bio_integrity_unmap_user(struct bio *bio) + return; + } + +- bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt, +- bio_data_dir(bio) == READ); ++ bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt); + } + + /** +@@ -236,7 +231,7 @@ static int bio_integrity_copy_user(struct bio *bio, struct bio_vec *bvec, + } + + if (write) +- bio_integrity_unpin_bvec(bvec, nr_vecs, false); ++ bio_integrity_unpin_bvec(bvec, nr_vecs); + else + memcpy(&bip->bip_vec[1], bvec, nr_vecs * sizeof(*bvec)); + +@@ -362,7 +357,7 @@ int bio_integrity_map_user(struct bio *bio, void __user *ubuf, ssize_t bytes, + return 0; + + release_pages: +- bio_integrity_unpin_bvec(bvec, nr_bvecs, false); ++ bio_integrity_unpin_bvec(bvec, nr_bvecs); + free_bvec: + if (bvec != 
stack_vec) + kfree(bvec); +diff --git a/block/blk-core.c b/block/blk-core.c +index 42023addf9cda6..c7b6c1f7635978 100644 +--- a/block/blk-core.c ++++ b/block/blk-core.c +@@ -1121,8 +1121,8 @@ void blk_start_plug_nr_ios(struct blk_plug *plug, unsigned short nr_ios) + return; + + plug->cur_ktime = 0; +- plug->mq_list = NULL; +- plug->cached_rq = NULL; ++ rq_list_init(&plug->mq_list); ++ rq_list_init(&plug->cached_rqs); + plug->nr_ios = min_t(unsigned short, nr_ios, BLK_MAX_REQUEST_COUNT); + plug->rq_count = 0; + plug->multiple_queues = false; +@@ -1218,7 +1218,7 @@ void __blk_flush_plug(struct blk_plug *plug, bool from_schedule) + * queue for cached requests, we don't want a blocked task holding + * up a queue freeze/quiesce event. + */ +- if (unlikely(!rq_list_empty(plug->cached_rq))) ++ if (unlikely(!rq_list_empty(&plug->cached_rqs))) + blk_mq_free_plug_rqs(plug); + + plug->cur_ktime = 0; +diff --git a/block/blk-merge.c b/block/blk-merge.c +index 5baa950f34fe21..ceac64e796ea82 100644 +--- a/block/blk-merge.c ++++ b/block/blk-merge.c +@@ -1175,7 +1175,7 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, + struct blk_plug *plug = current->plug; + struct request *rq; + +- if (!plug || rq_list_empty(plug->mq_list)) ++ if (!plug || rq_list_empty(&plug->mq_list)) + return false; + + rq_list_for_each(&plug->mq_list, rq) { +diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c +index 9638b25fd52124..ad8d6a363f24ae 100644 +--- a/block/blk-mq-cpumap.c ++++ b/block/blk-mq-cpumap.c +@@ -11,6 +11,7 @@ + #include + #include + #include ++#include + + #include "blk.h" + #include "blk-mq.h" +@@ -54,3 +55,39 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index) + + return NUMA_NO_NODE; + } ++ ++/** ++ * blk_mq_map_hw_queues - Create CPU to hardware queue mapping ++ * @qmap: CPU to hardware queue map ++ * @dev: The device to map queues ++ * @offset: Queue offset to use for the device ++ * ++ * Create a CPU to hardware queue mapping in @qmap. The struct bus_type ++ * irq_get_affinity callback will be used to retrieve the affinity. 
++ */ ++void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap, ++ struct device *dev, unsigned int offset) ++ ++{ ++ const struct cpumask *mask; ++ unsigned int queue, cpu; ++ ++ if (!dev->bus->irq_get_affinity) ++ goto fallback; ++ ++ for (queue = 0; queue < qmap->nr_queues; queue++) { ++ mask = dev->bus->irq_get_affinity(dev, queue + offset); ++ if (!mask) ++ goto fallback; ++ ++ for_each_cpu(cpu, mask) ++ qmap->mq_map[cpu] = qmap->queue_offset + queue; ++ } ++ ++ return; ++ ++fallback: ++ WARN_ON_ONCE(qmap->nr_queues > 1); ++ blk_mq_clear_mq_map(qmap); ++} ++EXPORT_SYMBOL_GPL(blk_mq_map_hw_queues); +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 662e52ab06467f..f26bee56269363 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -506,7 +506,7 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data) + prefetch(tags->static_rqs[tag]); + tag_mask &= ~(1UL << i); + rq = blk_mq_rq_ctx_init(data, tags, tag); +- rq_list_add(data->cached_rq, rq); ++ rq_list_add_head(data->cached_rqs, rq); + nr++; + } + if (!(data->rq_flags & RQF_SCHED_TAGS)) +@@ -515,7 +515,7 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data) + percpu_ref_get_many(&data->q->q_usage_counter, nr - 1); + data->nr_tags -= nr; + +- return rq_list_pop(data->cached_rq); ++ return rq_list_pop(data->cached_rqs); + } + + static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data) +@@ -612,7 +612,7 @@ static struct request *blk_mq_rq_cache_fill(struct request_queue *q, + .flags = flags, + .cmd_flags = opf, + .nr_tags = plug->nr_ios, +- .cached_rq = &plug->cached_rq, ++ .cached_rqs = &plug->cached_rqs, + }; + struct request *rq; + +@@ -637,14 +637,14 @@ static struct request *blk_mq_alloc_cached_request(struct request_queue *q, + if (!plug) + return NULL; + +- if (rq_list_empty(plug->cached_rq)) { ++ if (rq_list_empty(&plug->cached_rqs)) { + if (plug->nr_ios == 1) + return NULL; + rq = blk_mq_rq_cache_fill(q, plug, opf, flags); + if (!rq) + return NULL; + } else { +- rq = rq_list_peek(&plug->cached_rq); ++ rq = rq_list_peek(&plug->cached_rqs); + if (!rq || rq->q != q) + return NULL; + +@@ -653,7 +653,7 @@ static struct request *blk_mq_alloc_cached_request(struct request_queue *q, + if (op_is_flush(rq->cmd_flags) != op_is_flush(opf)) + return NULL; + +- plug->cached_rq = rq_list_next(rq); ++ rq_list_pop(&plug->cached_rqs); + blk_mq_rq_time_init(rq, 0); + } + +@@ -830,7 +830,7 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug) + { + struct request *rq; + +- while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) ++ while ((rq = rq_list_pop(&plug->cached_rqs)) != NULL) + blk_mq_free_request(rq); + } + +@@ -1386,8 +1386,7 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq) + */ + if (!plug->has_elevator && (rq->rq_flags & RQF_SCHED_TAGS)) + plug->has_elevator = true; +- rq->rq_next = NULL; +- rq_list_add(&plug->mq_list, rq); ++ rq_list_add_tail(&plug->mq_list, rq); + plug->rq_count++; + } + +@@ -2781,7 +2780,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug) + blk_status_t ret = BLK_STS_OK; + + while ((rq = rq_list_pop(&plug->mq_list))) { +- bool last = rq_list_empty(plug->mq_list); ++ bool last = rq_list_empty(&plug->mq_list); + + if (hctx != rq->mq_hctx) { + if (hctx) { +@@ -2824,8 +2823,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched) + { + struct blk_mq_hw_ctx *this_hctx = NULL; + struct blk_mq_ctx *this_ctx = NULL; +- struct request *requeue_list = NULL; +- struct request **requeue_lastp = &requeue_list; ++ struct 
rq_list requeue_list = {}; + unsigned int depth = 0; + bool is_passthrough = false; + LIST_HEAD(list); +@@ -2839,12 +2837,12 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched) + is_passthrough = blk_rq_is_passthrough(rq); + } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx || + is_passthrough != blk_rq_is_passthrough(rq)) { +- rq_list_add_tail(&requeue_lastp, rq); ++ rq_list_add_tail(&requeue_list, rq); + continue; + } +- list_add(&rq->queuelist, &list); ++ list_add_tail(&rq->queuelist, &list); + depth++; +- } while (!rq_list_empty(plug->mq_list)); ++ } while (!rq_list_empty(&plug->mq_list)); + + plug->mq_list = requeue_list; + trace_block_unplug(this_hctx->queue, depth, !from_sched); +@@ -2899,19 +2897,19 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule) + if (q->mq_ops->queue_rqs) { + blk_mq_run_dispatch_ops(q, + __blk_mq_flush_plug_list(q, plug)); +- if (rq_list_empty(plug->mq_list)) ++ if (rq_list_empty(&plug->mq_list)) + return; + } + + blk_mq_run_dispatch_ops(q, + blk_mq_plug_issue_direct(plug)); +- if (rq_list_empty(plug->mq_list)) ++ if (rq_list_empty(&plug->mq_list)) + return; + } + + do { + blk_mq_dispatch_plug_list(plug, from_schedule); +- } while (!rq_list_empty(plug->mq_list)); ++ } while (!rq_list_empty(&plug->mq_list)); + } + + static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx, +@@ -2976,7 +2974,7 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q, + if (plug) { + data.nr_tags = plug->nr_ios; + plug->nr_ios = 1; +- data.cached_rq = &plug->cached_rq; ++ data.cached_rqs = &plug->cached_rqs; + } + + rq = __blk_mq_alloc_requests(&data); +@@ -2999,7 +2997,7 @@ static struct request *blk_mq_peek_cached_request(struct blk_plug *plug, + + if (!plug) + return NULL; +- rq = rq_list_peek(&plug->cached_rq); ++ rq = rq_list_peek(&plug->cached_rqs); + if (!rq || rq->q != q) + return NULL; + if (type != rq->mq_hctx->type && +@@ -3013,14 +3011,14 @@ static struct request *blk_mq_peek_cached_request(struct blk_plug *plug, + static void blk_mq_use_cached_rq(struct request *rq, struct blk_plug *plug, + struct bio *bio) + { +- WARN_ON_ONCE(rq_list_peek(&plug->cached_rq) != rq); ++ if (rq_list_pop(&plug->cached_rqs) != rq) ++ WARN_ON_ONCE(1); + + /* + * If any qos ->throttle() end up blocking, we will have flushed the + * plug and hence killed the cached_rq list as well. Pop this entry + * before we throttle. 
+ */ +- plug->cached_rq = rq_list_next(rq); + rq_qos_throttle(rq->q, bio); + + blk_mq_rq_time_init(rq, 0); +diff --git a/block/blk-mq.h b/block/blk-mq.h +index 364c0415293cf7..a80d3b3105f9ed 100644 +--- a/block/blk-mq.h ++++ b/block/blk-mq.h +@@ -155,7 +155,7 @@ struct blk_mq_alloc_data { + + /* allocate multiple requests/tags in one go */ + unsigned int nr_tags; +- struct request **cached_rq; ++ struct rq_list *cached_rqs; + + /* input & output parameter */ + struct blk_mq_ctx *ctx; +diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c +index 692b27266220fe..0e2520d929e1db 100644 +--- a/block/blk-sysfs.c ++++ b/block/blk-sysfs.c +@@ -813,6 +813,8 @@ int blk_register_queue(struct gendisk *disk) + out_debugfs_remove: + blk_debugfs_remove(disk); + mutex_unlock(&q->sysfs_lock); ++ if (queue_is_mq(q)) ++ blk_mq_sysfs_unregister(disk); + out_put_queue_kobj: + kobject_put(&disk->queue_kobj); + mutex_unlock(&q->sysfs_dir_lock); +diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c +index 9c76fb1ad2ec50..a7442dc0bd8e10 100644 +--- a/drivers/ata/libata-sata.c ++++ b/drivers/ata/libata-sata.c +@@ -1510,6 +1510,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link) + unsigned int err_mask, tag; + u8 *sense, sk = 0, asc = 0, ascq = 0; + u64 sense_valid, val; ++ u16 extended_sense; ++ bool aux_icc_valid; + int ret = 0; + + err_mask = ata_read_log_page(dev, ATA_LOG_SENSE_NCQ, 0, buf, 2); +@@ -1529,6 +1531,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link) + + sense_valid = (u64)buf[8] | ((u64)buf[9] << 8) | + ((u64)buf[10] << 16) | ((u64)buf[11] << 24); ++ extended_sense = get_unaligned_le16(&buf[14]); ++ aux_icc_valid = extended_sense & BIT(15); + + ata_qc_for_each_raw(ap, qc, tag) { + if (!(qc->flags & ATA_QCFLAG_EH) || +@@ -1556,6 +1560,17 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link) + continue; + } + ++ qc->result_tf.nsect = sense[6]; ++ qc->result_tf.hob_nsect = sense[7]; ++ qc->result_tf.lbal = sense[8]; ++ qc->result_tf.lbam = sense[9]; ++ qc->result_tf.lbah = sense[10]; ++ qc->result_tf.hob_lbal = sense[11]; ++ qc->result_tf.hob_lbam = sense[12]; ++ qc->result_tf.hob_lbah = sense[13]; ++ if (aux_icc_valid) ++ qc->result_tf.auxiliary = get_unaligned_le32(&sense[16]); ++ + /* Set sense without also setting scsicmd->result */ + scsi_build_sense_buffer(dev->flags & ATA_DFLAG_D_SENSE, + qc->scsicmd->sense_buffer, sk, +diff --git a/drivers/block/loop.c b/drivers/block/loop.c +index 86cc3b19faae86..8827a768284ac4 100644 +--- a/drivers/block/loop.c ++++ b/drivers/block/loop.c +@@ -233,72 +233,6 @@ static void loop_set_size(struct loop_device *lo, loff_t size) + kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE); + } + +-static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos) +-{ +- struct iov_iter i; +- ssize_t bw; +- +- iov_iter_bvec(&i, ITER_SOURCE, bvec, 1, bvec->bv_len); +- +- bw = vfs_iter_write(file, &i, ppos, 0); +- +- if (likely(bw == bvec->bv_len)) +- return 0; +- +- printk_ratelimited(KERN_ERR +- "loop: Write error at byte offset %llu, length %i.\n", +- (unsigned long long)*ppos, bvec->bv_len); +- if (bw >= 0) +- bw = -EIO; +- return bw; +-} +- +-static int lo_write_simple(struct loop_device *lo, struct request *rq, +- loff_t pos) +-{ +- struct bio_vec bvec; +- struct req_iterator iter; +- int ret = 0; +- +- rq_for_each_segment(bvec, rq, iter) { +- ret = lo_write_bvec(lo->lo_backing_file, &bvec, &pos); +- if (ret < 0) +- break; +- cond_resched(); +- } +- +- return ret; +-} +- +-static int lo_read_simple(struct 
loop_device *lo, struct request *rq, +- loff_t pos) +-{ +- struct bio_vec bvec; +- struct req_iterator iter; +- struct iov_iter i; +- ssize_t len; +- +- rq_for_each_segment(bvec, rq, iter) { +- iov_iter_bvec(&i, ITER_DEST, &bvec, 1, bvec.bv_len); +- len = vfs_iter_read(lo->lo_backing_file, &i, &pos, 0); +- if (len < 0) +- return len; +- +- flush_dcache_page(bvec.bv_page); +- +- if (len != bvec.bv_len) { +- struct bio *bio; +- +- __rq_for_each_bio(bio, rq) +- zero_fill_bio(bio); +- break; +- } +- cond_resched(); +- } +- +- return 0; +-} +- + static void loop_clear_limits(struct loop_device *lo, int mode) + { + struct queue_limits lim = queue_limits_start_update(lo->lo_queue); +@@ -357,7 +291,7 @@ static void lo_complete_rq(struct request *rq) + struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq); + blk_status_t ret = BLK_STS_OK; + +- if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) || ++ if (cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) || + req_op(rq) != REQ_OP_READ) { + if (cmd->ret < 0) + ret = errno_to_blk_status(cmd->ret); +@@ -373,14 +307,13 @@ static void lo_complete_rq(struct request *rq) + cmd->ret = 0; + blk_mq_requeue_request(rq, true); + } else { +- if (cmd->use_aio) { +- struct bio *bio = rq->bio; ++ struct bio *bio = rq->bio; + +- while (bio) { +- zero_fill_bio(bio); +- bio = bio->bi_next; +- } ++ while (bio) { ++ zero_fill_bio(bio); ++ bio = bio->bi_next; + } ++ + ret = BLK_STS_IOERR; + end_io: + blk_mq_end_request(rq, ret); +@@ -460,9 +393,14 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd, + + cmd->iocb.ki_pos = pos; + cmd->iocb.ki_filp = file; +- cmd->iocb.ki_complete = lo_rw_aio_complete; +- cmd->iocb.ki_flags = IOCB_DIRECT; +- cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0); ++ cmd->iocb.ki_ioprio = req_get_ioprio(rq); ++ if (cmd->use_aio) { ++ cmd->iocb.ki_complete = lo_rw_aio_complete; ++ cmd->iocb.ki_flags = IOCB_DIRECT; ++ } else { ++ cmd->iocb.ki_complete = NULL; ++ cmd->iocb.ki_flags = 0; ++ } + + if (rw == ITER_SOURCE) + ret = file->f_op->write_iter(&cmd->iocb, &iter); +@@ -473,7 +411,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd, + + if (ret != -EIOCBQUEUED) + lo_rw_aio_complete(&cmd->iocb, ret); +- return 0; ++ return -EIOCBQUEUED; + } + + static int do_req_filebacked(struct loop_device *lo, struct request *rq) +@@ -481,15 +419,6 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq) + struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq); + loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset; + +- /* +- * lo_write_simple and lo_read_simple should have been covered +- * by io submit style function like lo_rw_aio(), one blocker +- * is that lo_read_simple() need to call flush_dcache_page after +- * the page is written from kernel, and it isn't easy to handle +- * this in io submit style function which submits all segments +- * of the req at one time. And direct read IO doesn't need to +- * run flush_dcache_page(). 
+- */ + switch (req_op(rq)) { + case REQ_OP_FLUSH: + return lo_req_flush(lo, rq); +@@ -505,15 +434,9 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq) + case REQ_OP_DISCARD: + return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE); + case REQ_OP_WRITE: +- if (cmd->use_aio) +- return lo_rw_aio(lo, cmd, pos, ITER_SOURCE); +- else +- return lo_write_simple(lo, rq, pos); ++ return lo_rw_aio(lo, cmd, pos, ITER_SOURCE); + case REQ_OP_READ: +- if (cmd->use_aio) +- return lo_rw_aio(lo, cmd, pos, ITER_DEST); +- else +- return lo_read_simple(lo, rq, pos); ++ return lo_rw_aio(lo, cmd, pos, ITER_DEST); + default: + WARN_ON_ONCE(1); + return -EIO; +@@ -645,19 +568,20 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev, + * dependency. + */ + fput(old_file); ++ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0); + if (partscan) + loop_reread_partitions(lo); + + error = 0; + done: +- /* enable and uncork uevent now that we are done */ +- dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0); ++ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE); + return error; + + out_err: + loop_global_unlock(lo, is_loop); + out_putf: + fput(file); ++ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0); + goto done; + } + +@@ -1111,8 +1035,8 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode, + if (partscan) + clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state); + +- /* enable and uncork uevent now that we are done */ + dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0); ++ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE); + + loop_global_unlock(lo, is_loop); + if (partscan) +@@ -1888,7 +1812,6 @@ static void loop_handle_cmd(struct loop_cmd *cmd) + struct loop_device *lo = rq->q->queuedata; + int ret = 0; + struct mem_cgroup *old_memcg = NULL; +- const bool use_aio = cmd->use_aio; + + if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) { + ret = -EIO; +@@ -1918,7 +1841,7 @@ static void loop_handle_cmd(struct loop_cmd *cmd) + } + failed: + /* complete non-aio request */ +- if (!use_aio || ret) { ++ if (ret != -EIOCBQUEUED) { + if (ret == -EOPNOTSUPP) + cmd->ret = ret; + else +diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c +index c479348ce8ff69..f10369ad90f768 100644 +--- a/drivers/block/null_blk/main.c ++++ b/drivers/block/null_blk/main.c +@@ -1638,10 +1638,9 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx, + return BLK_STS_OK; + } + +-static void null_queue_rqs(struct request **rqlist) ++static void null_queue_rqs(struct rq_list *rqlist) + { +- struct request *requeue_list = NULL; +- struct request **requeue_lastp = &requeue_list; ++ struct rq_list requeue_list = {}; + struct blk_mq_queue_data bd = { }; + blk_status_t ret; + +@@ -1651,8 +1650,8 @@ static void null_queue_rqs(struct request **rqlist) + bd.rq = rq; + ret = null_queue_rq(rq->mq_hctx, &bd); + if (ret != BLK_STS_OK) +- rq_list_add_tail(&requeue_lastp, rq); +- } while (!rq_list_empty(*rqlist)); ++ rq_list_add_tail(&requeue_list, rq); ++ } while (!rq_list_empty(rqlist)); + + *rqlist = requeue_list; + } +diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c +index 44a6937a4b65cc..fd6c565f8a507c 100644 +--- a/drivers/block/virtio_blk.c ++++ b/drivers/block/virtio_blk.c +@@ -472,7 +472,7 @@ static bool virtblk_prep_rq_batch(struct request *req) + } + + static void virtblk_add_req_batch(struct virtio_blk_vq *vq, +- struct request **rqlist) ++ struct rq_list *rqlist) + { + struct request *req; + unsigned long 
flags; +@@ -499,11 +499,10 @@ static void virtblk_add_req_batch(struct virtio_blk_vq *vq, + virtqueue_notify(vq->vq); + } + +-static void virtio_queue_rqs(struct request **rqlist) ++static void virtio_queue_rqs(struct rq_list *rqlist) + { +- struct request *submit_list = NULL; +- struct request *requeue_list = NULL; +- struct request **requeue_lastp = &requeue_list; ++ struct rq_list submit_list = { }; ++ struct rq_list requeue_list = { }; + struct virtio_blk_vq *vq = NULL; + struct request *req; + +@@ -515,9 +514,9 @@ static void virtio_queue_rqs(struct request **rqlist) + vq = this_vq; + + if (virtblk_prep_rq_batch(req)) +- rq_list_add(&submit_list, req); /* reverse order */ ++ rq_list_add_tail(&submit_list, req); + else +- rq_list_add_tail(&requeue_lastp, req); ++ rq_list_add_tail(&requeue_list, req); + } + + if (vq) +diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c +index 0a6ca6dfb94841..59eb9486642232 100644 +--- a/drivers/bluetooth/btrtl.c ++++ b/drivers/bluetooth/btrtl.c +@@ -1215,6 +1215,8 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev, + rtl_dev_err(hdev, "mandatory config file %s not found", + btrtl_dev->ic_info->cfg_name); + ret = btrtl_dev->cfg_len; ++ if (!ret) ++ ret = -EINVAL; + goto err_free; + } + } +diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c +index 7651321d351ccd..9ac22e4a070bef 100644 +--- a/drivers/bluetooth/hci_vhci.c ++++ b/drivers/bluetooth/hci_vhci.c +@@ -289,18 +289,18 @@ static void vhci_coredump(struct hci_dev *hdev) + + static void vhci_coredump_hdr(struct hci_dev *hdev, struct sk_buff *skb) + { +- char buf[80]; ++ const char *buf; + +- snprintf(buf, sizeof(buf), "Controller Name: vhci_ctrl\n"); ++ buf = "Controller Name: vhci_ctrl\n"; + skb_put_data(skb, buf, strlen(buf)); + +- snprintf(buf, sizeof(buf), "Firmware Version: vhci_fw\n"); ++ buf = "Firmware Version: vhci_fw\n"; + skb_put_data(skb, buf, strlen(buf)); + +- snprintf(buf, sizeof(buf), "Driver: vhci_drv\n"); ++ buf = "Driver: vhci_drv\n"; + skb_put_data(skb, buf, strlen(buf)); + +- snprintf(buf, sizeof(buf), "Vendor: vhci\n"); ++ buf = "Vendor: vhci\n"; + skb_put_data(skb, buf, strlen(buf)); + } + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index f98c9438760c97..67b4e3d18ffe22 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2748,10 +2748,18 @@ EXPORT_SYMBOL(cpufreq_update_policy); + */ + void cpufreq_update_limits(unsigned int cpu) + { ++ struct cpufreq_policy *policy; ++ ++ policy = cpufreq_cpu_get(cpu); ++ if (!policy) ++ return; ++ + if (cpufreq_driver->update_limits) + cpufreq_driver->update_limits(cpu); + else + cpufreq_update_policy(cpu); ++ ++ cpufreq_cpu_put(policy); + } + EXPORT_SYMBOL_GPL(cpufreq_update_limits); + +diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c +index 8ed2bb01a619fd..44860630050019 100644 +--- a/drivers/crypto/caam/qi.c ++++ b/drivers/crypto/caam/qi.c +@@ -122,12 +122,12 @@ int caam_qi_enqueue(struct device *qidev, struct caam_drv_req *req) + qm_fd_addr_set64(&fd, addr); + + do { ++ refcount_inc(&req->drv_ctx->refcnt); + ret = qman_enqueue(req->drv_ctx->req_fq, &fd); +- if (likely(!ret)) { +- refcount_inc(&req->drv_ctx->refcnt); ++ if (likely(!ret)) + return 0; +- } + ++ refcount_dec(&req->drv_ctx->refcnt); + if (ret != -EBUSY) + break; + num_retries++; +diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c +index 0ed0515e1ed54c..cd52807e76afdb 100644 +--- a/drivers/crypto/tegra/tegra-se-aes.c ++++ 
b/drivers/crypto/tegra/tegra-se-aes.c +@@ -263,13 +263,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq) + unsigned int cmdlen; + int ret; + +- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_AES_BUFLEN, +- &rctx->datbuf.addr, GFP_KERNEL); +- if (!rctx->datbuf.buf) +- return -ENOMEM; +- +- rctx->datbuf.size = SE_AES_BUFLEN; +- rctx->iv = (u32 *)req->iv; ++ rctx->iv = (ctx->alg == SE_ALG_ECB) ? NULL : (u32 *)req->iv; + rctx->len = req->cryptlen; + + /* Pad input to AES Block size */ +@@ -278,6 +272,12 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq) + rctx->len += AES_BLOCK_SIZE - (rctx->len % AES_BLOCK_SIZE); + } + ++ rctx->datbuf.size = rctx->len; ++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size, ++ &rctx->datbuf.addr, GFP_KERNEL); ++ if (!rctx->datbuf.buf) ++ return -ENOMEM; ++ + scatterwalk_map_and_copy(rctx->datbuf.buf, req->src, 0, req->cryptlen, 0); + + /* Prepare the command and submit for execution */ +@@ -289,7 +289,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq) + scatterwalk_map_and_copy(rctx->datbuf.buf, req->dst, 0, req->cryptlen, 1); + + /* Free the buffer */ +- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size, + rctx->datbuf.buf, rctx->datbuf.addr); + + crypto_finalize_skcipher_request(se->engine, req, ret); +@@ -443,9 +443,6 @@ static int tegra_aes_crypt(struct skcipher_request *req, bool encrypt) + if (!req->cryptlen) + return 0; + +- if (ctx->alg == SE_ALG_ECB) +- req->iv = NULL; +- + rctx->encrypt = encrypt; + rctx->config = tegra234_aes_cfg(ctx->alg, encrypt); + rctx->crypto_config = tegra234_aes_crypto_cfg(ctx->alg, encrypt); +@@ -1120,6 +1117,11 @@ static int tegra_ccm_crypt_init(struct aead_request *req, struct tegra_se *se, + rctx->assoclen = req->assoclen; + rctx->authsize = crypto_aead_authsize(tfm); + ++ if (rctx->encrypt) ++ rctx->cryptlen = req->cryptlen; ++ else ++ rctx->cryptlen = req->cryptlen - rctx->authsize; ++ + memcpy(iv, req->iv, 16); + + ret = tegra_ccm_check_iv(iv); +@@ -1148,30 +1150,26 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq) + struct tegra_se *se = ctx->se; + int ret; + ++ ret = tegra_ccm_crypt_init(req, se, rctx); ++ if (ret) ++ return ret; ++ + /* Allocate buffers required */ +- rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100; ++ rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size, + &rctx->inbuf.addr, GFP_KERNEL); + if (!rctx->inbuf.buf) + return -ENOMEM; + +- rctx->inbuf.size = SE_AES_BUFLEN; +- +- rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100; ++ rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size, + &rctx->outbuf.addr, GFP_KERNEL); + if (!rctx->outbuf.buf) { + ret = -ENOMEM; + goto outbuf_err; + } + +- rctx->outbuf.size = SE_AES_BUFLEN; +- +- ret = tegra_ccm_crypt_init(req, se, rctx); +- if (ret) +- goto out; +- + if (rctx->encrypt) { +- rctx->cryptlen = req->cryptlen; +- + /* CBC MAC Operation */ + ret = tegra_ccm_compute_auth(ctx, rctx); + if (ret) +@@ -1182,10 +1180,6 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq) + if (ret) + goto out; + } else { +- rctx->cryptlen = req->cryptlen - ctx->authsize; +- if (ret) +- goto out; +- + /* CTR operation */ + ret = tegra_ccm_do_ctr(ctx, rctx); + if (ret) 
+@@ -1198,11 +1192,11 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq) + } + + out: +- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ dma_free_coherent(ctx->se->dev, rctx->inbuf.size, + rctx->outbuf.buf, rctx->outbuf.addr); + + outbuf_err: +- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ dma_free_coherent(ctx->se->dev, rctx->outbuf.size, + rctx->inbuf.buf, rctx->inbuf.addr); + + crypto_finalize_aead_request(ctx->se->engine, req, ret); +@@ -1218,23 +1212,6 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq) + struct tegra_aead_reqctx *rctx = aead_request_ctx(req); + int ret; + +- /* Allocate buffers required */ +- rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN, +- &rctx->inbuf.addr, GFP_KERNEL); +- if (!rctx->inbuf.buf) +- return -ENOMEM; +- +- rctx->inbuf.size = SE_AES_BUFLEN; +- +- rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN, +- &rctx->outbuf.addr, GFP_KERNEL); +- if (!rctx->outbuf.buf) { +- ret = -ENOMEM; +- goto outbuf_err; +- } +- +- rctx->outbuf.size = SE_AES_BUFLEN; +- + rctx->src_sg = req->src; + rctx->dst_sg = req->dst; + rctx->assoclen = req->assoclen; +@@ -1248,6 +1225,21 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq) + memcpy(rctx->iv, req->iv, GCM_AES_IV_SIZE); + rctx->iv[3] = (1 << 24); + ++ /* Allocate buffers required */ ++ rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen; ++ rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size, ++ &rctx->inbuf.addr, GFP_KERNEL); ++ if (!rctx->inbuf.buf) ++ return -ENOMEM; ++ ++ rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen; ++ rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size, ++ &rctx->outbuf.addr, GFP_KERNEL); ++ if (!rctx->outbuf.buf) { ++ ret = -ENOMEM; ++ goto outbuf_err; ++ } ++ + /* If there is associated data perform GMAC operation */ + if (rctx->assoclen) { + ret = tegra_gcm_do_gmac(ctx, rctx); +@@ -1271,11 +1263,11 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq) + ret = tegra_gcm_do_verify(ctx->se, rctx); + + out: +- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ dma_free_coherent(ctx->se->dev, rctx->outbuf.size, + rctx->outbuf.buf, rctx->outbuf.addr); + + outbuf_err: +- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN, ++ dma_free_coherent(ctx->se->dev, rctx->inbuf.size, + rctx->inbuf.buf, rctx->inbuf.addr); + + /* Finalize the request if there are no errors */ +@@ -1502,6 +1494,11 @@ static int tegra_cmac_do_update(struct ahash_request *req) + return 0; + } + ++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size, ++ &rctx->datbuf.addr, GFP_KERNEL); ++ if (!rctx->datbuf.buf) ++ return -ENOMEM; ++ + /* Copy the previous residue first */ + if (rctx->residue.size) + memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size); +@@ -1527,6 +1524,9 @@ static int tegra_cmac_do_update(struct ahash_request *req) + + tegra_cmac_copy_result(ctx->se, rctx); + ++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size, ++ rctx->datbuf.buf, rctx->datbuf.addr); ++ + return ret; + } + +@@ -1541,10 +1541,20 @@ static int tegra_cmac_do_final(struct ahash_request *req) + + if (!req->nbytes && !rctx->total_len && ctx->fallback_tfm) { + return crypto_shash_tfm_digest(ctx->fallback_tfm, +- rctx->datbuf.buf, 0, req->result); ++ NULL, 0, req->result); ++ } ++ ++ if (rctx->residue.size) { ++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size, ++ &rctx->datbuf.addr, GFP_KERNEL); ++ if 
(!rctx->datbuf.buf) { ++ ret = -ENOMEM; ++ goto out_free; ++ } ++ ++ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size); + } + +- memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size); + rctx->datbuf.size = rctx->residue.size; + rctx->total_len += rctx->residue.size; + rctx->config = tegra234_aes_cfg(SE_ALG_CMAC, 0); +@@ -1570,8 +1580,10 @@ static int tegra_cmac_do_final(struct ahash_request *req) + writel(0, se->base + se->hw->regs->result + (i * 4)); + + out: +- dma_free_coherent(se->dev, SE_SHA_BUFLEN, +- rctx->datbuf.buf, rctx->datbuf.addr); ++ if (rctx->residue.size) ++ dma_free_coherent(se->dev, rctx->datbuf.size, ++ rctx->datbuf.buf, rctx->datbuf.addr); ++out_free: + dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm) * 2, + rctx->residue.buf, rctx->residue.addr); + return ret; +@@ -1683,28 +1695,15 @@ static int tegra_cmac_init(struct ahash_request *req) + rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size * 2, + &rctx->residue.addr, GFP_KERNEL); + if (!rctx->residue.buf) +- goto resbuf_fail; ++ return -ENOMEM; + + rctx->residue.size = 0; + +- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN, +- &rctx->datbuf.addr, GFP_KERNEL); +- if (!rctx->datbuf.buf) +- goto datbuf_fail; +- +- rctx->datbuf.size = 0; +- + /* Clear any previous result */ + for (i = 0; i < CMAC_RESULT_REG_COUNT; i++) + writel(0, se->base + se->hw->regs->result + (i * 4)); + + return 0; +- +-datbuf_fail: +- dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf, +- rctx->residue.addr); +-resbuf_fail: +- return -ENOMEM; + } + + static int tegra_cmac_setkey(struct crypto_ahash *tfm, const u8 *key, +diff --git a/drivers/crypto/tegra/tegra-se-hash.c b/drivers/crypto/tegra/tegra-se-hash.c +index 726e30c0e63ebb..451b8eaab16aab 100644 +--- a/drivers/crypto/tegra/tegra-se-hash.c ++++ b/drivers/crypto/tegra/tegra-se-hash.c +@@ -332,6 +332,11 @@ static int tegra_sha_do_update(struct ahash_request *req) + return 0; + } + ++ rctx->datbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->datbuf.size, ++ &rctx->datbuf.addr, GFP_KERNEL); ++ if (!rctx->datbuf.buf) ++ return -ENOMEM; ++ + /* Copy the previous residue first */ + if (rctx->residue.size) + memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size); +@@ -368,6 +373,9 @@ static int tegra_sha_do_update(struct ahash_request *req) + if (!(rctx->task & SHA_FINAL)) + tegra_sha_copy_hash_result(se, rctx); + ++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size, ++ rctx->datbuf.buf, rctx->datbuf.addr); ++ + return ret; + } + +@@ -380,7 +388,17 @@ static int tegra_sha_do_final(struct ahash_request *req) + u32 *cpuvaddr = se->cmdbuf->addr; + int size, ret = 0; + +- memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size); ++ if (rctx->residue.size) { ++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size, ++ &rctx->datbuf.addr, GFP_KERNEL); ++ if (!rctx->datbuf.buf) { ++ ret = -ENOMEM; ++ goto out_free; ++ } ++ ++ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size); ++ } ++ + rctx->datbuf.size = rctx->residue.size; + rctx->total_len += rctx->residue.size; + +@@ -397,8 +415,10 @@ static int tegra_sha_do_final(struct ahash_request *req) + memcpy(req->result, rctx->digest.buf, rctx->digest.size); + + out: +- dma_free_coherent(se->dev, SE_SHA_BUFLEN, +- rctx->datbuf.buf, rctx->datbuf.addr); ++ if (rctx->residue.size) ++ dma_free_coherent(se->dev, rctx->datbuf.size, ++ rctx->datbuf.buf, rctx->datbuf.addr); ++out_free: + dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm), + rctx->residue.buf, 
rctx->residue.addr); + dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf, +@@ -534,19 +554,11 @@ static int tegra_sha_init(struct ahash_request *req) + if (!rctx->residue.buf) + goto resbuf_fail; + +- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN, +- &rctx->datbuf.addr, GFP_KERNEL); +- if (!rctx->datbuf.buf) +- goto datbuf_fail; +- + return 0; + +-datbuf_fail: +- dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf, +- rctx->residue.addr); + resbuf_fail: +- dma_free_coherent(se->dev, SE_SHA_BUFLEN, rctx->datbuf.buf, +- rctx->datbuf.addr); ++ dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf, ++ rctx->digest.addr); + digbuf_fail: + return -ENOMEM; + } +diff --git a/drivers/crypto/tegra/tegra-se.h b/drivers/crypto/tegra/tegra-se.h +index b54aefe717a174..e196a90eedb92c 100644 +--- a/drivers/crypto/tegra/tegra-se.h ++++ b/drivers/crypto/tegra/tegra-se.h +@@ -340,8 +340,6 @@ + #define SE_CRYPTO_CTR_REG_COUNT 4 + #define SE_MAX_KEYSLOT 15 + #define SE_MAX_MEM_ALLOC SZ_4M +-#define SE_AES_BUFLEN 0x8000 +-#define SE_SHA_BUFLEN 0x2000 + + #define SHA_FIRST BIT(0) + #define SHA_UPDATE BIT(1) +diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c +index c353029789cf1a..1290886f065e33 100644 +--- a/drivers/dma-buf/sw_sync.c ++++ b/drivers/dma-buf/sw_sync.c +@@ -444,15 +444,17 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a + return -EINVAL; + + pt = dma_fence_to_sync_pt(fence); +- if (!pt) +- return -EINVAL; ++ if (!pt) { ++ ret = -EINVAL; ++ goto put_fence; ++ } + + spin_lock_irqsave(fence->lock, flags); +- if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) { +- data.deadline_ns = ktime_to_ns(pt->deadline); +- } else { ++ if (!test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) { + ret = -ENOENT; ++ goto unlock; + } ++ data.deadline_ns = ktime_to_ns(pt->deadline); + spin_unlock_irqrestore(fence->lock, flags); + + dma_fence_put(fence); +@@ -464,6 +466,13 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a + return -EFAULT; + + return 0; ++ ++unlock: ++ spin_unlock_irqrestore(fence->lock, flags); ++put_fence: ++ dma_fence_put(fence); ++ ++ return ret; + } + + static long sw_sync_ioctl(struct file *file, unsigned int cmd, +diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h +index 685098f9626f2b..eebcdf653d0729 100644 +--- a/drivers/firmware/efi/libstub/efistub.h ++++ b/drivers/firmware/efi/libstub/efistub.h +@@ -171,7 +171,7 @@ void efi_set_u64_split(u64 data, u32 *lo, u32 *hi) + * the EFI memory map. Other related structures, e.g. x86 e820ext, need + * to factor in this headroom requirement as well. 
+ */ +-#define EFI_MMAP_NR_SLACK_SLOTS 8 ++#define EFI_MMAP_NR_SLACK_SLOTS 32 + + typedef struct efi_generic_dev_path efi_device_path_protocol_t; + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c +index 45affc02548c16..a3a7d20ab4fea9 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c +@@ -437,6 +437,13 @@ static bool amdgpu_get_bios_apu(struct amdgpu_device *adev) + return true; + } + ++static bool amdgpu_prefer_rom_resource(struct amdgpu_device *adev) ++{ ++ struct resource *res = &adev->pdev->resource[PCI_ROM_RESOURCE]; ++ ++ return (res->flags & IORESOURCE_ROM_SHADOW); ++} ++ + static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev) + { + if (amdgpu_atrm_get_bios(adev)) { +@@ -455,14 +462,27 @@ static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev) + goto success; + } + +- if (amdgpu_read_platform_bios(adev)) { +- dev_info(adev->dev, "Fetched VBIOS from platform\n"); +- goto success; +- } ++ if (amdgpu_prefer_rom_resource(adev)) { ++ if (amdgpu_read_bios(adev)) { ++ dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n"); ++ goto success; ++ } + +- if (amdgpu_read_bios(adev)) { +- dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n"); +- goto success; ++ if (amdgpu_read_platform_bios(adev)) { ++ dev_info(adev->dev, "Fetched VBIOS from platform\n"); ++ goto success; ++ } ++ ++ } else { ++ if (amdgpu_read_platform_bios(adev)) { ++ dev_info(adev->dev, "Fetched VBIOS from platform\n"); ++ goto success; ++ } ++ ++ if (amdgpu_read_bios(adev)) { ++ dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n"); ++ goto success; ++ } + } + + if (amdgpu_read_bios_from_rom(adev)) { +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +index 31d4df96889812..24d007715a14ae 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +@@ -3322,6 +3322,7 @@ static int amdgpu_device_ip_fini(struct amdgpu_device *adev) + amdgpu_device_mem_scratch_fini(adev); + amdgpu_ib_pool_fini(adev); + amdgpu_seq64_fini(adev); ++ amdgpu_doorbell_fini(adev); + } + + r = adev->ip_blocks[i].version->funcs->sw_fini((void *)adev); +@@ -4670,7 +4671,6 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev) + + iounmap(adev->rmmio); + adev->rmmio = NULL; +- amdgpu_doorbell_fini(adev); + drm_dev_exit(idx); + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +index 8e81a83d37d846..2f90fff1b9ddc0 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +@@ -181,7 +181,7 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach, + struct sg_table *sgt, + enum dma_data_direction dir) + { +- if (sgt->sgl->page_link) { ++ if (sg_page(sgt->sgl)) { + dma_unmap_sgtable(attach->dev, sgt, dir, 0); + sg_free_table(sgt); + kfree(sgt); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +index 7978d5189c37d4..a9eb0927a7664f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c +@@ -1795,7 +1795,6 @@ static const u16 amdgpu_unsupported_pciidlist[] = { + }; + + static const struct pci_device_id pciidlist[] = { +-#ifdef CONFIG_DRM_AMDGPU_SI + {0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI}, + {0x1002, 0x6784, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI}, + {0x1002, 0x6788, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI}, +@@ -1868,8 
+1867,6 @@ static const struct pci_device_id pciidlist[] = { + {0x1002, 0x6665, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY}, + {0x1002, 0x6667, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY}, + {0x1002, 0x666F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY}, +-#endif +-#ifdef CONFIG_DRM_AMDGPU_CIK + /* Kaveri */ + {0x1002, 0x1304, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_MOBILITY|AMD_IS_APU}, + {0x1002, 0x1305, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_APU}, +@@ -1952,7 +1949,6 @@ static const struct pci_device_id pciidlist[] = { + {0x1002, 0x985D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU}, + {0x1002, 0x985E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU}, + {0x1002, 0x985F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU}, +-#endif + /* topaz */ + {0x1002, 0x6900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ}, + {0x1002, 0x6901, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ}, +@@ -2284,14 +2280,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev, + return -ENOTSUPP; + } + ++ switch (flags & AMD_ASIC_MASK) { ++ case CHIP_TAHITI: ++ case CHIP_PITCAIRN: ++ case CHIP_VERDE: ++ case CHIP_OLAND: ++ case CHIP_HAINAN: + #ifdef CONFIG_DRM_AMDGPU_SI +- if (!amdgpu_si_support) { +- switch (flags & AMD_ASIC_MASK) { +- case CHIP_TAHITI: +- case CHIP_PITCAIRN: +- case CHIP_VERDE: +- case CHIP_OLAND: +- case CHIP_HAINAN: ++ if (!amdgpu_si_support) { + dev_info(&pdev->dev, + "SI support provided by radeon.\n"); + dev_info(&pdev->dev, +@@ -2299,16 +2295,18 @@ static int amdgpu_pci_probe(struct pci_dev *pdev, + ); + return -ENODEV; + } +- } ++ break; ++#else ++ dev_info(&pdev->dev, "amdgpu is built without SI support.\n"); ++ return -ENODEV; + #endif ++ case CHIP_KAVERI: ++ case CHIP_BONAIRE: ++ case CHIP_HAWAII: ++ case CHIP_KABINI: ++ case CHIP_MULLINS: + #ifdef CONFIG_DRM_AMDGPU_CIK +- if (!amdgpu_cik_support) { +- switch (flags & AMD_ASIC_MASK) { +- case CHIP_KAVERI: +- case CHIP_BONAIRE: +- case CHIP_HAWAII: +- case CHIP_KABINI: +- case CHIP_MULLINS: ++ if (!amdgpu_cik_support) { + dev_info(&pdev->dev, + "CIK support provided by radeon.\n"); + dev_info(&pdev->dev, +@@ -2316,8 +2314,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev, + ); + return -ENODEV; + } +- } ++ break; ++#else ++ dev_info(&pdev->dev, "amdgpu is built without CIK support.\n"); ++ return -ENODEV; + #endif ++ default: ++ break; ++ } + + adev = devm_drm_dev_alloc(&pdev->dev, &amdgpu_kms_driver, typeof(*adev), ddev); + if (IS_ERR(adev)) +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +index 971419e3a9bbdf..4c4bdc4f51b294 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +@@ -161,8 +161,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain) + * When GTT is just an alternative to VRAM make sure that we + * only use it as fallback and still try to fill up VRAM first. 
+ */ +- if (domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM && +- !(adev->flags & AMD_IS_APU)) ++ if (abo->tbo.resource && !(adev->flags & AMD_IS_APU) && ++ domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM) + places[c].flags |= TTM_PL_FLAG_FALLBACK; + c++; + } +diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +index 231a3d490ea8e3..7a773fcd7752c2 100644 +--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c +@@ -859,6 +859,10 @@ static void mes_v11_0_get_fw_version(struct amdgpu_device *adev) + { + int pipe; + ++ /* return early if we have already fetched these */ ++ if (adev->mes.sched_version && adev->mes.kiq_version) ++ return; ++ + /* get MES scheduler/KIQ versions */ + mutex_lock(&adev->srbm_mutex); + +diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c +index b3175ff676f33c..459f7b8d72b4d1 100644 +--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c +@@ -1225,17 +1225,20 @@ static int mes_v12_0_queue_init(struct amdgpu_device *adev, + mes_v12_0_queue_init_register(ring); + } + +- /* get MES scheduler/KIQ versions */ +- mutex_lock(&adev->srbm_mutex); +- soc21_grbm_select(adev, 3, pipe, 0, 0); ++ if (((pipe == AMDGPU_MES_SCHED_PIPE) && !adev->mes.sched_version) || ++ ((pipe == AMDGPU_MES_KIQ_PIPE) && !adev->mes.kiq_version)) { ++ /* get MES scheduler/KIQ versions */ ++ mutex_lock(&adev->srbm_mutex); ++ soc21_grbm_select(adev, 3, pipe, 0, 0); + +- if (pipe == AMDGPU_MES_SCHED_PIPE) +- adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); +- else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq) +- adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); ++ if (pipe == AMDGPU_MES_SCHED_PIPE) ++ adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); ++ else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq) ++ adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); + +- soc21_grbm_select(adev, 0, 0, 0, 0); +- mutex_unlock(&adev->srbm_mutex); ++ soc21_grbm_select(adev, 0, 0, 0, 0); ++ mutex_unlock(&adev->srbm_mutex); ++ } + + return 0; + } +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 260b6b8d29fd6c..c22da13859bd51 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -1690,6 +1690,13 @@ static const struct dmi_system_id dmi_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"), + }, + }, ++ { ++ .callback = edp0_on_dp1_callback, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "HP"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 645 14 inch G11 Notebook PC"), ++ }, ++ }, + { + .callback = edp0_on_dp1_callback, + .matches = { +@@ -1697,6 +1704,20 @@ static const struct dmi_system_id dmi_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"), + }, + }, ++ { ++ .callback = edp0_on_dp1_callback, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "HP"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 445 14 inch G11 Notebook PC"), ++ }, ++ }, ++ { ++ .callback = edp0_on_dp1_callback, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "HP"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 465 16 inch G11 Notebook PC"), ++ }, ++ }, + {} + /* TODO: refactor this from a fixed table to a dynamic option */ + }; +@@ -8458,14 +8479,39 @@ static void manage_dm_interrupts(struct amdgpu_device 
*adev, + int offdelay; + + if (acrtc_state) { +- if (amdgpu_ip_version(adev, DCE_HWIP, 0) < +- IP_VERSION(3, 5, 0) || +- acrtc_state->stream->link->psr_settings.psr_version < +- DC_PSR_VERSION_UNSUPPORTED || +- !(adev->flags & AMD_IS_APU)) { +- timing = &acrtc_state->stream->timing; +- +- /* at least 2 frames */ ++ timing = &acrtc_state->stream->timing; ++ ++ /* ++ * Depending on when the HW latching event of double-buffered ++ * registers happen relative to the PSR SDP deadline, and how ++ * bad the Panel clock has drifted since the last ALPM off ++ * event, there can be up to 3 frames of delay between sending ++ * the PSR exit cmd to DMUB fw, and when the panel starts ++ * displaying live frames. ++ * ++ * We can set: ++ * ++ * 20/100 * offdelay_ms = 3_frames_ms ++ * => offdelay_ms = 5 * 3_frames_ms ++ * ++ * This ensures that `3_frames_ms` will only be experienced as a ++ * 20% delay on top how long the display has been static, and ++ * thus make the delay less perceivable. ++ */ ++ if (acrtc_state->stream->link->psr_settings.psr_version < ++ DC_PSR_VERSION_UNSUPPORTED) { ++ offdelay = DIV64_U64_ROUND_UP((u64)5 * 3 * 10 * ++ timing->v_total * ++ timing->h_total, ++ timing->pix_clk_100hz); ++ config.offdelay_ms = offdelay ?: 30; ++ } else if (amdgpu_ip_version(adev, DCE_HWIP, 0) < ++ IP_VERSION(3, 5, 0) || ++ !(adev->flags & AMD_IS_APU)) { ++ /* ++ * Older HW and DGPU have issues with instant off; ++ * use a 2 frame offdelay. ++ */ + offdelay = DIV64_U64_ROUND_UP((u64)20 * + timing->v_total * + timing->h_total, +@@ -8473,6 +8519,8 @@ static void manage_dm_interrupts(struct amdgpu_device *adev, + + config.offdelay_ms = offdelay ?: 30; + } else { ++ /* offdelay_ms = 0 will never disable vblank */ ++ config.offdelay_ms = 1; + config.disable_immediate = true; + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +index 70fcfae8e4c552..2ac56e79df05e6 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +@@ -113,6 +113,7 @@ bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state) + * + * Panel Replay and PSR SU + * - Enable when: ++ * - VRR is disabled + * - vblank counter is disabled + * - entry is allowed: usermode demonstrates an adequate number of fast + * commits) +@@ -131,19 +132,20 @@ static void amdgpu_dm_crtc_set_panel_sr_feature( + bool is_sr_active = (link->replay_settings.replay_allow_active || + link->psr_settings.psr_allow_active); + bool is_crc_window_active = false; ++ bool vrr_active = amdgpu_dm_crtc_vrr_active_irq(vblank_work->acrtc); + + #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY + is_crc_window_active = + amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base); + #endif + +- if (link->replay_settings.replay_feature_enabled && ++ if (link->replay_settings.replay_feature_enabled && !vrr_active && + allow_sr_entry && !is_sr_active && !is_crc_window_active) { + amdgpu_dm_replay_enable(vblank_work->stream, true); + } else if (vblank_enabled) { + if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active) + amdgpu_dm_psr_disable(vblank_work->stream, false); +- } else if (link->psr_settings.psr_feature_enabled && ++ } else if (link->psr_settings.psr_feature_enabled && !vrr_active && + allow_sr_entry && !is_sr_active && !is_crc_window_active) { + + struct amdgpu_dm_connector *aconn = +diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c 
b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c +index d35dd507cb9f85..cb187604744e96 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c ++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c +@@ -87,6 +87,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co + /* Store configuration options */ + (*dml_ctx)->config = *config; + ++ DC_FP_START(); ++ + /*Initialize SOCBB and DCNIP params */ + dml21_initialize_soc_bb_params(&(*dml_ctx)->v21.dml_init, config, in_dc); + dml21_initialize_ip_params(&(*dml_ctx)->v21.dml_init, config, in_dc); +@@ -97,6 +99,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co + + /*Initialize DML21 instance */ + dml2_initialize_instance(&(*dml_ctx)->v21.dml_init); ++ ++ DC_FP_END(); + } + + bool dml21_create(const struct dc *in_dc, struct dml2_context **dml_ctx, const struct dml2_configuration_options *config) +@@ -277,11 +281,16 @@ bool dml21_validate(const struct dc *in_dc, struct dc_state *context, struct dml + { + bool out = false; + ++ DC_FP_START(); ++ + /* Use dml_validate_only for fast_validate path */ +- if (fast_validate) { ++ if (fast_validate) + out = dml21_check_mode_support(in_dc, context, dml_ctx); +- } else ++ else + out = dml21_mode_check_and_programming(in_dc, context, dml_ctx); ++ ++ DC_FP_END(); ++ + return out; + } + +@@ -420,8 +429,12 @@ void dml21_copy(struct dml2_context *dst_dml_ctx, + + dst_dml_ctx->v21.mode_programming.programming = dst_dml2_programming; + ++ DC_FP_START(); ++ + /* need to initialize copied instance for internal references to be correct */ + dml2_initialize_instance(&dst_dml_ctx->v21.dml_init); ++ ++ DC_FP_END(); + } + + bool dml21_create_copy(struct dml2_context **dst_dml_ctx, +diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c +index 4d64c45930da49..cb2cb89dfecb2c 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c ++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c +@@ -734,11 +734,16 @@ bool dml2_validate(const struct dc *in_dc, struct dc_state *context, struct dml2 + return out; + } + ++ DC_FP_START(); ++ + /* Use dml_validate_only for fast_validate path */ + if (fast_validate) + out = dml2_validate_only(context); + else + out = dml2_validate_and_build_resource(in_dc, context); ++ ++ DC_FP_END(); ++ + return out; + } + +@@ -779,11 +784,15 @@ static void dml2_init(const struct dc *in_dc, const struct dml2_configuration_op + break; + } + ++ DC_FP_START(); ++ + initialize_dml2_ip_params(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.ip); + + initialize_dml2_soc_bbox(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc); + + initialize_dml2_soc_states(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc, &(*dml2)->v20.dml_core_ctx.states); ++ ++ DC_FP_END(); + } + + bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2) +diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c +index 36d12db8d02256..f5f1ccd8303cf3 100644 +--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c +@@ -3003,7 +3003,11 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx) + dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, dp_hpo_inst); + + phyd32clk = get_phyd32clk_src(link); +- dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk); ++ if 
(link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) { ++ dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst); ++ } else { ++ dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk); ++ } + } else { + if (dccg->funcs->enable_symclk_se) + dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst, +diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c +index 0b743669f23b44..62f1e597787e69 100644 +--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c +@@ -1001,8 +1001,11 @@ void dcn401_enable_stream(struct pipe_ctx *pipe_ctx) + if (dc_is_dp_signal(pipe_ctx->stream->signal) || dc_is_virtual_signal(pipe_ctx->stream->signal)) { + if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx)) { + dccg->funcs->set_dpstreamclk(dccg, DPREFCLK, tg->inst, dp_hpo_inst); +- +- dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk); ++ if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) { ++ dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst); ++ } else { ++ dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk); ++ } + } else { + /* need to set DTBCLK_P source to DPREFCLK for DP8B10B */ + dccg->funcs->set_dtbclk_p_src(dccg, DPREFCLK, tg->inst); +diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c +index 80386f698ae4de..0ca6358a9782e2 100644 +--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c +@@ -891,7 +891,7 @@ static const struct dc_debug_options debug_defaults_drv = { + .disable_z10 = true, + .enable_legacy_fast_update = true, + .enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/ +- .dml_hostvm_override = DML_HOSTVM_NO_OVERRIDE, ++ .dml_hostvm_override = DML_HOSTVM_OVERRIDE_FALSE, + .using_dml2 = false, + }; + +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c +index a8fc0fa44db69d..ba5c1237fcfe1a 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c +@@ -267,10 +267,10 @@ int smu7_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed) + if (hwmgr->thermal_controller.fanInfo.bNoFan || + (hwmgr->thermal_controller.fanInfo. 
+ ucTachometerPulsesPerRevolution == 0) || +- speed == 0 || ++ (!speed || speed > UINT_MAX/8) || + (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) || + (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM)) +- return 0; ++ return -EINVAL; + + if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl)) + smu7_fan_ctrl_stop_smc_fan_control(hwmgr); +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c +index 379012494da57b..56423aedf3fa7c 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c +@@ -307,10 +307,10 @@ int vega10_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed) + int result = 0; + + if (hwmgr->thermal_controller.fanInfo.bNoFan || +- speed == 0 || ++ (!speed || speed > UINT_MAX/8) || + (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) || + (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM)) +- return -1; ++ return -EINVAL; + + if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl)) + result = vega10_fan_ctrl_stop_smc_fan_control(hwmgr); +diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c +index a3331ffb2daf7f..1b1c88590156cd 100644 +--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c ++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c +@@ -191,7 +191,7 @@ int vega20_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed) + uint32_t tach_period, crystal_clock_freq; + int result = 0; + +- if (!speed) ++ if (!speed || speed > UINT_MAX/8) + return -EINVAL; + + if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl)) { +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c +index fc1297fecc62e0..d4b954b22441c5 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c +@@ -1267,6 +1267,9 @@ static int arcturus_set_fan_speed_rpm(struct smu_context *smu, + uint32_t crystal_clock_freq = 2500; + uint32_t tach_period; + ++ if (!speed || speed > UINT_MAX/8) ++ return -EINVAL; ++ + tach_period = 60 * crystal_clock_freq * 10000 / (8 * speed); + WREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT, + REG_SET_FIELD(RREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT), +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c +index 16fcd9dcd202e0..6c61e87359dd48 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c +@@ -1199,7 +1199,7 @@ int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu, + uint32_t crystal_clock_freq = 2500; + uint32_t tach_period; + +- if (speed == 0) ++ if (!speed || speed > UINT_MAX/8) + return -EINVAL; + /* + * To prevent from possible overheat, some ASICs may have requirement +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +index 2024a85fa11bd5..4f78c84da780c7 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +@@ -1228,7 +1228,7 @@ int smu_v13_0_set_fan_speed_rpm(struct smu_context *smu, + uint32_t tach_period; + int ret; + +- if (!speed) ++ if (!speed || speed > UINT_MAX/8) + return -EINVAL; + + ret = smu_v13_0_auto_fan_control(smu, 0); +diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c +index 00b364f9a71e54..5dadc895e7f26b 100644 +--- 
a/drivers/gpu/drm/ast/ast_dp.c ++++ b/drivers/gpu/drm/ast/ast_dp.c +@@ -17,6 +17,12 @@ static bool ast_astdp_is_connected(struct ast_device *ast) + { + if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, AST_IO_VGACRDF_HPD)) + return false; ++ /* ++ * HPD might be set even if no monitor is connected, so also check that ++ * the link training was successful. ++ */ ++ if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, AST_IO_VGACRDC_LINK_SUCCESS)) ++ return false; + return true; + } + +diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c +index d5eb8de645a9a3..4f8899cd125d9d 100644 +--- a/drivers/gpu/drm/i915/display/intel_display.c ++++ b/drivers/gpu/drm/i915/display/intel_display.c +@@ -1006,7 +1006,9 @@ static bool vrr_params_changed(const struct intel_crtc_state *old_crtc_state, + old_crtc_state->vrr.vmin != new_crtc_state->vrr.vmin || + old_crtc_state->vrr.vmax != new_crtc_state->vrr.vmax || + old_crtc_state->vrr.guardband != new_crtc_state->vrr.guardband || +- old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full; ++ old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full || ++ old_crtc_state->vrr.vsync_start != new_crtc_state->vrr.vsync_start || ++ old_crtc_state->vrr.vsync_end != new_crtc_state->vrr.vsync_end; + } + + static bool cmrr_params_changed(const struct intel_crtc_state *old_crtc_state, +diff --git a/drivers/gpu/drm/i915/gvt/opregion.c b/drivers/gpu/drm/i915/gvt/opregion.c +index 908f910420c20c..4ef45520e199af 100644 +--- a/drivers/gpu/drm/i915/gvt/opregion.c ++++ b/drivers/gpu/drm/i915/gvt/opregion.c +@@ -222,7 +222,6 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu) + u8 *buf; + struct opregion_header *header; + struct vbt v; +- const char opregion_signature[16] = OPREGION_SIGNATURE; + + gvt_dbg_core("init vgpu%d opregion\n", vgpu->id); + vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL | +@@ -236,8 +235,10 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu) + /* emulated opregion with VBT mailbox only */ + buf = (u8 *)vgpu_opregion(vgpu)->va; + header = (struct opregion_header *)buf; +- memcpy(header->signature, opregion_signature, +- sizeof(opregion_signature)); ++ ++ static_assert(sizeof(header->signature) == sizeof(OPREGION_SIGNATURE) - 1); ++ memcpy(header->signature, OPREGION_SIGNATURE, sizeof(header->signature)); ++ + header->size = 0x8; + header->opregion_ver = 0x02000000; + header->mboxes = MBOX_VBT; +diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c +index 3debc9870a82ae..d09c4c68411627 100644 +--- a/drivers/gpu/drm/imagination/pvr_fw.c ++++ b/drivers/gpu/drm/imagination/pvr_fw.c +@@ -732,7 +732,7 @@ pvr_fw_process(struct pvr_device *pvr_dev) + fw_mem->core_data, fw_mem->core_code_alloc_size); + + if (err) +- goto err_free_fw_core_data_obj; ++ goto err_free_kdata; + + memcpy(fw_code_ptr, fw_mem->code, fw_mem->code_alloc_size); + memcpy(fw_data_ptr, fw_mem->data, fw_mem->data_alloc_size); +@@ -742,10 +742,14 @@ pvr_fw_process(struct pvr_device *pvr_dev) + memcpy(fw_core_data_ptr, fw_mem->core_data, fw_mem->core_data_alloc_size); + + /* We're finished with the firmware section memory on the CPU, unmap. 
*/ +- if (fw_core_data_ptr) ++ if (fw_core_data_ptr) { + pvr_fw_object_vunmap(fw_mem->core_data_obj); +- if (fw_core_code_ptr) ++ fw_core_data_ptr = NULL; ++ } ++ if (fw_core_code_ptr) { + pvr_fw_object_vunmap(fw_mem->core_code_obj); ++ fw_core_code_ptr = NULL; ++ } + pvr_fw_object_vunmap(fw_mem->data_obj); + fw_data_ptr = NULL; + pvr_fw_object_vunmap(fw_mem->code_obj); +@@ -753,7 +757,7 @@ pvr_fw_process(struct pvr_device *pvr_dev) + + err = pvr_fw_create_fwif_connection_ctl(pvr_dev); + if (err) +- goto err_free_fw_core_data_obj; ++ goto err_free_kdata; + + return 0; + +@@ -763,13 +767,16 @@ pvr_fw_process(struct pvr_device *pvr_dev) + kfree(fw_mem->data); + kfree(fw_mem->code); + +-err_free_fw_core_data_obj: + if (fw_core_data_ptr) +- pvr_fw_object_unmap_and_destroy(fw_mem->core_data_obj); ++ pvr_fw_object_vunmap(fw_mem->core_data_obj); ++ if (fw_mem->core_data_obj) ++ pvr_fw_object_destroy(fw_mem->core_data_obj); + + err_free_fw_core_code_obj: + if (fw_core_code_ptr) +- pvr_fw_object_unmap_and_destroy(fw_mem->core_code_obj); ++ pvr_fw_object_vunmap(fw_mem->core_code_obj); ++ if (fw_mem->core_code_obj) ++ pvr_fw_object_destroy(fw_mem->core_code_obj); + + err_free_fw_data_obj: + if (fw_data_ptr) +@@ -836,6 +843,12 @@ pvr_fw_cleanup(struct pvr_device *pvr_dev) + struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem; + + pvr_fw_fini_fwif_connection_ctl(pvr_dev); ++ ++ kfree(fw_mem->core_data); ++ kfree(fw_mem->core_code); ++ kfree(fw_mem->data); ++ kfree(fw_mem->code); ++ + if (fw_mem->core_code_obj) + pvr_fw_object_destroy(fw_mem->core_code_obj); + if (fw_mem->core_data_obj) +diff --git a/drivers/gpu/drm/imagination/pvr_job.c b/drivers/gpu/drm/imagination/pvr_job.c +index 78c2f3c6dce019..6a15c1d2d871d3 100644 +--- a/drivers/gpu/drm/imagination/pvr_job.c ++++ b/drivers/gpu/drm/imagination/pvr_job.c +@@ -684,6 +684,13 @@ pvr_jobs_link_geom_frag(struct pvr_job_data *job_data, u32 *job_count) + geom_job->paired_job = frag_job; + frag_job->paired_job = geom_job; + ++ /* The geometry job pvr_job structure is used when the fragment ++ * job is being prepared by the GPU scheduler. Have the fragment ++ * job hold a reference on the geometry job to prevent it being ++ * freed until the fragment job has finished with it. ++ */ ++ pvr_job_get(geom_job); ++ + /* Skip the fragment job we just paired to the geometry job. 
*/ + i++; + } +diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c +index 87780cc7c0c322..130473cfdfc9b7 100644 +--- a/drivers/gpu/drm/imagination/pvr_queue.c ++++ b/drivers/gpu/drm/imagination/pvr_queue.c +@@ -866,6 +866,10 @@ static void pvr_queue_free_job(struct drm_sched_job *sched_job) + struct pvr_job *job = container_of(sched_job, struct pvr_job, base); + + drm_sched_job_cleanup(sched_job); ++ ++ if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job) ++ pvr_job_put(job->paired_job); ++ + job->paired_job = NULL; + pvr_job_put(job); + } +diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c +index fb71658c3117b2..6067d08aeee34b 100644 +--- a/drivers/gpu/drm/mgag200/mgag200_mode.c ++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c +@@ -223,7 +223,7 @@ void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mod + vsyncstr = mode->crtc_vsync_start - 1; + vsyncend = mode->crtc_vsync_end - 1; + vtotal = mode->crtc_vtotal - 2; +- vblkstr = mode->crtc_vblank_start; ++ vblkstr = mode->crtc_vblank_start - 1; + vblkend = vtotal + 1; + + linecomp = vdispend; +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +index e386b059187acf..67fa528f546d33 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +@@ -1126,49 +1126,50 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu) + struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; + u32 val; ++ int ret; + + /* +- * The GMU may still be in slumber unless the GPU started so check and +- * skip putting it back into slumber if so ++ * GMU firmware's internal power state gets messed up if we send "prepare_slumber" hfi when ++ * oob_gpu handshake wasn't done after the last wake up. 
So do a dummy handshake here when ++ * required + */ +- val = gmu_read(gmu, REG_A6XX_GPU_GMU_CX_GMU_RPMH_POWER_STATE); ++ if (adreno_gpu->base.needs_hw_init) { ++ if (a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET)) ++ goto force_off; + +- if (val != 0xf) { +- int ret = a6xx_gmu_wait_for_idle(gmu); ++ a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET); ++ } + +- /* If the GMU isn't responding assume it is hung */ +- if (ret) { +- a6xx_gmu_force_off(gmu); +- return; +- } ++ ret = a6xx_gmu_wait_for_idle(gmu); + +- a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung); ++ /* If the GMU isn't responding assume it is hung */ ++ if (ret) ++ goto force_off; + +- /* tell the GMU we want to slumber */ +- ret = a6xx_gmu_notify_slumber(gmu); +- if (ret) { +- a6xx_gmu_force_off(gmu); +- return; +- } ++ a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung); + +- ret = gmu_poll_timeout(gmu, +- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val, +- !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB), +- 100, 10000); ++ /* tell the GMU we want to slumber */ ++ ret = a6xx_gmu_notify_slumber(gmu); ++ if (ret) ++ goto force_off; + +- /* +- * Let the user know we failed to slumber but don't worry too +- * much because we are powering down anyway +- */ ++ ret = gmu_poll_timeout(gmu, ++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val, ++ !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB), ++ 100, 10000); + +- if (ret) +- DRM_DEV_ERROR(gmu->dev, +- "Unable to slumber GMU: status = 0%x/0%x\n", +- gmu_read(gmu, +- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS), +- gmu_read(gmu, +- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2)); +- } ++ /* ++ * Let the user know we failed to slumber but don't worry too ++ * much because we are powering down anyway ++ */ ++ ++ if (ret) ++ DRM_DEV_ERROR(gmu->dev, ++ "Unable to slumber GMU: status = 0%x/0%x\n", ++ gmu_read(gmu, ++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS), ++ gmu_read(gmu, ++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2)); + + /* Turn off HFI */ + a6xx_hfi_stop(gmu); +@@ -1178,6 +1179,11 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu) + + /* Tell RPMh to power off the GPU */ + a6xx_rpmh_stop(gmu); ++ ++ return; ++ ++force_off: ++ a6xx_gmu_force_off(gmu); + } + + +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +index 702b8d4b349723..d903ad9c0b5fb8 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +@@ -233,10 +233,10 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) + break; + fallthrough; + case MSM_SUBMIT_CMD_BUF: +- OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3); ++ OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3); + OUT_RING(ring, lower_32_bits(submit->cmd[i].iova)); + OUT_RING(ring, upper_32_bits(submit->cmd[i].iova)); +- OUT_RING(ring, submit->cmd[i].size); ++ OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size)); + ibs++; + break; + } +@@ -319,10 +319,10 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) + break; + fallthrough; + case MSM_SUBMIT_CMD_BUF: +- OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3); ++ OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3); + OUT_RING(ring, lower_32_bits(submit->cmd[i].iova)); + OUT_RING(ring, upper_32_bits(submit->cmd[i].iova)); +- OUT_RING(ring, submit->cmd[i].size); ++ OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size)); + ibs++; + break; + } +diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c +index 7459fb8c517746..d22e01751f5eeb 
100644 +--- a/drivers/gpu/drm/msm/dsi/dsi_host.c ++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c +@@ -1827,8 +1827,15 @@ static int dsi_host_parse_dt(struct msm_dsi_host *msm_host) + __func__, ret); + goto err; + } +- if (!ret) ++ if (!ret) { + msm_dsi->te_source = devm_kstrdup(dev, te_source, GFP_KERNEL); ++ if (!msm_dsi->te_source) { ++ DRM_DEV_ERROR(dev, "%s: failed to allocate te_source\n", ++ __func__); ++ ret = -ENOMEM; ++ goto err; ++ } ++ } + ret = 0; + + if (of_property_read_bool(np, "syscon-sfpb")) { +diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml +index cab01af55d2226..c6cdc5c003dc07 100644 +--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml ++++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml +@@ -2264,5 +2264,12 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords) + + + ++ ++ ++ ++ ++ ++ ++ + + +diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c +index db961eade2257f..2016c1e7242fe3 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_bo.c ++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c +@@ -144,6 +144,9 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo) + nouveau_bo_del_io_reserve_lru(bo); + nv10_bo_put_tile_region(dev, nvbo->tile, NULL); + ++ if (bo->base.import_attach) ++ drm_prime_gem_destroy(&bo->base, bo->sg); ++ + /* + * If nouveau_bo_new() allocated this buffer, the GEM object was never + * initialized, so don't attempt to release it. +diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c +index 9ae2cee1c7c580..67e3c99de73ae6 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_gem.c ++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c +@@ -87,9 +87,6 @@ nouveau_gem_object_del(struct drm_gem_object *gem) + return; + } + +- if (gem->import_attach) +- drm_prime_gem_destroy(gem, nvbo->bo.sg); +- + ttm_bo_put(&nvbo->bo); + + pm_runtime_mark_last_busy(dev); +diff --git a/drivers/gpu/drm/sti/Makefile b/drivers/gpu/drm/sti/Makefile +index f203ac5514ae0b..f778a4eee7c9cf 100644 +--- a/drivers/gpu/drm/sti/Makefile ++++ b/drivers/gpu/drm/sti/Makefile +@@ -7,8 +7,6 @@ sti-drm-y := \ + sti_compositor.o \ + sti_crtc.o \ + sti_plane.o \ +- sti_crtc.o \ +- sti_plane.o \ + sti_hdmi.o \ + sti_hdmi_tx3g4c28phy.o \ + sti_dvo.o \ +diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c +index 1f78aa3d26bbd4..768dfea15aec09 100644 +--- a/drivers/gpu/drm/tiny/repaper.c ++++ b/drivers/gpu/drm/tiny/repaper.c +@@ -455,7 +455,7 @@ static void repaper_frame_fixed_repeat(struct repaper_epd *epd, u8 fixed_value, + enum repaper_stage stage) + { + u64 start = local_clock(); +- u64 end = start + (epd->factored_stage_time * 1000 * 1000); ++ u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000); + + do { + repaper_frame_fixed(epd, fixed_value, stage); +@@ -466,7 +466,7 @@ static void repaper_frame_data_repeat(struct repaper_epd *epd, const u8 *image, + const u8 *mask, enum repaper_stage stage) + { + u64 start = local_clock(); +- u64 end = start + (epd->factored_stage_time * 1000 * 1000); ++ u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000); + + do { + repaper_frame_data(epd, image, mask, stage); +diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c +index 3066cfdb054cc0..4a6aa36619fe39 100644 +--- a/drivers/gpu/drm/v3d/v3d_sched.c ++++ b/drivers/gpu/drm/v3d/v3d_sched.c +@@ -410,7 +410,8 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job) + struct v3d_bo *bo = to_v3d_bo(job->base.bo[0]); + struct 
v3d_bo *indirect = to_v3d_bo(indirect_csd->indirect); + struct drm_v3d_submit_csd *args = &indirect_csd->job->args; +- u32 *wg_counts; ++ struct v3d_dev *v3d = job->base.v3d; ++ u32 num_batches, *wg_counts; + + v3d_get_bo_vaddr(bo); + v3d_get_bo_vaddr(indirect); +@@ -423,8 +424,17 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job) + args->cfg[0] = wg_counts[0] << V3D_CSD_CFG012_WG_COUNT_SHIFT; + args->cfg[1] = wg_counts[1] << V3D_CSD_CFG012_WG_COUNT_SHIFT; + args->cfg[2] = wg_counts[2] << V3D_CSD_CFG012_WG_COUNT_SHIFT; +- args->cfg[4] = DIV_ROUND_UP(indirect_csd->wg_size, 16) * +- (wg_counts[0] * wg_counts[1] * wg_counts[2]) - 1; ++ ++ num_batches = DIV_ROUND_UP(indirect_csd->wg_size, 16) * ++ (wg_counts[0] * wg_counts[1] * wg_counts[2]); ++ ++ /* V3D 7.1.6 and later don't subtract 1 from the number of batches */ ++ if (v3d->ver < 71 || (v3d->ver == 71 && v3d->rev < 6)) ++ args->cfg[4] = num_batches - 1; ++ else ++ args->cfg[4] = num_batches; ++ ++ WARN_ON(args->cfg[4] == ~0); + + for (int i = 0; i < 3; i++) { + /* 0xffffffff indicates that the uniform rewrite is not needed */ +diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c +index f3bf7d3157b479..78204578443f46 100644 +--- a/drivers/gpu/drm/xe/xe_dma_buf.c ++++ b/drivers/gpu/drm/xe/xe_dma_buf.c +@@ -145,10 +145,7 @@ static void xe_dma_buf_unmap(struct dma_buf_attachment *attach, + struct sg_table *sgt, + enum dma_data_direction dir) + { +- struct dma_buf *dma_buf = attach->dmabuf; +- struct xe_bo *bo = gem_to_xe_bo(dma_buf->priv); +- +- if (!xe_bo_is_vram(bo)) { ++ if (sg_page(sgt->sgl)) { + dma_unmap_sgtable(attach->dev, sgt, dir, 0); + sg_free_table(sgt); + kfree(sgt); +diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c +index ace1fe831a7b72..98a450271f5cee 100644 +--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c ++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c +@@ -310,6 +310,13 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt) + return 0; + } + ++/* ++ * Ensure that roundup_pow_of_two(length) doesn't overflow. ++ * Note that roundup_pow_of_two() operates on unsigned long, ++ * not on u64. 
++ */ ++#define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX)) ++ + /** + * xe_gt_tlb_invalidation_range - Issue a TLB invalidation on this GT for an + * address range +@@ -334,6 +341,7 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt, + struct xe_device *xe = gt_to_xe(gt); + #define MAX_TLB_INVALIDATION_LEN 7 + u32 action[MAX_TLB_INVALIDATION_LEN]; ++ u64 length = end - start; + int len = 0; + + xe_gt_assert(gt, fence); +@@ -346,11 +354,11 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt, + + action[len++] = XE_GUC_ACTION_TLB_INVALIDATION; + action[len++] = 0; /* seqno, replaced in send_tlb_invalidation */ +- if (!xe->info.has_range_tlb_invalidation) { ++ if (!xe->info.has_range_tlb_invalidation || ++ length > MAX_RANGE_TLB_INVALIDATION_LENGTH) { + action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL); + } else { + u64 orig_start = start; +- u64 length = end - start; + u64 align; + + if (length < SZ_4K) +diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c +index d1902a8581ca11..e144fd41c0a762 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ads.c ++++ b/drivers/gpu/drm/xe/xe_guc_ads.c +@@ -483,24 +483,52 @@ static void fill_engine_enable_masks(struct xe_gt *gt, + engine_enable_mask(gt, XE_ENGINE_CLASS_OTHER)); + } + +-static void guc_prep_golden_lrc_null(struct xe_guc_ads *ads) ++/* ++ * Write the offsets corresponding to the golden LRCs. The actual data is ++ * populated later by guc_golden_lrc_populate() ++ */ ++static void guc_golden_lrc_init(struct xe_guc_ads *ads) + { + struct xe_device *xe = ads_to_xe(ads); ++ struct xe_gt *gt = ads_to_gt(ads); + struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads), + offsetof(struct __guc_ads_blob, system_info)); +- u8 guc_class; ++ size_t alloc_size, real_size; ++ u32 addr_ggtt, offset; ++ int class; ++ ++ offset = guc_ads_golden_lrc_offset(ads); ++ addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset; ++ ++ for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) { ++ u8 guc_class; ++ ++ guc_class = xe_engine_class_to_guc_class(class); + +- for (guc_class = 0; guc_class <= GUC_MAX_ENGINE_CLASSES; ++guc_class) { + if (!info_map_read(xe, &info_map, + engine_enabled_masks[guc_class])) + continue; + ++ real_size = xe_gt_lrc_size(gt, class); ++ alloc_size = PAGE_ALIGN(real_size); ++ ++ /* ++ * This interface is slightly confusing. We need to pass the ++ * base address of the full golden context and the size of just ++ * the engine state, which is the section of the context image ++ * that starts after the execlists LRC registers. This is ++ * required to allow the GuC to restore just the engine state ++ * when a watchdog reset occurs. ++ * We calculate the engine state size by removing the size of ++ * what comes before it in the context image (which is identical ++ * on all engines). 
++ */ + ads_blob_write(ads, ads.eng_state_size[guc_class], +- guc_ads_golden_lrc_size(ads) - +- xe_lrc_skip_size(xe)); ++ real_size - xe_lrc_skip_size(xe)); + ads_blob_write(ads, ads.golden_context_lrca[guc_class], +- xe_bo_ggtt_addr(ads->bo) + +- guc_ads_golden_lrc_offset(ads)); ++ addr_ggtt); ++ ++ addr_ggtt += alloc_size; + } + } + +@@ -710,7 +738,7 @@ void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads) + + xe_map_memset(ads_to_xe(ads), ads_to_map(ads), 0, 0, ads->bo->size); + guc_policies_init(ads); +- guc_prep_golden_lrc_null(ads); ++ guc_golden_lrc_init(ads); + guc_mapping_table_init_invalid(gt, &info_map); + guc_doorbell_init(ads); + +@@ -736,7 +764,7 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads) + guc_policies_init(ads); + fill_engine_enable_masks(gt, &info_map); + guc_mmio_reg_state_init(ads); +- guc_prep_golden_lrc_null(ads); ++ guc_golden_lrc_init(ads); + guc_mapping_table_init(gt, &info_map); + guc_capture_list_init(ads); + guc_doorbell_init(ads); +@@ -756,18 +784,22 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads) + guc_ads_private_data_offset(ads)); + } + +-static void guc_populate_golden_lrc(struct xe_guc_ads *ads) ++/* ++ * After the golden LRC's are recorded for each engine class by the first ++ * submission, copy them to the ADS, as initialized earlier by ++ * guc_golden_lrc_init(). ++ */ ++static void guc_golden_lrc_populate(struct xe_guc_ads *ads) + { + struct xe_device *xe = ads_to_xe(ads); + struct xe_gt *gt = ads_to_gt(ads); + struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads), + offsetof(struct __guc_ads_blob, system_info)); + size_t total_size = 0, alloc_size, real_size; +- u32 addr_ggtt, offset; ++ u32 offset; + int class; + + offset = guc_ads_golden_lrc_offset(ads); +- addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset; + + for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) { + u8 guc_class; +@@ -784,26 +816,9 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads) + alloc_size = PAGE_ALIGN(real_size); + total_size += alloc_size; + +- /* +- * This interface is slightly confusing. We need to pass the +- * base address of the full golden context and the size of just +- * the engine state, which is the section of the context image +- * that starts after the execlists LRC registers. This is +- * required to allow the GuC to restore just the engine state +- * when a watchdog reset occurs. +- * We calculate the engine state size by removing the size of +- * what comes before it in the context image (which is identical +- * on all engines). 
+- */ +- ads_blob_write(ads, ads.eng_state_size[guc_class], +- real_size - xe_lrc_skip_size(xe)); +- ads_blob_write(ads, ads.golden_context_lrca[guc_class], +- addr_ggtt); +- + xe_map_memcpy_to(xe, ads_to_map(ads), offset, + gt->default_lrc[class], real_size); + +- addr_ggtt += alloc_size; + offset += alloc_size; + } + +@@ -812,7 +827,7 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads) + + void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads) + { +- guc_populate_golden_lrc(ads); ++ guc_golden_lrc_populate(ads); + } + + static int guc_ads_action_update_policies(struct xe_guc_ads *ads, u32 policy_offset) +diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c +index f6bc4f29d7538e..3d0278c3db9355 100644 +--- a/drivers/gpu/drm/xe/xe_hmm.c ++++ b/drivers/gpu/drm/xe/xe_hmm.c +@@ -19,29 +19,6 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end) + return (end - start) >> PAGE_SHIFT; + } + +-/** +- * xe_mark_range_accessed() - mark a range is accessed, so core mm +- * have such information for memory eviction or write back to +- * hard disk +- * @range: the range to mark +- * @write: if write to this range, we mark pages in this range +- * as dirty +- */ +-static void xe_mark_range_accessed(struct hmm_range *range, bool write) +-{ +- struct page *page; +- u64 i, npages; +- +- npages = xe_npages_in_range(range->start, range->end); +- for (i = 0; i < npages; i++) { +- page = hmm_pfn_to_page(range->hmm_pfns[i]); +- if (write) +- set_page_dirty_lock(page); +- +- mark_page_accessed(page); +- } +-} +- + static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st, + struct hmm_range *range, struct rw_semaphore *notifier_sem) + { +@@ -331,7 +308,6 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, + if (ret) + goto out_unlock; + +- xe_mark_range_accessed(&hmm_range, write); + userptr->sg = &userptr->sgt; + xe_hmm_userptr_set_mapped(uvma); + userptr->notifier_seq = hmm_range.notifier_seq; +diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c +index 1b97d90aaddaf4..6431697c616939 100644 +--- a/drivers/gpu/drm/xe/xe_migrate.c ++++ b/drivers/gpu/drm/xe/xe_migrate.c +@@ -1177,7 +1177,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m, + err_sync: + /* Sync partial copies if any. FIXME: job_mutex? 
*/ + if (fence) { +- dma_fence_wait(m->fence, false); ++ dma_fence_wait(fence, false); + dma_fence_put(fence); + } + +diff --git a/drivers/i2c/busses/i2c-cros-ec-tunnel.c b/drivers/i2c/busses/i2c-cros-ec-tunnel.c +index ab2688bd4d338a..e19cb62d6796d9 100644 +--- a/drivers/i2c/busses/i2c-cros-ec-tunnel.c ++++ b/drivers/i2c/busses/i2c-cros-ec-tunnel.c +@@ -247,6 +247,9 @@ static int ec_i2c_probe(struct platform_device *pdev) + u32 remote_bus; + int err; + ++ if (!ec) ++ return dev_err_probe(dev, -EPROBE_DEFER, "couldn't find parent EC device\n"); ++ + if (!ec->cmd_xfer) { + dev_err(dev, "Missing sendrecv\n"); + return -EINVAL; +diff --git a/drivers/i2c/i2c-atr.c b/drivers/i2c/i2c-atr.c +index 0d54d0b5e32731..5342e934aa5e40 100644 +--- a/drivers/i2c/i2c-atr.c ++++ b/drivers/i2c/i2c-atr.c +@@ -8,12 +8,12 @@ + * Originally based on i2c-mux.c + */ + +-#include + #include + #include + #include + #include + #include ++#include + #include + #include + +diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c +index 91db10515d7472..176d0b3e448870 100644 +--- a/drivers/infiniband/core/cma.c ++++ b/drivers/infiniband/core/cma.c +@@ -72,6 +72,8 @@ static const char * const cma_events[] = { + static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid, + enum ib_gid_type gid_type); + ++static void cma_netevent_work_handler(struct work_struct *_work); ++ + const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event) + { + size_t index = event; +@@ -1033,6 +1035,7 @@ __rdma_create_id(struct net *net, rdma_cm_event_handler event_handler, + get_random_bytes(&id_priv->seq_num, sizeof id_priv->seq_num); + id_priv->id.route.addr.dev_addr.net = get_net(net); + id_priv->seq_num &= 0x00ffffff; ++ INIT_WORK(&id_priv->id.net_work, cma_netevent_work_handler); + + rdma_restrack_new(&id_priv->res, RDMA_RESTRACK_CM_ID); + if (parent) +@@ -5227,7 +5230,6 @@ static int cma_netevent_callback(struct notifier_block *self, + if (!memcmp(current_id->id.route.addr.dev_addr.dst_dev_addr, + neigh->ha, ETH_ALEN)) + continue; +- INIT_WORK(¤t_id->id.net_work, cma_netevent_work_handler); + cma_id_get(current_id); + queue_work(cma_wq, ¤t_id->id.net_work); + } +diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c +index e9fa22d31c2332..c48ef608302055 100644 +--- a/drivers/infiniband/core/umem_odp.c ++++ b/drivers/infiniband/core/umem_odp.c +@@ -76,12 +76,14 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp, + + npfns = (end - start) >> PAGE_SHIFT; + umem_odp->pfn_list = kvcalloc( +- npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL); ++ npfns, sizeof(*umem_odp->pfn_list), ++ GFP_KERNEL | __GFP_NOWARN); + if (!umem_odp->pfn_list) + return -ENOMEM; + + umem_odp->dma_list = kvcalloc( +- ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL); ++ ndmas, sizeof(*umem_odp->dma_list), ++ GFP_KERNEL | __GFP_NOWARN); + if (!umem_odp->dma_list) { + ret = -ENOMEM; + goto out_pfn_list; +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c +index cf89a8db4f64cd..8d0b63d4b50a6c 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c +@@ -763,7 +763,7 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev) + if (ret) + return ret; + } +- dma_set_max_seg_size(dev, UINT_MAX); ++ dma_set_max_seg_size(dev, SZ_2G); + ret = ib_register_device(ib_dev, "hns_%d", dev); + if (ret) { + dev_err(dev, "ib_register_device failed!\n"); +diff --git 
a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c +index 13b654ddd3cc8d..bcf7d8607d56ef 100644 +--- a/drivers/infiniband/hw/usnic/usnic_ib_main.c ++++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c +@@ -380,7 +380,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev) + if (!us_ibdev) { + usnic_err("Device %s context alloc failed\n", + netdev_name(pci_get_drvdata(dev))); +- return ERR_PTR(-EFAULT); ++ return NULL; + } + + us_ibdev->ufdev = usnic_fwd_dev_alloc(dev); +@@ -500,8 +500,8 @@ static struct usnic_ib_dev *usnic_ib_discover_pf(struct usnic_vnic *vnic) + } + + us_ibdev = usnic_ib_device_add(parent_pci); +- if (IS_ERR_OR_NULL(us_ibdev)) { +- us_ibdev = us_ibdev ? us_ibdev : ERR_PTR(-EFAULT); ++ if (!us_ibdev) { ++ us_ibdev = ERR_PTR(-EFAULT); + goto out; + } + +@@ -569,10 +569,10 @@ static int usnic_ib_pci_probe(struct pci_dev *pdev, + } + + pf = usnic_ib_discover_pf(vf->vnic); +- if (IS_ERR_OR_NULL(pf)) { +- usnic_err("Failed to discover pf of vnic %s with err%ld\n", +- pci_name(pdev), PTR_ERR(pf)); +- err = pf ? PTR_ERR(pf) : -EFAULT; ++ if (IS_ERR(pf)) { ++ err = PTR_ERR(pf); ++ usnic_err("Failed to discover pf of vnic %s with err%d\n", ++ pci_name(pdev), err); + goto out_clean_vnic; + } + +diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c +index 2e3087556adb37..fbb4f57010da69 100644 +--- a/drivers/md/md-bitmap.c ++++ b/drivers/md/md-bitmap.c +@@ -2355,9 +2355,8 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats) + + if (!bitmap) + return -ENOENT; +- if (bitmap->mddev->bitmap_info.external) +- return -ENOENT; +- if (!bitmap->storage.sb_page) /* no superblock */ ++ if (!bitmap->mddev->bitmap_info.external && ++ !bitmap->storage.sb_page) + return -EINVAL; + sb = kmap_local_page(bitmap->storage.sb_page); + stats->sync_size = le64_to_cpu(sb->sync_size); +diff --git a/drivers/md/md.c b/drivers/md/md.c +index fff28aea23c89e..7809b951e09aa0 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -629,6 +629,12 @@ static void __mddev_put(struct mddev *mddev) + queue_work(md_misc_wq, &mddev->del_work); + } + ++static void mddev_put_locked(struct mddev *mddev) ++{ ++ if (atomic_dec_and_test(&mddev->active)) ++ __mddev_put(mddev); ++} ++ + void mddev_put(struct mddev *mddev) + { + if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock)) +@@ -8461,9 +8467,7 @@ static int md_seq_show(struct seq_file *seq, void *v) + if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs)) + status_unused(seq); + +- if (atomic_dec_and_test(&mddev->active)) +- __mddev_put(mddev); +- ++ mddev_put_locked(mddev); + return 0; + } + +@@ -9886,11 +9890,11 @@ EXPORT_SYMBOL_GPL(rdev_clear_badblocks); + static int md_notify_reboot(struct notifier_block *this, + unsigned long code, void *x) + { +- struct mddev *mddev, *n; ++ struct mddev *mddev; + int need_delay = 0; + + spin_lock(&all_mddevs_lock); +- list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) { ++ list_for_each_entry(mddev, &all_mddevs, all_mddevs) { + if (!mddev_get(mddev)) + continue; + spin_unlock(&all_mddevs_lock); +@@ -9902,8 +9906,8 @@ static int md_notify_reboot(struct notifier_block *this, + mddev_unlock(mddev); + } + need_delay = 1; +- mddev_put(mddev); + spin_lock(&all_mddevs_lock); ++ mddev_put_locked(mddev); + } + spin_unlock(&all_mddevs_lock); + +@@ -10236,7 +10240,7 @@ void md_autostart_arrays(int part) + + static __exit void md_exit(void) + { +- struct mddev *mddev, *n; ++ struct mddev *mddev; + int delay = 1; + + unregister_blkdev(MD_MAJOR,"md"); 
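(For context on the md.c hunks above and below: the change replaces list_for_each_entry_safe() plus a plain mddev_put() with ordinary list_for_each_entry() iteration and a new mddev_put_locked() helper, so the final reference drop, and any unlink it triggers, happens while all_mddevs_lock is already held and the successor pointer is sampled under that same lock. Below is a minimal userspace sketch of that pattern under stated assumptions: pthreads instead of a spinlock, a hypothetical node/list type, and an O(n) unlink; it is an illustration of the locking discipline, not kernel code.)

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int id;
	int refs;		/* protected by list_lock */
	struct node *next;
};

static struct node *head;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Drop a reference with list_lock held; unlink and free on the last put. */
static void node_put_locked(struct node *n)
{
	struct node **pp;

	if (--n->refs)
		return;
	for (pp = &head; *pp; pp = &(*pp)->next) {
		if (*pp == n) {
			*pp = n->next;
			break;
		}
	}
	printf("freeing node %d\n", n->id);
	free(n);
}

/* Visit every node, dropping list_lock only for the per-node work. */
static void for_each_node(void (*work)(struct node *))
{
	struct node *n, *next;

	pthread_mutex_lock(&list_lock);
	for (n = head; n; n = next) {
		n->refs++;			/* pin the current node */
		pthread_mutex_unlock(&list_lock);

		work(n);			/* lock dropped: list may change */

		pthread_mutex_lock(&list_lock);
		next = n->next;			/* sample successor under the lock */
		node_put_locked(n);		/* last put (and free) happens locked */
	}
	pthread_mutex_unlock(&list_lock);
}

static void show(struct node *n)
{
	printf("visiting node %d\n", n->id);
}

int main(void)
{
	/* Build a three-node list; each node starts with the list's own ref. */
	for (int i = 3; i >= 1; i--) {
		struct node *n = calloc(1, sizeof(*n));

		if (!n)
			return 1;
		n->id = i;
		n->refs = 1;
		n->next = head;
		head = n;
	}

	for_each_node(show);

	/* Teardown: drop the list's reference on every node, under the lock. */
	pthread_mutex_lock(&list_lock);
	while (head)
		node_put_locked(head);	/* unlinks head, so the loop advances */
	pthread_mutex_unlock(&list_lock);
	return 0;
}

(The point the sketch makes: because a node is only ever unlinked and freed while the lock is held and its refcount is zero, the pinned node cannot disappear during the unlocked work window, and re-reading ->next under the lock before the put makes the cached-next use-after-free of the _safe iterator impossible.)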
+@@ -10257,7 +10261,7 @@ static __exit void md_exit(void) + remove_proc_entry("mdstat", NULL); + + spin_lock(&all_mddevs_lock); +- list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) { ++ list_for_each_entry(mddev, &all_mddevs, all_mddevs) { + if (!mddev_get(mddev)) + continue; + spin_unlock(&all_mddevs_lock); +@@ -10269,8 +10273,8 @@ static __exit void md_exit(void) + * the mddev for destruction by a workqueue, and the + * destroy_workqueue() below will wait for that to complete. + */ +- mddev_put(mddev); + spin_lock(&all_mddevs_lock); ++ mddev_put_locked(mddev); + } + spin_unlock(&all_mddevs_lock); + +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index a214fed4f16226..cc194f6ec18dab 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -1687,6 +1687,7 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio) + * The discard bio returns only first r10bio finishes + */ + if (first_copy) { ++ md_account_bio(mddev, &bio); + r10_bio->master_bio = bio; + set_bit(R10BIO_Discard, &r10_bio->state); + first_copy = false; +diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c +index 8dea2b44fd8bfe..e22afb420d099e 100644 +--- a/drivers/misc/pci_endpoint_test.c ++++ b/drivers/misc/pci_endpoint_test.c +@@ -251,6 +251,9 @@ static bool pci_endpoint_test_request_irq(struct pci_endpoint_test *test) + break; + } + ++ test->num_irqs = i; ++ pci_endpoint_test_release_irq(test); ++ + return false; + } + +@@ -738,6 +741,7 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test, + if (!pci_endpoint_test_request_irq(test)) + goto err; + ++ irq_type = test->irq_type; + return true; + + err: +diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c +index d9a937ba126c3c..ac514766d431ce 100644 +--- a/drivers/net/can/rockchip/rockchip_canfd-core.c ++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c +@@ -907,15 +907,16 @@ static int rkcanfd_probe(struct platform_device *pdev) + priv->can.data_bittiming_const = &rkcanfd_data_bittiming_const; + priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK | + CAN_CTRLMODE_BERR_REPORTING; +- if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN)) +- priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD; + priv->can.do_set_mode = rkcanfd_set_mode; + priv->can.do_get_berr_counter = rkcanfd_get_berr_counter; + priv->ndev = ndev; + + match = device_get_match_data(&pdev->dev); +- if (match) ++ if (match) { + priv->devtype_data = *(struct rkcanfd_devtype_data *)match; ++ if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN)) ++ priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD; ++ } + + err = can_rx_offload_add_manual(ndev, &priv->offload, + RKCANFD_NAPI_WEIGHT); +diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c +index c39cb119e760db..d4600ab0b70b3b 100644 +--- a/drivers/net/dsa/b53/b53_common.c ++++ b/drivers/net/dsa/b53/b53_common.c +@@ -737,6 +737,15 @@ static void b53_enable_mib(struct b53_device *dev) + b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc); + } + ++static void b53_enable_stp(struct b53_device *dev) ++{ ++ u8 gc; ++ ++ b53_read8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, &gc); ++ gc |= GC_RX_BPDU_EN; ++ b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc); ++} ++ + static u16 b53_default_pvid(struct b53_device *dev) + { + if (is5325(dev) || is5365(dev)) +@@ -876,6 +885,7 @@ static int b53_switch_reset(struct b53_device *dev) + } + + b53_enable_mib(dev); ++ b53_enable_stp(dev); + + return 
b53_flush_arl(dev, FAST_AGE_STATIC); + } +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c +index e20d9d62032e31..df1df601541217 100644 +--- a/drivers/net/dsa/mv88e6xxx/chip.c ++++ b/drivers/net/dsa/mv88e6xxx/chip.c +@@ -1878,6 +1878,8 @@ static int mv88e6xxx_vtu_get(struct mv88e6xxx_chip *chip, u16 vid, + if (!chip->info->ops->vtu_getnext) + return -EOPNOTSUPP; + ++ memset(entry, 0, sizeof(*entry)); ++ + entry->vid = vid ? vid - 1 : mv88e6xxx_max_vid(chip); + entry->valid = false; + +@@ -2013,7 +2015,16 @@ static int mv88e6xxx_mst_put(struct mv88e6xxx_chip *chip, u8 sid) + struct mv88e6xxx_mst *mst, *tmp; + int err; + +- if (!sid) ++ /* If the SID is zero, it is for a VLAN mapped to the default MSTI, ++ * and mv88e6xxx_stu_setup() made sure it is always present, and thus, ++ * should not be removed here. ++ * ++ * If the chip lacks STU support, numerically the "sid" variable will ++ * happen to also be zero, but we don't want to rely on that fact, so ++ * we explicitly test that first. In that case, there is also nothing ++ * to do here. ++ */ ++ if (!mv88e6xxx_has_stu(chip) || !sid) + return 0; + + list_for_each_entry_safe(mst, tmp, &chip->msts, node) { +diff --git a/drivers/net/dsa/mv88e6xxx/devlink.c b/drivers/net/dsa/mv88e6xxx/devlink.c +index a08dab75e0c0c1..f57fde02077d22 100644 +--- a/drivers/net/dsa/mv88e6xxx/devlink.c ++++ b/drivers/net/dsa/mv88e6xxx/devlink.c +@@ -743,7 +743,8 @@ void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds) + int i; + + for (i = 0; i < ARRAY_SIZE(mv88e6xxx_regions); i++) +- dsa_devlink_region_destroy(chip->regions[i]); ++ if (chip->regions[i]) ++ dsa_devlink_region_destroy(chip->regions[i]); + } + + void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port) +diff --git a/drivers/net/ethernet/amd/pds_core/debugfs.c b/drivers/net/ethernet/amd/pds_core/debugfs.c +index ac37a4e738ae7d..04c5e3abd8d706 100644 +--- a/drivers/net/ethernet/amd/pds_core/debugfs.c ++++ b/drivers/net/ethernet/amd/pds_core/debugfs.c +@@ -154,8 +154,9 @@ void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq) + debugfs_create_u32("index", 0400, intr_dentry, &intr->index); + debugfs_create_u32("vector", 0400, intr_dentry, &intr->vector); + +- intr_ctrl_regset = kzalloc(sizeof(*intr_ctrl_regset), +- GFP_KERNEL); ++ intr_ctrl_regset = devm_kzalloc(pdsc->dev, ++ sizeof(*intr_ctrl_regset), ++ GFP_KERNEL); + if (!intr_ctrl_regset) + return; + intr_ctrl_regset->regs = intr_ctrl_regs; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index e7580df13229a6..016dcfec8d4965 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -758,7 +758,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev) + dev_kfree_skb_any(skb); + tx_kick_pending: + if (BNXT_TX_PTP_IS_SET(lflags)) { +- txr->tx_buf_ring[txr->tx_prod].is_ts_pkt = 0; ++ txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].is_ts_pkt = 0; + atomic64_inc(&bp->ptp_cfg->stats.ts_err); + if (!(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP)) + /* set SKB to err so PTP worker will clean up */ +@@ -766,7 +766,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev) + } + if (txr->kick_pending) + bnxt_txr_db_kick(bp, txr, txr->tx_prod); +- txr->tx_buf_ring[txr->tx_prod].skb = NULL; ++ txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].skb = NULL; + dev_core_stats_tx_dropped_inc(dev); + return NETDEV_TX_OK; + } +diff --git 
a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c +index 7f3f5afa864f4a..1546c3db08f093 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c +@@ -2270,6 +2270,7 @@ int cxgb4_init_ethtool_filters(struct adapter *adap) + eth_filter->port[i].bmap = bitmap_zalloc(nentries, GFP_KERNEL); + if (!eth_filter->port[i].bmap) { + ret = -ENOMEM; ++ kvfree(eth_filter->port[i].loc_array); + goto free_eth_finfo; + } + } +diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h +index eac0f966e0e4c5..323db1e2be3886 100644 +--- a/drivers/net/ethernet/intel/igc/igc.h ++++ b/drivers/net/ethernet/intel/igc/igc.h +@@ -319,6 +319,7 @@ struct igc_adapter { + struct timespec64 prev_ptp_time; /* Pre-reset PTP clock */ + ktime_t ptp_reset_start; /* Reset time in clock mono */ + struct system_time_snapshot snapshot; ++ struct mutex ptm_lock; /* Only allow one PTM transaction at a time */ + + char fw_version[32]; + +diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h +index 8e449904aa7dbd..d19325b0e6e0ba 100644 +--- a/drivers/net/ethernet/intel/igc/igc_defines.h ++++ b/drivers/net/ethernet/intel/igc/igc_defines.h +@@ -574,7 +574,10 @@ + #define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x3f) << 2) + #define IGC_PTM_CTRL_PTM_TO(usec) (((usec) & 0xff) << 8) + +-#define IGC_PTM_SHORT_CYC_DEFAULT 1 /* Default short cycle interval */ ++/* A short cycle time of 1us theoretically should work, but appears to be too ++ * short in practice. ++ */ ++#define IGC_PTM_SHORT_CYC_DEFAULT 4 /* Default short cycle interval */ + #define IGC_PTM_CYC_TIME_DEFAULT 5 /* Default PTM cycle time */ + #define IGC_PTM_TIMEOUT_DEFAULT 255 /* Default timeout for PTM errors */ + +@@ -593,6 +596,7 @@ + #define IGC_PTM_STAT_T4M1_OVFL BIT(3) /* T4 minus T1 overflow */ + #define IGC_PTM_STAT_ADJUST_1ST BIT(4) /* 1588 timer adjusted during 1st PTM cycle */ + #define IGC_PTM_STAT_ADJUST_CYC BIT(5) /* 1588 timer adjusted during non-1st PTM cycle */ ++#define IGC_PTM_STAT_ALL GENMASK(5, 0) /* Used to clear all status */ + + /* PCIe PTM Cycle Control */ + #define IGC_PTM_CYCLE_CTRL_CYC_TIME(msec) ((msec) & 0x3ff) /* PTM Cycle Time (msec) */ +diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c +index 1ec9e8cc99d947..082b0baf5d37c5 100644 +--- a/drivers/net/ethernet/intel/igc/igc_main.c ++++ b/drivers/net/ethernet/intel/igc/igc_main.c +@@ -7173,6 +7173,7 @@ static int igc_probe(struct pci_dev *pdev, + + err_register: + igc_release_hw_control(adapter); ++ igc_ptp_stop(adapter); + err_eeprom: + if (!igc_check_reset_block(hw)) + igc_reset_phy(hw); +diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c +index 946edbad43022c..612ed26a29c5d4 100644 +--- a/drivers/net/ethernet/intel/igc/igc_ptp.c ++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c +@@ -974,45 +974,62 @@ static void igc_ptm_log_error(struct igc_adapter *adapter, u32 ptm_stat) + } + } + ++/* The PTM lock: adapter->ptm_lock must be held when calling igc_ptm_trigger() */ ++static void igc_ptm_trigger(struct igc_hw *hw) ++{ ++ u32 ctrl; ++ ++ /* To "manually" start the PTM cycle we need to set the ++ * trigger (TRIG) bit ++ */ ++ ctrl = rd32(IGC_PTM_CTRL); ++ ctrl |= IGC_PTM_CTRL_TRIG; ++ wr32(IGC_PTM_CTRL, ctrl); ++ /* Perform flush after write to CTRL register otherwise ++ * transaction may not start ++ */ ++ 
wrfl(); ++} ++ ++/* The PTM lock: adapter->ptm_lock must be held when calling igc_ptm_reset() */ ++static void igc_ptm_reset(struct igc_hw *hw) ++{ ++ u32 ctrl; ++ ++ ctrl = rd32(IGC_PTM_CTRL); ++ ctrl &= ~IGC_PTM_CTRL_TRIG; ++ wr32(IGC_PTM_CTRL, ctrl); ++ /* Write to clear all status */ ++ wr32(IGC_PTM_STAT, IGC_PTM_STAT_ALL); ++} ++ + static int igc_phc_get_syncdevicetime(ktime_t *device, + struct system_counterval_t *system, + void *ctx) + { +- u32 stat, t2_curr_h, t2_curr_l, ctrl; + struct igc_adapter *adapter = ctx; + struct igc_hw *hw = &adapter->hw; ++ u32 stat, t2_curr_h, t2_curr_l; + int err, count = 100; + ktime_t t1, t2_curr; + +- /* Get a snapshot of system clocks to use as historic value. */ +- ktime_get_snapshot(&adapter->snapshot); +- ++ /* Doing this in a loop because in the event of a ++ * badly timed (ha!) system clock adjustment, we may ++ * get PTM errors from the PCI root, but these errors ++ * are transitory. Repeating the process returns valid ++ * data eventually. ++ */ + do { +- /* Doing this in a loop because in the event of a +- * badly timed (ha!) system clock adjustment, we may +- * get PTM errors from the PCI root, but these errors +- * are transitory. Repeating the process returns valid +- * data eventually. +- */ ++ /* Get a snapshot of system clocks to use as historic value. */ ++ ktime_get_snapshot(&adapter->snapshot); + +- /* To "manually" start the PTM cycle we need to clear and +- * then set again the TRIG bit. +- */ +- ctrl = rd32(IGC_PTM_CTRL); +- ctrl &= ~IGC_PTM_CTRL_TRIG; +- wr32(IGC_PTM_CTRL, ctrl); +- ctrl |= IGC_PTM_CTRL_TRIG; +- wr32(IGC_PTM_CTRL, ctrl); +- +- /* The cycle only starts "for real" when software notifies +- * that it has read the registers, this is done by setting +- * VALID bit. +- */ +- wr32(IGC_PTM_STAT, IGC_PTM_STAT_VALID); ++ igc_ptm_trigger(hw); + + err = readx_poll_timeout(rd32, IGC_PTM_STAT, stat, + stat, IGC_PTM_STAT_SLEEP, + IGC_PTM_STAT_TIMEOUT); ++ igc_ptm_reset(hw); ++ + if (err < 0) { + netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n"); + return err; +@@ -1021,15 +1038,7 @@ static int igc_phc_get_syncdevicetime(ktime_t *device, + if ((stat & IGC_PTM_STAT_VALID) == IGC_PTM_STAT_VALID) + break; + +- if (stat & ~IGC_PTM_STAT_VALID) { +- /* An error occurred, log it. */ +- igc_ptm_log_error(adapter, stat); +- /* The STAT register is write-1-to-clear (W1C), +- * so write the previous error status to clear it. 
+- */ +- wr32(IGC_PTM_STAT, stat); +- continue; +- } ++ igc_ptm_log_error(adapter, stat); + } while (--count); + + if (!count) { +@@ -1061,9 +1070,16 @@ static int igc_ptp_getcrosststamp(struct ptp_clock_info *ptp, + { + struct igc_adapter *adapter = container_of(ptp, struct igc_adapter, + ptp_caps); ++ int ret; ++ ++ /* This blocks until any in progress PTM transactions complete */ ++ mutex_lock(&adapter->ptm_lock); + +- return get_device_system_crosststamp(igc_phc_get_syncdevicetime, +- adapter, &adapter->snapshot, cts); ++ ret = get_device_system_crosststamp(igc_phc_get_syncdevicetime, ++ adapter, &adapter->snapshot, cts); ++ mutex_unlock(&adapter->ptm_lock); ++ ++ return ret; + } + + static int igc_ptp_getcyclesx64(struct ptp_clock_info *ptp, +@@ -1162,6 +1178,7 @@ void igc_ptp_init(struct igc_adapter *adapter) + spin_lock_init(&adapter->ptp_tx_lock); + spin_lock_init(&adapter->free_timer_lock); + spin_lock_init(&adapter->tmreg_lock); ++ mutex_init(&adapter->ptm_lock); + + adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE; + adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF; +@@ -1174,6 +1191,7 @@ void igc_ptp_init(struct igc_adapter *adapter) + if (IS_ERR(adapter->ptp_clock)) { + adapter->ptp_clock = NULL; + netdev_err(netdev, "ptp_clock_register failed\n"); ++ mutex_destroy(&adapter->ptm_lock); + } else if (adapter->ptp_clock) { + netdev_info(netdev, "PHC added\n"); + adapter->ptp_flags |= IGC_PTP_ENABLED; +@@ -1203,10 +1221,12 @@ static void igc_ptm_stop(struct igc_adapter *adapter) + struct igc_hw *hw = &adapter->hw; + u32 ctrl; + ++ mutex_lock(&adapter->ptm_lock); + ctrl = rd32(IGC_PTM_CTRL); + ctrl &= ~IGC_PTM_CTRL_EN; + + wr32(IGC_PTM_CTRL, ctrl); ++ mutex_unlock(&adapter->ptm_lock); + } + + /** +@@ -1237,13 +1257,18 @@ void igc_ptp_suspend(struct igc_adapter *adapter) + **/ + void igc_ptp_stop(struct igc_adapter *adapter) + { ++ if (!(adapter->ptp_flags & IGC_PTP_ENABLED)) ++ return; ++ + igc_ptp_suspend(adapter); + ++ adapter->ptp_flags &= ~IGC_PTP_ENABLED; + if (adapter->ptp_clock) { + ptp_clock_unregister(adapter->ptp_clock); + netdev_info(adapter->netdev, "PHC removed\n"); + adapter->ptp_flags &= ~IGC_PTP_ENABLED; + } ++ mutex_destroy(&adapter->ptm_lock); + } + + /** +@@ -1255,10 +1280,13 @@ void igc_ptp_stop(struct igc_adapter *adapter) + void igc_ptp_reset(struct igc_adapter *adapter) + { + struct igc_hw *hw = &adapter->hw; +- u32 cycle_ctrl, ctrl; ++ u32 cycle_ctrl, ctrl, stat; + unsigned long flags; + u32 timadj; + ++ if (!(adapter->ptp_flags & IGC_PTP_ENABLED)) ++ return; ++ + /* reset the tstamp_config */ + igc_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config); + +@@ -1280,6 +1308,7 @@ void igc_ptp_reset(struct igc_adapter *adapter) + if (!igc_is_crosststamp_supported(adapter)) + break; + ++ mutex_lock(&adapter->ptm_lock); + wr32(IGC_PCIE_DIG_DELAY, IGC_PCIE_DIG_DELAY_DEFAULT); + wr32(IGC_PCIE_PHY_DELAY, IGC_PCIE_PHY_DELAY_DEFAULT); + +@@ -1290,14 +1319,20 @@ void igc_ptp_reset(struct igc_adapter *adapter) + ctrl = IGC_PTM_CTRL_EN | + IGC_PTM_CTRL_START_NOW | + IGC_PTM_CTRL_SHRT_CYC(IGC_PTM_SHORT_CYC_DEFAULT) | +- IGC_PTM_CTRL_PTM_TO(IGC_PTM_TIMEOUT_DEFAULT) | +- IGC_PTM_CTRL_TRIG; ++ IGC_PTM_CTRL_PTM_TO(IGC_PTM_TIMEOUT_DEFAULT); + + wr32(IGC_PTM_CTRL, ctrl); + + /* Force the first cycle to run. 
*/ +- wr32(IGC_PTM_STAT, IGC_PTM_STAT_VALID); ++ igc_ptm_trigger(hw); ++ ++ if (readx_poll_timeout_atomic(rd32, IGC_PTM_STAT, stat, ++ stat, IGC_PTM_STAT_SLEEP, ++ IGC_PTM_STAT_TIMEOUT)) ++ netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n"); + ++ igc_ptm_reset(hw); ++ mutex_unlock(&adapter->ptm_lock); + break; + default: + /* No work to do. */ +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index ed7313c10a0524..d408dcda76d794 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -734,7 +734,7 @@ static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx, + case SPEED_100: + val |= MTK_QTX_SCH_MAX_RATE_EN | + FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 103) | +- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3); ++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3) | + FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1); + break; + case SPEED_1000: +@@ -757,13 +757,13 @@ static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx, + case SPEED_100: + val |= MTK_QTX_SCH_MAX_RATE_EN | + FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) | +- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5); ++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) | + FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1); + break; + case SPEED_1000: + val |= MTK_QTX_SCH_MAX_RATE_EN | +- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 10) | +- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) | ++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) | ++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 6) | + FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 10); + break; + default: +@@ -823,9 +823,25 @@ static const struct phylink_mac_ops mtk_phylink_ops = { + .mac_link_up = mtk_mac_link_up, + }; + ++static void mtk_mdio_config(struct mtk_eth *eth) ++{ ++ u32 val; ++ ++ /* Configure MDC Divider */ ++ val = FIELD_PREP(PPSC_MDC_CFG, eth->mdc_divider); ++ ++ /* Configure MDC Turbo Mode */ ++ if (mtk_is_netsys_v3_or_greater(eth)) ++ mtk_m32(eth, 0, MISC_MDC_TURBO, MTK_MAC_MISC_V3); ++ else ++ val |= PPSC_MDC_TURBO; ++ ++ mtk_m32(eth, PPSC_MDC_CFG, val, MTK_PPSC); ++} ++ + static int mtk_mdio_init(struct mtk_eth *eth) + { +- unsigned int max_clk = 2500000, divider; ++ unsigned int max_clk = 2500000; + struct device_node *mii_np; + int ret; + u32 val; +@@ -865,20 +881,9 @@ static int mtk_mdio_init(struct mtk_eth *eth) + } + max_clk = val; + } +- divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63); +- +- /* Configure MDC Turbo Mode */ +- if (mtk_is_netsys_v3_or_greater(eth)) +- mtk_m32(eth, 0, MISC_MDC_TURBO, MTK_MAC_MISC_V3); +- +- /* Configure MDC Divider */ +- val = FIELD_PREP(PPSC_MDC_CFG, divider); +- if (!mtk_is_netsys_v3_or_greater(eth)) +- val |= PPSC_MDC_TURBO; +- mtk_m32(eth, PPSC_MDC_CFG, val, MTK_PPSC); +- +- dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / divider); +- ++ eth->mdc_divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63); ++ mtk_mdio_config(eth); ++ dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / eth->mdc_divider); + ret = of_mdiobus_register(eth->mii_bus, mii_np); + + err_put_node: +@@ -3269,7 +3274,7 @@ static int mtk_start_dma(struct mtk_eth *eth) + if (mtk_is_netsys_v2_or_greater(eth)) + val |= MTK_MUTLI_CNT | MTK_RESV_BUF | + MTK_WCOMP_EN | MTK_DMAD_WR_WDONE | +- MTK_CHK_DDONE_EN | MTK_LEAKY_BUCKET_EN; ++ MTK_CHK_DDONE_EN; + else + val |= MTK_RX_BT_32DWORDS; + mtk_w32(eth, val, reg_map->qdma.glo_cfg); +@@ -3928,6 +3933,10 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset) + else + 
mtk_hw_reset(eth); + ++ /* No MT7628/88 support yet */ ++ if (reset && !MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) ++ mtk_mdio_config(eth); ++ + if (mtk_is_netsys_v3_or_greater(eth)) { + /* Set FE to PDMAv2 if necessary */ + val = mtk_r32(eth, MTK_FE_GLO_MISC); +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +index 0d5225f1d3eef6..8d7b6818d86012 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +@@ -1260,6 +1260,7 @@ struct mtk_eth { + struct clk *clks[MTK_CLK_MAX]; + + struct mii_bus *mii_bus; ++ unsigned int mdc_divider; + struct work_struct pending_work; + unsigned long state; + +diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c +index 308a2b72a65de3..a21e7c0afbfdc8 100644 +--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c ++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c +@@ -2680,7 +2680,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common) + of_property_read_bool(port_np, "ti,mac-only"); + + /* get phy/link info */ +- port->slave.port_np = port_np; ++ port->slave.port_np = of_node_get(port_np); + ret = of_get_phy_mode(port_np, &port->slave.phy_if); + if (ret) { + dev_err(dev, "%pOF read phy-mode err %d\n", +@@ -2741,6 +2741,17 @@ static void am65_cpsw_nuss_phylink_cleanup(struct am65_cpsw_common *common) + } + } + ++static void am65_cpsw_remove_dt(struct am65_cpsw_common *common) ++{ ++ struct am65_cpsw_port *port; ++ int i; ++ ++ for (i = 0; i < common->port_num; i++) { ++ port = &common->ports[i]; ++ of_node_put(port->slave.port_np); ++ } ++} ++ + static int + am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx) + { +@@ -3647,6 +3658,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev) + am65_cpsw_nuss_cleanup_ndev(common); + am65_cpsw_nuss_phylink_cleanup(common); + am65_cpts_release(common->cpts); ++ am65_cpsw_remove_dt(common); + err_of_clear: + if (common->mdio_dev) + of_platform_device_destroy(common->mdio_dev, NULL); +@@ -3686,6 +3698,7 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev) + am65_cpsw_nuss_phylink_cleanup(common); + am65_cpts_release(common->cpts); + am65_cpsw_disable_serdes_phy(common); ++ am65_cpsw_remove_dt(common); + + if (common->mdio_dev) + of_platform_device_destroy(common->mdio_dev, NULL); +diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c +index d59c1744840af2..2a1c43316f462b 100644 +--- a/drivers/net/ethernet/ti/icssg/icss_iep.c ++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c +@@ -406,66 +406,79 @@ static void icss_iep_update_to_next_boundary(struct icss_iep *iep, u64 start_ns) + static int icss_iep_perout_enable_hw(struct icss_iep *iep, + struct ptp_perout_request *req, int on) + { ++ struct timespec64 ts; ++ u64 ns_start; ++ u64 ns_width; + int ret; + u64 cmp; + ++ if (!on) { ++ /* Disable CMP 1 */ ++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, ++ IEP_CMP_CFG_CMP_EN(1), 0); ++ ++ /* clear CMP regs */ ++ regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0); ++ if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT) ++ regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0); ++ ++ /* Disable sync */ ++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); ++ ++ return 0; ++ } ++ ++ /* Calculate width of the signal for PPS/PEROUT handling */ ++ ts.tv_sec = req->on.sec; ++ ts.tv_nsec = req->on.nsec; ++ ns_width = timespec64_to_ns(&ts); ++ ++ if (req->flags & PTP_PEROUT_PHASE) { ++ 
ts.tv_sec = req->phase.sec; ++ ts.tv_nsec = req->phase.nsec; ++ ns_start = timespec64_to_ns(&ts); ++ } else { ++ ns_start = 0; ++ } ++ + if (iep->ops && iep->ops->perout_enable) { + ret = iep->ops->perout_enable(iep->clockops_data, req, on, &cmp); + if (ret) + return ret; + +- if (on) { +- /* Configure CMP */ +- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, lower_32_bits(cmp)); +- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT) +- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, upper_32_bits(cmp)); +- /* Configure SYNC, 1ms pulse width */ +- regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG, 1000000); +- regmap_write(iep->map, ICSS_IEP_SYNC0_PERIOD_REG, 0); +- regmap_write(iep->map, ICSS_IEP_SYNC_START_REG, 0); +- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); /* one-shot mode */ +- /* Enable CMP 1 */ +- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, +- IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1)); +- } else { +- /* Disable CMP 1 */ +- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, +- IEP_CMP_CFG_CMP_EN(1), 0); +- +- /* clear regs */ +- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0); +- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT) +- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0); +- } ++ /* Configure CMP */ ++ regmap_write(iep->map, ICSS_IEP_CMP1_REG0, lower_32_bits(cmp)); ++ if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT) ++ regmap_write(iep->map, ICSS_IEP_CMP1_REG1, upper_32_bits(cmp)); ++ /* Configure SYNC, based on req on width */ ++ regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG, ++ div_u64(ns_width, iep->def_inc)); ++ regmap_write(iep->map, ICSS_IEP_SYNC0_PERIOD_REG, 0); ++ regmap_write(iep->map, ICSS_IEP_SYNC_START_REG, ++ div_u64(ns_start, iep->def_inc)); ++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); /* one-shot mode */ ++ /* Enable CMP 1 */ ++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, ++ IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1)); + } else { +- if (on) { +- u64 start_ns; +- +- iep->period = ((u64)req->period.sec * NSEC_PER_SEC) + +- req->period.nsec; +- start_ns = ((u64)req->period.sec * NSEC_PER_SEC) +- + req->period.nsec; +- icss_iep_update_to_next_boundary(iep, start_ns); +- +- /* Enable Sync in single shot mode */ +- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, +- IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN); +- /* Enable CMP 1 */ +- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, +- IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1)); +- } else { +- /* Disable CMP 1 */ +- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, +- IEP_CMP_CFG_CMP_EN(1), 0); +- +- /* clear CMP regs */ +- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0); +- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT) +- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0); +- +- /* Disable sync */ +- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); +- } ++ u64 start_ns; ++ ++ iep->period = ((u64)req->period.sec * NSEC_PER_SEC) + ++ req->period.nsec; ++ start_ns = ((u64)req->period.sec * NSEC_PER_SEC) ++ + req->period.nsec; ++ icss_iep_update_to_next_boundary(iep, start_ns); ++ ++ regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG, ++ div_u64(ns_width, iep->def_inc)); ++ regmap_write(iep->map, ICSS_IEP_SYNC_START_REG, ++ div_u64(ns_start, iep->def_inc)); ++ /* Enable Sync in single shot mode */ ++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, ++ IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN); ++ /* Enable CMP 1 */ ++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, ++ IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1)); + } + + return 0; +@@ 
-474,7 +487,41 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep, + static int icss_iep_perout_enable(struct icss_iep *iep, + struct ptp_perout_request *req, int on) + { +- return -EOPNOTSUPP; ++ int ret = 0; ++ ++ if (!on) ++ goto disable; ++ ++ /* Reject requests with unsupported flags */ ++ if (req->flags & ~(PTP_PEROUT_DUTY_CYCLE | ++ PTP_PEROUT_PHASE)) ++ return -EOPNOTSUPP; ++ ++ /* Set default "on" time (1ms) for the signal if not passed by the app */ ++ if (!(req->flags & PTP_PEROUT_DUTY_CYCLE)) { ++ req->on.sec = 0; ++ req->on.nsec = NSEC_PER_MSEC; ++ } ++ ++disable: ++ mutex_lock(&iep->ptp_clk_mutex); ++ ++ if (iep->pps_enabled) { ++ ret = -EBUSY; ++ goto exit; ++ } ++ ++ if (iep->perout_enabled == !!on) ++ goto exit; ++ ++ ret = icss_iep_perout_enable_hw(iep, req, on); ++ if (!ret) ++ iep->perout_enabled = !!on; ++ ++exit: ++ mutex_unlock(&iep->ptp_clk_mutex); ++ ++ return ret; + } + + static void icss_iep_cap_cmp_work(struct work_struct *work) +@@ -549,10 +596,13 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on) + if (on) { + ns = icss_iep_gettime(iep, NULL); + ts = ns_to_timespec64(ns); ++ rq.perout.flags = 0; + rq.perout.period.sec = 1; + rq.perout.period.nsec = 0; + rq.perout.start.sec = ts.tv_sec + 2; + rq.perout.start.nsec = 0; ++ rq.perout.on.sec = 0; ++ rq.perout.on.nsec = NSEC_PER_MSEC; + ret = icss_iep_perout_enable_hw(iep, &rq.perout, on); + } else { + ret = icss_iep_perout_enable_hw(iep, &rq.perout, on); +diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c +index 53aeae2f884b01..1be2a5cc4a83c3 100644 +--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c ++++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c +@@ -607,7 +607,7 @@ static int ngbe_probe(struct pci_dev *pdev, + /* setup the private structure */ + err = ngbe_sw_init(wx); + if (err) +- goto err_free_mac_table; ++ goto err_pci_release_regions; + + /* check if flash load is done after hw power up */ + err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PERST); +@@ -701,6 +701,7 @@ static int ngbe_probe(struct pci_dev *pdev, + err_clear_interrupt_scheme: + wx_clear_interrupt_scheme(wx); + err_free_mac_table: ++ kfree(wx->rss_key); + kfree(wx->mac_table); + err_pci_release_regions: + pci_release_selected_regions(pdev, +diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c +index f7745026803643..7e352837184fad 100644 +--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c ++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c +@@ -559,7 +559,7 @@ static int txgbe_probe(struct pci_dev *pdev, + /* setup the private structure */ + err = txgbe_sw_init(wx); + if (err) +- goto err_free_mac_table; ++ goto err_pci_release_regions; + + /* check if flash load is done after hw power up */ + err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PERST); +@@ -717,6 +717,7 @@ static int txgbe_probe(struct pci_dev *pdev, + wx_clear_interrupt_scheme(wx); + wx_control_hw(wx, false); + err_free_mac_table: ++ kfree(wx->rss_key); + kfree(wx->mac_table); + err_pci_release_regions: + pci_release_selected_regions(pdev, +diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c +index 1706ec27eb9c0f..4c98b9de1e5840 100644 +--- a/drivers/net/wireless/ath/ath12k/dp_mon.c ++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c +@@ -2118,7 +2118,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id, int *budget, + dest_idx = 0; + move_next: + 
ath12k_dp_mon_buf_replenish(ab, buf_ring, 1); +- ath12k_hal_srng_src_get_next_entry(ab, srng); ++ ath12k_hal_srng_dst_get_next_entry(ab, srng); + num_buffs_reaped++; + } + +@@ -2533,7 +2533,7 @@ int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id, + dest_idx = 0; + move_next: + ath12k_dp_mon_buf_replenish(ab, buf_ring, 1); +- ath12k_hal_srng_dst_get_next_entry(ab, srng); ++ ath12k_hal_srng_src_get_next_entry(ab, srng); + num_buffs_reaped++; + } + +diff --git a/drivers/net/wireless/atmel/at76c50x-usb.c b/drivers/net/wireless/atmel/at76c50x-usb.c +index 504e05ea30f298..97ea7ab0f49102 100644 +--- a/drivers/net/wireless/atmel/at76c50x-usb.c ++++ b/drivers/net/wireless/atmel/at76c50x-usb.c +@@ -2552,7 +2552,7 @@ static void at76_disconnect(struct usb_interface *interface) + + wiphy_info(priv->hw->wiphy, "disconnecting\n"); + at76_delete_device(priv); +- usb_put_dev(priv->udev); ++ usb_put_dev(interface_to_usbdev(interface)); + dev_info(&interface->dev, "disconnected\n"); + } + +diff --git a/drivers/net/wireless/ti/wl1251/tx.c b/drivers/net/wireless/ti/wl1251/tx.c +index 474b603c121cba..adb4840b048932 100644 +--- a/drivers/net/wireless/ti/wl1251/tx.c ++++ b/drivers/net/wireless/ti/wl1251/tx.c +@@ -342,8 +342,10 @@ void wl1251_tx_work(struct work_struct *work) + while ((skb = skb_dequeue(&wl->tx_queue))) { + if (!woken_up) { + ret = wl1251_ps_elp_wakeup(wl); +- if (ret < 0) ++ if (ret < 0) { ++ skb_queue_head(&wl->tx_queue, skb); + goto out; ++ } + woken_up = true; + } + +diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c +index e79a0adf13950b..328f5a103628fe 100644 +--- a/drivers/nvme/host/apple.c ++++ b/drivers/nvme/host/apple.c +@@ -650,7 +650,7 @@ static bool apple_nvme_handle_cq(struct apple_nvme_queue *q, bool force) + + found = apple_nvme_poll_cq(q, &iob); + +- if (!rq_list_empty(iob.req_list)) ++ if (!rq_list_empty(&iob.req_list)) + apple_nvme_complete_batch(&iob); + + return found; +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index af45a1b865ee10..e70618e8d35eb4 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -985,7 +985,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, + return BLK_STS_OK; + } + +-static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist) ++static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct rq_list *rqlist) + { + struct request *req; + +@@ -1013,11 +1013,10 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req) + return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK; + } + +-static void nvme_queue_rqs(struct request **rqlist) ++static void nvme_queue_rqs(struct rq_list *rqlist) + { +- struct request *submit_list = NULL; +- struct request *requeue_list = NULL; +- struct request **requeue_lastp = &requeue_list; ++ struct rq_list submit_list = { }; ++ struct rq_list requeue_list = { }; + struct nvme_queue *nvmeq = NULL; + struct request *req; + +@@ -1027,9 +1026,9 @@ static void nvme_queue_rqs(struct request **rqlist) + nvmeq = req->mq_hctx->driver_data; + + if (nvme_prep_rq_batch(nvmeq, req)) +- rq_list_add(&submit_list, req); /* reverse order */ ++ rq_list_add_tail(&submit_list, req); + else +- rq_list_add_tail(&requeue_lastp, req); ++ rq_list_add_tail(&requeue_list, req); + } + + if (nvmeq) +@@ -1176,7 +1175,7 @@ static irqreturn_t nvme_irq(int irq, void *data) + DEFINE_IO_COMP_BATCH(iob); + + if (nvme_poll_cq(nvmeq, &iob)) { +- if (!rq_list_empty(iob.req_list)) ++ if (!rq_list_empty(&iob.req_list)) + 
nvme_pci_complete_batch(&iob); + return IRQ_HANDLED; + } +diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c +index 3ef4beacde3257..7318b736d41417 100644 +--- a/drivers/nvme/target/fc.c ++++ b/drivers/nvme/target/fc.c +@@ -172,20 +172,6 @@ struct nvmet_fc_tgt_assoc { + struct work_struct del_work; + }; + +- +-static inline int +-nvmet_fc_iodnum(struct nvmet_fc_ls_iod *iodptr) +-{ +- return (iodptr - iodptr->tgtport->iod); +-} +- +-static inline int +-nvmet_fc_fodnum(struct nvmet_fc_fcp_iod *fodptr) +-{ +- return (fodptr - fodptr->queue->fod); +-} +- +- + /* + * Association and Connection IDs: + * +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index be61fa93d39712..25c07af1686b9b 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -5534,8 +5534,6 @@ static bool pci_bus_resettable(struct pci_bus *bus) + return false; + + list_for_each_entry(dev, &bus->devices, bus_list) { +- if (!pci_reset_supported(dev)) +- return false; + if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || + (dev->subordinate && !pci_bus_resettable(dev->subordinate))) + return false; +@@ -5612,8 +5610,6 @@ static bool pci_slot_resettable(struct pci_slot *slot) + list_for_each_entry(dev, &slot->bus->devices, bus_list) { + if (!dev->slot || dev->slot != slot) + continue; +- if (!pci_reset_supported(dev)) +- return false; + if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || + (dev->subordinate && !pci_bus_resettable(dev->subordinate))) + return false; +diff --git a/drivers/platform/x86/amd/pmf/auto-mode.c b/drivers/platform/x86/amd/pmf/auto-mode.c +index 02ff68be10d012..a184922bba8d65 100644 +--- a/drivers/platform/x86/amd/pmf/auto-mode.c ++++ b/drivers/platform/x86/amd/pmf/auto-mode.c +@@ -120,9 +120,9 @@ static void amd_pmf_set_automode(struct amd_pmf_dev *dev, int idx, + amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, pwr_ctrl->sppt_apu_only, NULL); + amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, pwr_ctrl->stt_min, NULL); + amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, +- pwr_ctrl->stt_skin_temp[STT_TEMP_APU], NULL); ++ fixp_q88_fromint(pwr_ctrl->stt_skin_temp[STT_TEMP_APU]), NULL); + amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, +- pwr_ctrl->stt_skin_temp[STT_TEMP_HS2], NULL); ++ fixp_q88_fromint(pwr_ctrl->stt_skin_temp[STT_TEMP_HS2]), NULL); + + if (is_apmf_func_supported(dev, APMF_FUNC_SET_FAN_IDX)) + apmf_update_fan_idx(dev, config_store.mode_set[idx].fan_control.manual, +diff --git a/drivers/platform/x86/amd/pmf/cnqf.c b/drivers/platform/x86/amd/pmf/cnqf.c +index bc8899e15c914b..207a0b33d8d368 100644 +--- a/drivers/platform/x86/amd/pmf/cnqf.c ++++ b/drivers/platform/x86/amd/pmf/cnqf.c +@@ -81,10 +81,10 @@ static int amd_pmf_set_cnqf(struct amd_pmf_dev *dev, int src, int idx, + amd_pmf_send_cmd(dev, SET_SPPT, false, pc->sppt, NULL); + amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, pc->sppt_apu_only, NULL); + amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, pc->stt_min, NULL); +- amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, pc->stt_skin_temp[STT_TEMP_APU], +- NULL); +- amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, pc->stt_skin_temp[STT_TEMP_HS2], +- NULL); ++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, ++ fixp_q88_fromint(pc->stt_skin_temp[STT_TEMP_APU]), NULL); ++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, ++ fixp_q88_fromint(pc->stt_skin_temp[STT_TEMP_HS2]), NULL); + + if (is_apmf_func_supported(dev, APMF_FUNC_SET_FAN_IDX)) + apmf_update_fan_idx(dev, +diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c +index 
347bb43a5f2b75..719caa2a00f056 100644 +--- a/drivers/platform/x86/amd/pmf/core.c ++++ b/drivers/platform/x86/amd/pmf/core.c +@@ -176,6 +176,20 @@ static void __maybe_unused amd_pmf_dump_registers(struct amd_pmf_dev *dev) + dev_dbg(dev->dev, "AMD_PMF_REGISTER_MESSAGE:%x\n", value); + } + ++/** ++ * fixp_q88_fromint: Convert integer to Q8.8 ++ * @val: input value ++ * ++ * Converts an integer into binary fixed point format where 8 bits ++ * are used for integer and 8 bits are used for the decimal. ++ * ++ * Return: unsigned integer converted to Q8.8 format ++ */ ++u32 fixp_q88_fromint(u32 val) ++{ ++ return val << 8; ++} ++ + int amd_pmf_send_cmd(struct amd_pmf_dev *dev, u8 message, bool get, u32 arg, u32 *data) + { + int rc; +diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h +index 43ba1b9aa1811a..34ba0309a33a2f 100644 +--- a/drivers/platform/x86/amd/pmf/pmf.h ++++ b/drivers/platform/x86/amd/pmf/pmf.h +@@ -746,6 +746,7 @@ int apmf_install_handler(struct amd_pmf_dev *pmf_dev); + int apmf_os_power_slider_update(struct amd_pmf_dev *dev, u8 flag); + int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer); + int amd_pmf_notify_sbios_heartbeat_event_v2(struct amd_pmf_dev *dev, u8 flag); ++u32 fixp_q88_fromint(u32 val); + + /* SPS Layer */ + int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf); +diff --git a/drivers/platform/x86/amd/pmf/sps.c b/drivers/platform/x86/amd/pmf/sps.c +index 92f7fb22277dca..3a24209f7df03e 100644 +--- a/drivers/platform/x86/amd/pmf/sps.c ++++ b/drivers/platform/x86/amd/pmf/sps.c +@@ -198,9 +198,11 @@ static void amd_pmf_update_slider_v2(struct amd_pmf_dev *dev, int idx) + amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, + apts_config_store.val[idx].stt_min_limit, NULL); + amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, +- apts_config_store.val[idx].stt_skin_temp_limit_apu, NULL); ++ fixp_q88_fromint(apts_config_store.val[idx].stt_skin_temp_limit_apu), ++ NULL); + amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, +- apts_config_store.val[idx].stt_skin_temp_limit_hs2, NULL); ++ fixp_q88_fromint(apts_config_store.val[idx].stt_skin_temp_limit_hs2), ++ NULL); + } + + void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx, +@@ -217,9 +219,11 @@ void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx, + amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, + config_store.prop[src][idx].stt_min, NULL); + amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, +- config_store.prop[src][idx].stt_skin_temp[STT_TEMP_APU], NULL); ++ fixp_q88_fromint(config_store.prop[src][idx].stt_skin_temp[STT_TEMP_APU]), ++ NULL); + amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, +- config_store.prop[src][idx].stt_skin_temp[STT_TEMP_HS2], NULL); ++ fixp_q88_fromint(config_store.prop[src][idx].stt_skin_temp[STT_TEMP_HS2]), ++ NULL); + } else if (op == SLIDER_OP_GET) { + amd_pmf_send_cmd(dev, GET_SPL, true, ARG_NONE, &table->prop[src][idx].spl); + amd_pmf_send_cmd(dev, GET_FPPT, true, ARG_NONE, &table->prop[src][idx].fppt); +diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c +index 09131507d7a925..cb5abab2210a7b 100644 +--- a/drivers/platform/x86/amd/pmf/tee-if.c ++++ b/drivers/platform/x86/amd/pmf/tee-if.c +@@ -123,7 +123,8 @@ static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_ + + case PMF_POLICY_STT_SKINTEMP_APU: + if (dev->prev_data->stt_skintemp_apu != val) { +- amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, val, NULL); ++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, 
false, ++ fixp_q88_fromint(val), NULL); + dev_dbg(dev->dev, "update STT_SKINTEMP_APU: %u\n", val); + dev->prev_data->stt_skintemp_apu = val; + } +@@ -131,7 +132,8 @@ static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_ + + case PMF_POLICY_STT_SKINTEMP_HS2: + if (dev->prev_data->stt_skintemp_hs2 != val) { +- amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, val, NULL); ++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, ++ fixp_q88_fromint(val), NULL); + dev_dbg(dev->dev, "update STT_SKINTEMP_HS2: %u\n", val); + dev->prev_data->stt_skintemp_hs2 = val; + } +diff --git a/drivers/platform/x86/asus-laptop.c b/drivers/platform/x86/asus-laptop.c +index 9d7e6b712abf11..8d2e6d8be9e54a 100644 +--- a/drivers/platform/x86/asus-laptop.c ++++ b/drivers/platform/x86/asus-laptop.c +@@ -426,11 +426,14 @@ static int asus_pega_lucid_set(struct asus_laptop *asus, int unit, bool enable) + + static int pega_acc_axis(struct asus_laptop *asus, int curr, char *method) + { ++ unsigned long long val = (unsigned long long)curr; ++ acpi_status status; + int i, delta; +- unsigned long long val; +- for (i = 0; i < PEGA_ACC_RETRIES; i++) { +- acpi_evaluate_integer(asus->handle, method, NULL, &val); + ++ for (i = 0; i < PEGA_ACC_RETRIES; i++) { ++ status = acpi_evaluate_integer(asus->handle, method, NULL, &val); ++ if (ACPI_FAILURE(status)) ++ continue; + /* The output is noisy. From reading the ASL + * disassembly, timeout errors are returned with 1's + * in the high word, and the lack of locking around +diff --git a/drivers/platform/x86/msi-wmi-platform.c b/drivers/platform/x86/msi-wmi-platform.c +index 9b5c7f8c79b0dd..dc5e9878cb6822 100644 +--- a/drivers/platform/x86/msi-wmi-platform.c ++++ b/drivers/platform/x86/msi-wmi-platform.c +@@ -10,6 +10,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -17,6 +18,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -76,8 +78,13 @@ enum msi_wmi_platform_method { + MSI_PLATFORM_GET_WMI = 0x1d, + }; + +-struct msi_wmi_platform_debugfs_data { ++struct msi_wmi_platform_data { + struct wmi_device *wdev; ++ struct mutex wmi_lock; /* Necessary when calling WMI methods */ ++}; ++ ++struct msi_wmi_platform_debugfs_data { ++ struct msi_wmi_platform_data *data; + enum msi_wmi_platform_method method; + struct rw_semaphore buffer_lock; /* Protects debugfs buffer */ + size_t length; +@@ -132,8 +139,9 @@ static int msi_wmi_platform_parse_buffer(union acpi_object *obj, u8 *output, siz + return 0; + } + +-static int msi_wmi_platform_query(struct wmi_device *wdev, enum msi_wmi_platform_method method, +- u8 *input, size_t input_length, u8 *output, size_t output_length) ++static int msi_wmi_platform_query(struct msi_wmi_platform_data *data, ++ enum msi_wmi_platform_method method, u8 *input, ++ size_t input_length, u8 *output, size_t output_length) + { + struct acpi_buffer out = { ACPI_ALLOCATE_BUFFER, NULL }; + struct acpi_buffer in = { +@@ -147,9 +155,15 @@ static int msi_wmi_platform_query(struct wmi_device *wdev, enum msi_wmi_platform + if (!input_length || !output_length) + return -EINVAL; + +- status = wmidev_evaluate_method(wdev, 0x0, method, &in, &out); +- if (ACPI_FAILURE(status)) +- return -EIO; ++ /* ++ * The ACPI control method responsible for handling the WMI method calls ++ * is not thread-safe. Because of this we have to do the locking ourselves. 
++ */ ++ scoped_guard(mutex, &data->wmi_lock) { ++ status = wmidev_evaluate_method(data->wdev, 0x0, method, &in, &out); ++ if (ACPI_FAILURE(status)) ++ return -EIO; ++ } + + obj = out.pointer; + if (!obj) +@@ -170,22 +184,22 @@ static umode_t msi_wmi_platform_is_visible(const void *drvdata, enum hwmon_senso + static int msi_wmi_platform_read(struct device *dev, enum hwmon_sensor_types type, u32 attr, + int channel, long *val) + { +- struct wmi_device *wdev = dev_get_drvdata(dev); ++ struct msi_wmi_platform_data *data = dev_get_drvdata(dev); + u8 input[32] = { 0 }; + u8 output[32]; +- u16 data; ++ u16 value; + int ret; + +- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_FAN, input, sizeof(input), output, ++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_FAN, input, sizeof(input), output, + sizeof(output)); + if (ret < 0) + return ret; + +- data = get_unaligned_be16(&output[channel * 2 + 1]); +- if (!data) ++ value = get_unaligned_be16(&output[channel * 2 + 1]); ++ if (!value) + *val = 0; + else +- *val = 480000 / data; ++ *val = 480000 / value; + + return 0; + } +@@ -231,7 +245,7 @@ static ssize_t msi_wmi_platform_write(struct file *fp, const char __user *input, + return ret; + + down_write(&data->buffer_lock); +- ret = msi_wmi_platform_query(data->wdev, data->method, payload, data->length, data->buffer, ++ ret = msi_wmi_platform_query(data->data, data->method, payload, data->length, data->buffer, + data->length); + up_write(&data->buffer_lock); + +@@ -277,17 +291,17 @@ static void msi_wmi_platform_debugfs_remove(void *data) + debugfs_remove_recursive(dir); + } + +-static void msi_wmi_platform_debugfs_add(struct wmi_device *wdev, struct dentry *dir, ++static void msi_wmi_platform_debugfs_add(struct msi_wmi_platform_data *drvdata, struct dentry *dir, + const char *name, enum msi_wmi_platform_method method) + { + struct msi_wmi_platform_debugfs_data *data; + struct dentry *entry; + +- data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL); ++ data = devm_kzalloc(&drvdata->wdev->dev, sizeof(*data), GFP_KERNEL); + if (!data) + return; + +- data->wdev = wdev; ++ data->data = drvdata; + data->method = method; + init_rwsem(&data->buffer_lock); + +@@ -298,82 +312,82 @@ static void msi_wmi_platform_debugfs_add(struct wmi_device *wdev, struct dentry + + entry = debugfs_create_file(name, 0600, dir, data, &msi_wmi_platform_debugfs_fops); + if (IS_ERR(entry)) +- devm_kfree(&wdev->dev, data); ++ devm_kfree(&drvdata->wdev->dev, data); + } + +-static void msi_wmi_platform_debugfs_init(struct wmi_device *wdev) ++static void msi_wmi_platform_debugfs_init(struct msi_wmi_platform_data *data) + { + struct dentry *dir; + char dir_name[64]; + int ret, method; + +- scnprintf(dir_name, ARRAY_SIZE(dir_name), "%s-%s", DRIVER_NAME, dev_name(&wdev->dev)); ++ scnprintf(dir_name, ARRAY_SIZE(dir_name), "%s-%s", DRIVER_NAME, dev_name(&data->wdev->dev)); + + dir = debugfs_create_dir(dir_name, NULL); + if (IS_ERR(dir)) + return; + +- ret = devm_add_action_or_reset(&wdev->dev, msi_wmi_platform_debugfs_remove, dir); ++ ret = devm_add_action_or_reset(&data->wdev->dev, msi_wmi_platform_debugfs_remove, dir); + if (ret < 0) + return; + + for (method = MSI_PLATFORM_GET_PACKAGE; method <= MSI_PLATFORM_GET_WMI; method++) +- msi_wmi_platform_debugfs_add(wdev, dir, msi_wmi_platform_debugfs_names[method - 1], ++ msi_wmi_platform_debugfs_add(data, dir, msi_wmi_platform_debugfs_names[method - 1], + method); + } + +-static int msi_wmi_platform_hwmon_init(struct wmi_device *wdev) ++static int msi_wmi_platform_hwmon_init(struct 
msi_wmi_platform_data *data) + { + struct device *hdev; + +- hdev = devm_hwmon_device_register_with_info(&wdev->dev, "msi_wmi_platform", wdev, ++ hdev = devm_hwmon_device_register_with_info(&data->wdev->dev, "msi_wmi_platform", data, + &msi_wmi_platform_chip_info, NULL); + + return PTR_ERR_OR_ZERO(hdev); + } + +-static int msi_wmi_platform_ec_init(struct wmi_device *wdev) ++static int msi_wmi_platform_ec_init(struct msi_wmi_platform_data *data) + { + u8 input[32] = { 0 }; + u8 output[32]; + u8 flags; + int ret; + +- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_EC, input, sizeof(input), output, ++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_EC, input, sizeof(input), output, + sizeof(output)); + if (ret < 0) + return ret; + + flags = output[MSI_PLATFORM_EC_FLAGS_OFFSET]; + +- dev_dbg(&wdev->dev, "EC RAM version %lu.%lu\n", ++ dev_dbg(&data->wdev->dev, "EC RAM version %lu.%lu\n", + FIELD_GET(MSI_PLATFORM_EC_MAJOR_MASK, flags), + FIELD_GET(MSI_PLATFORM_EC_MINOR_MASK, flags)); +- dev_dbg(&wdev->dev, "EC firmware version %.28s\n", ++ dev_dbg(&data->wdev->dev, "EC firmware version %.28s\n", + &output[MSI_PLATFORM_EC_VERSION_OFFSET]); + + if (!(flags & MSI_PLATFORM_EC_IS_TIGERLAKE)) { + if (!force) + return -ENODEV; + +- dev_warn(&wdev->dev, "Loading on a non-Tigerlake platform\n"); ++ dev_warn(&data->wdev->dev, "Loading on a non-Tigerlake platform\n"); + } + + return 0; + } + +-static int msi_wmi_platform_init(struct wmi_device *wdev) ++static int msi_wmi_platform_init(struct msi_wmi_platform_data *data) + { + u8 input[32] = { 0 }; + u8 output[32]; + int ret; + +- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_WMI, input, sizeof(input), output, ++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_WMI, input, sizeof(input), output, + sizeof(output)); + if (ret < 0) + return ret; + +- dev_dbg(&wdev->dev, "WMI interface version %u.%u\n", ++ dev_dbg(&data->wdev->dev, "WMI interface version %u.%u\n", + output[MSI_PLATFORM_WMI_MAJOR_OFFSET], + output[MSI_PLATFORM_WMI_MINOR_OFFSET]); + +@@ -381,7 +395,8 @@ static int msi_wmi_platform_init(struct wmi_device *wdev) + if (!force) + return -ENODEV; + +- dev_warn(&wdev->dev, "Loading despite unsupported WMI interface version (%u.%u)\n", ++ dev_warn(&data->wdev->dev, ++ "Loading despite unsupported WMI interface version (%u.%u)\n", + output[MSI_PLATFORM_WMI_MAJOR_OFFSET], + output[MSI_PLATFORM_WMI_MINOR_OFFSET]); + } +@@ -391,19 +406,31 @@ static int msi_wmi_platform_init(struct wmi_device *wdev) + + static int msi_wmi_platform_probe(struct wmi_device *wdev, const void *context) + { ++ struct msi_wmi_platform_data *data; + int ret; + +- ret = msi_wmi_platform_init(wdev); ++ data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL); ++ if (!data) ++ return -ENOMEM; ++ ++ data->wdev = wdev; ++ dev_set_drvdata(&wdev->dev, data); ++ ++ ret = devm_mutex_init(&wdev->dev, &data->wmi_lock); ++ if (ret < 0) ++ return ret; ++ ++ ret = msi_wmi_platform_init(data); + if (ret < 0) + return ret; + +- ret = msi_wmi_platform_ec_init(wdev); ++ ret = msi_wmi_platform_ec_init(data); + if (ret < 0) + return ret; + +- msi_wmi_platform_debugfs_init(wdev); ++ msi_wmi_platform_debugfs_init(data); + +- return msi_wmi_platform_hwmon_init(wdev); ++ return msi_wmi_platform_hwmon_init(data); + } + + static const struct wmi_device_id msi_wmi_platform_id_table[] = { +diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c +index 120db96d9e95d6..0eeb503e06c230 100644 +--- a/drivers/ptp/ptp_ocp.c ++++ b/drivers/ptp/ptp_ocp.c +@@ -2067,6 +2067,7 @@ 
ptp_ocp_signal_set(struct ptp_ocp *bp, int gen, struct ptp_ocp_signal *s) + if (!s->start) { + /* roundup() does not work on 32-bit systems */ + s->start = DIV64_U64_ROUND_UP(start_ns, s->period); ++ s->start *= s->period; + s->start = ktime_add(s->start, s->phase); + } + +diff --git a/drivers/ras/amd/atl/internal.h b/drivers/ras/amd/atl/internal.h +index 143d04c779a821..b7c7d5ba4d9dd1 100644 +--- a/drivers/ras/amd/atl/internal.h ++++ b/drivers/ras/amd/atl/internal.h +@@ -361,4 +361,7 @@ static inline void atl_debug_on_bad_intlv_mode(struct addr_ctx *ctx) + atl_debug(ctx, "Unrecognized interleave mode: %u", ctx->map.intlv_mode); + } + ++#define MI300_UMC_MCA_COL GENMASK(5, 1) ++#define MI300_UMC_MCA_ROW13 BIT(23) ++ + #endif /* __AMD_ATL_INTERNAL_H__ */ +diff --git a/drivers/ras/amd/atl/umc.c b/drivers/ras/amd/atl/umc.c +index dc8aa12f63c811..6e072b7667e98b 100644 +--- a/drivers/ras/amd/atl/umc.c ++++ b/drivers/ras/amd/atl/umc.c +@@ -229,7 +229,6 @@ int get_umc_info_mi300(void) + * Additionally, the PC and Bank bits may be hashed. This must be accounted for before + * reconstructing the normalized address. + */ +-#define MI300_UMC_MCA_COL GENMASK(5, 1) + #define MI300_UMC_MCA_BANK GENMASK(9, 6) + #define MI300_UMC_MCA_ROW GENMASK(24, 10) + #define MI300_UMC_MCA_PC BIT(25) +@@ -320,7 +319,7 @@ static unsigned long convert_dram_to_norm_addr_mi300(unsigned long addr) + * See amd_atl::convert_dram_to_norm_addr_mi300() for MI300 address formats. + */ + #define MI300_NUM_COL BIT(HWEIGHT(MI300_UMC_MCA_COL)) +-static void retire_row_mi300(struct atl_err *a_err) ++static void _retire_row_mi300(struct atl_err *a_err) + { + unsigned long addr; + struct page *p; +@@ -351,6 +350,22 @@ static void retire_row_mi300(struct atl_err *a_err) + } + } + ++/* ++ * In addition to the column bits, the row[13] bit should also be included when ++ * calculating addresses affected by a physical row. ++ * ++ * Instead of running through another loop over a single bit, just run through ++ * the column bits twice and flip the row[13] bit in-between. ++ * ++ * See MI300_UMC_MCA_ROW for the row bits in MCA_ADDR_UMC value. ++ */ ++static void retire_row_mi300(struct atl_err *a_err) ++{ ++ _retire_row_mi300(a_err); ++ a_err->addr ^= MI300_UMC_MCA_ROW13; ++ _retire_row_mi300(a_err); ++} ++ + void amd_retire_dram_row(struct atl_err *a_err) + { + if (df_cfg.rev == DF4p5 && df_cfg.flags.heterogeneous) +diff --git a/drivers/ras/amd/fmpm.c b/drivers/ras/amd/fmpm.c +index 90de737fbc9097..8877c6ff64c468 100644 +--- a/drivers/ras/amd/fmpm.c ++++ b/drivers/ras/amd/fmpm.c +@@ -250,6 +250,13 @@ static bool rec_has_valid_entries(struct fru_rec *rec) + return true; + } + ++/* ++ * Row retirement is done on MI300 systems, and some bits are 'don't ++ * care' for comparing addresses with unique physical rows. This ++ * includes all column bits and the row[13] bit. ++ */ ++#define MASK_ADDR(addr) ((addr) & ~(MI300_UMC_MCA_ROW13 | MI300_UMC_MCA_COL)) ++ + static bool fpds_equal(struct cper_fru_poison_desc *old, struct cper_fru_poison_desc *new) + { + /* +@@ -258,7 +265,7 @@ static bool fpds_equal(struct cper_fru_poison_desc *old, struct cper_fru_poison_ + * + * Also, order the checks from most->least likely to fail to shortcut the code. 
+ */ +- if (old->addr != new->addr) ++ if (MASK_ADDR(old->addr) != MASK_ADDR(new->addr)) + return false; + + if (old->hw_id != new->hw_id) +diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c +index adec0df24bc475..1cb517f731f4ac 100644 +--- a/drivers/scsi/fnic/fnic_main.c ++++ b/drivers/scsi/fnic/fnic_main.c +@@ -16,7 +16,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -601,7 +600,7 @@ void fnic_mq_map_queues_cpus(struct Scsi_Host *host) + return; + } + +- blk_mq_pci_map_queues(qmap, l_pdev, FNIC_PCI_OFFSET); ++ blk_mq_map_hw_queues(qmap, &l_pdev->dev, FNIC_PCI_OFFSET); + } + + static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) +diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h +index d223f482488fc6..010479a354eeeb 100644 +--- a/drivers/scsi/hisi_sas/hisi_sas.h ++++ b/drivers/scsi/hisi_sas/hisi_sas.h +@@ -9,7 +9,6 @@ + + #include + #include +-#include + #include + #include + #include +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +index 342d75f12051d2..89ff33daba4041 100644 +--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c ++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +@@ -2501,6 +2501,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba, + struct hisi_sas_port *port = to_hisi_sas_port(sas_port); + struct sas_ata_task *ata_task = &task->ata_task; + struct sas_tmf_task *tmf = slot->tmf; ++ int phy_id; + u8 *buf_cmd; + int has_data = 0, hdr_tag = 0; + u32 dw0, dw1 = 0, dw2 = 0; +@@ -2508,10 +2509,14 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba, + /* create header */ + /* dw0 */ + dw0 = port->id << CMD_HDR_PORT_OFF; +- if (parent_dev && dev_is_expander(parent_dev->dev_type)) ++ if (parent_dev && dev_is_expander(parent_dev->dev_type)) { + dw0 |= 3 << CMD_HDR_CMD_OFF; +- else ++ } else { ++ phy_id = device->phy->identify.phy_identifier; ++ dw0 |= (1U << phy_id) << CMD_HDR_PHY_ID_OFF; ++ dw0 |= CMD_HDR_FORCE_PHY_MSK; + dw0 |= 4 << CMD_HDR_CMD_OFF; ++ } + + if (tmf && ata_task->force_phy) { + dw0 |= CMD_HDR_FORCE_PHY_MSK; +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +index cd394d8c9f07f0..2b04556681a1ac 100644 +--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c ++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +@@ -358,6 +358,10 @@ + #define CMD_HDR_RESP_REPORT_MSK (0x1 << CMD_HDR_RESP_REPORT_OFF) + #define CMD_HDR_TLR_CTRL_OFF 6 + #define CMD_HDR_TLR_CTRL_MSK (0x3 << CMD_HDR_TLR_CTRL_OFF) ++#define CMD_HDR_PHY_ID_OFF 8 ++#define CMD_HDR_PHY_ID_MSK (0x1ff << CMD_HDR_PHY_ID_OFF) ++#define CMD_HDR_FORCE_PHY_OFF 17 ++#define CMD_HDR_FORCE_PHY_MSK (0x1U << CMD_HDR_FORCE_PHY_OFF) + #define CMD_HDR_PORT_OFF 18 + #define CMD_HDR_PORT_MSK (0xf << CMD_HDR_PORT_OFF) + #define CMD_HDR_PRIORITY_OFF 27 +@@ -1425,15 +1429,21 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba, + struct hisi_sas_cmd_hdr *hdr = slot->cmd_hdr; + struct asd_sas_port *sas_port = device->port; + struct hisi_sas_port *port = to_hisi_sas_port(sas_port); ++ int phy_id; + u8 *buf_cmd; + int has_data = 0, hdr_tag = 0; + u32 dw1 = 0, dw2 = 0; + + hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF); +- if (parent_dev && dev_is_expander(parent_dev->dev_type)) ++ if (parent_dev && dev_is_expander(parent_dev->dev_type)) { + hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF); +- else ++ } else { ++ phy_id = device->phy->identify.phy_identifier; ++ hdr->dw0 |= cpu_to_le32((1U << phy_id) ++ << CMD_HDR_PHY_ID_OFF); ++ hdr->dw0 |= 
CMD_HDR_FORCE_PHY_MSK; + hdr->dw0 |= cpu_to_le32(4U << CMD_HDR_CMD_OFF); ++ } + + switch (task->data_dir) { + case DMA_TO_DEVICE: +@@ -3323,8 +3333,8 @@ static void hisi_sas_map_queues(struct Scsi_Host *shost) + if (i == HCTX_TYPE_POLL) + blk_mq_map_queues(qmap); + else +- blk_mq_pci_map_queues(qmap, hisi_hba->pci_dev, +- BASE_VECTORS_V3_HW); ++ blk_mq_map_hw_queues(qmap, hisi_hba->dev, ++ BASE_VECTORS_V3_HW); + qoff += qmap->nr_queues; + } + } +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c +index 50f1dcb6d58460..21f22e913cd08d 100644 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c +@@ -37,7 +37,6 @@ + #include + #include + #include +-#include + + #include + #include +@@ -2104,6 +2103,9 @@ static int megasas_device_configure(struct scsi_device *sdev, + /* This sdev property may change post OCR */ + megasas_set_dynamic_target_properties(sdev, lim, is_target_prop); + ++ if (!MEGASAS_IS_LOGICAL(sdev)) ++ sdev->no_vpd_size = 1; ++ + mutex_unlock(&instance->reset_mutex); + + return 0; +@@ -3193,7 +3195,7 @@ static void megasas_map_queues(struct Scsi_Host *shost) + map = &shost->tag_set.map[HCTX_TYPE_DEFAULT]; + map->nr_queues = instance->msix_vectors - offset; + map->queue_offset = 0; +- blk_mq_pci_map_queues(map, instance->pdev, offset); ++ blk_mq_map_hw_queues(map, &instance->pdev->dev, offset); + qoff += map->nr_queues; + offset += map->nr_queues; + +@@ -3663,8 +3665,10 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, + + case MFI_STAT_SCSI_IO_FAILED: + case MFI_STAT_LD_INIT_IN_PROGRESS: +- cmd->scmd->result = +- (DID_ERROR << 16) | hdr->scsi_status; ++ if (hdr->scsi_status == 0xf0) ++ cmd->scmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION; ++ else ++ cmd->scmd->result = (DID_ERROR << 16) | hdr->scsi_status; + break; + + case MFI_STAT_SCSI_DONE_WITH_ERROR: +diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c +index 1eec23da28e2d6..1eea4df9e47d35 100644 +--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c ++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c +@@ -2043,7 +2043,10 @@ map_cmd_status(struct fusion_context *fusion, + + case MFI_STAT_SCSI_IO_FAILED: + case MFI_STAT_LD_INIT_IN_PROGRESS: +- scmd->result = (DID_ERROR << 16) | ext_status; ++ if (ext_status == 0xf0) ++ scmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION; ++ else ++ scmd->result = (DID_ERROR << 16) | ext_status; + break; + + case MFI_STAT_SCSI_DONE_WITH_ERROR: +diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h +index ee5a75a4b3bb80..ab7c5f1fc04121 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr.h ++++ b/drivers/scsi/mpi3mr/mpi3mr.h +@@ -12,7 +12,6 @@ + + #include + #include +-#include + #include + #include + #include +diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c +index 1bef88130d0c06..1e8735538b238e 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr_os.c ++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c +@@ -4042,7 +4042,7 @@ static void mpi3mr_map_queues(struct Scsi_Host *shost) + */ + map->queue_offset = qoff; + if (i != HCTX_TYPE_POLL) +- blk_mq_pci_map_queues(map, mrioc->pdev, offset); ++ blk_mq_map_hw_queues(map, &mrioc->pdev->dev, offset); + else + blk_mq_map_queues(map); + +diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c +index f2a55aa5fe6503..9599d7a5002868 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c ++++ 
b/drivers/scsi/mpt3sas/mpt3sas_scsih.c +@@ -53,7 +53,6 @@ + #include + #include + #include +-#include + #include + + #include "mpt3sas_base.h" +@@ -11890,7 +11889,7 @@ static void scsih_map_queues(struct Scsi_Host *shost) + */ + map->queue_offset = qoff; + if (i != HCTX_TYPE_POLL) +- blk_mq_pci_map_queues(map, ioc->pdev, offset); ++ blk_mq_map_hw_queues(map, &ioc->pdev->dev, offset); + else + blk_mq_map_queues(map); + +diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c +index 33e1eba62ca12c..b53b1ae5b74c30 100644 +--- a/drivers/scsi/pm8001/pm8001_init.c ++++ b/drivers/scsi/pm8001/pm8001_init.c +@@ -101,7 +101,7 @@ static void pm8001_map_queues(struct Scsi_Host *shost) + struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT]; + + if (pm8001_ha->number_of_intr > 1) { +- blk_mq_pci_map_queues(qmap, pm8001_ha->pdev, 1); ++ blk_mq_map_hw_queues(qmap, &pm8001_ha->pdev->dev, 1); + return; + } + +diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h +index ced6721380a853..c46470e0cf63b7 100644 +--- a/drivers/scsi/pm8001/pm8001_sas.h ++++ b/drivers/scsi/pm8001/pm8001_sas.h +@@ -56,7 +56,6 @@ + #include + #include + #include +-#include + #include "pm8001_defs.h" + + #define DRV_NAME "pm80xx" +diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c +index 8f4cc136a9c9c4..8ee2e337c9e1b7 100644 +--- a/drivers/scsi/qla2xxx/qla_nvme.c ++++ b/drivers/scsi/qla2xxx/qla_nvme.c +@@ -8,7 +8,6 @@ + #include + #include + #include +-#include + #include + + static struct nvme_fc_port_template qla_nvme_fc_transport; +@@ -841,7 +840,7 @@ static void qla_nvme_map_queues(struct nvme_fc_local_port *lport, + { + struct scsi_qla_host *vha = lport->private; + +- blk_mq_pci_map_queues(map, vha->hw->pdev, vha->irq_offset); ++ blk_mq_map_hw_queues(map, &vha->hw->pdev->dev, vha->irq_offset); + } + + static void qla_nvme_localport_delete(struct nvme_fc_local_port *lport) +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c +index 7ab717ed72327e..31535beaaa161c 100644 +--- a/drivers/scsi/qla2xxx/qla_os.c ++++ b/drivers/scsi/qla2xxx/qla_os.c +@@ -13,7 +13,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -8071,7 +8070,8 @@ static void qla2xxx_map_queues(struct Scsi_Host *shost) + if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase) + blk_mq_map_queues(qmap); + else +- blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset); ++ blk_mq_map_hw_queues(qmap, &vha->hw->pdev->dev, ++ vha->irq_offset); + } + + struct scsi_host_template qla2xxx_driver_template = { +diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c +index 9b47f91c5b9720..8274fe0ec7146f 100644 +--- a/drivers/scsi/scsi_transport_iscsi.c ++++ b/drivers/scsi/scsi_transport_iscsi.c +@@ -3209,11 +3209,14 @@ iscsi_set_host_param(struct iscsi_transport *transport, + } + + /* see similar check in iscsi_if_set_param() */ +- if (strlen(data) > ev->u.set_host_param.len) +- return -EINVAL; ++ if (strlen(data) > ev->u.set_host_param.len) { ++ err = -EINVAL; ++ goto out; ++ } + + err = transport->set_host_param(shost, ev->u.set_host_param.param, + data, ev->u.set_host_param.len); ++out: + scsi_host_put(shost); + return err; + } +diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c +index 870f37b7054644..d919a74746a056 100644 +--- a/drivers/scsi/smartpqi/smartpqi_init.c ++++ b/drivers/scsi/smartpqi/smartpqi_init.c +@@ -19,7 +19,7 @@ + #include + #include + 
#include +-#include ++#include + #include + #include + #include +@@ -5247,7 +5247,7 @@ static void pqi_calculate_io_resources(struct pqi_ctrl_info *ctrl_info) + ctrl_info->error_buffer_length = + ctrl_info->max_io_slots * PQI_ERROR_BUFFER_ELEMENT_LENGTH; + +- if (reset_devices) ++ if (is_kdump_kernel()) + max_transfer_size = min(ctrl_info->max_transfer_size, + PQI_MAX_TRANSFER_SIZE_KDUMP); + else +@@ -5276,7 +5276,7 @@ static void pqi_calculate_queue_resources(struct pqi_ctrl_info *ctrl_info) + u16 num_elements_per_iq; + u16 num_elements_per_oq; + +- if (reset_devices) { ++ if (is_kdump_kernel()) { + num_queue_groups = 1; + } else { + int num_cpus; +@@ -6547,10 +6547,10 @@ static void pqi_map_queues(struct Scsi_Host *shost) + struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); + + if (!ctrl_info->disable_managed_interrupts) +- return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT], +- ctrl_info->pci_dev, 0); ++ blk_mq_map_hw_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT], ++ &ctrl_info->pci_dev->dev, 0); + else +- return blk_mq_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT]); ++ blk_mq_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT]); + } + + static inline bool pqi_is_tape_changer_device(struct pqi_scsi_dev *device) +@@ -8288,12 +8288,12 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) + u32 product_id; + + if (reset_devices) { +- if (pqi_is_fw_triage_supported(ctrl_info)) { ++ if (is_kdump_kernel() && pqi_is_fw_triage_supported(ctrl_info)) { + rc = sis_wait_for_fw_triage_completion(ctrl_info); + if (rc) + return rc; + } +- if (sis_is_ctrl_logging_supported(ctrl_info)) { ++ if (is_kdump_kernel() && sis_is_ctrl_logging_supported(ctrl_info)) { + sis_notify_kdump(ctrl_info); + rc = sis_wait_for_ctrl_logging_completion(ctrl_info); + if (rc) +@@ -8344,7 +8344,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) + ctrl_info->product_id = (u8)product_id; + ctrl_info->product_revision = (u8)(product_id >> 8); + +- if (reset_devices) { ++ if (is_kdump_kernel()) { + if (ctrl_info->max_outstanding_requests > + PQI_MAX_OUTSTANDING_REQUESTS_KDUMP) + ctrl_info->max_outstanding_requests = +@@ -8480,7 +8480,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) + if (rc) + return rc; + +- if (ctrl_info->ctrl_logging_supported && !reset_devices) { ++ if (ctrl_info->ctrl_logging_supported && !is_kdump_kernel()) { + pqi_host_setup_buffer(ctrl_info, &ctrl_info->ctrl_log_memory, PQI_CTRL_LOG_TOTAL_SIZE, PQI_CTRL_LOG_MIN_SIZE); + pqi_host_memory_update(ctrl_info, &ctrl_info->ctrl_log_memory, PQI_VENDOR_GENERAL_CTRL_LOG_MEMORY_UPDATE); + } +diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c +index 98505c68103d0e..f2cbfc2d399cdb 100644 +--- a/drivers/ufs/host/ufs-exynos.c ++++ b/drivers/ufs/host/ufs-exynos.c +@@ -915,6 +915,12 @@ static int exynos_ufs_phy_init(struct exynos_ufs *ufs) + } + + phy_set_bus_width(generic_phy, ufs->avail_ln_rx); ++ ++ if (generic_phy->power_count) { ++ phy_power_off(generic_phy); ++ phy_exit(generic_phy); ++ } ++ + ret = phy_init(generic_phy); + if (ret) { + dev_err(hba->dev, "%s: phy init failed, ret = %d\n", +diff --git a/fs/Kconfig b/fs/Kconfig +index aae170fc279524..3117304676331c 100644 +--- a/fs/Kconfig ++++ b/fs/Kconfig +@@ -369,6 +369,7 @@ config GRACE_PERIOD + config LOCKD + tristate + depends on FILE_LOCKING ++ select CRC32 + select GRACE_PERIOD + + config LOCKD_V4 +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index 73343503ea60e4..08ccf5d5e14407 100644 +--- a/fs/btrfs/super.c ++++ 
b/fs/btrfs/super.c +@@ -1140,8 +1140,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) + subvol_name = btrfs_get_subvol_name_from_objectid(info, + btrfs_root_id(BTRFS_I(d_inode(dentry))->root)); + if (!IS_ERR(subvol_name)) { +- seq_puts(seq, ",subvol="); +- seq_escape(seq, subvol_name, " \t\n\\"); ++ seq_show_option(seq, "subvol", subvol_name); + kfree(subvol_name); + } + return 0; +diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c +index d220e28e755fef..749c9f66d74c6d 100644 +--- a/fs/fuse/virtio_fs.c ++++ b/fs/fuse/virtio_fs.c +@@ -1663,6 +1663,9 @@ static int virtio_fs_get_tree(struct fs_context *fsc) + unsigned int virtqueue_size; + int err = -EIO; + ++ if (!fsc->source) ++ return invalf(fsc, "No source specified"); ++ + /* This gets a reference on virtio_fs object. This ptr gets installed + * in fc->iq->priv. Once fuse_conn is going away, it calls ->put() + * to drop the reference to this object. +diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c +index 6add6ebfef8967..cb823a8a6ba960 100644 +--- a/fs/hfs/bnode.c ++++ b/fs/hfs/bnode.c +@@ -67,6 +67,12 @@ void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off) + else + key_len = tree->max_key_len + 1; + ++ if (key_len > sizeof(hfs_btree_key) || key_len < 1) { ++ memset(key, 0, sizeof(hfs_btree_key)); ++ pr_err("hfs: Invalid key length: %d\n", key_len); ++ return; ++ } ++ + hfs_bnode_read(node, key, off, key_len); + } + +diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c +index 87974d5e679156..079ea80534f7de 100644 +--- a/fs/hfsplus/bnode.c ++++ b/fs/hfsplus/bnode.c +@@ -67,6 +67,12 @@ void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off) + else + key_len = tree->max_key_len + 2; + ++ if (key_len > sizeof(hfsplus_btree_key) || key_len < 1) { ++ memset(key, 0, sizeof(hfsplus_btree_key)); ++ pr_err("hfsplus: Invalid key length: %d\n", key_len); ++ return; ++ } ++ + hfs_bnode_read(node, key, off, key_len); + } + +diff --git a/fs/isofs/export.c b/fs/isofs/export.c +index 35768a63fb1d23..421d247fae5230 100644 +--- a/fs/isofs/export.c ++++ b/fs/isofs/export.c +@@ -180,7 +180,7 @@ static struct dentry *isofs_fh_to_parent(struct super_block *sb, + return NULL; + + return isofs_export_iget(sb, +- fh_len > 2 ? ifid->parent_block : 0, ++ fh_len > 3 ? ifid->parent_block : 0, + ifid->parent_offset, + fh_len > 4 ? 
ifid->parent_generation : 0); + } +diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig +index d3f76101ad4b91..07932ce9246c17 100644 +--- a/fs/nfs/Kconfig ++++ b/fs/nfs/Kconfig +@@ -2,6 +2,7 @@ + config NFS_FS + tristate "NFS client support" + depends on INET && FILE_LOCKING && MULTIUSER ++ select CRC32 + select LOCKD + select SUNRPC + select NFS_COMMON +@@ -196,7 +197,6 @@ config NFS_USE_KERNEL_DNS + config NFS_DEBUG + bool + depends on NFS_FS && SUNRPC_DEBUG +- select CRC32 + default y + + config NFS_DISABLE_UDP_SUPPORT +diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h +index 6bcc4b0e00ab72..8b568a514fd1c6 100644 +--- a/fs/nfs/internal.h ++++ b/fs/nfs/internal.h +@@ -895,18 +895,11 @@ u64 nfs_timespec_to_change_attr(const struct timespec64 *ts) + return ((u64)ts->tv_sec << 30) + ts->tv_nsec; + } + +-#ifdef CONFIG_CRC32 + static inline u32 nfs_stateid_hash(const nfs4_stateid *stateid) + { + return ~crc32_le(0xFFFFFFFF, &stateid->other[0], + NFS4_STATEID_OTHER_SIZE); + } +-#else +-static inline u32 nfs_stateid_hash(nfs4_stateid *stateid) +-{ +- return 0; +-} +-#endif + + static inline bool nfs_error_is_fatal(int err) + { +diff --git a/fs/nfs/nfs4session.h b/fs/nfs/nfs4session.h +index 351616c61df541..f9c291e2165cd8 100644 +--- a/fs/nfs/nfs4session.h ++++ b/fs/nfs/nfs4session.h +@@ -148,16 +148,12 @@ static inline void nfs4_copy_sessionid(struct nfs4_sessionid *dst, + memcpy(dst->data, src->data, NFS4_MAX_SESSIONID_LEN); + } + +-#ifdef CONFIG_CRC32 + /* + * nfs_session_id_hash - calculate the crc32 hash for the session id + * @session - pointer to session + */ + #define nfs_session_id_hash(sess_id) \ + (~crc32_le(0xFFFFFFFF, &(sess_id)->data[0], sizeof((sess_id)->data))) +-#else +-#define nfs_session_id_hash(session) (0) +-#endif + #else /* defined(CONFIG_NFS_V4_1) */ + + static inline int nfs4_init_session(struct nfs_client *clp) +diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig +index c0bd1509ccd480..9eb2e795c43c4c 100644 +--- a/fs/nfsd/Kconfig ++++ b/fs/nfsd/Kconfig +@@ -4,6 +4,7 @@ config NFSD + depends on INET + depends on FILE_LOCKING + depends on FSNOTIFY ++ select CRC32 + select LOCKD + select SUNRPC + select EXPORTFS +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index 5e81c819c3846a..c50839a015e94f 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -5287,7 +5287,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp) + queued = nfsd4_run_cb(&dp->dl_recall); + WARN_ON_ONCE(!queued); + if (!queued) +- nfs4_put_stid(&dp->dl_stid); ++ refcount_dec(&dp->dl_stid.sc_count); + } + + /* Called from break_lease() with flc_lock held. 
*/ +diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h +index 876152a91f122f..5103c2f4d2253a 100644 +--- a/fs/nfsd/nfsfh.h ++++ b/fs/nfsd/nfsfh.h +@@ -267,7 +267,6 @@ static inline bool fh_fsid_match(const struct knfsd_fh *fh1, + return true; + } + +-#ifdef CONFIG_CRC32 + /** + * knfsd_fh_hash - calculate the crc32 hash for the filehandle + * @fh - pointer to filehandle +@@ -279,12 +278,6 @@ static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh) + { + return ~crc32_le(0xFFFFFFFF, fh->fh_raw, fh->fh_size); + } +-#else +-static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh) +-{ +- return 0; +-} +-#endif + + /** + * fh_clear_pre_post_attrs - Reset pre/post attributes +diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h +index 844874b4a91a94..500a9634ad5334 100644 +--- a/fs/overlayfs/overlayfs.h ++++ b/fs/overlayfs/overlayfs.h +@@ -547,8 +547,6 @@ int ovl_set_metacopy_xattr(struct ovl_fs *ofs, struct dentry *d, + bool ovl_is_metacopy_dentry(struct dentry *dentry); + char *ovl_get_redirect_xattr(struct ovl_fs *ofs, const struct path *path, int padding); + int ovl_ensure_verity_loaded(struct path *path); +-int ovl_get_verity_xattr(struct ovl_fs *ofs, const struct path *path, +- u8 *digest_buf, int *buf_length); + int ovl_validate_verity(struct ovl_fs *ofs, + struct path *metapath, + struct path *datapath); +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c +index fe511192f83cb0..87a36c6eea5f36 100644 +--- a/fs/overlayfs/super.c ++++ b/fs/overlayfs/super.c +@@ -1119,6 +1119,11 @@ static struct ovl_entry *ovl_get_lowerstack(struct super_block *sb, + return ERR_PTR(-EINVAL); + } + ++ if (ctx->nr == ctx->nr_data) { ++ pr_err("at least one non-data lowerdir is required\n"); ++ return ERR_PTR(-EINVAL); ++ } ++ + err = -EINVAL; + for (i = 0; i < ctx->nr; i++) { + l = &ctx->lower[i]; +diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h +index 907af3cbffd1bc..90b7b30abfbd87 100644 +--- a/fs/smb/client/cifsproto.h ++++ b/fs/smb/client/cifsproto.h +@@ -160,6 +160,8 @@ extern int cifs_get_writable_path(struct cifs_tcon *tcon, const char *name, + extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *, bool); + extern int cifs_get_readable_path(struct cifs_tcon *tcon, const char *name, + struct cifsFileInfo **ret_file); ++extern int cifs_get_hardlink_path(struct cifs_tcon *tcon, struct inode *inode, ++ struct file *file); + extern unsigned int smbCalcSize(void *buf); + extern int decode_negTokenInit(unsigned char *security_blob, int length, + struct TCP_Server_Info *server); +diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c +index 3aaf5cdce1b720..d5549e06a533d8 100644 +--- a/fs/smb/client/connect.c ++++ b/fs/smb/client/connect.c +@@ -316,7 +316,6 @@ cifs_abort_connection(struct TCP_Server_Info *server) + server->ssocket->flags); + sock_release(server->ssocket); + server->ssocket = NULL; +- put_net(cifs_net_ns(server)); + } + server->sequence_number = 0; + server->session_estab = false; +@@ -988,13 +987,9 @@ clean_demultiplex_info(struct TCP_Server_Info *server) + msleep(125); + if (cifs_rdma_enabled(server)) + smbd_destroy(server); +- + if (server->ssocket) { + sock_release(server->ssocket); + server->ssocket = NULL; +- +- /* Release netns reference for the socket. */ +- put_net(cifs_net_ns(server)); + } + + if (!list_empty(&server->pending_mid_q)) { +@@ -1042,7 +1037,6 @@ clean_demultiplex_info(struct TCP_Server_Info *server) + */ + } + +- /* Release netns reference for this server. 
*/ + put_net(cifs_net_ns(server)); + kfree(server->leaf_fullpath); + kfree(server->hostname); +@@ -1718,8 +1712,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx, + + tcp_ses->ops = ctx->ops; + tcp_ses->vals = ctx->vals; +- +- /* Grab netns reference for this server. */ + cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns)); + + tcp_ses->sign = ctx->sign; +@@ -1852,7 +1844,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx, + out_err_crypto_release: + cifs_crypto_secmech_release(tcp_ses); + +- /* Release netns reference for this server. */ + put_net(cifs_net_ns(tcp_ses)); + + out_err: +@@ -1861,10 +1852,8 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx, + cifs_put_tcp_session(tcp_ses->primary_server, false); + kfree(tcp_ses->hostname); + kfree(tcp_ses->leaf_fullpath); +- if (tcp_ses->ssocket) { ++ if (tcp_ses->ssocket) + sock_release(tcp_ses->ssocket); +- put_net(cifs_net_ns(tcp_ses)); +- } + kfree(tcp_ses); + } + return ERR_PTR(rc); +@@ -3132,24 +3121,20 @@ generic_ip_connect(struct TCP_Server_Info *server) + socket = server->ssocket; + } else { + struct net *net = cifs_net_ns(server); ++ struct sock *sk; + +- rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket); ++ rc = __sock_create(net, sfamily, SOCK_STREAM, ++ IPPROTO_TCP, &server->ssocket, 1); + if (rc < 0) { + cifs_server_dbg(VFS, "Error %d creating socket\n", rc); + return rc; + } + +- /* +- * Grab netns reference for the socket. +- * +- * This reference will be released in several situations: +- * - In the failure path before the cifsd thread is started. +- * - In the all place where server->socket is released, it is +- * also set to NULL. +- * - Ultimately in clean_demultiplex_info(), during the final +- * teardown. +- */ +- get_net(net); ++ sk = server->ssocket->sk; ++ __netns_tracker_free(net, &sk->ns_tracker, false); ++ sk->sk_net_refcnt = 1; ++ get_net_track(net, &sk->ns_tracker, GFP_KERNEL); ++ sock_inuse_add(net, 1); + + /* BB other socket options to set KEEPALIVE, NODELAY? 
*/ + cifs_dbg(FYI, "Socket created\n"); +@@ -3201,7 +3186,6 @@ generic_ip_connect(struct TCP_Server_Info *server) + if (rc < 0) { + cifs_dbg(FYI, "Error %d connecting to server\n", rc); + trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc); +- put_net(cifs_net_ns(server)); + sock_release(socket); + server->ssocket = NULL; + return rc; +diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c +index 313c851fc1c122..0f6fec042f6a03 100644 +--- a/fs/smb/client/file.c ++++ b/fs/smb/client/file.c +@@ -1002,6 +1002,11 @@ int cifs_open(struct inode *inode, struct file *file) + } else { + _cifsFileInfo_put(cfile, true, false); + } ++ } else { ++ /* hard link on the deferred close file */ ++ rc = cifs_get_hardlink_path(tcon, inode, file); ++ if (rc) ++ cifs_close_deferred_file(CIFS_I(inode)); + } + + if (server->oplocks) +@@ -2066,6 +2071,29 @@ cifs_move_llist(struct list_head *source, struct list_head *dest) + list_move(li, dest); + } + ++int ++cifs_get_hardlink_path(struct cifs_tcon *tcon, struct inode *inode, ++ struct file *file) ++{ ++ struct cifsFileInfo *open_file = NULL; ++ struct cifsInodeInfo *cinode = CIFS_I(inode); ++ int rc = 0; ++ ++ spin_lock(&tcon->open_file_lock); ++ spin_lock(&cinode->open_file_lock); ++ ++ list_for_each_entry(open_file, &cinode->openFileList, flist) { ++ if (file->f_flags == open_file->f_flags) { ++ rc = -EINVAL; ++ break; ++ } ++ } ++ ++ spin_unlock(&cinode->open_file_lock); ++ spin_unlock(&tcon->open_file_lock); ++ return rc; ++} ++ + void + cifs_free_llist(struct list_head *llist) + { +diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c +index deacf78b4400cc..e2ba0dadb5fbf7 100644 +--- a/fs/smb/server/oplock.c ++++ b/fs/smb/server/oplock.c +@@ -129,14 +129,6 @@ static void free_opinfo(struct oplock_info *opinfo) + kfree(opinfo); + } + +-static inline void opinfo_free_rcu(struct rcu_head *rcu_head) +-{ +- struct oplock_info *opinfo; +- +- opinfo = container_of(rcu_head, struct oplock_info, rcu_head); +- free_opinfo(opinfo); +-} +- + struct oplock_info *opinfo_get(struct ksmbd_file *fp) + { + struct oplock_info *opinfo; +@@ -157,8 +149,8 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci) + if (list_empty(&ci->m_op_list)) + return NULL; + +- rcu_read_lock(); +- opinfo = list_first_or_null_rcu(&ci->m_op_list, struct oplock_info, ++ down_read(&ci->m_lock); ++ opinfo = list_first_entry(&ci->m_op_list, struct oplock_info, + op_entry); + if (opinfo) { + if (opinfo->conn == NULL || +@@ -171,8 +163,7 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci) + } + } + } +- +- rcu_read_unlock(); ++ up_read(&ci->m_lock); + + return opinfo; + } +@@ -185,7 +176,7 @@ void opinfo_put(struct oplock_info *opinfo) + if (!atomic_dec_and_test(&opinfo->refcount)) + return; + +- call_rcu(&opinfo->rcu_head, opinfo_free_rcu); ++ free_opinfo(opinfo); + } + + static void opinfo_add(struct oplock_info *opinfo) +@@ -193,7 +184,7 @@ static void opinfo_add(struct oplock_info *opinfo) + struct ksmbd_inode *ci = opinfo->o_fp->f_ci; + + down_write(&ci->m_lock); +- list_add_rcu(&opinfo->op_entry, &ci->m_op_list); ++ list_add(&opinfo->op_entry, &ci->m_op_list); + up_write(&ci->m_lock); + } + +@@ -207,7 +198,7 @@ static void opinfo_del(struct oplock_info *opinfo) + write_unlock(&lease_list_lock); + } + down_write(&ci->m_lock); +- list_del_rcu(&opinfo->op_entry); ++ list_del(&opinfo->op_entry); + up_write(&ci->m_lock); + } + +@@ -1347,8 +1338,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp, + ci = 
fp->f_ci; + op = opinfo_get(fp); + +- rcu_read_lock(); +- list_for_each_entry_rcu(brk_op, &ci->m_op_list, op_entry) { ++ down_read(&ci->m_lock); ++ list_for_each_entry(brk_op, &ci->m_op_list, op_entry) { + if (brk_op->conn == NULL) + continue; + +@@ -1358,7 +1349,6 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp, + if (ksmbd_conn_releasing(brk_op->conn)) + continue; + +- rcu_read_unlock(); + if (brk_op->is_lease && (brk_op->o_lease->state & + (~(SMB2_LEASE_READ_CACHING_LE | + SMB2_LEASE_HANDLE_CACHING_LE)))) { +@@ -1388,9 +1378,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp, + oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL); + next: + opinfo_put(brk_op); +- rcu_read_lock(); + } +- rcu_read_unlock(); ++ up_read(&ci->m_lock); + + if (op) + opinfo_put(op); +diff --git a/fs/smb/server/oplock.h b/fs/smb/server/oplock.h +index 3f64f07872638e..9a56eaadd0dd8f 100644 +--- a/fs/smb/server/oplock.h ++++ b/fs/smb/server/oplock.h +@@ -71,7 +71,6 @@ struct oplock_info { + struct list_head lease_entry; + wait_queue_head_t oplock_q; /* Other server threads */ + wait_queue_head_t oplock_brk; /* oplock breaking wait */ +- struct rcu_head rcu_head; + }; + + struct lease_break_info { +diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c +index 7fea86edc71763..129517a0c5c739 100644 +--- a/fs/smb/server/smb2pdu.c ++++ b/fs/smb/server/smb2pdu.c +@@ -1599,8 +1599,10 @@ static int krb5_authenticate(struct ksmbd_work *work, + if (prev_sess_id && prev_sess_id != sess->id) + destroy_previous_session(conn, sess->user, prev_sess_id); + +- if (sess->state == SMB2_SESSION_VALID) ++ if (sess->state == SMB2_SESSION_VALID) { + ksmbd_free_user(sess->user); ++ sess->user = NULL; ++ } + + retval = ksmbd_krb5_authenticate(sess, in_blob, in_len, + out_blob, &out_len); +diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c +index 87af57cf35a157..9b3c68014aee28 100644 +--- a/fs/smb/server/transport_ipc.c ++++ b/fs/smb/server/transport_ipc.c +@@ -310,7 +310,11 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req) + server_conf.signing = req->signing; + server_conf.tcp_port = req->tcp_port; + server_conf.ipc_timeout = req->ipc_timeout * HZ; +- server_conf.deadtime = req->deadtime * SMB_ECHO_INTERVAL; ++ if (check_mul_overflow(req->deadtime, SMB_ECHO_INTERVAL, ++ &server_conf.deadtime)) { ++ ret = -EINVAL; ++ goto out; ++ } + server_conf.share_fake_fscaps = req->share_fake_fscaps; + ksmbd_init_domain(req->sub_auth); + +@@ -336,6 +340,7 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req) + ret |= ksmbd_set_work_group(req->work_group); + ret |= ksmbd_tcp_set_interfaces(KSMBD_STARTUP_CONFIG_INTERFACES(req), + req->ifc_list_sz); ++out: + if (ret) { + pr_err("Server configuration error: %s %s %s\n", + req->netbios_name, req->server_string, +diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c +index ee825971abd9ab..8fd070e31fa7dd 100644 +--- a/fs/smb/server/vfs.c ++++ b/fs/smb/server/vfs.c +@@ -496,7 +496,8 @@ int ksmbd_vfs_write(struct ksmbd_work *work, struct ksmbd_file *fp, + int err = 0; + + if (work->conn->connection_type) { +- if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE))) { ++ if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE)) || ++ S_ISDIR(file_inode(fp->filp)->i_mode)) { + pr_err("no right to write(%pD)\n", fp->filp); + err = -EACCES; + goto out; +diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h +index 
8e7af9a03b41dd..e721148c95d07d 100644 +--- a/include/linux/backing-dev.h ++++ b/include/linux/backing-dev.h +@@ -249,6 +249,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode) + { + #ifdef CONFIG_LOCKDEP + WARN_ON_ONCE(debug_locks && ++ (inode->i_sb->s_iflags & SB_I_CGROUPWB) && + (!lockdep_is_held(&inode->i_lock) && + !lockdep_is_held(&inode->i_mapping->i_pages.xa_lock) && + !lockdep_is_held(&inode->i_wb->list_lock))); +diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h +index 7b5e5388c3801a..318245b4e38fb3 100644 +--- a/include/linux/blk-mq.h ++++ b/include/linux/blk-mq.h +@@ -230,62 +230,61 @@ static inline unsigned short req_get_ioprio(struct request *req) + #define rq_dma_dir(rq) \ + (op_is_write(req_op(rq)) ? DMA_TO_DEVICE : DMA_FROM_DEVICE) + +-#define rq_list_add(listptr, rq) do { \ +- (rq)->rq_next = *(listptr); \ +- *(listptr) = rq; \ +-} while (0) +- +-#define rq_list_add_tail(lastpptr, rq) do { \ +- (rq)->rq_next = NULL; \ +- **(lastpptr) = rq; \ +- *(lastpptr) = &rq->rq_next; \ +-} while (0) +- +-#define rq_list_pop(listptr) \ +-({ \ +- struct request *__req = NULL; \ +- if ((listptr) && *(listptr)) { \ +- __req = *(listptr); \ +- *(listptr) = __req->rq_next; \ +- } \ +- __req; \ +-}) ++static inline int rq_list_empty(const struct rq_list *rl) ++{ ++ return rl->head == NULL; ++} + +-#define rq_list_peek(listptr) \ +-({ \ +- struct request *__req = NULL; \ +- if ((listptr) && *(listptr)) \ +- __req = *(listptr); \ +- __req; \ +-}) ++static inline void rq_list_init(struct rq_list *rl) ++{ ++ rl->head = NULL; ++ rl->tail = NULL; ++} + +-#define rq_list_for_each(listptr, pos) \ +- for (pos = rq_list_peek((listptr)); pos; pos = rq_list_next(pos)) ++static inline void rq_list_add_tail(struct rq_list *rl, struct request *rq) ++{ ++ rq->rq_next = NULL; ++ if (rl->tail) ++ rl->tail->rq_next = rq; ++ else ++ rl->head = rq; ++ rl->tail = rq; ++} + +-#define rq_list_for_each_safe(listptr, pos, nxt) \ +- for (pos = rq_list_peek((listptr)), nxt = rq_list_next(pos); \ +- pos; pos = nxt, nxt = pos ? rq_list_next(pos) : NULL) ++static inline void rq_list_add_head(struct rq_list *rl, struct request *rq) ++{ ++ rq->rq_next = rl->head; ++ rl->head = rq; ++ if (!rl->tail) ++ rl->tail = rq; ++} + +-#define rq_list_next(rq) (rq)->rq_next +-#define rq_list_empty(list) ((list) == (struct request *) NULL) ++static inline struct request *rq_list_pop(struct rq_list *rl) ++{ ++ struct request *rq = rl->head; + +-/** +- * rq_list_move() - move a struct request from one list to another +- * @src: The source list @rq is currently in +- * @dst: The destination list that @rq will be appended to +- * @rq: The request to move +- * @prev: The request preceding @rq in @src (NULL if @rq is the head) +- */ +-static inline void rq_list_move(struct request **src, struct request **dst, +- struct request *rq, struct request *prev) ++ if (rq) { ++ rl->head = rl->head->rq_next; ++ if (!rl->head) ++ rl->tail = NULL; ++ rq->rq_next = NULL; ++ } ++ ++ return rq; ++} ++ ++static inline struct request *rq_list_peek(struct rq_list *rl) + { +- if (prev) +- prev->rq_next = rq->rq_next; +- else +- *src = rq->rq_next; +- rq_list_add(dst, rq); ++ return rl->head; + } + ++#define rq_list_for_each(rl, pos) \ ++ for (pos = rq_list_peek((rl)); (pos); pos = pos->rq_next) ++ ++#define rq_list_for_each_safe(rl, pos, nxt) \ ++ for (pos = rq_list_peek((rl)), nxt = pos->rq_next; \ ++ pos; pos = nxt, nxt = pos ? 
pos->rq_next : NULL) ++ + /** + * enum blk_eh_timer_return - How the timeout handler should proceed + * @BLK_EH_DONE: The block driver completed the command or will complete it at +@@ -577,7 +576,7 @@ struct blk_mq_ops { + * empty the @rqlist completely, then the rest will be queued + * individually by the block layer upon return. + */ +- void (*queue_rqs)(struct request **rqlist); ++ void (*queue_rqs)(struct rq_list *rqlist); + + /** + * @get_budget: Reserve budget before queue request, once .queue_rq is +@@ -910,7 +909,7 @@ static inline bool blk_mq_add_to_batch(struct request *req, + else if (iob->complete != complete) + return false; + iob->need_ts |= blk_mq_need_time_stamp(req); +- rq_list_add(&iob->req_list, req); ++ rq_list_add_head(&iob->req_list, req); + return true; + } + +@@ -947,6 +946,8 @@ void blk_mq_unfreeze_queue_non_owner(struct request_queue *q); + void blk_freeze_queue_start_non_owner(struct request_queue *q); + + void blk_mq_map_queues(struct blk_mq_queue_map *qmap); ++void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap, ++ struct device *dev, unsigned int offset); + void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues); + + void blk_mq_quiesce_queue_nowait(struct request_queue *q); +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index 8f37c5dd52b215..b94dc4b796f5a1 100644 +--- a/include/linux/blkdev.h ++++ b/include/linux/blkdev.h +@@ -995,6 +995,11 @@ extern void blk_put_queue(struct request_queue *); + + void blk_mark_disk_dead(struct gendisk *disk); + ++struct rq_list { ++ struct request *head; ++ struct request *tail; ++}; ++ + #ifdef CONFIG_BLOCK + /* + * blk_plug permits building a queue of related requests by holding the I/O +@@ -1008,10 +1013,10 @@ void blk_mark_disk_dead(struct gendisk *disk); + * blk_flush_plug() is called. 
+ */ + struct blk_plug { +- struct request *mq_list; /* blk-mq requests */ ++ struct rq_list mq_list; /* blk-mq requests */ + + /* if ios_left is > 1, we can batch tag/rq allocations */ +- struct request *cached_rq; ++ struct rq_list cached_rqs; + u64 cur_ktime; + unsigned short nr_ios; + +@@ -1660,7 +1665,7 @@ int bdev_thaw(struct block_device *bdev); + void bdev_fput(struct file *bdev_file); + + struct io_comp_batch { +- struct request *req_list; ++ struct rq_list req_list; + bool need_ts; + void (*complete)(struct io_comp_batch *); + }; +diff --git a/include/linux/bpf.h b/include/linux/bpf.h +index a7af13f550e0d4..1150a595aa54c2 100644 +--- a/include/linux/bpf.h ++++ b/include/linux/bpf.h +@@ -1499,6 +1499,7 @@ struct bpf_prog_aux { + bool exception_cb; + bool exception_boundary; + bool is_extended; /* true if extended by freplace program */ ++ bool changes_pkt_data; + u64 prog_array_member_cnt; /* counts how many times as member of prog_array */ + struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */ + struct bpf_arena *arena; +diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h +index 4513372c5bc8e0..50eeb5b86ed70b 100644 +--- a/include/linux/bpf_verifier.h ++++ b/include/linux/bpf_verifier.h +@@ -668,6 +668,7 @@ struct bpf_subprog_info { + bool args_cached: 1; + /* true if bpf_fastcall stack region is used by functions that can't be inlined */ + bool keep_fastcall_stack: 1; ++ bool changes_pkt_data: 1; + + u8 arg_cnt; + struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS]; +diff --git a/include/linux/device/bus.h b/include/linux/device/bus.h +index cdc4757217f9bb..b18658bce2c381 100644 +--- a/include/linux/device/bus.h ++++ b/include/linux/device/bus.h +@@ -48,6 +48,7 @@ struct fwnode_handle; + * will never get called until they do. + * @remove: Called when a device removed from this bus. + * @shutdown: Called at shut-down time to quiesce the device. ++ * @irq_get_affinity: Get IRQ affinity mask for the device on this bus. + * + * @online: Called to put the device back online (after offlining it). + * @offline: Called to put the device offline for hot-removal. May fail. 
+@@ -87,6 +88,8 @@ struct bus_type { + void (*sync_state)(struct device *dev); + void (*remove)(struct device *dev); + void (*shutdown)(struct device *dev); ++ const struct cpumask *(*irq_get_affinity)(struct device *dev, ++ unsigned int irq_vec); + + int (*online)(struct device *dev); + int (*offline)(struct device *dev); +diff --git a/include/linux/nfs.h b/include/linux/nfs.h +index 9ad727ddfedb34..0906a0b40c6aa5 100644 +--- a/include/linux/nfs.h ++++ b/include/linux/nfs.h +@@ -55,7 +55,6 @@ enum nfs3_stable_how { + NFS_INVALID_STABLE_HOW = -1 + }; + +-#ifdef CONFIG_CRC32 + /** + * nfs_fhandle_hash - calculate the crc32 hash for the filehandle + * @fh - pointer to filehandle +@@ -67,10 +66,4 @@ static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh) + { + return ~crc32_le(0xFFFFFFFF, &fh->data[0], fh->size); + } +-#else /* CONFIG_CRC32 */ +-static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh) +-{ +- return 0; +-} +-#endif /* CONFIG_CRC32 */ + #endif /* _LINUX_NFS_H */ +diff --git a/io_uring/rw.c b/io_uring/rw.c +index 6abc495602a4e9..a1ed64760eba2d 100644 +--- a/io_uring/rw.c ++++ b/io_uring/rw.c +@@ -1190,12 +1190,12 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin) + poll_flags |= BLK_POLL_ONESHOT; + + /* iopoll may have completed current req */ +- if (!rq_list_empty(iob.req_list) || ++ if (!rq_list_empty(&iob.req_list) || + READ_ONCE(req->iopoll_completed)) + break; + } + +- if (!rq_list_empty(iob.req_list)) ++ if (!rq_list_empty(&iob.req_list)) + iob.complete(&iob); + else if (!pos) + return 0; +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 9000806ee3bae8..d2ef289993f20d 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -2528,16 +2528,36 @@ static int cmp_subprogs(const void *a, const void *b) + ((struct bpf_subprog_info *)b)->start; + } + ++/* Find subprogram that contains instruction at 'off' */ ++static struct bpf_subprog_info *find_containing_subprog(struct bpf_verifier_env *env, int off) ++{ ++ struct bpf_subprog_info *vals = env->subprog_info; ++ int l, r, m; ++ ++ if (off >= env->prog->len || off < 0 || env->subprog_cnt == 0) ++ return NULL; ++ ++ l = 0; ++ r = env->subprog_cnt - 1; ++ while (l < r) { ++ m = l + (r - l + 1) / 2; ++ if (vals[m].start <= off) ++ l = m; ++ else ++ r = m - 1; ++ } ++ return &vals[l]; ++} ++ ++/* Find subprogram that starts exactly at 'off' */ + static int find_subprog(struct bpf_verifier_env *env, int off) + { + struct bpf_subprog_info *p; + +- p = bsearch(&off, env->subprog_info, env->subprog_cnt, +- sizeof(env->subprog_info[0]), cmp_subprogs); +- if (!p) ++ p = find_containing_subprog(env, off); ++ if (!p || p->start != off) + return -ENOENT; + return p - env->subprog_info; +- + } + + static int add_subprog(struct bpf_verifier_env *env, int off) +@@ -9811,6 +9831,8 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, + + verbose(env, "Func#%d ('%s') is global and assumed valid.\n", + subprog, sub_name); ++ if (env->subprog_info[subprog].changes_pkt_data) ++ clear_all_pkt_pointers(env); + /* mark global subprog for verifying after main prog */ + subprog_aux(env, subprog)->called = true; + clear_caller_saved_regs(env, caller->regs); +@@ -16001,6 +16023,29 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char + return 0; + } + ++static void mark_subprog_changes_pkt_data(struct bpf_verifier_env *env, int off) ++{ ++ struct bpf_subprog_info *subprog; ++ ++ subprog = find_containing_subprog(env, off); ++ 
subprog->changes_pkt_data = true; ++} ++ ++/* 't' is an index of a call-site. ++ * 'w' is a callee entry point. ++ * Eventually this function would be called when env->cfg.insn_state[w] == EXPLORED. ++ * Rely on DFS traversal order and absence of recursive calls to guarantee that ++ * callee's change_pkt_data marks would be correct at that moment. ++ */ ++static void merge_callee_effects(struct bpf_verifier_env *env, int t, int w) ++{ ++ struct bpf_subprog_info *caller, *callee; ++ ++ caller = find_containing_subprog(env, t); ++ callee = find_containing_subprog(env, w); ++ caller->changes_pkt_data |= callee->changes_pkt_data; ++} ++ + /* non-recursive DFS pseudo code + * 1 procedure DFS-iterative(G,v): + * 2 label v as discovered +@@ -16134,6 +16179,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns, + bool visit_callee) + { + int ret, insn_sz; ++ int w; + + insn_sz = bpf_is_ldimm64(&insns[t]) ? 2 : 1; + ret = push_insn(t, t + insn_sz, FALLTHROUGH, env); +@@ -16145,8 +16191,10 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns, + mark_jmp_point(env, t + insn_sz); + + if (visit_callee) { ++ w = t + insns[t].imm + 1; + mark_prune_point(env, t); +- ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env); ++ merge_callee_effects(env, t, w); ++ ret = push_insn(t, w, BRANCH, env); + } + return ret; + } +@@ -16466,6 +16514,8 @@ static int visit_insn(int t, struct bpf_verifier_env *env) + mark_prune_point(env, t); + mark_jmp_point(env, t); + } ++ if (bpf_helper_call(insn) && bpf_helper_changes_pkt_data(insn->imm)) ++ mark_subprog_changes_pkt_data(env, t); + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { + struct bpf_kfunc_call_arg_meta meta; + +@@ -16600,6 +16650,7 @@ static int check_cfg(struct bpf_verifier_env *env) + } + } + ret = 0; /* cfg looks good */ ++ env->prog->aux->changes_pkt_data = env->subprog_info[0].changes_pkt_data; + + err_free: + kvfree(insn_state); +@@ -20102,6 +20153,7 @@ static int jit_subprogs(struct bpf_verifier_env *env) + func[i]->aux->num_exentries = num_exentries; + func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable; + func[i]->aux->exception_cb = env->subprog_info[i].is_exception_cb; ++ func[i]->aux->changes_pkt_data = env->subprog_info[i].changes_pkt_data; + if (!i) + func[i]->aux->exception_boundary = env->seen_exception; + func[i] = bpf_int_jit_compile(func[i]); +@@ -21938,6 +21990,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, + } + if (tgt_prog) { + struct bpf_prog_aux *aux = tgt_prog->aux; ++ bool tgt_changes_pkt_data; + + if (bpf_prog_is_dev_bound(prog->aux) && + !bpf_prog_dev_bound_match(prog, tgt_prog)) { +@@ -21972,6 +22025,14 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, + "Extension programs should be JITed\n"); + return -EINVAL; + } ++ tgt_changes_pkt_data = aux->func ++ ? 
aux->func[subprog]->aux->changes_pkt_data ++ : aux->changes_pkt_data; ++ if (prog->aux->changes_pkt_data && !tgt_changes_pkt_data) { ++ bpf_log(log, ++ "Extension program changes packet data, while original does not\n"); ++ return -EINVAL; ++ } + } + if (!tgt_prog->jited) { + bpf_log(log, "Can attach to only JITed progs\n"); +@@ -22437,10 +22498,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3 + if (ret < 0) + goto skip_full_check; + +- ret = check_attach_btf_id(env); +- if (ret) +- goto skip_full_check; +- + ret = resolve_pseudo_ldimm64(env); + if (ret < 0) + goto skip_full_check; +@@ -22455,6 +22512,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3 + if (ret < 0) + goto skip_full_check; + ++ ret = check_attach_btf_id(env); ++ if (ret) ++ goto skip_full_check; ++ + ret = mark_fastcall_patterns(env); + if (ret < 0) + goto skip_full_check; +diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c +index e51d5ce730be15..e5ced97d9681c1 100644 +--- a/kernel/sched/cpufreq_schedutil.c ++++ b/kernel/sched/cpufreq_schedutil.c +@@ -81,9 +81,20 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) + if (!cpufreq_this_cpu_can_update(sg_policy->policy)) + return false; + +- if (unlikely(sg_policy->limits_changed)) { +- sg_policy->limits_changed = false; +- sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS); ++ if (unlikely(READ_ONCE(sg_policy->limits_changed))) { ++ WRITE_ONCE(sg_policy->limits_changed, false); ++ sg_policy->need_freq_update = true; ++ ++ /* ++ * The above limits_changed update must occur before the reads ++ * of policy limits in cpufreq_driver_resolve_freq() or a policy ++ * limits update might be missed, so use a memory barrier to ++ * ensure it. ++ * ++ * This pairs with the write memory barrier in sugov_limits(). ++ */ ++ smp_mb(); ++ + return true; + } + +@@ -95,10 +106,22 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) + static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time, + unsigned int next_freq) + { +- if (sg_policy->need_freq_update) ++ if (sg_policy->need_freq_update) { + sg_policy->need_freq_update = false; +- else if (sg_policy->next_freq == next_freq) ++ /* ++ * The policy limits have changed, but if the return value of ++ * cpufreq_driver_resolve_freq() after applying the new limits ++ * is still equal to the previously selected frequency, the ++ * driver callback need not be invoked unless the driver ++ * specifically wants that to happen on every update of the ++ * policy limits. 
++ */ ++ if (sg_policy->next_freq == next_freq && ++ !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS)) ++ return false; ++ } else if (sg_policy->next_freq == next_freq) { + return false; ++ } + + sg_policy->next_freq = next_freq; + sg_policy->last_freq_update_time = time; +@@ -365,7 +388,7 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; } + static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu) + { + if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min) +- sg_cpu->sg_policy->limits_changed = true; ++ WRITE_ONCE(sg_cpu->sg_policy->limits_changed, true); + } + + static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu, +@@ -888,7 +911,16 @@ static void sugov_limits(struct cpufreq_policy *policy) + mutex_unlock(&sg_policy->work_lock); + } + +- sg_policy->limits_changed = true; ++ /* ++ * The limits_changed update below must take place before the updates ++ * of policy limits in cpufreq_set_policy() or a policy limits update ++ * might be missed, so use a memory barrier to ensure it. ++ * ++ * This pairs with the memory barrier in sugov_should_update_freq(). ++ */ ++ smp_wmb(); ++ ++ WRITE_ONCE(sg_policy->limits_changed, true); + } + + struct cpufreq_governor schedutil_gov = { +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index 90b59c627bb8e7..e67d67f7b90650 100644 +--- a/kernel/trace/ftrace.c ++++ b/kernel/trace/ftrace.c +@@ -5944,9 +5944,10 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr) + + /* Make a copy hash to place the new and the old entries in */ + size = hash->count + direct_functions->count; +- if (size > 32) +- size = 32; +- new_hash = alloc_ftrace_hash(fls(size)); ++ size = fls(size); ++ if (size > FTRACE_HASH_MAX_BITS) ++ size = FTRACE_HASH_MAX_BITS; ++ new_hash = alloc_ftrace_hash(size); + if (!new_hash) + goto out_unlock; + +diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c +index 0c611b281a5b5f..f50c2ad43f3d82 100644 +--- a/kernel/trace/trace_events_filter.c ++++ b/kernel/trace/trace_events_filter.c +@@ -808,7 +808,7 @@ static __always_inline char *test_string(char *str) + kstr = ubuf->buffer; + + /* For safety, do not trust the string pointer */ +- if (!strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE)) ++ if (strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE) < 0) + return NULL; + return kstr; + } +@@ -827,7 +827,7 @@ static __always_inline char *test_ustring(char *str) + + /* user space address? */ + ustr = (char __user *)str; +- if (!strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE)) ++ if (strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE) < 0) + return NULL; + + return kstr; +diff --git a/lib/string.c b/lib/string.c +index 76327b51e36f25..e657809fa71892 100644 +--- a/lib/string.c ++++ b/lib/string.c +@@ -113,6 +113,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count) + if (count == 0 || WARN_ON_ONCE(count > INT_MAX)) + return -E2BIG; + ++#ifndef CONFIG_DCACHE_WORD_ACCESS + #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS + /* + * If src is unaligned, don't cross a page boundary, +@@ -127,12 +128,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count) + /* If src or dest is unaligned, don't do word-at-a-time. */ + if (((long) dest | (long) src) & (sizeof(long) - 1)) + max = 0; ++#endif + #endif + + /* +- * read_word_at_a_time() below may read uninitialized bytes after the +- * trailing zero and use them in comparisons. 
Disable this optimization +- * under KMSAN to prevent false positive reports. ++ * load_unaligned_zeropad() or read_word_at_a_time() below may read ++ * uninitialized bytes after the trailing zero and use them in ++ * comparisons. Disable this optimization under KMSAN to prevent ++ * false positive reports. + */ + if (IS_ENABLED(CONFIG_KMSAN)) + max = 0; +@@ -140,7 +143,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count) + while (max >= sizeof(unsigned long)) { + unsigned long c, data; + ++#ifdef CONFIG_DCACHE_WORD_ACCESS ++ c = load_unaligned_zeropad(src+res); ++#else + c = read_word_at_a_time(src+res); ++#endif + if (has_zero(c, &data, &constants)) { + data = prep_zero_mask(c, data, &constants); + data = create_zero_mask(data); +diff --git a/mm/compaction.c b/mm/compaction.c +index 77dbb9022b47f0..eb5474dea04d9d 100644 +--- a/mm/compaction.c ++++ b/mm/compaction.c +@@ -980,13 +980,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, + } + + if (PageHuge(page)) { ++ const unsigned int order = compound_order(page); + /* + * skip hugetlbfs if we are not compacting for pages + * bigger than its order. THPs and other compound pages + * are handled below. + */ + if (!cc->alloc_contig) { +- const unsigned int order = compound_order(page); + + if (order <= MAX_PAGE_ORDER) { + low_pfn += (1UL << order) - 1; +@@ -1010,8 +1010,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, + /* Do not report -EBUSY down the chain */ + if (ret == -EBUSY) + ret = 0; +- low_pfn += compound_nr(page) - 1; +- nr_scanned += compound_nr(page) - 1; ++ low_pfn += (1UL << order) - 1; ++ nr_scanned += (1UL << order) - 1; + goto isolate_fail; + } + +diff --git a/mm/filemap.c b/mm/filemap.c +index 3c37ad6c598bbb..fa18e71f9c8895 100644 +--- a/mm/filemap.c ++++ b/mm/filemap.c +@@ -2222,6 +2222,7 @@ unsigned filemap_get_folios_contig(struct address_space *mapping, + *start = folio->index + nr; + goto out; + } ++ xas_advance(&xas, folio_next_index(folio) - 1); + continue; + put_folio: + folio_put(folio); +diff --git a/mm/gup.c b/mm/gup.c +index d27e7c9e2596ce..90866b827b60f4 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -2213,8 +2213,8 @@ size_t fault_in_safe_writeable(const char __user *uaddr, size_t size) + } while (start != end); + mmap_read_unlock(mm); + +- if (size > (unsigned long)uaddr - start) +- return size - ((unsigned long)uaddr - start); ++ if (size > start - (unsigned long)uaddr) ++ return size - (start - (unsigned long)uaddr); + return 0; + } + EXPORT_SYMBOL(fault_in_safe_writeable); +diff --git a/mm/memory.c b/mm/memory.c +index 99dceaf6a10579..b6daa0e673a549 100644 +--- a/mm/memory.c ++++ b/mm/memory.c +@@ -2811,11 +2811,11 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd, + if (fn) { + do { + if (create || !pte_none(ptep_get(pte))) { +- err = fn(pte++, addr, data); ++ err = fn(pte, addr, data); + if (err) + break; + } +- } while (addr += PAGE_SIZE, addr != end); ++ } while (pte++, addr += PAGE_SIZE, addr != end); + } + *mask |= PGTBL_PTE_MODIFIED; + +diff --git a/mm/slub.c b/mm/slub.c +index b9447a955f6112..c26d9cd107ccbc 100644 +--- a/mm/slub.c ++++ b/mm/slub.c +@@ -1960,6 +1960,11 @@ static inline void handle_failed_objexts_alloc(unsigned long obj_exts, + #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | \ + __GFP_ACCOUNT | __GFP_NOFAIL) + ++static inline void init_slab_obj_exts(struct slab *slab) ++{ ++ slab->obj_exts = 0; ++} ++ + int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t 
gfp, bool new_slab) + { +@@ -2044,6 +2049,10 @@ static inline bool need_slab_obj_ext(void) + + #else /* CONFIG_SLAB_OBJ_EXT */ + ++static inline void init_slab_obj_exts(struct slab *slab) ++{ ++} ++ + static int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab) + { +@@ -2613,6 +2622,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) + slab->objects = oo_objects(oo); + slab->inuse = 0; + slab->frozen = 0; ++ init_slab_obj_exts(slab); + + account_slab(slab, oo_order(oo), s, flags); + +diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c +index 080a00d916f6b6..748b52ce856755 100644 +--- a/mm/userfaultfd.c ++++ b/mm/userfaultfd.c +@@ -1873,6 +1873,14 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi, + unsigned long end) + { + struct vm_area_struct *ret; ++ bool give_up_on_oom = false; ++ ++ /* ++ * If we are modifying only and not splitting, just give up on the merge ++ * if OOM prevents us from merging successfully. ++ */ ++ if (start == vma->vm_start && end == vma->vm_end) ++ give_up_on_oom = true; + + /* Reset ptes for the whole vma range if wr-protected */ + if (userfaultfd_wp(vma)) +@@ -1880,7 +1888,7 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi, + + ret = vma_modify_flags_uffd(vmi, prev, vma, start, end, + vma->vm_flags & ~__VM_UFFD_FLAGS, +- NULL_VM_UFFD_CTX); ++ NULL_VM_UFFD_CTX, give_up_on_oom); + + /* + * In the vma_merge() successful mprotect-like case 8: +@@ -1931,7 +1939,8 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx, + new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags; + vma = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end, + new_flags, +- (struct vm_userfaultfd_ctx){ctx}); ++ (struct vm_userfaultfd_ctx){ctx}, ++ /* give_up_on_oom = */false); + if (IS_ERR(vma)) + return PTR_ERR(vma); + +diff --git a/mm/vma.c b/mm/vma.c +index c9ddc06b672a52..9b4517944901dd 100644 +--- a/mm/vma.c ++++ b/mm/vma.c +@@ -846,7 +846,13 @@ static struct vm_area_struct *vma_merge_existing_range(struct vma_merge_struct * + if (anon_dup) + unlink_anon_vmas(anon_dup); + +- vmg->state = VMA_MERGE_ERROR_NOMEM; ++ /* ++ * We've cleaned up any cloned anon_vma's, no VMAs have been ++ * modified, no harm no foul if the user requests that we not ++ * report this and just give up, leaving the VMAs unmerged. ++ */ ++ if (!vmg->give_up_on_oom) ++ vmg->state = VMA_MERGE_ERROR_NOMEM; + return NULL; + } + +@@ -859,7 +865,15 @@ static struct vm_area_struct *vma_merge_existing_range(struct vma_merge_struct * + abort: + vma_iter_set(vmg->vmi, start); + vma_iter_load(vmg->vmi); +- vmg->state = VMA_MERGE_ERROR_NOMEM; ++ ++ /* ++ * This means we have failed to clone anon_vma's correctly, but no ++ * actual changes to VMAs have occurred, so no harm no foul - if the ++ * user doesn't want this reported and instead just wants to give up on ++ * the merge, allow it. ++ */ ++ if (!vmg->give_up_on_oom) ++ vmg->state = VMA_MERGE_ERROR_NOMEM; + return NULL; + } + +@@ -1033,9 +1047,15 @@ int vma_expand(struct vma_merge_struct *vmg) + return 0; + + nomem: +- vmg->state = VMA_MERGE_ERROR_NOMEM; + if (anon_dup) + unlink_anon_vmas(anon_dup); ++ /* ++ * If the user requests that we just give upon OOM, we are safe to do so ++ * here, as commit merge provides this contract to us. Nothing has been ++ * changed - no harm no foul, just don't report it. 
++ */ ++ if (!vmg->give_up_on_oom) ++ vmg->state = VMA_MERGE_ERROR_NOMEM; + return -ENOMEM; + } + +@@ -1428,6 +1448,13 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg) + if (vmg_nomem(vmg)) + return ERR_PTR(-ENOMEM); + ++ /* ++ * Split can fail for reasons other than OOM, so if the user requests ++ * this it's probably a mistake. ++ */ ++ VM_WARN_ON(vmg->give_up_on_oom && ++ (vma->vm_start != start || vma->vm_end != end)); ++ + /* Split any preceding portion of the VMA. */ + if (vma->vm_start < start) { + int err = split_vma(vmg->vmi, vma, start, 1); +@@ -1496,12 +1523,15 @@ struct vm_area_struct + struct vm_area_struct *vma, + unsigned long start, unsigned long end, + unsigned long new_flags, +- struct vm_userfaultfd_ctx new_ctx) ++ struct vm_userfaultfd_ctx new_ctx, ++ bool give_up_on_oom) + { + VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); + + vmg.flags = new_flags; + vmg.uffd_ctx = new_ctx; ++ if (give_up_on_oom) ++ vmg.give_up_on_oom = true; + + return vma_modify(&vmg); + } +diff --git a/mm/vma.h b/mm/vma.h +index d58068c0ff2eaa..729fe3741e897b 100644 +--- a/mm/vma.h ++++ b/mm/vma.h +@@ -87,6 +87,12 @@ struct vma_merge_struct { + struct anon_vma_name *anon_name; + enum vma_merge_flags merge_flags; + enum vma_merge_state state; ++ ++ /* ++ * If a merge is possible, but an OOM error occurs, give up and don't ++ * execute the merge, returning NULL. ++ */ ++ bool give_up_on_oom :1; + }; + + static inline bool vmg_nomem(struct vma_merge_struct *vmg) +@@ -303,7 +309,8 @@ struct vm_area_struct + struct vm_area_struct *vma, + unsigned long start, unsigned long end, + unsigned long new_flags, +- struct vm_userfaultfd_ctx new_ctx); ++ struct vm_userfaultfd_ctx new_ctx, ++ bool give_up_on_oom); + + struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg); + +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index d64117be62cc44..96ad1b75d1c4d4 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -6150,11 +6150,12 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr, + * event or send an immediate device found event if the data + * should not be stored for later. + */ +- if (!ext_adv && !has_pending_adv_report(hdev)) { ++ if (!has_pending_adv_report(hdev)) { + /* If the report will trigger a SCAN_REQ store it for + * later merging. 
+ */ +- if (type == LE_ADV_IND || type == LE_ADV_SCAN_IND) { ++ if (!ext_adv && (type == LE_ADV_IND || ++ type == LE_ADV_SCAN_IND)) { + store_pending_adv_report(hdev, bdaddr, bdaddr_type, + rssi, flags, data, len); + return; +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index c27ea70f71e1e1..a55388fbf07c84 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -3956,7 +3956,8 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, + + /* Check if the ACL is secure enough (if not SDP) */ + if (psm != cpu_to_le16(L2CAP_PSM_SDP) && +- !hci_conn_check_link_mode(conn->hcon)) { ++ (!hci_conn_check_link_mode(conn->hcon) || ++ !l2cap_check_enc_key_size(conn->hcon))) { + conn->disc_reason = HCI_ERROR_AUTH_FAILURE; + result = L2CAP_CR_SEC_BLOCK; + goto response; +@@ -7503,8 +7504,24 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags) + if (skb->len > len) { + BT_ERR("Frame is too long (len %u, expected len %d)", + skb->len, len); ++ /* PTS test cases L2CAP/COS/CED/BI-14-C and BI-15-C ++ * (Multiple Signaling Command in one PDU, Data ++ * Truncated, BR/EDR) send a C-frame to the IUT with ++ * PDU Length set to 8 and Channel ID set to the ++ * correct signaling channel for the logical link. ++ * The Information payload contains one L2CAP_ECHO_REQ ++ * packet with Data Length set to 0 with 0 octets of ++ * echo data and one invalid command packet due to ++ * data truncated in PDU but present in HCI packet. ++ * ++ * Shorter the socket buffer to the PDU length to ++ * allow to process valid commands from the PDU before ++ * setting the socket unreliable. ++ */ ++ skb->len = len; ++ l2cap_recv_frame(conn, skb); + l2cap_conn_unreliable(conn, ECOMM); +- goto drop; ++ goto unlock; + } + + /* Append fragment into frame (with header) */ +diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c +index 89f51ea4cabece..f2efb58d152bc2 100644 +--- a/net/bridge/br_vlan.c ++++ b/net/bridge/br_vlan.c +@@ -715,8 +715,8 @@ static int br_vlan_add_existing(struct net_bridge *br, + u16 flags, bool *changed, + struct netlink_ext_ack *extack) + { +- bool would_change = __vlan_flags_would_change(vlan, flags); + bool becomes_brentry = false; ++ bool would_change = false; + int err; + + if (!br_vlan_is_brentry(vlan)) { +@@ -725,6 +725,8 @@ static int br_vlan_add_existing(struct net_bridge *br, + return -EINVAL; + + becomes_brentry = true; ++ } else { ++ would_change = __vlan_flags_would_change(vlan, flags); + } + + /* Master VLANs that aren't brentries weren't notified before, +diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c +index 1664547deffd07..ac3a252969cb61 100644 +--- a/net/dsa/dsa.c ++++ b/net/dsa/dsa.c +@@ -862,6 +862,16 @@ static void dsa_tree_teardown_lags(struct dsa_switch_tree *dst) + kfree(dst->lags); + } + ++static void dsa_tree_teardown_routing_table(struct dsa_switch_tree *dst) ++{ ++ struct dsa_link *dl, *next; ++ ++ list_for_each_entry_safe(dl, next, &dst->rtable, list) { ++ list_del(&dl->list); ++ kfree(dl); ++ } ++} ++ + static int dsa_tree_setup(struct dsa_switch_tree *dst) + { + bool complete; +@@ -879,7 +889,7 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst) + + err = dsa_tree_setup_cpu_ports(dst); + if (err) +- return err; ++ goto teardown_rtable; + + err = dsa_tree_setup_switches(dst); + if (err) +@@ -911,14 +921,14 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst) + dsa_tree_teardown_switches(dst); + teardown_cpu_ports: + dsa_tree_teardown_cpu_ports(dst); ++teardown_rtable: ++ 
dsa_tree_teardown_routing_table(dst); + + return err; + } + + static void dsa_tree_teardown(struct dsa_switch_tree *dst) + { +- struct dsa_link *dl, *next; +- + if (!dst->setup) + return; + +@@ -932,10 +942,7 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst) + + dsa_tree_teardown_cpu_ports(dst); + +- list_for_each_entry_safe(dl, next, &dst->rtable, list) { +- list_del(&dl->list); +- kfree(dl); +- } ++ dsa_tree_teardown_routing_table(dst); + + pr_info("DSA: tree %d torn down\n", dst->index); + +@@ -1478,12 +1485,44 @@ static int dsa_switch_parse(struct dsa_switch *ds, struct dsa_chip_data *cd) + + static void dsa_switch_release_ports(struct dsa_switch *ds) + { ++ struct dsa_mac_addr *a, *tmp; + struct dsa_port *dp, *next; ++ struct dsa_vlan *v, *n; + + dsa_switch_for_each_port_safe(dp, next, ds) { +- WARN_ON(!list_empty(&dp->fdbs)); +- WARN_ON(!list_empty(&dp->mdbs)); +- WARN_ON(!list_empty(&dp->vlans)); ++ /* These are either entries that upper layers lost track of ++ * (probably due to bugs), or installed through interfaces ++ * where one does not necessarily have to remove them, like ++ * ndo_dflt_fdb_add(). ++ */ ++ list_for_each_entry_safe(a, tmp, &dp->fdbs, list) { ++ dev_info(ds->dev, ++ "Cleaning up unicast address %pM vid %u from port %d\n", ++ a->addr, a->vid, dp->index); ++ list_del(&a->list); ++ kfree(a); ++ } ++ ++ list_for_each_entry_safe(a, tmp, &dp->mdbs, list) { ++ dev_info(ds->dev, ++ "Cleaning up multicast address %pM vid %u from port %d\n", ++ a->addr, a->vid, dp->index); ++ list_del(&a->list); ++ kfree(a); ++ } ++ ++ /* These are entries that upper layers have lost track of, ++ * probably due to bugs, but also due to dsa_port_do_vlan_del() ++ * having failed and the VLAN entry still lingering on. ++ */ ++ list_for_each_entry_safe(v, n, &dp->vlans, list) { ++ dev_info(ds->dev, ++ "Cleaning up vid %u from port %d\n", ++ v->vid, dp->index); ++ list_del(&v->list); ++ kfree(v); ++ } ++ + list_del(&dp->list); + kfree(dp); + } +diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c +index 3ee53e28ec2e9f..53e03fd8071b4a 100644 +--- a/net/dsa/tag_8021q.c ++++ b/net/dsa/tag_8021q.c +@@ -197,7 +197,7 @@ static int dsa_port_do_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid) + + err = ds->ops->tag_8021q_vlan_del(ds, port, vid); + if (err) { +- refcount_inc(&v->refcount); ++ refcount_set(&v->refcount, 1); + return err; + } + +diff --git a/net/ethtool/cmis_cdb.c b/net/ethtool/cmis_cdb.c +index 4d558114795203..8bf99295bfbe96 100644 +--- a/net/ethtool/cmis_cdb.c ++++ b/net/ethtool/cmis_cdb.c +@@ -346,7 +346,7 @@ ethtool_cmis_module_poll(struct net_device *dev, + struct netlink_ext_ack extack = {}; + int err; + +- ethtool_cmis_page_init(&page_data, 0, offset, sizeof(rpl)); ++ ethtool_cmis_page_init(&page_data, 0, offset, sizeof(*rpl)); + page_data.data = (u8 *)rpl; + + err = ops->get_module_eeprom_by_page(dev, &page_data, &extack); +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index bae8ece3e881e0..d9ab070e78e052 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -1771,6 +1771,7 @@ static int rt6_insert_exception(struct rt6_info *nrt, + if (!err) { + spin_lock_bh(&f6i->fib6_table->tb6_lock); + fib6_update_sernum(net, f6i); ++ fib6_add_gc_list(f6i); + spin_unlock_bh(&f6i->fib6_table->tb6_lock); + fib6_force_start_gc(net); + } +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c +index dbcd75c5d778e6..7e1e561ef76c1c 100644 +--- a/net/mac80211/iface.c ++++ b/net/mac80211/iface.c +@@ -667,6 +667,9 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data 
*sdata, bool going_do + if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) + ieee80211_txq_remove_vlan(local, sdata); + ++ if (sdata->vif.txq) ++ ieee80211_txq_purge(sdata->local, to_txq_info(sdata->vif.txq)); ++ + sdata->bss = NULL; + + if (local->open_count == 0) +diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c +index f6de136008f6f9..57850d4dac5db9 100644 +--- a/net/mctp/af_mctp.c ++++ b/net/mctp/af_mctp.c +@@ -630,6 +630,9 @@ static int mctp_sk_hash(struct sock *sk) + { + struct net *net = sock_net(sk); + ++ /* Bind lookup runs under RCU, remain live during that. */ ++ sock_set_flag(sk, SOCK_RCU_FREE); ++ + mutex_lock(&net->mctp.bind_lock); + sk_add_node_rcu(sk, &net->mctp.binds); + mutex_unlock(&net->mctp.bind_lock); +diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c +index 0df89240b73361..305daf57a4f9dd 100644 +--- a/net/openvswitch/flow_netlink.c ++++ b/net/openvswitch/flow_netlink.c +@@ -2876,7 +2876,8 @@ static int validate_set(const struct nlattr *a, + size_t key_len; + + /* There can be only one key in a action */ +- if (nla_total_size(nla_len(ovs_key)) != nla_len(a)) ++ if (!nla_ok(ovs_key, nla_len(a)) || ++ nla_total_size(nla_len(ovs_key)) != nla_len(a)) + return -EINVAL; + + key_len = nla_len(ovs_key); +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index ebc41a7b13dbec..78b0e6dba0a2b7 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -362,6 +362,9 @@ static void smc_destruct(struct sock *sk) + return; + } + ++static struct lock_class_key smc_key; ++static struct lock_class_key smc_slock_key; ++ + void smc_sk_init(struct net *net, struct sock *sk, int protocol) + { + struct smc_sock *smc = smc_sk(sk); +@@ -375,6 +378,8 @@ void smc_sk_init(struct net *net, struct sock *sk, int protocol) + INIT_WORK(&smc->connect_work, smc_connect_work); + INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work); + INIT_LIST_HEAD(&smc->accept_q); ++ sock_lock_init_class_and_name(sk, "slock-AF_SMC", &smc_slock_key, ++ "sk_lock-AF_SMC", &smc_key); + spin_lock_init(&smc->accept_q_lock); + spin_lock_init(&smc->conn.send_lock); + sk->sk_prot->hash(sk); +diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler +index e0842496d26ed7..c6cd729b65cbfb 100644 +--- a/scripts/Makefile.compiler ++++ b/scripts/Makefile.compiler +@@ -75,8 +75,8 @@ ld-option = $(call try-run, $(LD) $(KBUILD_LDFLAGS) $(1) -v,$(1),$(2),$(3)) + # Usage: MY_RUSTFLAGS += $(call __rustc-option,$(RUSTC),$(MY_RUSTFLAGS),-Cinstrument-coverage,-Zinstrument-coverage) + # TODO: remove RUSTC_BOOTSTRAP=1 when we raise the minimum GNU Make version to 4.4 + __rustc-option = $(call try-run,\ +- echo '#![allow(missing_docs)]#![feature(no_core)]#![no_core]' | RUSTC_BOOTSTRAP=1\ +- $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null,$(2)) $(3)\ ++ echo '$(pound)![allow(missing_docs)]$(pound)![feature(no_core)]$(pound)![no_core]' | RUSTC_BOOTSTRAP=1\ ++ $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null --target=%,$(2)) $(3)\ + --crate-type=rlib --out-dir=$(TMPOUT) --emit=obj=- - >/dev/null,$(3),$(4)) + + # rustc-option +diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py +index d1f5adbf33f91c..690f9830f06482 100755 +--- a/scripts/generate_rust_analyzer.py ++++ b/scripts/generate_rust_analyzer.py +@@ -90,6 +90,12 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs): + ["core", "compiler_builtins"], + ) + ++ append_crate( ++ "ffi", ++ srctree / "rust" / "ffi.rs", ++ ["core", "compiler_builtins"], ++ ) ++ + def append_crate_with_generated( + 
display_name, + deps, +@@ -109,9 +115,9 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs): + "exclude_dirs": [], + } + +- append_crate_with_generated("bindings", ["core"]) +- append_crate_with_generated("uapi", ["core"]) +- append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"]) ++ append_crate_with_generated("bindings", ["core", "ffi"]) ++ append_crate_with_generated("uapi", ["core", "ffi"]) ++ append_crate_with_generated("kernel", ["core", "macros", "build_error", "ffi", "bindings", "uapi"]) + + def is_root_crate(build_file, target): + try: +diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig +index dbf933c18a8219..fd9391e61b3d98 100644 +--- a/sound/pci/hda/Kconfig ++++ b/sound/pci/hda/Kconfig +@@ -96,9 +96,7 @@ config SND_HDA_CIRRUS_SCODEC + + config SND_HDA_CIRRUS_SCODEC_KUNIT_TEST + tristate "KUnit test for Cirrus side-codec library" if !KUNIT_ALL_TESTS +- select SND_HDA_CIRRUS_SCODEC +- select GPIOLIB +- depends on KUNIT ++ depends on SND_HDA_CIRRUS_SCODEC && GPIOLIB && KUNIT + default KUNIT_ALL_TESTS + help + This builds KUnit tests for the cirrus side-codec library. +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 0bf833c9602155..4171aa22747c33 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -6603,6 +6603,16 @@ static void alc285_fixup_speaker2_to_dac1(struct hda_codec *codec, + } + } + ++/* disable DAC3 (0x06) selection on NID 0x15 - share Speaker/Bass Speaker DAC 0x03 */ ++static void alc294_fixup_bass_speaker_15(struct hda_codec *codec, ++ const struct hda_fixup *fix, int action) ++{ ++ if (action == HDA_FIXUP_ACT_PRE_PROBE) { ++ static const hda_nid_t conn[] = { 0x02, 0x03 }; ++ snd_hda_override_conn_list(codec, 0x15, ARRAY_SIZE(conn), conn); ++ } ++} ++ + /* Hook to update amp GPIO4 for automute */ + static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec, + struct hda_jack_callback *jack) +@@ -7587,6 +7597,16 @@ static void alc287_fixup_lenovo_thinkpad_with_alc1318(struct hda_codec *codec, + spec->gen.pcm_playback_hook = alc287_alc1318_playback_pcm_hook; + } + ++/* ++ * Clear COEF 0x0d (PCBEEP passthrough) bit 0x40 where BIOS sets it wrongly ++ * at PM resume ++ */ ++static void alc283_fixup_dell_hp_resume(struct hda_codec *codec, ++ const struct hda_fixup *fix, int action) ++{ ++ if (action == HDA_FIXUP_ACT_INIT) ++ alc_write_coef_idx(codec, 0xd, 0x2800); ++} + + enum { + ALC269_FIXUP_GPIO2, +@@ -7888,6 +7908,9 @@ enum { + ALC245_FIXUP_CLEVO_NOISY_MIC, + ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE, + ALC233_FIXUP_MEDION_MTL_SPK, ++ ALC294_FIXUP_BASS_SPEAKER_15, ++ ALC283_FIXUP_DELL_HP_RESUME, ++ ALC294_FIXUP_ASUS_CS35L41_SPI_2, + }; + + /* A special fixup for Lenovo C940 and Yoga Duet 7; +@@ -10222,6 +10245,20 @@ static const struct hda_fixup alc269_fixups[] = { + { } + }, + }, ++ [ALC294_FIXUP_BASS_SPEAKER_15] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc294_fixup_bass_speaker_15, ++ }, ++ [ALC283_FIXUP_DELL_HP_RESUME] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc283_fixup_dell_hp_resume, ++ }, ++ [ALC294_FIXUP_ASUS_CS35L41_SPI_2] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = cs35l41_fixup_spi_two, ++ .chained = true, ++ .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC, ++ }, + }; + + static const struct hda_quirk alc269_fixup_tbl[] = { +@@ -10282,6 +10319,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1028, 0x05f4, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", 
ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), ++ SND_PCI_QUIRK(0x1028, 0x0604, "Dell Venue 11 Pro 7130", ALC283_FIXUP_DELL_HP_RESUME), + SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK), + SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK), + SND_PCI_QUIRK(0x1028, 0x062c, "Dell Latitude E5550", ALC292_FIXUP_DELL_E7X), +@@ -10684,7 +10722,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM), + SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), +- SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC), + SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC), + SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE), +@@ -10750,6 +10788,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401), + SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE), + SND_PCI_QUIRK(0x1043, 0x1da2, "ASUS UP6502ZA/ZD", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x1df3, "ASUS UM5606WA", ALC294_FIXUP_BASS_SPEAKER_15), + SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), + SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), +@@ -10772,14 +10811,14 @@ static const struct hda_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x1fb3, "ASUS ROG Flow Z13 GZ302EA", ALC287_FIXUP_CS35L41_I2C_2), + SND_PCI_QUIRK(0x1043, 0x3011, "ASUS B5605CVA", ALC245_FIXUP_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2), +- SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC245_FIXUP_CS35L41_SPI_2), +- SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2), +- SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC245_FIXUP_CS35L41_SPI_2), +- SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2), +- SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27), +- SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2), +- SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), + SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", 
ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), + SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), +diff --git a/sound/soc/codecs/cs42l43-jack.c b/sound/soc/codecs/cs42l43-jack.c +index d9ab003e166bfa..73d764fc853929 100644 +--- a/sound/soc/codecs/cs42l43-jack.c ++++ b/sound/soc/codecs/cs42l43-jack.c +@@ -702,6 +702,9 @@ static void cs42l43_clear_jack(struct cs42l43_codec *priv) + CS42L43_PGA_WIDESWING_MODE_EN_MASK, 0); + regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CTRL, + CS42L43_JACK_STEREO_CONFIG_MASK, 0); ++ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CLAMP_CTRL, ++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_MASK, ++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_MASK); + regmap_update_bits(cs42l43->regmap, CS42L43_HS2, + CS42L43_HSDET_MODE_MASK | CS42L43_HSDET_MANUAL_MODE_MASK, + 0x2 << CS42L43_HSDET_MODE_SHIFT); +diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c +index c989d82d1d3c17..81bab8299eae4b 100644 +--- a/sound/soc/codecs/lpass-wsa-macro.c ++++ b/sound/soc/codecs/lpass-wsa-macro.c +@@ -63,6 +63,10 @@ + #define CDC_WSA_TX_SPKR_PROT_CLK_DISABLE 0 + #define CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK GENMASK(3, 0) + #define CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K 0 ++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_16K 1 ++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_24K 2 ++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_32K 3 ++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_48K 4 + #define CDC_WSA_TX0_SPKR_PROT_PATH_CFG0 (0x0248) + #define CDC_WSA_TX1_SPKR_PROT_PATH_CTL (0x0264) + #define CDC_WSA_TX1_SPKR_PROT_PATH_CFG0 (0x0268) +@@ -407,6 +411,7 @@ struct wsa_macro { + int ear_spkr_gain; + int spkr_gain_offset; + int spkr_mode; ++ u32 pcm_rate_vi; + int is_softclip_on[WSA_MACRO_SOFTCLIP_MAX]; + int softclip_clk_users[WSA_MACRO_SOFTCLIP_MAX]; + struct regmap *regmap; +@@ -1280,6 +1285,7 @@ static int wsa_macro_hw_params(struct snd_pcm_substream *substream, + struct snd_soc_dai *dai) + { + struct snd_soc_component *component = dai->component; ++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component); + int ret; + + switch (substream->stream) { +@@ -1291,6 +1297,11 @@ static int wsa_macro_hw_params(struct snd_pcm_substream *substream, + __func__, params_rate(params)); + return ret; + } ++ break; ++ case SNDRV_PCM_STREAM_CAPTURE: ++ if (dai->id == WSA_MACRO_AIF_VI) ++ wsa->pcm_rate_vi = params_rate(params); ++ + break; + default: + break; +@@ -1448,35 +1459,11 @@ static void wsa_macro_mclk_enable(struct wsa_macro *wsa, bool mclk_enable) + } + } + +-static int wsa_macro_mclk_event(struct snd_soc_dapm_widget *w, +- struct snd_kcontrol *kcontrol, int event) ++static void wsa_macro_enable_disable_vi_sense(struct snd_soc_component *component, bool enable, ++ u32 tx_reg0, u32 tx_reg1, u32 val) + { +- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); +- struct wsa_macro *wsa = snd_soc_component_get_drvdata(component); +- +- wsa_macro_mclk_enable(wsa, event == SND_SOC_DAPM_PRE_PMU); +- return 0; +-} +- +-static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w, +- struct snd_kcontrol *kcontrol, +- int event) +-{ +- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); +- struct wsa_macro *wsa = snd_soc_component_get_drvdata(component); +- u32 tx_reg0, tx_reg1; +- +- if (test_bit(WSA_MACRO_TX0, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) { +- tx_reg0 = CDC_WSA_TX0_SPKR_PROT_PATH_CTL; +- tx_reg1 = CDC_WSA_TX1_SPKR_PROT_PATH_CTL; +- } else if (test_bit(WSA_MACRO_TX1, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) { +- tx_reg0 = 
CDC_WSA_TX2_SPKR_PROT_PATH_CTL; +- tx_reg1 = CDC_WSA_TX3_SPKR_PROT_PATH_CTL; +- } +- +- switch (event) { +- case SND_SOC_DAPM_POST_PMU: +- /* Enable V&I sensing */ ++ if (enable) { ++ /* Enable V&I sensing */ + snd_soc_component_update_bits(component, tx_reg0, + CDC_WSA_TX_SPKR_PROT_RESET_MASK, + CDC_WSA_TX_SPKR_PROT_RESET); +@@ -1485,10 +1472,10 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w, + CDC_WSA_TX_SPKR_PROT_RESET); + snd_soc_component_update_bits(component, tx_reg0, + CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK, +- CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K); ++ val); + snd_soc_component_update_bits(component, tx_reg1, + CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK, +- CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K); ++ val); + snd_soc_component_update_bits(component, tx_reg0, + CDC_WSA_TX_SPKR_PROT_CLK_EN_MASK, + CDC_WSA_TX_SPKR_PROT_CLK_ENABLE); +@@ -1501,9 +1488,7 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w, + snd_soc_component_update_bits(component, tx_reg1, + CDC_WSA_TX_SPKR_PROT_RESET_MASK, + CDC_WSA_TX_SPKR_PROT_NO_RESET); +- break; +- case SND_SOC_DAPM_POST_PMD: +- /* Disable V&I sensing */ ++ } else { + snd_soc_component_update_bits(component, tx_reg0, + CDC_WSA_TX_SPKR_PROT_RESET_MASK, + CDC_WSA_TX_SPKR_PROT_RESET); +@@ -1516,6 +1501,72 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w, + snd_soc_component_update_bits(component, tx_reg1, + CDC_WSA_TX_SPKR_PROT_CLK_EN_MASK, + CDC_WSA_TX_SPKR_PROT_CLK_DISABLE); ++ } ++} ++ ++static void wsa_macro_enable_disable_vi_feedback(struct snd_soc_component *component, ++ bool enable, u32 rate) ++{ ++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component); ++ ++ if (test_bit(WSA_MACRO_TX0, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) ++ wsa_macro_enable_disable_vi_sense(component, enable, ++ CDC_WSA_TX0_SPKR_PROT_PATH_CTL, ++ CDC_WSA_TX1_SPKR_PROT_PATH_CTL, rate); ++ ++ if (test_bit(WSA_MACRO_TX1, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) ++ wsa_macro_enable_disable_vi_sense(component, enable, ++ CDC_WSA_TX2_SPKR_PROT_PATH_CTL, ++ CDC_WSA_TX3_SPKR_PROT_PATH_CTL, rate); ++} ++ ++static int wsa_macro_mclk_event(struct snd_soc_dapm_widget *w, ++ struct snd_kcontrol *kcontrol, int event) ++{ ++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); ++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component); ++ ++ wsa_macro_mclk_enable(wsa, event == SND_SOC_DAPM_PRE_PMU); ++ return 0; ++} ++ ++static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w, ++ struct snd_kcontrol *kcontrol, ++ int event) ++{ ++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); ++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component); ++ u32 rate_val; ++ ++ switch (wsa->pcm_rate_vi) { ++ case 8000: ++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K; ++ break; ++ case 16000: ++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_16K; ++ break; ++ case 24000: ++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_24K; ++ break; ++ case 32000: ++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_32K; ++ break; ++ case 48000: ++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_48K; ++ break; ++ default: ++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K; ++ break; ++ } ++ ++ switch (event) { ++ case SND_SOC_DAPM_POST_PMU: ++ /* Enable V&I sensing */ ++ wsa_macro_enable_disable_vi_feedback(component, true, rate_val); ++ break; ++ case SND_SOC_DAPM_POST_PMD: ++ /* Disable V&I sensing */ ++ wsa_macro_enable_disable_vi_feedback(component, false, rate_val); + break; + } + +diff --git 
a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c +index 57b789d7fbedd4..5b4f20dbf7bba4 100644 +--- a/sound/soc/dwc/dwc-i2s.c ++++ b/sound/soc/dwc/dwc-i2s.c +@@ -199,12 +199,10 @@ static void i2s_start(struct dw_i2s_dev *dev, + else + i2s_write_reg(dev->i2s_base, IRER, 1); + +- /* I2S needs to enable IRQ to make a handshake with DMAC on the JH7110 SoC */ +- if (dev->use_pio || dev->is_jh7110) +- i2s_enable_irqs(dev, substream->stream, config->chan_nr); +- else ++ if (!(dev->use_pio || dev->is_jh7110)) + i2s_enable_dma(dev, substream->stream); + ++ i2s_enable_irqs(dev, substream->stream, config->chan_nr); + i2s_write_reg(dev->i2s_base, CER, 1); + } + +@@ -218,11 +216,12 @@ static void i2s_stop(struct dw_i2s_dev *dev, + else + i2s_write_reg(dev->i2s_base, IRER, 0); + +- if (dev->use_pio || dev->is_jh7110) +- i2s_disable_irqs(dev, substream->stream, 8); +- else ++ if (!(dev->use_pio || dev->is_jh7110)) + i2s_disable_dma(dev, substream->stream); + ++ i2s_disable_irqs(dev, substream->stream, 8); ++ ++ + if (!dev->active) { + i2s_write_reg(dev->i2s_base, CER, 0); + i2s_write_reg(dev->i2s_base, IER, 0); +diff --git a/sound/soc/fsl/fsl_qmc_audio.c b/sound/soc/fsl/fsl_qmc_audio.c +index 8668abd3520800..d41cb6f3efcacc 100644 +--- a/sound/soc/fsl/fsl_qmc_audio.c ++++ b/sound/soc/fsl/fsl_qmc_audio.c +@@ -250,6 +250,9 @@ static int qmc_audio_pcm_trigger(struct snd_soc_component *component, + switch (cmd) { + case SNDRV_PCM_TRIGGER_START: + bitmap_zero(prtd->chans_pending, 64); ++ prtd->buffer_ended = 0; ++ prtd->ch_dma_addr_current = prtd->ch_dma_addr_start; ++ + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + for (i = 0; i < prtd->channels; i++) + prtd->qmc_dai->chans[i].prtd_tx = prtd; +diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c +index 945f9c0a6a5455..15defce0f3eb84 100644 +--- a/sound/soc/intel/avs/pcm.c ++++ b/sound/soc/intel/avs/pcm.c +@@ -925,7 +925,8 @@ static int avs_component_probe(struct snd_soc_component *component) + else + mach->tplg_filename = devm_kasprintf(adev->dev, GFP_KERNEL, + "hda-generic-tplg.bin"); +- ++ if (!mach->tplg_filename) ++ return -ENOMEM; + filename = kasprintf(GFP_KERNEL, "%s/%s", component->driver->topology_name_prefix, + mach->tplg_filename); + if (!filename) +diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c +index 380fc3be8c932e..5911a055865160 100644 +--- a/sound/soc/intel/boards/sof_sdw.c ++++ b/sound/soc/intel/boards/sof_sdw.c +@@ -688,6 +688,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = { + + static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = { + SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC), ++ SND_PCI_QUIRK(0x1043, 0x1f43, "ASUS Zenbook S16", SOC_SDW_CODEC_MIC), + {} + }; + +diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h +index 27a2bf9a661393..de3ec6f594c11c 100644 +--- a/sound/soc/qcom/lpass.h ++++ b/sound/soc/qcom/lpass.h +@@ -13,10 +13,11 @@ + #include + #include + #include ++#include + #include "lpass-hdmi.h" + + #define LPASS_AHBIX_CLOCK_FREQUENCY 131072000 +-#define LPASS_MAX_PORTS (LPASS_CDC_DMA_VA_TX8 + 1) ++#define LPASS_MAX_PORTS (DISPLAY_PORT_RX_7 + 1) + #define LPASS_MAX_MI2S_PORTS (8) + #define LPASS_MAX_DMA_CHANNELS (8) + #define LPASS_MAX_HDMI_DMA_CHANNELS (4) +diff --git a/tools/objtool/check.c b/tools/objtool/check.c +index 127862fa05c619..ce3ea0c2de0425 100644 +--- a/tools/objtool/check.c ++++ b/tools/objtool/check.c +@@ -217,6 +217,7 @@ static bool is_rust_noreturn(const struct symbol *func) + 
str_ends_with(func->name, "_4core9panicking14panic_nounwind") || + str_ends_with(func->name, "_4core9panicking18panic_bounds_check") || + str_ends_with(func->name, "_4core9panicking19assert_failed_inner") || ++ str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") || + str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") || + strstr(func->name, "_4core9panicking13assert_failed") || + strstr(func->name, "_4core9panicking11panic_const24panic_const_") || +diff --git a/tools/testing/kunit/qemu_configs/sh.py b/tools/testing/kunit/qemu_configs/sh.py +index 78a474a5b95f3a..f00cb89fdef6aa 100644 +--- a/tools/testing/kunit/qemu_configs/sh.py ++++ b/tools/testing/kunit/qemu_configs/sh.py +@@ -7,7 +7,9 @@ CONFIG_CPU_SUBTYPE_SH7751R=y + CONFIG_MEMORY_START=0x0c000000 + CONFIG_SH_RTS7751R2D=y + CONFIG_RTS7751R2D_PLUS=y +-CONFIG_SERIAL_SH_SCI=y''', ++CONFIG_SERIAL_SH_SCI=y ++CONFIG_CMDLINE_EXTEND=y ++''', + qemu_arch='sh4', + kernel_path='arch/sh/boot/zImage', + kernel_command_line='console=ttySC1', +diff --git a/tools/testing/selftests/bpf/prog_tests/changes_pkt_data.c b/tools/testing/selftests/bpf/prog_tests/changes_pkt_data.c +new file mode 100644 +index 00000000000000..7526de3790814c +--- /dev/null ++++ b/tools/testing/selftests/bpf/prog_tests/changes_pkt_data.c +@@ -0,0 +1,107 @@ ++// SPDX-License-Identifier: GPL-2.0 ++#include "bpf/libbpf.h" ++#include "changes_pkt_data_freplace.skel.h" ++#include "changes_pkt_data.skel.h" ++#include ++ ++static void print_verifier_log(const char *log) ++{ ++ if (env.verbosity >= VERBOSE_VERY) ++ fprintf(stdout, "VERIFIER LOG:\n=============\n%s=============\n", log); ++} ++ ++static void test_aux(const char *main_prog_name, ++ const char *to_be_replaced, ++ const char *replacement, ++ bool expect_load) ++{ ++ struct changes_pkt_data_freplace *freplace = NULL; ++ struct bpf_program *freplace_prog = NULL; ++ struct bpf_program *main_prog = NULL; ++ LIBBPF_OPTS(bpf_object_open_opts, opts); ++ struct changes_pkt_data *main = NULL; ++ char log[16*1024]; ++ int err; ++ ++ opts.kernel_log_buf = log; ++ opts.kernel_log_size = sizeof(log); ++ if (env.verbosity >= VERBOSE_SUPER) ++ opts.kernel_log_level = 1 | 2 | 4; ++ main = changes_pkt_data__open_opts(&opts); ++ if (!ASSERT_OK_PTR(main, "changes_pkt_data__open")) ++ goto out; ++ main_prog = bpf_object__find_program_by_name(main->obj, main_prog_name); ++ if (!ASSERT_OK_PTR(main_prog, "main_prog")) ++ goto out; ++ bpf_program__set_autoload(main_prog, true); ++ err = changes_pkt_data__load(main); ++ print_verifier_log(log); ++ if (!ASSERT_OK(err, "changes_pkt_data__load")) ++ goto out; ++ freplace = changes_pkt_data_freplace__open_opts(&opts); ++ if (!ASSERT_OK_PTR(freplace, "changes_pkt_data_freplace__open")) ++ goto out; ++ freplace_prog = bpf_object__find_program_by_name(freplace->obj, replacement); ++ if (!ASSERT_OK_PTR(freplace_prog, "freplace_prog")) ++ goto out; ++ bpf_program__set_autoload(freplace_prog, true); ++ bpf_program__set_autoattach(freplace_prog, true); ++ bpf_program__set_attach_target(freplace_prog, ++ bpf_program__fd(main_prog), ++ to_be_replaced); ++ err = changes_pkt_data_freplace__load(freplace); ++ print_verifier_log(log); ++ if (expect_load) { ++ ASSERT_OK(err, "changes_pkt_data_freplace__load"); ++ } else { ++ ASSERT_ERR(err, "changes_pkt_data_freplace__load"); ++ ASSERT_HAS_SUBSTR(log, "Extension program changes packet data", "error log"); ++ } ++ ++out: ++ changes_pkt_data_freplace__destroy(freplace); ++ changes_pkt_data__destroy(main); ++} 
++ ++/* There are two global subprograms in both changes_pkt_data.skel.h: ++ * - one changes packet data; ++ * - another does not. ++ * It is ok to freplace subprograms that change packet data with those ++ * that either do or do not. It is only ok to freplace subprograms ++ * that do not change packet data with those that do not as well. ++ * The below tests check outcomes for each combination of such freplace. ++ * Also test a case when main subprogram itself is replaced and is a single ++ * subprogram in a program. ++ */ ++void test_changes_pkt_data_freplace(void) ++{ ++ struct { ++ const char *main; ++ const char *to_be_replaced; ++ bool changes; ++ } mains[] = { ++ { "main_with_subprogs", "changes_pkt_data", true }, ++ { "main_with_subprogs", "does_not_change_pkt_data", false }, ++ { "main_changes", "main_changes", true }, ++ { "main_does_not_change", "main_does_not_change", false }, ++ }; ++ struct { ++ const char *func; ++ bool changes; ++ } replacements[] = { ++ { "changes_pkt_data", true }, ++ { "does_not_change_pkt_data", false } ++ }; ++ char buf[64]; ++ ++ for (int i = 0; i < ARRAY_SIZE(mains); ++i) { ++ for (int j = 0; j < ARRAY_SIZE(replacements); ++j) { ++ snprintf(buf, sizeof(buf), "%s_with_%s", ++ mains[i].to_be_replaced, replacements[j].func); ++ if (!test__start_subtest(buf)) ++ continue; ++ test_aux(mains[i].main, mains[i].to_be_replaced, replacements[j].func, ++ mains[i].changes || !replacements[j].changes); ++ } ++ } ++} +diff --git a/tools/testing/selftests/bpf/progs/changes_pkt_data.c b/tools/testing/selftests/bpf/progs/changes_pkt_data.c +new file mode 100644 +index 00000000000000..43cada48b28ad4 +--- /dev/null ++++ b/tools/testing/selftests/bpf/progs/changes_pkt_data.c +@@ -0,0 +1,39 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++#include ++#include ++ ++__noinline ++long changes_pkt_data(struct __sk_buff *sk) ++{ ++ return bpf_skb_pull_data(sk, 0); ++} ++ ++__noinline __weak ++long does_not_change_pkt_data(struct __sk_buff *sk) ++{ ++ return 0; ++} ++ ++SEC("?tc") ++int main_with_subprogs(struct __sk_buff *sk) ++{ ++ changes_pkt_data(sk); ++ does_not_change_pkt_data(sk); ++ return 0; ++} ++ ++SEC("?tc") ++int main_changes(struct __sk_buff *sk) ++{ ++ bpf_skb_pull_data(sk, 0); ++ return 0; ++} ++ ++SEC("?tc") ++int main_does_not_change(struct __sk_buff *sk) ++{ ++ return 0; ++} ++ ++char _license[] SEC("license") = "GPL"; +diff --git a/tools/testing/selftests/bpf/progs/changes_pkt_data_freplace.c b/tools/testing/selftests/bpf/progs/changes_pkt_data_freplace.c +new file mode 100644 +index 00000000000000..f9a622705f1b3b +--- /dev/null ++++ b/tools/testing/selftests/bpf/progs/changes_pkt_data_freplace.c +@@ -0,0 +1,18 @@ ++// SPDX-License-Identifier: GPL-2.0 ++ ++#include ++#include ++ ++SEC("?freplace") ++long changes_pkt_data(struct __sk_buff *sk) ++{ ++ return bpf_skb_pull_data(sk, 0); ++} ++ ++SEC("?freplace") ++long does_not_change_pkt_data(struct __sk_buff *sk) ++{ ++ return 0; ++} ++ ++char _license[] SEC("license") = "GPL"; +diff --git a/tools/testing/selftests/bpf/progs/raw_tp_null.c b/tools/testing/selftests/bpf/progs/raw_tp_null.c +index 457f34c151e32f..5927054b6dd96f 100644 +--- a/tools/testing/selftests/bpf/progs/raw_tp_null.c ++++ b/tools/testing/selftests/bpf/progs/raw_tp_null.c +@@ -3,6 +3,7 @@ + + #include + #include ++#include "bpf_misc.h" + + char _license[] SEC("license") = "GPL"; + +@@ -17,16 +18,14 @@ int BPF_PROG(test_raw_tp_null, struct sk_buff *skb) + if (task->pid != tid) + return 0; + +- i = i + skb->mark + 1; +- /* The compiler may move the 
NULL check before this deref, which causes +- * the load to fail as deref of scalar. Prevent that by using a barrier. ++ /* If dead code elimination kicks in, the increment +=2 will be ++ * removed. For raw_tp programs attaching to tracepoints in kernel ++ * modules, we mark input arguments as PTR_MAYBE_NULL, so branch ++ * prediction should never kick in. + */ +- barrier(); +- /* If dead code elimination kicks in, the increment below will +- * be removed. For raw_tp programs, we mark input arguments as +- * PTR_MAYBE_NULL, so branch prediction should never kick in. +- */ +- if (!skb) +- i += 2; ++ asm volatile ("%[i] += 1; if %[ctx] != 0 goto +1; %[i] += 2;" ++ : [i]"+r"(i) ++ : [ctx]"r"(skb) ++ : "memory"); + return 0; + } +diff --git a/tools/testing/selftests/bpf/progs/verifier_sock.c b/tools/testing/selftests/bpf/progs/verifier_sock.c +index ee76b51005abe7..3c8f6646e33dae 100644 +--- a/tools/testing/selftests/bpf/progs/verifier_sock.c ++++ b/tools/testing/selftests/bpf/progs/verifier_sock.c +@@ -50,6 +50,13 @@ struct { + __uint(map_flags, BPF_F_NO_PREALLOC); + } sk_storage_map SEC(".maps"); + ++struct { ++ __uint(type, BPF_MAP_TYPE_PROG_ARRAY); ++ __uint(max_entries, 1); ++ __uint(key_size, sizeof(__u32)); ++ __uint(value_size, sizeof(__u32)); ++} jmp_table SEC(".maps"); ++ + SEC("cgroup/skb") + __description("skb->sk: no NULL check") + __failure __msg("invalid mem access 'sock_common_or_null'") +@@ -977,4 +984,53 @@ l1_%=: r0 = *(u8*)(r7 + 0); \ + : __clobber_all); + } + ++__noinline ++long skb_pull_data2(struct __sk_buff *sk, __u32 len) ++{ ++ return bpf_skb_pull_data(sk, len); ++} ++ ++__noinline ++long skb_pull_data1(struct __sk_buff *sk, __u32 len) ++{ ++ return skb_pull_data2(sk, len); ++} ++ ++/* global function calls bpf_skb_pull_data(), which invalidates packet ++ * pointers established before global function call. ++ */ ++SEC("tc") ++__failure __msg("invalid mem access") ++int invalidate_pkt_pointers_from_global_func(struct __sk_buff *sk) ++{ ++ int *p = (void *)(long)sk->data; ++ ++ if ((void *)(p + 1) > (void *)(long)sk->data_end) ++ return TCX_DROP; ++ skb_pull_data1(sk, 0); ++ *p = 42; /* this is unsafe */ ++ return TCX_PASS; ++} ++ ++__noinline ++int tail_call(struct __sk_buff *sk) ++{ ++ bpf_tail_call_static(sk, &jmp_table, 0); ++ return 0; ++} ++ ++/* Tail calls invalidate packet pointers. 
*/ ++SEC("tc") ++__failure __msg("invalid mem access") ++int invalidate_pkt_pointers_by_tail_call(struct __sk_buff *sk) ++{ ++ int *p = (void *)(long)sk->data; ++ ++ if ((void *)(p + 1) > (void *)(long)sk->data_end) ++ return TCX_DROP; ++ tail_call(sk); ++ *p = 42; /* this is unsafe */ ++ return TCX_PASS; ++} ++ + char _license[] SEC("license") = "GPL"; +diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh +index 67df7b47087f03..e1fe16bcbbe880 100755 +--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh ++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh +@@ -29,7 +29,7 @@ fi + if [[ $cgroup2 ]]; then + cgroup_path=$(mount -t cgroup2 | head -1 | awk '{print $3}') + if [[ -z "$cgroup_path" ]]; then +- cgroup_path=/dev/cgroup/memory ++ cgroup_path=$(mktemp -d) + mount -t cgroup2 none $cgroup_path + do_umount=1 + fi +@@ -37,7 +37,7 @@ if [[ $cgroup2 ]]; then + else + cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}') + if [[ -z "$cgroup_path" ]]; then +- cgroup_path=/dev/cgroup/memory ++ cgroup_path=$(mktemp -d) + mount -t cgroup memory,hugetlb $cgroup_path + do_umount=1 + fi +diff --git a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh +index 11f9bbe7dc222b..0b0d4ba1af2771 100755 +--- a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh ++++ b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh +@@ -23,7 +23,7 @@ fi + if [[ $cgroup2 ]]; then + CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk '{print $3}') + if [[ -z "$CGROUP_ROOT" ]]; then +- CGROUP_ROOT=/dev/cgroup/memory ++ CGROUP_ROOT=$(mktemp -d) + mount -t cgroup2 none $CGROUP_ROOT + do_umount=1 + fi +diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c +index 17263696b5d880..61b3f571f7a708 100644 +--- a/tools/testing/shared/linux.c ++++ b/tools/testing/shared/linux.c +@@ -147,7 +147,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp) + void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list) + { + if (kmalloc_verbose) +- pr_debug("Bulk free %p[0-%lu]\n", list, size - 1); ++ pr_debug("Bulk free %p[0-%zu]\n", list, size - 1); + + pthread_mutex_lock(&cachep->lock); + for (int i = 0; i < size; i++) +@@ -165,7 +165,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size, + size_t i; + + if (kmalloc_verbose) +- pr_debug("Bulk alloc %lu\n", size); ++ pr_debug("Bulk alloc %zu\n", size); + + pthread_mutex_lock(&cachep->lock); + if (cachep->nr_objs >= size) {