From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.15 commit in: /
Date: Thu, 19 Jun 2025 14:21:50 +0000 (UTC)
Message-ID: <1750342899.17ff67232f48bbe882b1a95d4b9447752f5ac7d0.mpagano@gentoo>

commit:     17ff67232f48bbe882b1a95d4b9447752f5ac7d0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 19 14:21:39 2025 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 19 14:21:39 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=17ff6723

Linux patch 6.15.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1002_linux-6.15.3.patch | 37139 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 37143 insertions(+)

diff --git a/0000_README b/0000_README
index 1f59639d..e54206b2 100644
--- a/0000_README
+++ b/0000_README
@@ -50,6 +50,10 @@ Patch:  1001_linux-6.15.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.15.2
 
+Patch:  1002_linux-6.15.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.15.3
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.

diff --git a/1002_linux-6.15.3.patch b/1002_linux-6.15.3.patch
new file mode 100644
index 00000000..2b161b0f
--- /dev/null
+++ b/1002_linux-6.15.3.patch
@@ -0,0 +1,37139 @@
+diff --git a/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml b/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml
+index 6327bb2f6ee080..698266c09e2535 100644
+--- a/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mediatek,mt6357-regulator.yaml
+@@ -33,7 +33,7 @@ patternProperties:
+ 
+   "^ldo-v(camio18|aud28|aux18|io18|io28|rf12|rf18|cn18|cn28|fe28)$":
+     type: object
+-    $ref: fixed-regulator.yaml#
++    $ref: regulator.yaml#
+     unevaluatedProperties: false
+     description:
+       Properties for single fixed LDO regulator.
+@@ -112,7 +112,6 @@ examples:
+           regulator-enable-ramp-delay = <220>;
+         };
+         mt6357_vfe28_reg: ldo-vfe28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vfe28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+@@ -125,14 +124,12 @@ examples:
+           regulator-enable-ramp-delay = <110>;
+         };
+         mt6357_vrf18_reg: ldo-vrf18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vrf18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+           regulator-enable-ramp-delay = <110>;
+         };
+         mt6357_vrf12_reg: ldo-vrf12 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vrf12";
+           regulator-min-microvolt = <1200000>;
+           regulator-max-microvolt = <1200000>;
+@@ -157,14 +154,12 @@ examples:
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vcn28_reg: ldo-vcn28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vcn28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vcn18_reg: ldo-vcn18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vcn18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+@@ -183,7 +178,6 @@ examples:
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vcamio_reg: ldo-vcamio18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vcamio";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+@@ -212,28 +206,24 @@ examples:
+           regulator-always-on;
+         };
+         mt6357_vaux18_reg: ldo-vaux18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vaux18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vaud28_reg: ldo-vaud28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vaud28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vio28_reg: ldo-vio28 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vio28";
+           regulator-min-microvolt = <2800000>;
+           regulator-max-microvolt = <2800000>;
+           regulator-enable-ramp-delay = <264>;
+         };
+         mt6357_vio18_reg: ldo-vio18 {
+-          compatible = "regulator-fixed";
+           regulator-name = "vio18";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <1800000>;
+diff --git a/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml b/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml
+index de0b4ae740ff23..a975bce599750e 100644
+--- a/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml
++++ b/Documentation/devicetree/bindings/soc/fsl/fsl,qman-fqd.yaml
+@@ -50,7 +50,7 @@ required:
+   - compatible
+ 
+ allOf:
+-  - $ref: reserved-memory.yaml
++  - $ref: /schemas/reserved-memory/reserved-memory.yaml
+ 
+ unevaluatedProperties: false
+ 
+@@ -61,7 +61,7 @@ examples:
+         #size-cells = <2>;
+ 
+         qman-fqd {
+-            compatible = "shared-dma-pool";
++            compatible = "fsl,qman-fqd";
+             size = <0 0x400000>;
+             alignment = <0 0x400000>;
+             no-map;
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index 86f6a19b28ae21..190ab40cf23afc 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -864,6 +864,8 @@ patternProperties:
+     description: Linux-specific binding
+   "^linx,.*":
+     description: Linx Technologies
++  "^liontron,.*":
++    description: Shenzhen Liontron Technology Co., Ltd
+   "^liteon,.*":
+     description: LITE-ON Technology Corp.
+   "^litex,.*":
+diff --git a/Documentation/gpu/xe/index.rst b/Documentation/gpu/xe/index.rst
+index 92cfb25e64d327..b53a0cc7f66a36 100644
+--- a/Documentation/gpu/xe/index.rst
++++ b/Documentation/gpu/xe/index.rst
+@@ -16,6 +16,7 @@ DG2, etc is provided to prototype the driver.
+    xe_migrate
+    xe_cs
+    xe_pm
++   xe_gt_freq
+    xe_pcode
+    xe_gt_mcr
+    xe_wa
+diff --git a/Documentation/gpu/xe/xe_gt_freq.rst b/Documentation/gpu/xe/xe_gt_freq.rst
+new file mode 100644
+index 00000000000000..c0811200e32755
+--- /dev/null
++++ b/Documentation/gpu/xe/xe_gt_freq.rst
+@@ -0,0 +1,14 @@
++.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
++
++==========================
++Xe GT Frequency Management
++==========================
++
++.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_freq.c
++   :doc: Xe GT Frequency Management
++
++Internal API
++============
++
++.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_freq.c
++   :internal:
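
The two kernel-doc directives in the new file pull their text out of the driver source at documentation build time. A minimal sketch of the C comment the ":doc:" directive expects to find in drivers/gpu/drm/xe/xe_gt_freq.c (the wording below is illustrative, not the driver's actual comment):

    /**
     * DOC: Xe GT Frequency Management
     *
     * Free-form prose in a DOC: block like this is what the
     * ":doc: Xe GT Frequency Management" directive extracts into the
     * rst page; ":internal:" additionally pulls in the kernel-doc
     * comments of the file's internal (non-exported) symbols.
     */
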
+diff --git a/Documentation/misc-devices/lis3lv02d.rst b/Documentation/misc-devices/lis3lv02d.rst
+index 959bd2b822cfa9..6b3b7405ebdf6e 100644
+--- a/Documentation/misc-devices/lis3lv02d.rst
++++ b/Documentation/misc-devices/lis3lv02d.rst
+@@ -22,10 +22,10 @@ sporting the feature officially called "HP Mobile Data Protection System 3D" or
+ models (full list can be found in drivers/platform/x86/hp_accel.c) will have
+ their axis automatically oriented on standard way (eg: you can directly play
+ neverball). The accelerometer data is readable via
+-/sys/devices/platform/lis3lv02d. Reported values are scaled
++/sys/devices/faux/lis3lv02d. Reported values are scaled
+ to mg values (1/1000th of earth gravity).
+ 
+-Sysfs attributes under /sys/devices/platform/lis3lv02d/:
++Sysfs attributes under /sys/devices/faux/lis3lv02d/:
+ 
+ position
+       - 3D position that the accelerometer reports. Format: "(x,y,z)"
+@@ -85,7 +85,7 @@ the accelerometer are converted into a "standard" organisation of the axes
+ If your laptop model is not recognized (cf "dmesg"), you can send an
+ email to the maintainer to add it to the database.  When reporting a new
+ laptop, please include the output of "dmidecode" plus the value of
+-/sys/devices/platform/lis3lv02d/position in these four cases.
++/sys/devices/faux/lis3lv02d/position in these four cases.
+ 
+ Q&A
+ ---
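
Since the sysfs path moves from the platform bus to the faux bus, tools that hard-code the old location need updating. A minimal, hypothetical C reader for the position attribute described above (format "(x,y,z)", scaled to mg), assuming a kernel new enough to register the device under /sys/devices/faux:

    #include <stdio.h>

    int main(void)
    {
        /* Path per the updated documentation; older kernels exposed
         * /sys/devices/platform/lis3lv02d/position instead. */
        FILE *f = fopen("/sys/devices/faux/lis3lv02d/position", "r");
        int x, y, z;

        if (!f)
            return 1;
        /* The attribute reports "(x,y,z)" in mg (1/1000th of g). */
        if (fscanf(f, "(%d,%d,%d)", &x, &y, &z) == 3)
            printf("x=%d mg, y=%d mg, z=%d mg\n", x, y, z);
        fclose(f);
        return 0;
    }
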
+diff --git a/Documentation/netlink/specs/rt_link.yaml b/Documentation/netlink/specs/rt_link.yaml
+index 6b9d5ee87d93a8..2ac0e9fda1582d 100644
+--- a/Documentation/netlink/specs/rt_link.yaml
++++ b/Documentation/netlink/specs/rt_link.yaml
+@@ -1778,15 +1778,19 @@ attribute-sets:
+       -
+         name: iflags
+         type: u16
++        byte-order: big-endian
+       -
+         name: oflags
+         type: u16
++        byte-order: big-endian
+       -
+         name: ikey
+         type: u32
++        byte-order: big-endian
+       -
+         name: okey
+         type: u32
++        byte-order: big-endian
+       -
+         name: local
+         type: binary
+@@ -1806,10 +1810,11 @@ attribute-sets:
+         type: u8
+       -
+         name: encap-limit
+-        type: u32
++        type: u8
+       -
+         name: flowinfo
+         type: u32
++        byte-order: big-endian
+       -
+         name: flags
+         type: u32
+@@ -1822,9 +1827,11 @@ attribute-sets:
+       -
+         name: encap-sport
+         type: u16
++        byte-order: big-endian
+       -
+         name: encap-dport
+         type: u16
++        byte-order: big-endian
+       -
+         name: collect-metadata
+         type: flag
+@@ -1846,6 +1853,54 @@ attribute-sets:
+       -
+         name: erspan-hwid
+         type: u16
++  -
++    name: linkinfo-gre6-attrs
++    subset-of: linkinfo-gre-attrs
++    attributes:
++      -
++        name: link
++      -
++        name: iflags
++      -
++        name: oflags
++      -
++        name: ikey
++      -
++        name: okey
++      -
++        name: local
++        display-hint: ipv6
++      -
++        name: remote
++        display-hint: ipv6
++      -
++        name: ttl
++      -
++        name: encap-limit
++      -
++        name: flowinfo
++      -
++        name: flags
++      -
++        name: encap-type
++      -
++        name: encap-flags
++      -
++        name: encap-sport
++      -
++        name: encap-dport
++      -
++        name: collect-metadata
++      -
++        name: fwmark
++      -
++        name: erspan-index
++      -
++        name: erspan-ver
++      -
++        name: erspan-dir
++      -
++        name: erspan-hwid
+   -
+     name: linkinfo-vti-attrs
+     name-prefix: ifla-vti-
+@@ -1856,9 +1911,11 @@ attribute-sets:
+       -
+         name: ikey
+         type: u32
++        byte-order: big-endian
+       -
+         name: okey
+         type: u32
++        byte-order: big-endian
+       -
+         name: local
+         type: binary
+@@ -1908,6 +1965,7 @@ attribute-sets:
+       -
+         name: port
+         type: u16
++        byte-order: big-endian
+       -
+         name: collect-metadata
+         type: flag
+@@ -1927,6 +1985,7 @@ attribute-sets:
+       -
+         name: label
+         type: u32
++        byte-order: big-endian
+       -
+         name: ttl-inherit
+         type: u8
+@@ -1967,9 +2026,11 @@ attribute-sets:
+       -
+         name: flowinfo
+         type: u32
++        byte-order: big-endian
+       -
+         name: flags
+         type: u16
++        byte-order: big-endian
+       -
+         name: proto
+         type: u8
+@@ -1999,9 +2060,11 @@ attribute-sets:
+       -
+         name: encap-sport
+         type: u16
++        byte-order: big-endian
+       -
+         name: encap-dport
+         type: u16
++        byte-order: big-endian
+       -
+         name: collect-metadata
+         type: flag
+@@ -2299,6 +2362,9 @@ sub-messages:
+       -
+         value: gretap
+         attribute-set: linkinfo-gre-attrs
++      -
++        value: ip6gre
++        attribute-set: linkinfo-gre6-attrs
+       -
+         value: geneve
+         attribute-set: linkinfo-geneve-attrs
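
The "byte-order: big-endian" annotations added throughout this spec record that these tunnel attributes (keys, ports, flowinfo, the VXLAN label) are carried in network byte order inside the netlink message. A hedged sketch of the kernel idiom the annotations describe: big-endian attributes are emitted with the be16/be32 netlink helpers rather than the host-order nla_put_u16()/nla_put_u32(). The function and parameter names here are hypothetical:

    #include <net/netlink.h>
    #include <linux/if_tunnel.h>

    static int example_fill_info(struct sk_buff *skb, __be32 ikey, __be16 sport)
    {
        /* Values are already big-endian (__be32/__be16), so no byte
         * swapping happens here; the type system enforces the wire order
         * that the YAML annotations now document. */
        if (nla_put_be32(skb, IFLA_GRE_IKEY, ikey) ||
            nla_put_be16(skb, IFLA_GRE_ENCAP_SPORT, sport))
            return -EMSGSIZE;
        return 0;
    }
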
+diff --git a/Documentation/networking/xfrm_device.rst b/Documentation/networking/xfrm_device.rst
+index 7f24c09f269431..122204da0fff69 100644
+--- a/Documentation/networking/xfrm_device.rst
++++ b/Documentation/networking/xfrm_device.rst
+@@ -65,9 +65,13 @@ Callbacks to implement
+   /* from include/linux/netdevice.h */
+   struct xfrmdev_ops {
+         /* Crypto and Packet offload callbacks */
+-	int	(*xdo_dev_state_add) (struct xfrm_state *x, struct netlink_ext_ack *extack);
+-	void	(*xdo_dev_state_delete) (struct xfrm_state *x);
+-	void	(*xdo_dev_state_free) (struct xfrm_state *x);
++	int	(*xdo_dev_state_add)(struct net_device *dev,
++                                     struct xfrm_state *x,
++                                     struct netlink_ext_ack *extack);
++	void	(*xdo_dev_state_delete)(struct net_device *dev,
++                                        struct xfrm_state *x);
++	void	(*xdo_dev_state_free)(struct net_device *dev,
++                                      struct xfrm_state *x);
+ 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
+ 				       struct xfrm_state *x);
+ 	void    (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
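
For driver authors tracking this interface change: the three state callbacks now receive the offloading net_device explicitly instead of deriving it from the xfrm_state. A minimal stub wiring up the updated signatures from the hunk above; every foo_* name is hypothetical:

    #include <linux/netdevice.h>
    #include <net/xfrm.h>

    static int foo_xdo_dev_state_add(struct net_device *dev,
                                     struct xfrm_state *x,
                                     struct netlink_ext_ack *extack)
    {
        /* Program the SA into the hardware behind dev. */
        return 0;
    }

    static void foo_xdo_dev_state_delete(struct net_device *dev,
                                         struct xfrm_state *x)
    {
        /* Remove the SA from the hardware. */
    }

    static void foo_xdo_dev_state_free(struct net_device *dev,
                                       struct xfrm_state *x)
    {
        /* Release any driver-private state attached to x. */
    }

    static const struct xfrmdev_ops foo_xfrmdev_ops = {
        .xdo_dev_state_add    = foo_xdo_dev_state_add,
        .xdo_dev_state_delete = foo_xdo_dev_state_delete,
        .xdo_dev_state_free   = foo_xdo_dev_state_free,
    };

The table is installed as before, by pointing netdev->xfrmdev_ops at it during probe.
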
+diff --git a/Makefile b/Makefile
+index 7138d1fabfa4ae..01ddb4eb3659f4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 15
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+ 
+diff --git a/arch/arm/boot/dts/microchip/at91sam9263ek.dts b/arch/arm/boot/dts/microchip/at91sam9263ek.dts
+index 471ea25296aa14..93c5268a0845d0 100644
+--- a/arch/arm/boot/dts/microchip/at91sam9263ek.dts
++++ b/arch/arm/boot/dts/microchip/at91sam9263ek.dts
+@@ -152,7 +152,7 @@ nand_controller: nand-controller {
+ 				nand@3 {
+ 					reg = <0x3 0x0 0x800000>;
+ 					rb-gpios = <&pioA 22 GPIO_ACTIVE_HIGH>;
+-					cs-gpios = <&pioA 15 GPIO_ACTIVE_HIGH>;
++					cs-gpios = <&pioD 15 GPIO_ACTIVE_HIGH>;
+ 					nand-bus-width = <8>;
+ 					nand-ecc-mode = "soft";
+ 					nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/microchip/tny_a9263.dts b/arch/arm/boot/dts/microchip/tny_a9263.dts
+index 3dd48b3e06da57..fd8244b56e0593 100644
+--- a/arch/arm/boot/dts/microchip/tny_a9263.dts
++++ b/arch/arm/boot/dts/microchip/tny_a9263.dts
+@@ -64,7 +64,7 @@ nand_controller: nand-controller {
+ 				nand@3 {
+ 					reg = <0x3 0x0 0x800000>;
+ 					rb-gpios = <&pioA 22 GPIO_ACTIVE_HIGH>;
+-					cs-gpios = <&pioA 15 GPIO_ACTIVE_HIGH>;
++					cs-gpios = <&pioD 15 GPIO_ACTIVE_HIGH>;
+ 					nand-bus-width = <8>;
+ 					nand-ecc-mode = "soft";
+ 					nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/microchip/usb_a9263.dts b/arch/arm/boot/dts/microchip/usb_a9263.dts
+index 60d7936dc56274..8e1a3fb61087ca 100644
+--- a/arch/arm/boot/dts/microchip/usb_a9263.dts
++++ b/arch/arm/boot/dts/microchip/usb_a9263.dts
+@@ -58,7 +58,7 @@ usb1: gadget@fff78000 {
+ 			};
+ 
+ 			spi0: spi@fffa4000 {
+-				cs-gpios = <&pioB 15 GPIO_ACTIVE_HIGH>;
++				cs-gpios = <&pioA 5 GPIO_ACTIVE_LOW>;
+ 				status = "okay";
+ 				flash@0 {
+ 					compatible = "atmel,at45", "atmel,dataflash";
+@@ -84,7 +84,7 @@ nand_controller: nand-controller {
+ 				nand@3 {
+ 					reg = <0x3 0x0 0x800000>;
+ 					rb-gpios = <&pioA 22 GPIO_ACTIVE_HIGH>;
+-					cs-gpios = <&pioA 15 GPIO_ACTIVE_HIGH>;
++					cs-gpios = <&pioD 15 GPIO_ACTIVE_HIGH>;
+ 					nand-bus-width = <8>;
+ 					nand-ecc-mode = "soft";
+ 					nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
+index 5f1a6b4b764492..1dad4e4493926f 100644
+--- a/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-apq8064.dtsi
+@@ -213,12 +213,6 @@ sleep_clk: sleep_clk {
+ 		};
+ 	};
+ 
+-	sfpb_mutex: hwmutex {
+-		compatible = "qcom,sfpb-mutex";
+-		syscon = <&sfpb_wrapper_mutex 0x604 0x4>;
+-		#hwlock-cells = <1>;
+-	};
+-
+ 	smem {
+ 		compatible = "qcom,smem";
+ 		memory-region = <&smem_region>;
+@@ -284,6 +278,40 @@ scm {
+ 		};
+ 	};
+ 
++	replicator {
++		compatible = "arm,coresight-static-replicator";
++
++		clocks = <&rpmcc RPM_QDSS_CLK>;
++		clock-names = "apb_pclk";
++
++		in-ports {
++			port {
++				replicator_in: endpoint {
++					remote-endpoint = <&funnel_out>;
++				};
++			};
++		};
++
++		out-ports {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			port@0 {
++				reg = <0>;
++				replicator_out0: endpoint {
++					remote-endpoint = <&etb_in>;
++				};
++			};
++
++			port@1 {
++				reg = <1>;
++				replicator_out1: endpoint {
++					remote-endpoint = <&tpiu_in>;
++				};
++			};
++		};
++	};
++
+ 	soc: soc {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+@@ -305,9 +333,10 @@ tlmm_pinmux: pinctrl@800000 {
+ 			pinctrl-0 = <&ps_hold_default_state>;
+ 		};
+ 
+-		sfpb_wrapper_mutex: syscon@1200000 {
+-			compatible = "syscon";
+-			reg = <0x01200000 0x8000>;
++		sfpb_mutex: hwmutex@1200600 {
++			compatible = "qcom,sfpb-mutex";
++			reg = <0x01200600 0x100>;
++			#hwlock-cells = <1>;
+ 		};
+ 
+ 		intc: interrupt-controller@2000000 {
+@@ -326,6 +355,8 @@ timer@200a000 {
+ 				     <GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_EDGE_RISING)>;
+ 			reg = <0x0200a000 0x100>;
+ 			clock-frequency = <27000000>;
++			clocks = <&sleep_clk>;
++			clock-names = "sleep";
+ 			cpu-offset = <0x80000>;
+ 		};
+ 
+@@ -1532,39 +1563,6 @@ tpiu_in: endpoint {
+ 			};
+ 		};
+ 
+-		replicator {
+-			compatible = "arm,coresight-static-replicator";
+-
+-			clocks = <&rpmcc RPM_QDSS_CLK>;
+-			clock-names = "apb_pclk";
+-
+-			out-ports {
+-				#address-cells = <1>;
+-				#size-cells = <0>;
+-
+-				port@0 {
+-					reg = <0>;
+-					replicator_out0: endpoint {
+-						remote-endpoint = <&etb_in>;
+-					};
+-				};
+-				port@1 {
+-					reg = <1>;
+-					replicator_out1: endpoint {
+-						remote-endpoint = <&tpiu_in>;
+-					};
+-				};
+-			};
+-
+-			in-ports {
+-				port {
+-					replicator_in: endpoint {
+-						remote-endpoint = <&funnel_out>;
+-					};
+-				};
+-			};
+-		};
+-
+ 		funnel@1a04000 {
+ 			compatible = "arm,coresight-dynamic-funnel", "arm,primecell";
+ 			reg = <0x1a04000 0x1000>;
+diff --git a/arch/arm/mach-aspeed/Kconfig b/arch/arm/mach-aspeed/Kconfig
+index 080019aa6fcd89..fcf287edd0e5e6 100644
+--- a/arch/arm/mach-aspeed/Kconfig
++++ b/arch/arm/mach-aspeed/Kconfig
+@@ -2,7 +2,6 @@
+ menuconfig ARCH_ASPEED
+ 	bool "Aspeed BMC architectures"
+ 	depends on (CPU_LITTLE_ENDIAN && ARCH_MULTI_V5) || ARCH_MULTI_V6 || ARCH_MULTI_V7
+-	select SRAM
+ 	select WATCHDOG
+ 	select ASPEED_WATCHDOG
+ 	select MFD_SYSCON
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index a182295e6f08bf..6527d0d5656a13 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -333,9 +333,9 @@ config ARCH_MMAP_RND_BITS_MAX
+ 	default 24 if ARM64_VA_BITS=39
+ 	default 27 if ARM64_VA_BITS=42
+ 	default 30 if ARM64_VA_BITS=47
+-	default 29 if ARM64_VA_BITS=48 && ARM64_64K_PAGES
+-	default 31 if ARM64_VA_BITS=48 && ARM64_16K_PAGES
+-	default 33 if ARM64_VA_BITS=48
++	default 29 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52) && ARM64_64K_PAGES
++	default 31 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52) && ARM64_16K_PAGES
++	default 33 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52)
+ 	default 14 if ARM64_64K_PAGES
+ 	default 16 if ARM64_16K_PAGES
+ 	default 18
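
The new defaults extend the existing 48-bit VA values to 52-bit VA configurations rather than introducing larger ones. The numbers follow a consistent pattern (inferred from the values in this hunk, not from a stated formula):

    33 = 48 - 12 - 3   (4 KiB pages, page shift 12)
    31 = 48 - 14 - 3   (16 KiB pages, page shift 14)
    29 = 48 - 16 - 3   (64 KiB pages, page shift 16)

Reusing the 48-bit values for 52-bit VA is consistent with arm64 giving userspace a 48-bit address space by default; mappings above 2^48 require an explicit mmap() hint, so the default mmap base stays within the 48-bit span.
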
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
+index f9f6fea03b7446..bd366389b2389d 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a100.dtsi
+@@ -252,6 +252,7 @@ mmc0: mmc@4020000 {
+ 			interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc0_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -267,6 +268,7 @@ mmc1: mmc@4021000 {
+ 			interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc1_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+@@ -282,6 +284,7 @@ mmc2: mmc@4022000 {
+ 			interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&mmc2_pins>;
++			max-frequency = <150000000>;
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts b/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts
+index 97ff1ddd631888..734a75198f06e0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-kit.dts
+@@ -124,6 +124,7 @@ &sai5 {
+ 	assigned-clock-parents = <&clk IMX8MM_AUDIO_PLL1_OUT>;
+ 	assigned-clock-rates = <24576000>;
+ 	#sound-dai-cells = <0>;
++	fsl,sai-mclk-direction-output;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+index 62ed64663f4952..9ba0cb89fa24e0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-som.dtsi
+@@ -233,6 +233,7 @@ eeprom@50 {
+ 	rtc: rtc@51 {
+ 		compatible = "nxp,pcf85263";
+ 		reg = <0x51>;
++		quartz-load-femtofarads = <12500>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts b/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts
+index 1df5ceb1138793..37fc5ed98d7f61 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-kit.dts
+@@ -124,6 +124,7 @@ &sai5 {
+ 	assigned-clock-parents = <&clk IMX8MN_AUDIO_PLL1_OUT>;
+ 	assigned-clock-rates = <24576000>;
+ 	#sound-dai-cells = <0>;
++	fsl,sai-mclk-direction-output;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index 2a64115eebf1c6..bb11590473a4c7 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -242,6 +242,7 @@ eeprom@50 {
+ 	rtc: rtc@51 {
+ 		compatible = "nxp,pcf85263";
+ 		reg = <0x51>;
++		quartz-load-femtofarads = <12500>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi
+index 15f7ab58db36cc..88561df70d03ac 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-beacon-som.dtsi
+@@ -257,6 +257,7 @@ eeprom@50 {
+ 	rtc: rtc@51 {
+ 		compatible = "nxp,pcf85263";
+ 		reg = <0x51>;
++		quartz-load-femtofarads = <12500>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt6357.dtsi b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+index 5fafa842d312f3..dca4e5c3d8e210 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6357.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+@@ -60,7 +60,6 @@ mt6357_vpa_reg: buck-vpa {
+ 			};
+ 
+ 			mt6357_vfe28_reg: ldo-vfe28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vfe28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -75,7 +74,6 @@ mt6357_vxo22_reg: ldo-vxo22 {
+ 			};
+ 
+ 			mt6357_vrf18_reg: ldo-vrf18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vrf18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -83,7 +81,6 @@ mt6357_vrf18_reg: ldo-vrf18 {
+ 			};
+ 
+ 			mt6357_vrf12_reg: ldo-vrf12 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vrf12";
+ 				regulator-min-microvolt = <1200000>;
+ 				regulator-max-microvolt = <1200000>;
+@@ -112,7 +109,6 @@ mt6357_vcn33_wifi_reg: ldo-vcn33-wifi {
+ 			};
+ 
+ 			mt6357_vcn28_reg: ldo-vcn28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vcn28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -120,7 +116,6 @@ mt6357_vcn28_reg: ldo-vcn28 {
+ 			};
+ 
+ 			mt6357_vcn18_reg: ldo-vcn18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vcn18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -142,7 +137,6 @@ mt6357_vcamd_reg: ldo-vcamd {
+ 			};
+ 
+ 			mt6357_vcamio_reg: ldo-vcamio18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vcamio";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -175,7 +169,6 @@ mt6357_vsram_proc_reg: ldo-vsram-proc {
+ 			};
+ 
+ 			mt6357_vaux18_reg: ldo-vaux18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vaux18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+@@ -183,7 +176,6 @@ mt6357_vaux18_reg: ldo-vaux18 {
+ 			};
+ 
+ 			mt6357_vaud28_reg: ldo-vaud28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vaud28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -191,7 +183,6 @@ mt6357_vaud28_reg: ldo-vaud28 {
+ 			};
+ 
+ 			mt6357_vio28_reg: ldo-vio28 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vio28";
+ 				regulator-min-microvolt = <2800000>;
+ 				regulator-max-microvolt = <2800000>;
+@@ -199,7 +190,6 @@ mt6357_vio28_reg: ldo-vio28 {
+ 			};
+ 
+ 			mt6357_vio18_reg: ldo-vio18 {
+-				compatible = "regulator-fixed";
+ 				regulator-name = "vio18";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt6359.dtsi b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+index 7b10f9c59819a9..467d8a4c2aa7f1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6359.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+@@ -20,6 +20,8 @@ mt6359codec: audio-codec {
+ 		};
+ 
+ 		regulators {
++			compatible = "mediatek,mt6359-regulator";
++
+ 			mt6359_vs1_buck_reg: buck_vs1 {
+ 				regulator-name = "vs1";
+ 				regulator-min-microvolt = <800000>;
+@@ -298,7 +300,7 @@ mt6359_vsram_others_sshub_ldo: ldo_vsram_others_sshub {
+ 			};
+ 		};
+ 
+-		mt6359rtc: mt6359rtc {
++		mt6359rtc: rtc {
+ 			compatible = "mediatek,mt6358-rtc";
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index e1495f1900a7b4..f9ca6b3720e915 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -259,14 +259,10 @@ panel_in: endpoint {
+ 			};
+ 		};
+ 	};
++};
+ 
+-	ports {
+-		port {
+-			dsi_out: endpoint {
+-				remote-endpoint = <&panel_in>;
+-			};
+-		};
+-	};
++&dsi_out {
++	remote-endpoint = <&panel_in>;
+ };
+ 
+ &gic {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 0aa34e5bbaaa87..3c1fe80e64b9c5 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1836,6 +1836,10 @@ dsi0: dsi@14014000 {
+ 			phys = <&mipi_tx0>;
+ 			phy-names = "dphy";
+ 			status = "disabled";
++
++			port {
++				dsi_out: endpoint { };
++			};
+ 		};
+ 
+ 		dpi0: dpi@14015000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+index 69a8423d385890..29d35ca945973c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+@@ -2579,7 +2579,7 @@ rdma0: rdma@1c002000 {
+ 			reg = <0 0x1c002000 0 0x1000>;
+ 			clocks = <&vdosys0 CLK_VDO0_DISP_RDMA0>;
+ 			interrupts = <GIC_SPI 638 IRQ_TYPE_LEVEL_HIGH 0>;
+-			iommus = <&vdo_iommu M4U_PORT_L1_DISP_RDMA0>;
++			iommus = <&vpp_iommu M4U_PORT_L1_DISP_RDMA0>;
+ 			power-domains = <&spm MT8188_POWER_DOMAIN_VDOSYS0>;
+ 			mediatek,gce-client-reg = <&gce0 SUBSYS_1c00XXXX 0x2000 0x1000>;
+ 
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index 4f2dc0a7556610..1ded4b3f87605f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -617,22 +617,6 @@ power-domain@MT8195_POWER_DOMAIN_VPPSYS0 {
+ 					#size-cells = <0>;
+ 					#power-domain-cells = <1>;
+ 
+-					power-domain@MT8195_POWER_DOMAIN_VDEC1 {
+-						reg = <MT8195_POWER_DOMAIN_VDEC1>;
+-						clocks = <&vdecsys CLK_VDEC_LARB1>;
+-						clock-names = "vdec1-0";
+-						mediatek,infracfg = <&infracfg_ao>;
+-						#power-domain-cells = <0>;
+-					};
+-
+-					power-domain@MT8195_POWER_DOMAIN_VENC_CORE1 {
+-						reg = <MT8195_POWER_DOMAIN_VENC_CORE1>;
+-						clocks = <&vencsys_core1 CLK_VENC_CORE1_LARB>;
+-						clock-names = "venc1-larb";
+-						mediatek,infracfg = <&infracfg_ao>;
+-						#power-domain-cells = <0>;
+-					};
+-
+ 					power-domain@MT8195_POWER_DOMAIN_VDOSYS0 {
+ 						reg = <MT8195_POWER_DOMAIN_VDOSYS0>;
+ 						clocks = <&topckgen CLK_TOP_CFG_VDO0>,
+@@ -678,15 +662,25 @@ power-domain@MT8195_POWER_DOMAIN_VDEC0 {
+ 							clocks = <&vdecsys_soc CLK_VDEC_SOC_LARB1>;
+ 							clock-names = "vdec0-0";
+ 							mediatek,infracfg = <&infracfg_ao>;
++							#address-cells = <1>;
++							#size-cells = <0>;
+ 							#power-domain-cells = <0>;
+-						};
+ 
+-						power-domain@MT8195_POWER_DOMAIN_VDEC2 {
+-							reg = <MT8195_POWER_DOMAIN_VDEC2>;
+-							clocks = <&vdecsys_core1 CLK_VDEC_CORE1_LARB1>;
+-							clock-names = "vdec2-0";
+-							mediatek,infracfg = <&infracfg_ao>;
+-							#power-domain-cells = <0>;
++							power-domain@MT8195_POWER_DOMAIN_VDEC1 {
++								reg = <MT8195_POWER_DOMAIN_VDEC1>;
++								clocks = <&vdecsys CLK_VDEC_LARB1>;
++								clock-names = "vdec1-0";
++								mediatek,infracfg = <&infracfg_ao>;
++								#power-domain-cells = <0>;
++							};
++
++							power-domain@MT8195_POWER_DOMAIN_VDEC2 {
++								reg = <MT8195_POWER_DOMAIN_VDEC2>;
++								clocks = <&vdecsys_core1 CLK_VDEC_CORE1_LARB1>;
++								clock-names = "vdec2-0";
++								mediatek,infracfg = <&infracfg_ao>;
++								#power-domain-cells = <0>;
++							};
+ 						};
+ 
+ 						power-domain@MT8195_POWER_DOMAIN_VENC {
+@@ -694,7 +688,17 @@ power-domain@MT8195_POWER_DOMAIN_VENC {
+ 							clocks = <&vencsys CLK_VENC_LARB>;
+ 							clock-names = "venc0-larb";
+ 							mediatek,infracfg = <&infracfg_ao>;
++							#address-cells = <1>;
++							#size-cells = <0>;
+ 							#power-domain-cells = <0>;
++
++							power-domain@MT8195_POWER_DOMAIN_VENC_CORE1 {
++								reg = <MT8195_POWER_DOMAIN_VENC_CORE1>;
++								clocks = <&vencsys_core1 CLK_VENC_CORE1_LARB>;
++								clock-names = "venc1-larb";
++								mediatek,infracfg = <&infracfg_ao>;
++								#power-domain-cells = <0>;
++							};
+ 						};
+ 
+ 						power-domain@MT8195_POWER_DOMAIN_VDOSYS1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi b/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
+index 60139e6dffd8e0..6a75b230282eda 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8390-genio-common.dtsi
+@@ -1199,8 +1199,18 @@ xhci_ss_ep: endpoint {
+ };
+ 
+ &ssusb2 {
++	/*
++	 * the ssusb2 controller is one but we got two ports : one is routed
++	 * to the M.2 slot, the other is on the RPi header who does support
++	 * full OTG.
++	 * As the controller is shared between them, the role switch default
++	 * mode is set to host to make any peripheral inserted in the M.2
++	 * slot (i.e BT/WIFI module) be detected when the other port is
++	 * unused.
++	 */
+ 	dr_mode = "otg";
+ 	maximum-speed = "high-speed";
++	role-switch-default-mode = "host";
+ 	usb-role-switch;
+ 	vusb33-supply = <&mt6359_vusb_ldo_reg>;
+ 	wakeup-source;
+@@ -1211,7 +1221,7 @@ &ssusb2 {
+ 	connector {
+ 		compatible = "gpio-usb-b-connector", "usb-b-connector";
+ 		type = "micro";
+-		id-gpios = <&pio 89 GPIO_ACTIVE_HIGH>;
++		id-gpios = <&pio 89 GPIO_ACTIVE_LOW>;
+ 		vbus-supply = <&usb_p2_vbus>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+index 2b3bb5d0af17bd..f0b7949df92c05 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+@@ -621,9 +621,7 @@ uartb: serial@3110000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTB>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTB>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -633,9 +631,7 @@ uartd: serial@3130000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTD>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTD>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -645,9 +641,7 @@ uarte: serial@3140000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTE>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTE>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -657,9 +651,7 @@ uartf: serial@3150000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTF>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTF>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -1236,9 +1228,7 @@ uartc: serial@c280000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTC>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTC>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+@@ -1248,9 +1238,7 @@ uartg: serial@c290000 {
+ 		reg-shift = <2>;
+ 		interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&bpmp TEGRA186_CLK_UARTG>;
+-		clock-names = "serial";
+ 		resets = <&bpmp TEGRA186_RESET_UARTG>;
+-		reset-names = "serial";
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 33f92b77cd9d9e..c3695077478514 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -766,9 +766,7 @@ uartd: serial@3130000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTD>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTD>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -778,9 +776,7 @@ uarte: serial@3140000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTE>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTE>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -790,9 +786,7 @@ uartf: serial@3150000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTF>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTF>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -817,9 +811,7 @@ uarth: serial@3170000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 207 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTH>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTH>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -1616,9 +1608,7 @@ uartc: serial@c280000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTC>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTC>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+@@ -1628,9 +1618,7 @@ uartg: serial@c290000 {
+ 			reg-shift = <2>;
+ 			interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&bpmp TEGRA194_CLK_UARTG>;
+-			clock-names = "serial";
+ 			resets = <&bpmp TEGRA194_RESET_UARTG>;
+-			reset-names = "serial";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+index 9b9d1d15b0c7ea..1bb1f9640a800a 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+@@ -11,6 +11,7 @@ aliases {
+ 		rtc0 = "/i2c@7000d000/pmic@3c";
+ 		rtc1 = "/rtc@7000e000";
+ 		serial0 = &uarta;
++		serial3 = &uartd;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi b/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi
+index ae12f069f26fa5..b24b795873d416 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574-rdp-common.dtsi
+@@ -111,6 +111,13 @@ mp5496_l2: l2 {
+ 			regulator-always-on;
+ 			regulator-boot-on;
+ 		};
++
++		mp5496_l5: l5 {
++			regulator-min-microvolt = <1800000>;
++			regulator-max-microvolt = <1800000>;
++			regulator-always-on;
++			regulator-boot-on;
++		};
+ 	};
+ };
+ 
+@@ -146,7 +153,7 @@ &usb_0_dwc3 {
+ };
+ 
+ &usb_0_qmpphy {
+-	vdda-pll-supply = <&mp5496_l2>;
++	vdda-pll-supply = <&mp5496_l5>;
+ 	vdda-phy-supply = <&regulator_fixed_0p925>;
+ 
+ 	status = "okay";
+@@ -154,7 +161,7 @@ &usb_0_qmpphy {
+ 
+ &usb_0_qusbphy {
+ 	vdd-supply = <&regulator_fixed_0p925>;
+-	vdda-pll-supply = <&mp5496_l2>;
++	vdda-pll-supply = <&mp5496_l5>;
+ 	vdda-phy-dpdm-supply = <&regulator_fixed_3p3>;
+ 
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+index 3c02351fbb156a..b790a6b288abb8 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+@@ -974,14 +974,14 @@ pcie3: pcie@18000000 {
+ 			ranges = <0x01000000 0x0 0x00000000 0x18200000 0x0 0x100000>,
+ 				 <0x02000000 0x0 0x18300000 0x18300000 0x0 0x7d00000>;
+ 
+-			interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
+-				     <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 221 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 222 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 225 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 415 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 495 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "msi0",
+ 					  "msi1",
+ 					  "msi2",
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index c2caad85c668df..fa6769320a238c 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -2,6 +2,7 @@
+ /* Copyright (c) 2016, The Linux Foundation. All rights reserved. */
+ 
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
++#include <dt-bindings/clock/qcom,dsi-phy-28nm.h>
+ #include <dt-bindings/clock/qcom,gcc-msm8998.h>
+ #include <dt-bindings/clock/qcom,gpucc-msm8998.h>
+ #include <dt-bindings/clock/qcom,mmcc-msm8998.h>
+@@ -2790,11 +2791,11 @@ mmcc: clock-controller@c8c0000 {
+ 				      "gpll0_div";
+ 			clocks = <&rpmcc RPM_SMD_XO_CLK_SRC>,
+ 				 <&gcc GCC_MMSS_GPLL0_CLK>,
+-				 <&mdss_dsi0_phy 1>,
+-				 <&mdss_dsi0_phy 0>,
+-				 <&mdss_dsi1_phy 1>,
+-				 <&mdss_dsi1_phy 0>,
+-				 <&mdss_hdmi_phy 0>,
++				 <&mdss_dsi0_phy DSI_PIXEL_PLL_CLK>,
++				 <&mdss_dsi0_phy DSI_BYTE_PLL_CLK>,
++				 <&mdss_dsi1_phy DSI_PIXEL_PLL_CLK>,
++				 <&mdss_dsi1_phy DSI_BYTE_PLL_CLK>,
++				 <&mdss_hdmi_phy>,
+ 				 <0>,
+ 				 <0>,
+ 				 <&gcc GCC_MMSS_GPLL0_DIV_CLK>;
+@@ -2932,8 +2933,8 @@ mdss_dsi0: dsi@c994000 {
+ 					      "bus";
+ 				assigned-clocks = <&mmcc BYTE0_CLK_SRC>,
+ 						  <&mmcc PCLK0_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dsi0_phy 0>,
+-							 <&mdss_dsi0_phy 1>;
++				assigned-clock-parents = <&mdss_dsi0_phy DSI_BYTE_PLL_CLK>,
++							 <&mdss_dsi0_phy DSI_PIXEL_PLL_CLK>;
+ 
+ 				operating-points-v2 = <&dsi_opp_table>;
+ 				power-domains = <&rpmpd MSM8998_VDDCX>;
+@@ -3008,8 +3009,8 @@ mdss_dsi1: dsi@c996000 {
+ 					      "bus";
+ 				assigned-clocks = <&mmcc BYTE1_CLK_SRC>,
+ 						  <&mmcc PCLK1_CLK_SRC>;
+-				assigned-clock-parents = <&mdss_dsi1_phy 0>,
+-							 <&mdss_dsi1_phy 1>;
++				assigned-clock-parents = <&mdss_dsi1_phy DSI_BYTE_PLL_CLK>,
++							 <&mdss_dsi1_phy DSI_PIXEL_PLL_CLK>;
+ 
+ 				operating-points-v2 = <&dsi_opp_table>;
+ 				power-domains = <&rpmpd MSM8998_VDDCX>;
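
The msm8998 change is mechanical: the magic PHY clock-cell indices are replaced with named constants from the newly included dt-bindings header, and the HDMI PHY reference loses its cell entirely (it exposes a single clock). From the old literals in this hunk, the constants necessarily map byte to 0 and pixel to 1; a sketch of the relevant defines, assuming dt-bindings/clock/qcom,dsi-phy-28nm.h follows the usual pattern:

    /* Byte clock was cell 0, pixel clock was cell 1. */
    #define DSI_BYTE_PLL_CLK    0
    #define DSI_PIXEL_PLL_CLK   1
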
+diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+index f0746123e594d5..6e3e57dd02612f 100644
+--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+@@ -1073,7 +1073,7 @@ spi0: spi@4a80000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1092,7 +1092,7 @@ uart0: serial@4a80000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				status = "disabled";
+@@ -1137,7 +1137,7 @@ spi1: spi@4a84000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1184,7 +1184,7 @@ spi2: spi@4a88000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1231,7 +1231,7 @@ spi3: spi@4a8c000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1278,7 +1278,7 @@ spi4: spi@4a90000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+@@ -1297,7 +1297,7 @@ uart4: serial@4a90000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				status = "disabled";
+@@ -1342,7 +1342,7 @@ spi5: spi@4a94000 {
+ 				interconnects = <&qup_virt MASTER_QUP_CORE_0 RPM_ALWAYS_TAG
+ 						 &qup_virt SLAVE_QUP_CORE_0 RPM_ALWAYS_TAG>,
+ 						<&bimc MASTER_APPSS_PROC RPM_ALWAYS_TAG
+-						 &config_noc MASTER_APPSS_PROC RPM_ALWAYS_TAG>;
++						 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>;
+ 				interconnect-names = "qup-core",
+ 						     "qup-config";
+ 				#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs615.dtsi b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+index f4abfad474ea62..12065484904380 100644
+--- a/arch/arm64/boot/dts/qcom/qcs615.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs615.dtsi
+@@ -1022,10 +1022,10 @@ ufs_mem_hc: ufshc@1d84000 {
+ 				      "bus_aggr_clk",
+ 				      "iface_clk",
+ 				      "core_clk_unipro",
+-				      "core_clk_ice",
+ 				      "ref_clk",
+ 				      "tx_lane0_sync_clk",
+-				      "rx_lane0_sync_clk";
++				      "rx_lane0_sync_clk",
++				      "ice_core_clk";
+ 
+ 			resets = <&gcc GCC_UFS_PHY_BCR>;
+ 			reset-names = "rst";
+@@ -1060,10 +1060,10 @@ opp-50000000 {
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <37500000>,
+-						 /bits/ 64 <75000000>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+-						 /bits/ 64 <0>;
++						 /bits/ 64 <0>,
++						 /bits/ 64 <75000000>;
+ 					required-opps = <&rpmhpd_opp_low_svs>;
+ 				};
+ 
+@@ -1072,10 +1072,10 @@ opp-100000000 {
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <75000000>,
+-						 /bits/ 64 <150000000>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+-						 /bits/ 64 <0>;
++						 /bits/ 64 <0>,
++						 /bits/ 64 <150000000>;
+ 					required-opps = <&rpmhpd_opp_svs>;
+ 				};
+ 
+@@ -1084,10 +1084,10 @@ opp-200000000 {
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <150000000>,
+-						 /bits/ 64 <300000000>,
+ 						 /bits/ 64 <0>,
+ 						 /bits/ 64 <0>,
+-						 /bits/ 64 <0>;
++						 /bits/ 64 <0>,
++						 /bits/ 64 <300000000>;
+ 					required-opps = <&rpmhpd_opp_nom>;
+ 				};
+ 			};
+@@ -3304,7 +3304,6 @@ spmi_bus: spmi@c440000 {
+ 			#interrupt-cells = <4>;
+ 			#address-cells = <2>;
+ 			#size-cells = <0>;
+-			cell-index = <0>;
+ 			qcom,channel = <0>;
+ 			qcom,ee = <0>;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/qcs8300.dtsi b/arch/arm64/boot/dts/qcom/qcs8300.dtsi
+index 4a057f7c0d9fae..13b1121cdf175b 100644
+--- a/arch/arm64/boot/dts/qcom/qcs8300.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs8300.dtsi
+@@ -798,18 +798,6 @@ cryptobam: dma-controller@1dc4000 {
+ 				 <&apps_smmu 0x481 0x00>;
+ 		};
+ 
+-		crypto: crypto@1dfa000 {
+-			compatible = "qcom,qcs8300-qce", "qcom,qce";
+-			reg = <0x0 0x01dfa000 0x0 0x6000>;
+-			dmas = <&cryptobam 4>, <&cryptobam 5>;
+-			dma-names = "rx", "tx";
+-			iommus = <&apps_smmu 0x480 0x00>,
+-				 <&apps_smmu 0x481 0x00>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO_CORE0 QCOM_ICC_TAG_ALWAYS
+-					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-			interconnect-names = "memory";
+-		};
+-
+ 		ice: crypto@1d88000 {
+ 			compatible = "qcom,qcs8300-inline-crypto-engine",
+ 				     "qcom,inline-crypto-engine";
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index 2329460b210381..2010b7988b6cc4 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -2420,17 +2420,6 @@ cryptobam: dma-controller@1dc4000 {
+ 				 <&apps_smmu 0x481 0x00>;
+ 		};
+ 
+-		crypto: crypto@1dfa000 {
+-			compatible = "qcom,sa8775p-qce", "qcom,qce";
+-			reg = <0x0 0x01dfa000 0x0 0x6000>;
+-			dmas = <&cryptobam 4>, <&cryptobam 5>;
+-			dma-names = "rx", "tx";
+-			iommus = <&apps_smmu 0x480 0x00>,
+-				 <&apps_smmu 0x481 0x00>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO_CORE0 0 &mc_virt SLAVE_EBI1 0>;
+-			interconnect-names = "memory";
+-		};
+-
+ 		stm: stm@4002000 {
+ 			compatible = "arm,coresight-stm", "arm,primecell";
+ 			reg = <0x0 0x4002000 0x0 0x1000>,
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+index f3190f408f4b2c..0f1ebd869ce315 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+@@ -1202,9 +1202,6 @@ &sound {
+ 		"VA DMIC0", "MIC BIAS1",
+ 		"VA DMIC1", "MIC BIAS1",
+ 		"VA DMIC2", "MIC BIAS3",
+-		"VA DMIC0", "VA MIC BIAS1",
+-		"VA DMIC1", "VA MIC BIAS1",
+-		"VA DMIC2", "VA MIC BIAS3",
+ 		"TX SWR_ADC1", "ADC2_OUTPUT";
+ 
+ 	wcd-playback-dai-link {
+diff --git a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+index d402f4c85b11d1..ee696317f78cc3 100644
+--- a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
++++ b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+@@ -175,6 +175,7 @@ &blsp1_dma {
+ 	 * BAM DMA interconnects support is in place.
+ 	 */
+ 	/delete-property/ clocks;
++	/delete-property/ clock-names;
+ };
+ 
+ &blsp1_uart2 {
+@@ -187,6 +188,7 @@ &blsp2_dma {
+ 	 * BAM DMA interconnects support is in place.
+ 	 */
+ 	/delete-property/ clocks;
++	/delete-property/ clock-names;
+ };
+ 
+ &blsp2_uart1 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts b/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts
+index 7167f75bced3fd..a9926ad6c6f9f5 100644
+--- a/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts
++++ b/arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts
+@@ -107,6 +107,7 @@ &qusb2phy0 {
+ 	status = "okay";
+ 
+ 	vdd-supply = <&vreg_l1b_0p925>;
++	vdda-pll-supply = <&vreg_l10a_1p8>;
+ 	vdda-phy-dpdm-supply = <&vreg_l7b_3p125>;
+ };
+ 
+@@ -404,6 +405,8 @@ &sdhc_1 {
+ &sdhc_2 {
+ 	status = "okay";
+ 
++	cd-gpios = <&tlmm 54 GPIO_ACTIVE_HIGH>;
++
+ 	vmmc-supply = <&vreg_l5b_2p95>;
+ 	vqmmc-supply = <&vreg_l2b_2p95>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts b/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts
+index d37a433130b98f..5948b401165ce9 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-samsung-starqltechn.dts
+@@ -135,8 +135,6 @@ vdda_pll_cc_ebi23:
+ 		vdda_sp_sensor:
+ 		vdda_ufs1_core:
+ 		vdda_ufs2_core:
+-		vdda_usb1_ss_core:
+-		vdda_usb2_ss_core:
+ 		vreg_l1a_0p875: ldo1 {
+ 			regulator-min-microvolt = <880000>;
+ 			regulator-max-microvolt = <880000>;
+@@ -157,6 +155,7 @@ vreg_l3a_1p0: ldo3 {
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ 		};
+ 
++		vdda_usb1_ss_core:
+ 		vdd_wcss_cx:
+ 		vdd_wcss_mx:
+ 		vdda_wcss_pll:
+@@ -383,8 +382,8 @@ &ufs_mem_phy {
+ };
+ 
+ &sdhc_2 {
+-	pinctrl-names = "default";
+ 	pinctrl-0 = <&sdc2_clk_state &sdc2_cmd_state &sdc2_data_state &sd_card_det_n_state>;
++	pinctrl-names = "default";
+ 	cd-gpios = <&tlmm 126 GPIO_ACTIVE_LOW>;
+ 	vmmc-supply = <&vreg_l21a_2p95>;
+ 	vqmmc-supply = <&vddpx_2>;
+@@ -418,16 +417,9 @@ &usb_1_qmpphy {
+ 	status = "okay";
+ };
+ 
+-&wifi {
+-	vdd-0.8-cx-mx-supply = <&vreg_l5a_0p8>;
+-	vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
+-	vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
+-	vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
+-	status = "okay";
+-};
+-
+ &tlmm {
+-	gpio-reserved-ranges = <0 4>, <27 4>, <81 4>, <85 4>;
++	gpio-reserved-ranges = <27 4>, /* SPI (eSE - embedded Secure Element) */
++			       <85 4>; /* SPI (fingerprint reader) */
+ 
+ 	sdc2_clk_state: sdc2-clk-state {
+ 		pins = "sdc2_clk";
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index c2937b4d9f1802..68613ea7146c88 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -606,7 +606,7 @@ cpu7_opp8: opp-1632000000 {
+ 		};
+ 
+ 		cpu7_opp9: opp-1747200000 {
+-			opp-hz = /bits/ 64 <1708800000>;
++			opp-hz = /bits/ 64 <1747200000>;
+ 			opp-peak-kBps = <5412000 42393600>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index f055600d6cfe5b..a86d0067634e81 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1806,11 +1806,11 @@ cryptobam: dma-controller@1dc4000 {
+ 			interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
++			qcom,num-ees = <4>;
++			num-channels = <16>;
+ 			qcom,controlled-remotely;
+ 			iommus = <&apps_smmu 0x594 0x0011>,
+ 				 <&apps_smmu 0x596 0x0011>;
+-			/* FIXME: Probing BAM DMA causes some abort and system hang */
+-			status = "fail";
+ 		};
+ 
+ 		crypto: crypto@1dfa000 {
+@@ -1822,8 +1822,6 @@ crypto: crypto@1dfa000 {
+ 				 <&apps_smmu 0x596 0x0011>;
+ 			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
+ 			interconnect-names = "memory";
+-			/* FIXME: dependency BAM DMA is disabled */
+-			status = "disabled";
+ 		};
+ 
+ 		ipa: ipa@1e40000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index ac3e00ad417719..65ebddd124e2a8 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -331,7 +331,8 @@ firmware {
+ 		scm: scm {
+ 			compatible = "qcom,scm-sm8550", "qcom,scm";
+ 			qcom,dload-mode = <&tcsr 0x19000>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&aggre2_noc MASTER_CRYPTO QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 		};
+ 	};
+ 
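
QCOM_ICC_TAG_ALWAYS, used here and in the qcs8300 hunk earlier in place of the bare 0 tag, selects all RPMh bandwidth-vote buckets so the vote applies in every system state. A hedged sketch of the macros as defined in include/dt-bindings/interconnect/qcom,icc.h (reconstructed from memory; verify against the header):

    #define QCOM_ICC_TAG_AMC     (1 << 0)   /* immediate (AMC) vote */
    #define QCOM_ICC_TAG_WAKE    (1 << 1)
    #define QCOM_ICC_TAG_SLEEP   (1 << 2)
    #define QCOM_ICC_TAG_ALWAYS  (QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE | \
                                  QCOM_ICC_TAG_SLEEP)
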
+@@ -850,9 +851,12 @@ i2c8: i2c@880000 {
+ 				interrupts = <GIC_SPI 373 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 0 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 0 QCOM_GPI_I2C>;
+@@ -868,9 +872,12 @@ spi8: spi@880000 {
+ 				interrupts = <GIC_SPI 373 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi8_data_clk>, <&qup_spi8_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 0 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 0 QCOM_GPI_SPI>;
+@@ -890,9 +897,12 @@ i2c9: i2c@884000 {
+ 				interrupts = <GIC_SPI 583 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 1 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 1 QCOM_GPI_I2C>;
+@@ -908,9 +918,12 @@ spi9: spi@884000 {
+ 				interrupts = <GIC_SPI 583 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi9_data_clk>, <&qup_spi9_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 1 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 1 QCOM_GPI_SPI>;
+@@ -930,9 +943,12 @@ i2c10: i2c@888000 {
+ 				interrupts = <GIC_SPI 584 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 2 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 2 QCOM_GPI_I2C>;
+@@ -948,9 +964,12 @@ spi10: spi@888000 {
+ 				interrupts = <GIC_SPI 584 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi10_data_clk>, <&qup_spi10_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 2 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 2 QCOM_GPI_SPI>;
+@@ -970,9 +989,12 @@ i2c11: i2c@88c000 {
+ 				interrupts = <GIC_SPI 585 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 3 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 3 QCOM_GPI_I2C>;
+@@ -988,9 +1010,12 @@ spi11: spi@88c000 {
+ 				interrupts = <GIC_SPI 585 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi11_data_clk>, <&qup_spi11_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 3 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 3 QCOM_GPI_I2C>;
+@@ -1010,9 +1035,12 @@ i2c12: i2c@890000 {
+ 				interrupts = <GIC_SPI 586 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 4 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 4 QCOM_GPI_I2C>;
+@@ -1028,9 +1056,12 @@ spi12: spi@890000 {
+ 				interrupts = <GIC_SPI 586 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi12_data_clk>, <&qup_spi12_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 4 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 4 QCOM_GPI_I2C>;
+@@ -1050,9 +1081,12 @@ i2c13: i2c@894000 {
+ 				interrupts = <GIC_SPI 587 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 5 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 5 QCOM_GPI_I2C>;
+@@ -1068,9 +1102,12 @@ spi13: spi@894000 {
+ 				interrupts = <GIC_SPI 587 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi13_data_clk>, <&qup_spi13_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 5 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 5 QCOM_GPI_SPI>;
+@@ -1088,8 +1125,10 @@ uart14: serial@898000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_uart14_default>, <&qup_uart14_cts_rts>;
+ 				interrupts = <GIC_SPI 461 IRQ_TYPE_LEVEL_HIGH>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1104,9 +1143,12 @@ i2c15: i2c@89c000 {
+ 				interrupts = <GIC_SPI 462 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 7 QCOM_GPI_I2C>,
+ 				       <&gpi_dma2 1 7 QCOM_GPI_I2C>;
+@@ -1122,9 +1164,12 @@ spi15: spi@89c000 {
+ 				interrupts = <GIC_SPI 462 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi15_data_clk>, <&qup_spi15_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_2 0 &clk_virt SLAVE_QUP_CORE_2 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_2 0>,
+-						<&aggre2_noc MASTER_QUP_2 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_2 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre2_noc MASTER_QUP_2 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma2 0 7 QCOM_GPI_SPI>,
+ 				       <&gpi_dma2 1 7 QCOM_GPI_SPI>;
+@@ -1156,8 +1201,10 @@ i2c_hub_0: i2c@980000 {
+ 				interrupts = <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1173,8 +1220,10 @@ i2c_hub_1: i2c@984000 {
+ 				interrupts = <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1190,8 +1239,10 @@ i2c_hub_2: i2c@988000 {
+ 				interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1207,8 +1258,10 @@ i2c_hub_3: i2c@98c000 {
+ 				interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1224,8 +1277,10 @@ i2c_hub_4: i2c@990000 {
+ 				interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1241,8 +1296,10 @@ i2c_hub_5: i2c@994000 {
+ 				interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1258,8 +1315,10 @@ i2c_hub_6: i2c@998000 {
+ 				interrupts = <GIC_SPI 470 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1275,8 +1334,10 @@ i2c_hub_7: i2c@99c000 {
+ 				interrupts = <GIC_SPI 471 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1292,8 +1353,10 @@ i2c_hub_8: i2c@9a0000 {
+ 				interrupts = <GIC_SPI 472 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1309,8 +1372,10 @@ i2c_hub_9: i2c@9a4000 {
+ 				interrupts = <GIC_SPI 473 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_I2C 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_0 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_I2C QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config";
+ 				status = "disabled";
+ 			};
+@@ -1347,7 +1412,8 @@ qupv3_id_0: geniqup@ac0000 {
+ 			clocks = <&gcc GCC_QUPV3_WRAP_1_M_AHB_CLK>,
+ 				 <&gcc GCC_QUPV3_WRAP_1_S_AHB_CLK>;
+ 			iommus = <&apps_smmu 0xa3 0>;
+-			interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>;
++			interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++					 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "qup-core";
+ 			dma-coherent;
+ 			#address-cells = <2>;
+@@ -1364,9 +1430,12 @@ i2c0: i2c@a80000 {
+ 				interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 0 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 0 QCOM_GPI_I2C>;
+@@ -1382,9 +1451,12 @@ spi0: spi@a80000 {
+ 				interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi0_data_clk>, <&qup_spi0_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 0 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 0 QCOM_GPI_SPI>;
+@@ -1404,9 +1476,12 @@ i2c1: i2c@a84000 {
+ 				interrupts = <GIC_SPI 354 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 1 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 1 QCOM_GPI_I2C>;
+@@ -1422,9 +1497,12 @@ spi1: spi@a84000 {
+ 				interrupts = <GIC_SPI 354 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi1_data_clk>, <&qup_spi1_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 1 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 1 QCOM_GPI_SPI>;
+@@ -1444,9 +1522,12 @@ i2c2: i2c@a88000 {
+ 				interrupts = <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 2 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 2 QCOM_GPI_I2C>;
+@@ -1462,9 +1543,12 @@ spi2: spi@a88000 {
+ 				interrupts = <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi2_data_clk>, <&qup_spi2_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 2 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 2 QCOM_GPI_SPI>;
+@@ -1484,9 +1568,12 @@ i2c3: i2c@a8c000 {
+ 				interrupts = <GIC_SPI 356 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 3 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 3 QCOM_GPI_I2C>;
+@@ -1502,9 +1589,12 @@ spi3: spi@a8c000 {
+ 				interrupts = <GIC_SPI 356 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi3_data_clk>, <&qup_spi3_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 3 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 3 QCOM_GPI_SPI>;
+@@ -1524,9 +1614,12 @@ i2c4: i2c@a90000 {
+ 				interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 4 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 4 QCOM_GPI_I2C>;
+@@ -1542,9 +1635,12 @@ spi4: spi@a90000 {
+ 				interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi4_data_clk>, <&qup_spi4_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 4 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 4 QCOM_GPI_SPI>;
+@@ -1562,9 +1658,12 @@ i2c5: i2c@a94000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_i2c5_data_clk>;
+ 				interrupts = <GIC_SPI 358 IRQ_TYPE_LEVEL_HIGH>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 5 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 5 QCOM_GPI_I2C>;
+@@ -1582,9 +1681,12 @@ spi5: spi@a94000 {
+ 				interrupts = <GIC_SPI 358 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi5_data_clk>, <&qup_spi5_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 5 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 5 QCOM_GPI_SPI>;
+@@ -1602,9 +1704,12 @@ i2c6: i2c@a98000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_i2c6_data_clk>;
+ 				interrupts = <GIC_SPI 363 IRQ_TYPE_LEVEL_HIGH>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 6 QCOM_GPI_I2C>,
+ 				       <&gpi_dma1 1 6 QCOM_GPI_I2C>;
+@@ -1622,9 +1727,12 @@ spi6: spi@a98000 {
+ 				interrupts = <GIC_SPI 363 IRQ_TYPE_LEVEL_HIGH>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_spi6_data_clk>, <&qup_spi6_cs>;
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>,
+-						<&aggre1_noc MASTER_QUP_1 0 &mc_virt  SLAVE_EBI1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>,
++						<&aggre1_noc MASTER_QUP_1 QCOM_ICC_TAG_ALWAYS
++						 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 				interconnect-names = "qup-core", "qup-config", "qup-memory";
+ 				dmas = <&gpi_dma1 0 6 QCOM_GPI_SPI>,
+ 				       <&gpi_dma1 1 6 QCOM_GPI_SPI>;
+@@ -1643,8 +1751,10 @@ uart7: serial@a9c000 {
+ 				pinctrl-0 = <&qup_uart7_default>;
+ 				interrupts = <GIC_SPI 579 IRQ_TYPE_LEVEL_HIGH>;
+ 				interconnect-names = "qup-core", "qup-config";
+-				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+-						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>;
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS
++						 &clk_virt SLAVE_QUP_CORE_1 QCOM_ICC_TAG_ALWAYS>,
++						<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++						 &config_noc SLAVE_QUP_1 QCOM_ICC_TAG_ALWAYS>;
+ 				status = "disabled";
+ 			};
+ 		};
+@@ -1768,8 +1878,10 @@ pcie0: pcie@1c00000 {
+ 				      "ddrss_sf_tbu",
+ 				      "noc_aggr";
+ 
+-			interconnects = <&pcie_noc MASTER_PCIE_0 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_0 0>;
++			interconnects = <&pcie_noc MASTER_PCIE_0 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &cnoc_main SLAVE_PCIE_0 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			msi-map = <0x0 &gic_its 0x1400 0x1>,
+@@ -1891,8 +2003,10 @@ pcie1: pcie@1c08000 {
+ 			assigned-clocks = <&gcc GCC_PCIE_1_AUX_CLK>;
+ 			assigned-clock-rates = <19200000>;
+ 
+-			interconnects = <&pcie_noc MASTER_PCIE_1 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_1 0>;
++			interconnects = <&pcie_noc MASTER_PCIE_1 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &cnoc_main SLAVE_PCIE_1 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "pcie-mem", "cpu-pcie";
+ 
+ 			msi-map = <0x0 &gic_its 0x1480 0x1>,
+@@ -1971,7 +2085,8 @@ crypto: crypto@1dfa000 {
+ 			dma-names = "rx", "tx";
+ 			iommus = <&apps_smmu 0x480 0x0>,
+ 				 <&apps_smmu 0x481 0x0>;
+-			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&aggre2_noc MASTER_CRYPTO QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "memory";
+ 		};
+ 
+@@ -2015,8 +2130,10 @@ ufs_mem_hc: ufshc@1d84000 {
+ 			dma-coherent;
+ 
+ 			operating-points-v2 = <&ufs_opp_table>;
+-			interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
++			interconnects = <&aggre1_noc MASTER_UFS_MEM QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_UFS_MEM_CFG QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			interconnect-names = "ufs-ddr", "cpu-ufs";
+ 			clock-names = "core_clk",
+@@ -2316,8 +2433,10 @@ ipa: ipa@3f40000 {
+ 			clocks = <&rpmhcc RPMH_IPA_CLK>;
+ 			clock-names = "core";
+ 
+-			interconnects = <&aggre2_noc MASTER_IPA 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_IPA_CFG 0>;
++			interconnects = <&aggre2_noc MASTER_IPA QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_IPA_CFG QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "memory",
+ 					     "config";
+ 
+@@ -2351,7 +2470,8 @@ remoteproc_mpss: remoteproc@4080000 {
+ 					<&rpmhpd RPMHPD_MSS>;
+ 			power-domain-names = "cx", "mss";
+ 
+-			interconnects = <&mc_virt MASTER_LLCC 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&mc_virt MASTER_LLCC QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			memory-region = <&mpss_mem>, <&q6_mpss_dtb_mem>, <&mpss_dsm_mem>;
+ 
+@@ -2392,7 +2512,8 @@ remoteproc_adsp: remoteproc@6800000 {
+ 					<&rpmhpd RPMHPD_LMX>;
+ 			power-domain-names = "lcx", "lmx";
+ 
+-			interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
+ 
+@@ -2850,8 +2971,10 @@ sdhc_2: mmc@8804000 {
+ 			power-domains = <&rpmhpd RPMHPD_CX>;
+ 			operating-points-v2 = <&sdhc2_opp_table>;
+ 
+-			interconnects = <&aggre2_noc MASTER_SDCC_2 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_SDCC_2 0>;
++			interconnects = <&aggre2_noc MASTER_SDCC_2 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_SDCC_2 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "sdhc-ddr", "cpu-sdhc";
+ 			bus-width = <4>;
+ 			dma-coherent;
+@@ -3022,8 +3145,11 @@ mdss: display-subsystem@ae00000 {
+ 
+ 			power-domains = <&dispcc MDSS_GDSC>;
+ 
+-			interconnects = <&mmss_noc MASTER_MDP 0 &mc_virt SLAVE_EBI1 0>;
+-			interconnect-names = "mdp0-mem";
++			interconnects = <&mmss_noc MASTER_MDP QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &config_noc SLAVE_DISPLAY_CFG QCOM_ICC_TAG_ACTIVE_ONLY>;
++			interconnect-names = "mdp0-mem", "cpu-cfg";
+ 
+ 			iommus = <&apps_smmu 0x1c00 0x2>;
+ 
+@@ -3495,8 +3621,10 @@ usb_1: usb@a6f8800 {
+ 
+ 			resets = <&gcc GCC_USB30_PRIM_BCR>;
+ 
+-			interconnects = <&aggre1_noc MASTER_USB3_0 0 &mc_virt SLAVE_EBI1 0>,
+-					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_0 0>;
++			interconnects = <&aggre1_noc MASTER_USB3_0 QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ALWAYS
++					 &config_noc SLAVE_USB3_0 QCOM_ICC_TAG_ALWAYS>;
+ 			interconnect-names = "usb-ddr", "apps-usb";
+ 
+ 			status = "disabled";
+@@ -4619,7 +4747,8 @@ pmu@24091000 {
+ 			compatible = "qcom,sm8550-llcc-bwmon", "qcom,sc7280-llcc-bwmon";
+ 			reg = <0 0x24091000 0 0x1000>;
+ 			interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
+-			interconnects = <&mc_virt MASTER_LLCC 3 &mc_virt SLAVE_EBI1 3>;
++			interconnects = <&mc_virt MASTER_LLCC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ACTIVE_ONLY>;
+ 
+ 			operating-points-v2 = <&llcc_bwmon_opp_table>;
+ 
+@@ -4668,7 +4797,8 @@ pmu@240b6400 {
+ 			compatible = "qcom,sm8550-cpu-bwmon", "qcom,sdm845-bwmon";
+ 			reg = <0 0x240b6400 0 0x600>;
+ 			interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;
+-			interconnects = <&gem_noc MASTER_APPSS_PROC 3 &gem_noc SLAVE_LLCC 3>;
++			interconnects = <&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &gem_noc SLAVE_LLCC QCOM_ICC_TAG_ACTIVE_ONLY>;
+ 
+ 			operating-points-v2 = <&cpu_bwmon_opp_table>;
+ 
+@@ -4752,7 +4882,8 @@ remoteproc_cdsp: remoteproc@32300000 {
+ 					<&rpmhpd RPMHPD_NSP>;
+ 			power-domain-names = "cx", "mxc", "nsp";
+ 
+-			interconnects = <&nsp_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
++			interconnects = <&nsp_noc MASTER_CDSP_PROC QCOM_ICC_TAG_ALWAYS
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ 
+ 			memory-region = <&cdsp_mem>, <&q6_cdsp_dtb_mem>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index c8a2a76a98f000..76acce6754986a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -159,13 +159,20 @@ cpu3: cpu@300 {
+ 			power-domain-names = "psci";
+ 
+ 			enable-method = "psci";
+-			next-level-cache = <&l2_200>;
++			next-level-cache = <&l2_300>;
+ 			capacity-dmips-mhz = <1792>;
+ 			dynamic-power-coefficient = <238>;
+ 
+ 			qcom,freq-domain = <&cpufreq_hw 3>;
+ 
+ 			#cooling-cells = <2>;
++
++			l2_300: l2-cache {
++				compatible = "cache";
++				cache-level = <2>;
++				cache-unified;
++				next-level-cache = <&l3_0>;
++			};
+ 		};
+ 
+ 		cpu4: cpu@400 {
+@@ -460,7 +467,7 @@ cpu_pd1: power-domain-cpu1 {
+ 		cpu_pd2: power-domain-cpu2 {
+ 			#power-domain-cells = <0>;
+ 			power-domains = <&cluster_pd>;
+-			domain-idle-states = <&silver_cpu_sleep_0>;
++			domain-idle-states = <&gold_cpu_sleep_0>;
+ 		};
+ 
+ 		cpu_pd3: power-domain-cpu3 {
+@@ -3658,8 +3665,11 @@ mdss: display-subsystem@ae00000 {
+ 			resets = <&dispcc DISP_CC_MDSS_CORE_BCR>;
+ 
+ 			interconnects = <&mmss_noc MASTER_MDP QCOM_ICC_TAG_ALWAYS
+-					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-			interconnect-names = "mdp0-mem";
++					 &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
++					<&gem_noc MASTER_APPSS_PROC QCOM_ICC_TAG_ACTIVE_ONLY
++					 &config_noc SLAVE_DISPLAY_CFG QCOM_ICC_TAG_ACTIVE_ONLY>;
++			interconnect-names = "mdp0-mem",
++					     "cpu-cfg";
+ 
+ 			power-domains = <&dispcc MDSS_GDSC>;
+ 
+@@ -6537,20 +6547,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu0_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6570,20 +6580,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu1_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6603,20 +6613,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu2_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6636,20 +6646,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu3_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6669,20 +6679,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu4_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6702,20 +6712,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu5_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6735,20 +6745,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu6_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+@@ -6768,20 +6778,20 @@ map0 {
+ 
+ 			trips {
+ 				gpu7_alert0: trip-point0 {
+-					temperature = <85000>;
++					temperature = <95000>;
+ 					hysteresis = <1000>;
+ 					type = "passive";
+ 				};
+ 
+ 				trip-point1 {
+-					temperature = <90000>;
++					temperature = <110000>;
+ 					hysteresis = <1000>;
+ 					type = "hot";
+ 				};
+ 
+ 				trip-point2 {
+-					temperature = <110000>;
+-					hysteresis = <1000>;
++					temperature = <115000>;
++					hysteresis = <0>;
+ 					type = "critical";
+ 				};
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sm8750.dtsi b/arch/arm64/boot/dts/qcom/sm8750.dtsi
+index 3bbd7d18598ee0..e8bb587a7813f9 100644
+--- a/arch/arm64/boot/dts/qcom/sm8750.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8750.dtsi
+@@ -233,53 +233,59 @@ psci {
+ 
+ 		cpu_pd0: power-domain-cpu0 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd1: power-domain-cpu1 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd2: power-domain-cpu2 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd3: power-domain-cpu3 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd4: power-domain-cpu4 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd5: power-domain-cpu5 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster0_pd>;
+ 			domain-idle-states = <&cluster0_c4>;
+ 		};
+ 
+ 		cpu_pd6: power-domain-cpu6 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster1_pd>;
+ 			domain-idle-states = <&cluster1_c4>;
+ 		};
+ 
+ 		cpu_pd7: power-domain-cpu7 {
+ 			#power-domain-cells = <0>;
+-			power-domains = <&cluster_pd>;
++			power-domains = <&cluster1_pd>;
+ 			domain-idle-states = <&cluster1_c4>;
+ 		};
+ 
+-		cluster_pd: power-domain-cluster {
++		cluster0_pd: power-domain-cluster0 {
++			#power-domain-cells = <0>;
++			domain-idle-states = <&cluster_cl5>;
++			power-domains = <&system_pd>;
++		};
++
++		cluster1_pd: power-domain-cluster1 {
+ 			#power-domain-cells = <0>;
+ 			domain-idle-states = <&cluster_cl5>;
+ 			power-domains = <&system_pd>;
+@@ -987,7 +993,7 @@ uart14: serial@898000 {
+ 
+ 				interrupts = <GIC_SPI 461 IRQ_TYPE_LEVEL_HIGH>;
+ 
+-				clocks = <&gcc GCC_QUPV3_WRAP2_S5_CLK>;
++				clocks = <&gcc GCC_QUPV3_WRAP2_S6_CLK>;
+ 				clock-names = "se";
+ 
+ 				interconnects =	<&clk_virt MASTER_QUP_CORE_2 QCOM_ICC_TAG_ALWAYS
+diff --git a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+index f5063a0df9fbfa..3cfe42ec089141 100644
+--- a/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
++++ b/arch/arm64/boot/dts/qcom/x1e001de-devkit.dts
+@@ -790,6 +790,9 @@ typec-mux@8 {
+ 
+ 		reset-gpios = <&tlmm 185 GPIO_ACTIVE_HIGH>;
+ 
++		pinctrl-0 = <&rtmr2_default>;
++		pinctrl-names = "default";
++
+ 		orientation-switch;
+ 		retimer-switch;
+ 
+@@ -845,6 +848,9 @@ typec-mux@8 {
+ 
+ 		reset-gpios = <&pm8550_gpios 10 GPIO_ACTIVE_HIGH>;
+ 
++		pinctrl-0 = <&rtmr0_default>;
++		pinctrl-names = "default";
++
+ 		retimer-switch;
+ 		orientation-switch;
+ 
+@@ -900,6 +906,9 @@ typec-mux@8 {
+ 
+ 		reset-gpios = <&tlmm 176 GPIO_ACTIVE_HIGH>;
+ 
++		pinctrl-0 = <&rtmr1_default>;
++		pinctrl-names = "default";
++
+ 		retimer-switch;
+ 		orientation-switch;
+ 
+@@ -1018,9 +1027,22 @@ &pcie6a_phy {
+ };
+ 
+ &pm8550_gpios {
++	rtmr0_default: rtmr0-reset-n-active-state {
++		pins = "gpio10";
++		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
++	};
++
+ 	usb0_3p3_reg_en: usb0-3p3-reg-en-state {
+ 		pins = "gpio11";
+ 		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
+ 	};
+ };
+ 
+@@ -1028,6 +1050,10 @@ &pmc8380_5_gpios {
+ 	usb0_pwr_1p15_en: usb0-pwr-1p15-en-state {
+ 		pins = "gpio8";
+ 		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
+ 	};
+ };
+ 
+@@ -1035,6 +1061,10 @@ &pm8550ve_9_gpios {
+ 	usb0_1p8_reg_en: usb0-1p8-reg-en-state {
+ 		pins = "gpio8";
+ 		function = "normal";
++		power-source = <1>; /* 1.8 V */
++		bias-disable;
++		input-disable;
++		output-enable;
+ 	};
+ };
+ 
+@@ -1205,6 +1235,20 @@ wake-n-pins {
+ 		};
+ 	};
+ 
++	rtmr1_default: rtmr1-reset-n-active-state {
++		pins = "gpio176";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
++	rtmr2_default: rtmr2-reset-n-active-state {
++		pins = "gpio185";
++		function = "gpio";
++		drive-strength = <2>;
++		bias-disable;
++	};
++
+ 	rtmr1_1p15_reg_en: rtmr1-1p15-reg-en-state {
+ 		pins = "gpio188";
+ 		function = "gpio";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+index 5867953c73564c..6a883fafe3c77a 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+@@ -510,6 +510,7 @@ vreg_l12b: ldo12 {
+ 			regulator-min-microvolt = <1200000>;
+ 			regulator-max-microvolt = <1200000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l13b: ldo13 {
+@@ -531,6 +532,7 @@ vreg_l15b: ldo15 {
+ 			regulator-min-microvolt = <1800000>;
+ 			regulator-max-microvolt = <1800000>;
+ 			regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
++			regulator-always-on;
+ 		};
+ 
+ 		vreg_l16b: ldo16 {
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 5aeecf711340d2..607d32f68c3406 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -4815,6 +4815,8 @@ usb_2_dwc3: usb@a200000 {
+ 				snps,dis-u1-entry-quirk;
+ 				snps,dis-u2-entry-quirk;
+ 
++				dma-coherent;
++
+ 				ports {
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso b/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso
+index c27b9b3d4e5f4a..f2d53e958da116 100644
+--- a/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso
++++ b/arch/arm64/boot/dts/renesas/white-hawk-ard-audio-da7212.dtso
+@@ -108,7 +108,7 @@ sound_clk_pins: sound-clk {
+ 	};
+ 
+ 	tpu0_pins: tpu0 {
+-		groups = "tpu_to0_a";
++		groups = "tpu_to0_b";
+ 		function = "tpu";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi
+index 20e8232f2f3234..976a3ab44e5a52 100644
+--- a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi
++++ b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi
+@@ -11,6 +11,10 @@
+ / {
+ 	model = "Renesas White Hawk Single board";
+ 	compatible = "renesas,white-hawk-single";
++
++	aliases {
++		ethernet3 = &tsn0;
++	};
+ };
+ 
+ &hscif0 {
+@@ -53,7 +57,7 @@ &tsn0 {
+ 	pinctrl-0 = <&tsn0_pins>;
+ 	pinctrl-names = "default";
+ 	phy-mode = "rgmii";
+-	phy-handle = <&phy3>;
++	phy-handle = <&tsn0_phy>;
+ 	status = "okay";
+ 
+ 	mdio {
+@@ -63,7 +67,7 @@ mdio {
+ 		reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>;
+ 		reset-post-delay-us = <4000>;
+ 
+-		phy3: ethernet-phy@0 {
++		tsn0_phy: ethernet-phy@0 {
+ 			compatible = "ethernet-phy-id002b.0980",
+ 				     "ethernet-phy-ieee802.3-c22";
+ 			reg = <0>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+index f2234dabd66411..70979079923c10 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
+@@ -312,14 +312,6 @@ &uart2 {
+ 	status = "okay";
+ };
+ 
+-&usb_host0_ehci {
+-	status = "okay";
+-};
+-
+-&usb_host0_ohci {
+-	status = "okay";
+-};
+-
+ &vopb {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 314d9dfdba5732..587e89d7fc5e42 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -585,10 +585,6 @@ &u2phy1 {
+ 	u2phy1_otg: otg-port {
+ 		status = "okay";
+ 	};
+-
+-	u2phy1_host: host-port {
+-		status = "okay";
+-	};
+ };
+ 
+ &usbdrd3_1 {
+@@ -622,11 +618,3 @@ hub_3_0: hub@2 {
+ 		vdd2-supply = <&vcc3v3_sys>;
+ 	};
+ };
+-
+-&usb_host1_ehci {
+-	status = "okay";
+-};
+-
+-&usb_host1_ohci {
+-	status = "okay";
+-};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3528.dtsi b/arch/arm64/boot/dts/rockchip/rk3528.dtsi
+index 26c3559d6a6deb..7f1ffd6003f581 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3528.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3528.dtsi
+@@ -404,9 +404,10 @@ uart2: serial@ffa00000 {
+ 
+ 		uart3: serial@ffa08000 {
+ 			compatible = "rockchip,rk3528-uart", "snps,dw-apb-uart";
++			reg = <0x0 0xffa08000 0x0 0x100>;
+ 			clocks = <&cru SCLK_UART3>, <&cru PCLK_UART3>;
+ 			clock-names = "baudclk", "apb_pclk";
+-			reg = <0x0 0xffa08000 0x0 0x100>;
++			interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>;
+ 			reg-io-width = <4>;
+ 			reg-shift = <2>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts b/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts
+index 53e71528e4c4c7..6224d72813e593 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-rock-3c.dts
+@@ -636,6 +636,7 @@ flash@0 {
+ 		spi-max-frequency = <104000000>;
+ 		spi-rx-bus-width = <4>;
+ 		spi-tx-bus-width = <1>;
++		vcc-supply = <&vcc_1v8>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi b/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi
+index 00c479aa18711a..a28b4af10d13a2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568-nanopi-r5s.dtsi
+@@ -486,9 +486,12 @@ &saradc {
+ &sdhci {
+ 	bus-width = <8>;
+ 	max-frequency = <200000000>;
++	mmc-hs200-1_8v;
+ 	non-removable;
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&emmc_bus8 &emmc_clk &emmc_cmd>;
++	pinctrl-0 = <&emmc_bus8 &emmc_clk &emmc_cmd &emmc_datastrobe>;
++	vmmc-supply = <&vcc_3v3>;
++	vqmmc-supply = <&vcc_1v8>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+index 1e18ad93ba0ebd..c52af310c7062e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+@@ -439,16 +439,15 @@ xin32k: clock-2 {
+ 		#clock-cells = <0>;
+ 	};
+ 
+-	pmu_sram: sram@10f000 {
+-		compatible = "mmio-sram";
+-		reg = <0x0 0x0010f000 0x0 0x100>;
+-		ranges = <0 0x0 0x0010f000 0x100>;
+-		#address-cells = <1>;
+-		#size-cells = <1>;
++	reserved-memory {
++		#address-cells = <2>;
++		#size-cells = <2>;
++		ranges;
+ 
+-		scmi_shmem: sram@0 {
++		scmi_shmem: shmem@10f000 {
+ 			compatible = "arm,scmi-shmem";
+-			reg = <0x0 0x100>;
++			reg = <0x0 0x0010f000 0x0 0x100>;
++			no-map;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+index 4421852161dd65..da4e0cacd6d72d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+@@ -573,6 +573,7 @@ &usb1 {
+ &ospi1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&mcu_fss0_ospi1_pins_default>;
++	status = "okay";
+ 
+ 	flash@0 {
+ 		compatible = "jedec,spi-nor";
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index c4ce2c67c0e067..8e5d4dbd74e50d 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1587,6 +1587,9 @@ CONFIG_PHY_HISTB_COMBPHY=y
+ CONFIG_PHY_HISI_INNO_USB2=y
+ CONFIG_PHY_MVEBU_CP110_COMPHY=y
+ CONFIG_PHY_MTK_TPHY=y
++CONFIG_PHY_MTK_HDMI=m
++CONFIG_PHY_MTK_MIPI_DSI=m
++CONFIG_PHY_MTK_DP=m
+ CONFIG_PHY_QCOM_EDP=m
+ CONFIG_PHY_QCOM_PCIE2=m
+ CONFIG_PHY_QCOM_QMP=m
+diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
+index e4f77757937e65..71f0cbf7b28872 100644
+--- a/arch/arm64/include/asm/esr.h
++++ b/arch/arm64/include/asm/esr.h
+@@ -378,12 +378,14 @@
+ /*
+  * ISS values for SME traps
+  */
+-
+-#define ESR_ELx_SME_ISS_SME_DISABLED	0
+-#define ESR_ELx_SME_ISS_ILL		1
+-#define ESR_ELx_SME_ISS_SM_DISABLED	2
+-#define ESR_ELx_SME_ISS_ZA_DISABLED	3
+-#define ESR_ELx_SME_ISS_ZT_DISABLED	4
++#define ESR_ELx_SME_ISS_SMTC_MASK		GENMASK(2, 0)
++#define ESR_ELx_SME_ISS_SMTC(esr)		((esr) & ESR_ELx_SME_ISS_SMTC_MASK)
++
++#define ESR_ELx_SME_ISS_SMTC_SME_DISABLED	0
++#define ESR_ELx_SME_ISS_SMTC_ILL		1
++#define ESR_ELx_SME_ISS_SMTC_SM_DISABLED	2
++#define ESR_ELx_SME_ISS_SMTC_ZA_DISABLED	3
++#define ESR_ELx_SME_ISS_SMTC_ZT_DISABLED	4
+ 
+ /* ISS field definitions for MOPS exceptions */
+ #define ESR_ELx_MOPS_ISS_MEM_INST	(UL(1) << 24)
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 8370d55f035334..0e649d0e59b06e 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -359,9 +359,6 @@ static void task_fpsimd_load(void)
+ 	WARN_ON(preemptible());
+ 	WARN_ON(test_thread_flag(TIF_KERNEL_FPSTATE));
+ 
+-	if (system_supports_fpmr())
+-		write_sysreg_s(current->thread.uw.fpmr, SYS_FPMR);
+-
+ 	if (system_supports_sve() || system_supports_sme()) {
+ 		switch (current->thread.fp_type) {
+ 		case FP_STATE_FPSIMD:
+@@ -413,6 +410,9 @@ static void task_fpsimd_load(void)
+ 			restore_ffr = system_supports_fa64();
+ 	}
+ 
++	if (system_supports_fpmr())
++		write_sysreg_s(current->thread.uw.fpmr, SYS_FPMR);
++
+ 	if (restore_sve_regs) {
+ 		WARN_ON_ONCE(current->thread.fp_type != FP_STATE_SVE);
+ 		sve_load_state(sve_pffr(&current->thread),
+@@ -651,7 +651,7 @@ static void __fpsimd_to_sve(void *sst, struct user_fpsimd_state const *fst,
+  * task->thread.uw.fpsimd_state must be up to date before calling this
+  * function.
+  */
+-static void fpsimd_to_sve(struct task_struct *task)
++static inline void fpsimd_to_sve(struct task_struct *task)
+ {
+ 	unsigned int vq;
+ 	void *sst = task->thread.sve_state;
+@@ -675,7 +675,7 @@ static void fpsimd_to_sve(struct task_struct *task)
+  * bytes of allocated kernel memory.
+  * task->thread.sve_state must be up to date before calling this function.
+  */
+-static void sve_to_fpsimd(struct task_struct *task)
++static inline void sve_to_fpsimd(struct task_struct *task)
+ {
+ 	unsigned int vq, vl;
+ 	void const *sst = task->thread.sve_state;
+@@ -1436,7 +1436,7 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
+ 	 * If this not a trap due to SME being disabled then something
+ 	 * is being used in the wrong mode, report as SIGILL.
+ 	 */
+-	if (ESR_ELx_ISS(esr) != ESR_ELx_SME_ISS_SME_DISABLED) {
++	if (ESR_ELx_SME_ISS_SMTC(esr) != ESR_ELx_SME_ISS_SMTC_SME_DISABLED) {
+ 		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
+ 		return;
+ 	}
+@@ -1460,6 +1460,8 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
+ 		sme_set_vq(vq_minus_one);
+ 
+ 		fpsimd_bind_task_to_cpu();
++	} else {
++		fpsimd_flush_task_state(current);
+ 	}
+ 
+ 	put_cpu_fpsimd_context();
+@@ -1573,8 +1575,8 @@ void fpsimd_thread_switch(struct task_struct *next)
+ 		fpsimd_save_user_state();
+ 
+ 	if (test_tsk_thread_flag(next, TIF_KERNEL_FPSTATE)) {
+-		fpsimd_load_kernel_state(next);
+ 		fpsimd_flush_cpu_state();
++		fpsimd_load_kernel_state(next);
+ 	} else {
+ 		/*
+ 		 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
+@@ -1661,6 +1663,9 @@ void fpsimd_flush_thread(void)
+ 		current->thread.svcr = 0;
+ 	}
+ 
++	if (system_supports_fpmr())
++		current->thread.uw.fpmr = 0;
++
+ 	current->thread.fp_type = FP_STATE_FPSIMD;
+ 
+ 	put_cpu_fpsimd_context();
+@@ -1801,7 +1806,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state)
+ 	get_cpu_fpsimd_context();
+ 
+ 	current->thread.uw.fpsimd_state = *state;
+-	if (test_thread_flag(TIF_SVE))
++	if (current->thread.fp_type == FP_STATE_SVE)
+ 		fpsimd_to_sve(current);
+ 
+ 	task_fpsimd_load();
+diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
+index 9d01361696a145..ae551b8571374f 100644
+--- a/arch/arm64/xen/hypercall.S
++++ b/arch/arm64/xen/hypercall.S
+@@ -83,7 +83,26 @@ HYPERCALL3(vcpu_op);
+ HYPERCALL1(platform_op_raw);
+ HYPERCALL2(multicall);
+ HYPERCALL2(vm_assist);
+-HYPERCALL3(dm_op);
++
++SYM_FUNC_START(HYPERVISOR_dm_op)
++	mov x16, #__HYPERVISOR_dm_op
++	/*
++	 * dm_op hypercalls are issued by userspace. The kernel needs to
++	 * enable access to TTBR0_EL1 as the hypervisor would issue stage 1
++	 * translations to user memory via AT instructions. Since AT
++	 * instructions are not affected by the PAN bit (ARMv8.1), we only
++	 * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
++	 * is enabled (it implies that hardware UAO and PAN are disabled).
++	 */
++	uaccess_ttbr0_enable x6, x7, x8
++	hvc XEN_IMM
++
++	/*
++	 * Disable userspace access from the kernel once the hyp call has completed.
++	 */
++	uaccess_ttbr0_disable x6, x7
++	ret
++SYM_FUNC_END(HYPERVISOR_dm_op);
+ 
+ SYM_FUNC_START(privcmd_call)
+ 	mov x16, x0
+diff --git a/arch/m68k/mac/config.c b/arch/m68k/mac/config.c
+index e324410ef239c0..d26c7f4f8c360a 100644
+--- a/arch/m68k/mac/config.c
++++ b/arch/m68k/mac/config.c
+@@ -793,7 +793,7 @@ static void __init mac_identify(void)
+ 	}
+ 
+ 	macintosh_config = mac_data_table;
+-	for (m = macintosh_config; m->ident != -1; m++) {
++	for (m = &mac_data_table[1]; m->ident != -1; m++) {
+ 		if (m->ident == model) {
+ 			macintosh_config = m;
+ 			break;
+diff --git a/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts b/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts
+index c7ea4f1c0bb21f..6c277ab83d4b94 100644
+--- a/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts
++++ b/arch/mips/boot/dts/loongson/loongson64c_4core_ls7a.dts
+@@ -29,6 +29,7 @@ msi: msi-controller@2ff00000 {
+ 		compatible = "loongson,pch-msi-1.0";
+ 		reg = <0 0x2ff00000 0 0x8>;
+ 		interrupt-controller;
++		#interrupt-cells = <1>;
+ 		msi-controller;
+ 		loongson,msi-base-vec = <64>;
+ 		loongson,msi-num-vecs = <64>;
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index 6ac621155ec3c8..9d1ab3971694ae 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -160,9 +160,7 @@ endif
+ 
+ obj64-$(CONFIG_PPC_TRANSACTIONAL_MEM)	+= tm.o
+ 
+-ifneq ($(CONFIG_XMON)$(CONFIG_KEXEC_CORE)$(CONFIG_PPC_BOOK3S),)
+ obj-y				+= ppc_save_regs.o
+-endif
+ 
+ obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
+ obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
+diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
+index 9ac3266e496522..a325c1c02f96dc 100644
+--- a/arch/powerpc/kexec/crash.c
++++ b/arch/powerpc/kexec/crash.c
+@@ -359,7 +359,10 @@ void default_machine_crash_shutdown(struct pt_regs *regs)
+ 	if (TRAP(regs) == INTERRUPT_SYSTEM_RESET)
+ 		is_via_system_reset = 1;
+ 
+-	crash_smp_send_stop();
++	if (IS_ENABLED(CONFIG_SMP))
++		crash_smp_send_stop();
++	else
++		crash_kexec_prepare();
+ 
+ 	crash_save_cpu(regs, crashing_cpu);
+ 
+diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
+index 0b6365d85d1171..dc6f75d3ac6ef7 100644
+--- a/arch/powerpc/platforms/book3s/vas-api.c
++++ b/arch/powerpc/platforms/book3s/vas-api.c
+@@ -521,6 +521,15 @@ static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Map the complete page to the paste address, so user
++	 * space should pass 0ULL as the offset parameter.
++	 */
++	if (vma->vm_pgoff) {
++		pr_debug("Page offset unsupported to map paste address\n");
++		return -EINVAL;
++	}
++
+ 	/* Ensure instance has an open send window */
+ 	if (!txwin) {
+ 		pr_err("No send window open?\n");
+diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
+index 4ac9808e55a44d..2ea30b34335415 100644
+--- a/arch/powerpc/platforms/powernv/memtrace.c
++++ b/arch/powerpc/platforms/powernv/memtrace.c
+@@ -48,11 +48,15 @@ static ssize_t memtrace_read(struct file *filp, char __user *ubuf,
+ static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ 	struct memtrace_entry *ent = filp->private_data;
++	unsigned long ent_nrpages = ent->size >> PAGE_SHIFT;
++	unsigned long vma_nrpages = vma_pages(vma);
+ 
+-	if (ent->size < vma->vm_end - vma->vm_start)
++	/* The requested page offset should be within object's page count */
++	if (vma->vm_pgoff >= ent_nrpages)
+ 		return -EINVAL;
+ 
+-	if (vma->vm_pgoff << PAGE_SHIFT >= ent->size)
++	/* The requested mapping range should remain within the bounds */
++	if (vma_nrpages > ent_nrpages - vma->vm_pgoff)
+ 		return -EINVAL;
+ 
+ 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
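The rewritten mmap checks above validate the page offset first and only then bound the length, so neither comparison needs the shifted expression that the old code used. A stand-alone sketch of the same bounds-check pattern (names hypothetical):

#include <stdbool.h>
#include <stdio.h>

/* Is the request [off, off + want) within an object of nrpages pages? */
static bool mmap_range_ok(unsigned long off, unsigned long want,
			  unsigned long nrpages)
{
	if (off >= nrpages)		/* offset must be inside the object */
		return false;
	return want <= nrpages - off;	/* subtraction cannot underflow now */
}

int main(void)
{
	printf("%d\n", mmap_range_ok(3, 2, 4));	/* 0: runs past the end */
	printf("%d\n", mmap_range_ok(3, 1, 4));	/* 1: last page is fine */
	return 0;
}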
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index d6ebc19fb99c51..eec333dd2e598c 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -197,7 +197,7 @@ static void tce_iommu_userspace_view_free(struct iommu_table *tbl)
+ 
+ static void tce_free_pSeries(struct iommu_table *tbl)
+ {
+-	if (!tbl->it_userspace)
++	if (tbl->it_userspace)
+ 		tce_iommu_userspace_view_free(tbl);
+ }
+ 
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 77c788660223b3..56f06a27d45fb1 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -455,7 +455,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
+ 
+ 	val.data_u64 = 0;
+ 	if (user_mode(regs)) {
+-		if (copy_from_user(&val, (u8 __user *)addr, len))
++		if (copy_from_user_nofault(&val, (u8 __user *)addr, len))
+ 			return -1;
+ 	} else {
+ 		memcpy(&val, (u8 *)addr, len);
+@@ -556,7 +556,7 @@ static int handle_scalar_misaligned_store(struct pt_regs *regs)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (user_mode(regs)) {
+-		if (copy_to_user((u8 __user *)addr, &val, len))
++		if (copy_to_user_nofault((u8 __user *)addr, &val, len))
+ 			return -1;
+ 	} else {
+ 		memcpy((u8 *)addr, &val, len);
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index d1c83a77735e05..0000ecf49b188b 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -143,9 +143,9 @@ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu,
+ 	struct kvm_vcpu *tmp;
+ 
+ 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+-		spin_lock(&vcpu->arch.mp_state_lock);
++		spin_lock(&tmp->arch.mp_state_lock);
+ 		WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
+-		spin_unlock(&vcpu->arch.mp_state_lock);
++		spin_unlock(&tmp->arch.mp_state_lock);
+ 	}
+ 	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
+ 
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index 9a5d5be8acf41e..d278bf0c09d1b3 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -15,6 +15,7 @@
+ #include <linux/pagemap.h>
+ #include <linux/swap.h>
+ #include <linux/pagewalk.h>
++#include <linux/backing-dev.h>
+ #include <asm/facility.h>
+ #include <asm/sections.h>
+ #include <asm/uv.h>
+@@ -324,32 +325,87 @@ static int make_folio_secure(struct mm_struct *mm, struct folio *folio, struct u
+ }
+ 
+ /**
+- * s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split.
++ * s390_wiggle_split_folio() - try to drain extra references to a folio and
++ *			       split the folio if it is large.
+  * @mm:    the mm containing the folio to work on
+  * @folio: the folio
+- * @split: whether to split a large folio
+  *
+  * Context: Must be called while holding an extra reference to the folio;
+  *          the mm lock should not be held.
+- * Return: 0 if the folio was split successfully;
+- *         -EAGAIN if the folio was not split successfully but another attempt
+- *                 can be made, or if @split was set to false;
+- *         -EINVAL in case of other errors. See split_folio().
++ * Return: 0 if the operation was successful;
++ *	   -EAGAIN if splitting the large folio was not successful,
++ *		   but another attempt can be made;
++ *	   -EINVAL in case of other folio splitting errors. See split_folio().
+  */
+-static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
++static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio)
+ {
+-	int rc;
++	int rc, tried_splits;
+ 
+ 	lockdep_assert_not_held(&mm->mmap_lock);
+ 	folio_wait_writeback(folio);
+ 	lru_add_drain_all();
+-	if (split) {
++
++	if (!folio_test_large(folio))
++		return 0;
++
++	for (tried_splits = 0; tried_splits < 2; tried_splits++) {
++		struct address_space *mapping;
++		loff_t lstart, lend;
++		struct inode *inode;
++
+ 		folio_lock(folio);
+ 		rc = split_folio(folio);
++		if (rc != -EBUSY) {
++			folio_unlock(folio);
++			return rc;
++		}
++
++		/*
++		 * Splitting with -EBUSY can fail for various reasons, but we
++		 * have to handle one case explicitly for now: some mappings
++		 * don't allow for splitting dirty folios; writeback will
++		 * mark them clean again, including marking all page table
++		 * entries mapping the folio read-only, to catch future write
++		 * attempts.
++		 *
++		 * While the system should be writing back dirty folios in the
++		 * background, we obtained this folio by looking up a writable
++		 * page table entry. On these problematic mappings, writable
++		 * page table entries imply dirty folios, preventing the
++		 * split in the first place.
++		 *
++		 * To prevent a livelock when triggering writeback manually and
++		 * then letting the caller look up the folio again in the page
++		 * table (turning it dirty), immediately try to split again.
++		 *
++		 * This is only a problem for some mappings (e.g., XFS);
++		 * mappings that do not support writeback (e.g., shmem) are
++		 * not affected.
++		 */
++		if (!folio_test_dirty(folio) || folio_test_anon(folio) ||
++		    !folio->mapping || !mapping_can_writeback(folio->mapping)) {
++			folio_unlock(folio);
++			break;
++		}
++
++		/*
++		 * Ideally, we'd only trigger writeback on this exact folio. But
++		 * there is no easy way to do that, so we'll stabilize the
++		 * mapping while we still hold the folio lock, so we can drop
++		 * the folio lock to trigger writeback on the range currently
++		 * covered by the folio instead.
++		 */
++		mapping = folio->mapping;
++		lstart = folio_pos(folio);
++		lend = lstart + folio_size(folio) - 1;
++		inode = igrab(mapping->host);
+ 		folio_unlock(folio);
+ 
+-		if (rc != -EBUSY)
+-			return rc;
++		if (unlikely(!inode))
++			break;
++
++		filemap_write_and_wait_range(mapping, lstart, lend);
++		iput(mapping->host);
+ 	}
+ 	return -EAGAIN;
+ }
+@@ -393,8 +449,11 @@ int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header
+ 	folio_walk_end(&fw, vma);
+ 	mmap_read_unlock(mm);
+ 
+-	if (rc == -E2BIG || rc == -EBUSY)
+-		rc = s390_wiggle_split_folio(mm, folio, rc == -E2BIG);
++	if (rc == -E2BIG || rc == -EBUSY) {
++		rc = s390_wiggle_split_folio(mm, folio);
++		if (!rc)
++			rc = -EAGAIN;
++	}
+ 	folio_put(folio);
+ 
+ 	return rc;
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 0776dfde2dba9c..945106b5562db0 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -605,17 +605,15 @@ static void bpf_jit_prologue(struct bpf_jit *jit, struct bpf_prog *fp,
+ 	}
+ 	/* Setup stack and backchain */
+ 	if (is_first_pass(jit) || (jit->seen & SEEN_STACK)) {
+-		if (is_first_pass(jit) || (jit->seen & SEEN_FUNC))
+-			/* lgr %w1,%r15 (backchain) */
+-			EMIT4(0xb9040000, REG_W1, REG_15);
++		/* lgr %w1,%r15 (backchain) */
++		EMIT4(0xb9040000, REG_W1, REG_15);
+ 		/* la %bfp,STK_160_UNUSED(%r15) (BPF frame pointer) */
+ 		EMIT4_DISP(0x41000000, BPF_REG_FP, REG_15, STK_160_UNUSED);
+ 		/* aghi %r15,-STK_OFF */
+ 		EMIT4_IMM(0xa70b0000, REG_15, -(STK_OFF + stack_depth));
+-		if (is_first_pass(jit) || (jit->seen & SEEN_FUNC))
+-			/* stg %w1,152(%r15) (backchain) */
+-			EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
+-				      REG_15, 152);
++		/* stg %w1,152(%r15) (backchain) */
++		EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
++			      REG_15, 152);
+ 	}
+ }
+ 
+diff --git a/arch/um/os-Linux/sigio.c b/arch/um/os-Linux/sigio.c
+index a05a6ecee75615..6de145f8fe3d93 100644
+--- a/arch/um/os-Linux/sigio.c
++++ b/arch/um/os-Linux/sigio.c
+@@ -12,6 +12,7 @@
+ #include <signal.h>
+ #include <string.h>
+ #include <sys/epoll.h>
++#include <asm/unistd.h>
+ #include <kern_util.h>
+ #include <init.h>
+ #include <os.h>
+@@ -46,7 +47,7 @@ static void *write_sigio_thread(void *unused)
+ 			       __func__, errno);
+ 		}
+ 
+-		CATCH_EINTR(r = tgkill(pid, pid, SIGIO));
++		CATCH_EINTR(r = syscall(__NR_tgkill, pid, pid, SIGIO));
+ 		if (r < 0)
+ 			printk(UM_KERN_ERR "%s: tgkill failed, errno = %d\n",
+ 			       __func__, errno);
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 49c26ce2b11522..a6fa01ef35a10e 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -38,7 +38,6 @@ struct amd_uncore_ctx {
+ 	int refcnt;
+ 	int cpu;
+ 	struct perf_event **events;
+-	struct hlist_node node;
+ };
+ 
+ struct amd_uncore_pmu {
+@@ -890,6 +889,39 @@ static void amd_uncore_umc_start(struct perf_event *event, int flags)
+ 	perf_event_update_userpage(event);
+ }
+ 
++static void amd_uncore_umc_read(struct perf_event *event)
++{
++	struct hw_perf_event *hwc = &event->hw;
++	u64 prev, new, shift;
++	s64 delta;
++
++	shift = COUNTER_SHIFT + 1;
++	prev = local64_read(&hwc->prev_count);
++
++	/*
++	 * UMC counters do not have RDPMC assignments. Read counts directly
++	 * from the corresponding PERF_CTR.
++	 */
++	rdmsrl(hwc->event_base, new);
++
++	/*
++	 * Unlike the other uncore counters, UMC counters saturate and set the
++	 * Overflow bit (bit 48) on overflow. Since they do not roll over,
++	 * proactively reset the corresponding PERF_CTR when bit 47 is set so
++	 * that the counter never gets a chance to saturate.
++	 */
++	if (new & BIT_ULL(63 - COUNTER_SHIFT)) {
++		wrmsrl(hwc->event_base, 0);
++		local64_set(&hwc->prev_count, 0);
++	} else {
++		local64_set(&hwc->prev_count, new);
++	}
++
++	delta = (new << shift) - (prev << shift);
++	delta >>= shift;
++	local64_add(delta, &event->count);
++}
++
+ static
+ void amd_uncore_umc_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
+ {
+@@ -968,7 +1000,7 @@ int amd_uncore_umc_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
+ 				.del		= amd_uncore_del,
+ 				.start		= amd_uncore_umc_start,
+ 				.stop		= amd_uncore_stop,
+-				.read		= amd_uncore_read,
++				.read		= amd_uncore_umc_read,
+ 				.capabilities	= PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
+ 				.module		= THIS_MODULE,
+ 			};
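The shift arithmetic in amd_uncore_umc_read() above keeps only the low 47 counter bits and sign-extends the difference, so a reading that wrapped or was reset since the previous sample still yields a correct delta. A stand-alone sketch of that arithmetic (SHIFT mirrors COUNTER_SHIFT + 1 from the driver):

#include <stdint.h>
#include <stdio.h>

#define SHIFT 17	/* COUNTER_SHIFT + 1: keep counter bits 0..46 */

static int64_t umc_delta(uint64_t prev, uint64_t cur)
{
	/* Shift out the high bits, subtract, then arithmetic-shift back. */
	int64_t delta = (int64_t)(cur << SHIFT) - (int64_t)(prev << SHIFT);

	return delta >> SHIFT;
}

int main(void)
{
	/* prev near the top of the 47-bit range, cur just past a wrap. */
	printf("%lld\n", (long long)umc_delta(0x7ffffffffffeULL, 1));	/* 3 */
	return 0;
}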
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index ddeb40930bc802..3ca16e1dbbb833 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -706,3 +706,36 @@ bool hv_is_hyperv_initialized(void)
+ 	return hypercall_msr.enable;
+ }
+ EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized);
++
++int hv_apicid_to_vp_index(u32 apic_id)
++{
++	u64 control;
++	u64 status;
++	unsigned long irq_flags;
++	struct hv_get_vp_from_apic_id_in *input;
++	u32 *output, ret;
++
++	local_irq_save(irq_flags);
++
++	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
++	memset(input, 0, sizeof(*input));
++	input->partition_id = HV_PARTITION_ID_SELF;
++	input->apic_ids[0] = apic_id;
++
++	output = *this_cpu_ptr(hyperv_pcpu_output_arg);
++
++	control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_INDEX_FROM_APIC_ID;
++	status = hv_do_hypercall(control, input, output);
++	ret = output[0];
++
++	local_irq_restore(irq_flags);
++
++	if (!hv_result_success(status)) {
++		pr_err("failed to get vp index from apic id %d, status %#llx\n",
++		       apic_id, status);
++		return -EINVAL;
++	}
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(hv_apicid_to_vp_index);
+diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
+index 13242ed8ff16fe..5d59e1e05e6491 100644
+--- a/arch/x86/hyperv/hv_vtl.c
++++ b/arch/x86/hyperv/hv_vtl.c
+@@ -206,41 +206,9 @@ static int hv_vtl_bringup_vcpu(u32 target_vp_index, int cpu, u64 eip_ignored)
+ 	return ret;
+ }
+ 
+-static int hv_vtl_apicid_to_vp_id(u32 apic_id)
+-{
+-	u64 control;
+-	u64 status;
+-	unsigned long irq_flags;
+-	struct hv_get_vp_from_apic_id_in *input;
+-	u32 *output, ret;
+-
+-	local_irq_save(irq_flags);
+-
+-	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+-	memset(input, 0, sizeof(*input));
+-	input->partition_id = HV_PARTITION_ID_SELF;
+-	input->apic_ids[0] = apic_id;
+-
+-	output = *this_cpu_ptr(hyperv_pcpu_output_arg);
+-
+-	control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_ID_FROM_APIC_ID;
+-	status = hv_do_hypercall(control, input, output);
+-	ret = output[0];
+-
+-	local_irq_restore(irq_flags);
+-
+-	if (!hv_result_success(status)) {
+-		pr_err("failed to get vp id from apic id %d, status %#llx\n",
+-		       apic_id, status);
+-		return -EINVAL;
+-	}
+-
+-	return ret;
+-}
+-
+ static int hv_vtl_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip)
+ {
+-	int vp_id, cpu;
++	int vp_index, cpu;
+ 
+ 	/* Find the logical CPU for the APIC ID */
+ 	for_each_present_cpu(cpu) {
+@@ -251,18 +219,18 @@ static int hv_vtl_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip)
+ 		return -EINVAL;
+ 
+ 	pr_debug("Bringing up CPU with APIC ID %d in VTL2...\n", apicid);
+-	vp_id = hv_vtl_apicid_to_vp_id(apicid);
++	vp_index = hv_apicid_to_vp_index(apicid);
+ 
+-	if (vp_id < 0) {
++	if (vp_index < 0) {
+ 		pr_err("Couldn't find CPU with APIC ID %d\n", apicid);
+ 		return -EINVAL;
+ 	}
+-	if (vp_id > ms_hyperv.max_vp_index) {
+-		pr_err("Invalid CPU id %d for APIC ID %d\n", vp_id, apicid);
++	if (vp_index > ms_hyperv.max_vp_index) {
++		pr_err("Invalid CPU id %d for APIC ID %d\n", vp_index, apicid);
+ 		return -EINVAL;
+ 	}
+ 
+-	return hv_vtl_bringup_vcpu(vp_id, cpu, start_eip);
++	return hv_vtl_bringup_vcpu(vp_index, cpu, start_eip);
+ }
+ 
+ int __init hv_vtl_early_init(void)
+diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
+index 77bf05f06b9efa..0cc239cdb4dad8 100644
+--- a/arch/x86/hyperv/ivm.c
++++ b/arch/x86/hyperv/ivm.c
+@@ -9,6 +9,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/types.h>
+ #include <linux/slab.h>
++#include <linux/cpu.h>
+ #include <asm/svm.h>
+ #include <asm/sev.h>
+ #include <asm/io.h>
+@@ -288,7 +289,7 @@ static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa)
+ 		free_page((unsigned long)vmsa);
+ }
+ 
+-int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
++int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip)
+ {
+ 	struct sev_es_save_area *vmsa = (struct sev_es_save_area *)
+ 		__get_free_page(GFP_KERNEL | __GFP_ZERO);
+@@ -297,10 +298,27 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
+ 	u64 ret, retry = 5;
+ 	struct hv_enable_vp_vtl *start_vp_input;
+ 	unsigned long flags;
++	int cpu, vp_index;
+ 
+ 	if (!vmsa)
+ 		return -ENOMEM;
+ 
++	/* Find the Hyper-V VP index, which may not be the same as the APIC ID */
++	vp_index = hv_apicid_to_vp_index(apic_id);
++	if (vp_index < 0 || vp_index > ms_hyperv.max_vp_index)
++		return -EINVAL;
++
++	/*
++	 * Find the Linux CPU number for addressing the per-CPU data, and it
++	 * might not be the same as APIC ID.
++	 */
++	for_each_present_cpu(cpu) {
++		if (arch_match_cpu_phys_id(cpu, apic_id))
++			break;
++	}
++	if (cpu >= nr_cpu_ids)
++		return -EINVAL;
++
+ 	native_store_gdt(&gdtr);
+ 
+ 	vmsa->gdtr.base = gdtr.address;
+@@ -348,7 +366,7 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
+ 	start_vp_input = (struct hv_enable_vp_vtl *)ap_start_input_arg;
+ 	memset(start_vp_input, 0, sizeof(*start_vp_input));
+ 	start_vp_input->partition_id = -1;
+-	start_vp_input->vp_index = cpu;
++	start_vp_input->vp_index = vp_index;
+ 	start_vp_input->target_vtl.target_vtl = ms_hyperv.vtl;
+ 	*(u64 *)&start_vp_input->vp_context = __pa(vmsa) | 1;
+ 
+diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
+index bab5ccfc60a748..0b9a3a307d0655 100644
+--- a/arch/x86/include/asm/mshyperv.h
++++ b/arch/x86/include/asm/mshyperv.h
+@@ -268,11 +268,11 @@ int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ bool hv_ghcb_negotiate_protocol(void);
+ void __noreturn hv_ghcb_terminate(unsigned int set, unsigned int reason);
+-int hv_snp_boot_ap(u32 cpu, unsigned long start_ip);
++int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip);
+ #else
+ static inline bool hv_ghcb_negotiate_protocol(void) { return false; }
+ static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {}
+-static inline int hv_snp_boot_ap(u32 cpu, unsigned long start_ip) { return 0; }
++static inline int hv_snp_boot_ap(u32 apic_id, unsigned long start_ip) { return 0; }
+ #endif
+ 
+ #if defined(CONFIG_AMD_MEM_ENCRYPT) || defined(CONFIG_INTEL_TDX_GUEST)
+@@ -306,6 +306,7 @@ static __always_inline u64 hv_raw_get_msr(unsigned int reg)
+ {
+ 	return __rdmsr(reg);
+ }
++int hv_apicid_to_vp_index(u32 apic_id);
+ 
+ #else /* CONFIG_HYPERV */
+ static inline void hyperv_init(void) {}
+@@ -327,6 +328,7 @@ static inline void hv_set_msr(unsigned int reg, u64 value) { }
+ static inline u64 hv_get_msr(unsigned int reg) { return 0; }
+ static inline void hv_set_non_nested_msr(unsigned int reg, u64 value) { }
+ static inline u64 hv_get_non_nested_msr(unsigned int reg) { return 0; }
++static inline int hv_apicid_to_vp_index(u32 apic_id) { return -EINVAL; }
+ #endif /* CONFIG_HYPERV */
+ 
+ 
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index ce857ef54cf158..54dc313bcdf018 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -116,13 +116,10 @@ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+ {
+ 	if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
+-		if (static_cpu_has_bug(X86_BUG_CLFLUSH_MONITOR)) {
+-			mb();
+-			clflush((void *)&current_thread_info()->flags);
+-			mb();
+-		}
++		const void *addr = &current_thread_info()->flags;
+ 
+-		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
++		__monitor(addr, 0, 0);
+ 
+ 		if (!need_resched()) {
+ 			if (ecx & 1) {
+diff --git a/arch/x86/include/asm/sighandling.h b/arch/x86/include/asm/sighandling.h
+index e770c4fc47f4c5..8727c7e21dd1e6 100644
+--- a/arch/x86/include/asm/sighandling.h
++++ b/arch/x86/include/asm/sighandling.h
+@@ -24,4 +24,26 @@ int ia32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ 
++/*
++ * To prevent immediate repeat of single step trap on return from SIGTRAP
++ * handler if the trap flag (TF) is set without an external debugger attached,
++ * clear the software event flag in the augmented SS, ensuring no single-step
++ * trap is pending upon ERETU completion.
++ *
++ * Note, this function should be called in sigreturn() before the original
++ * state is restored to make sure the TF is read from the entry frame.
++ */
++static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs)
++{
++	/*
++	 * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction
++	 * is being single-stepped, do not clear the software event flag in the
++	 * augmented SS, thus a debugger won't skip over the following instruction.
++	 */
++#ifdef CONFIG_X86_FRED
++	if (!(regs->flags & X86_EFLAGS_TF))
++		regs->fred_ss.swevent = 0;
++#endif
++}
++
+ #endif /* _ASM_X86_SIGHANDLING_H */
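The helper above reduces to a two-input decision: the saved software-event bit is cleared unless the sigreturn itself is being single-stepped. A stand-alone sketch of the predicate with the FRED field modeled as a plain integer (illustrative only):

#include <stdio.h>

#define X86_EFLAGS_TF (1u << 8)	/* trap flag */

/* Keep swevent only when TF is set, as in the guard above. */
static unsigned int fixup_swevent(unsigned int flags, unsigned int swevent)
{
	return (flags & X86_EFLAGS_TF) ? swevent : 0;
}

int main(void)
{
	printf("%u\n", fixup_swevent(0, 1));			/* cleared */
	printf("%u\n", fixup_swevent(X86_EFLAGS_TF, 1));	/* kept */
	return 0;
}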
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 0ff057ff11ce93..5de4a879232a6c 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1005,17 +1005,18 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ 		c->x86_capability[CPUID_D_1_EAX] = eax;
+ 	}
+ 
+-	/* AMD-defined flags: level 0x80000001 */
++	/*
++	 * Check if extended CPUID leaves are implemented: Max extended
++	 * CPUID leaf must be in the 0x80000001-0x8000ffff range.
++	 */
+ 	eax = cpuid_eax(0x80000000);
+-	c->extended_cpuid_level = eax;
++	c->extended_cpuid_level = ((eax & 0xffff0000) == 0x80000000) ? eax : 0;
+ 
+-	if ((eax & 0xffff0000) == 0x80000000) {
+-		if (eax >= 0x80000001) {
+-			cpuid(0x80000001, &eax, &ebx, &ecx, &edx);
++	if (c->extended_cpuid_level >= 0x80000001) {
++		cpuid(0x80000001, &eax, &ebx, &ecx, &edx);
+ 
+-			c->x86_capability[CPUID_8000_0001_ECX] = ecx;
+-			c->x86_capability[CPUID_8000_0001_EDX] = edx;
+-		}
++		c->x86_capability[CPUID_8000_0001_ECX] = ecx;
++		c->x86_capability[CPUID_8000_0001_EDX] = edx;
+ 	}
+ 
+ 	if (c->extended_cpuid_level >= 0x80000007) {
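The range test folded into the assignment above accepts a maximum extended leaf only when it lies in the 0x80000001-0x8000ffff range. A stand-alone sketch of that sanitization:

#include <stdint.h>
#include <stdio.h>

/* Zero out a bogus CPUID(0x80000000) result, as in get_cpu_cap(). */
static uint32_t sanitize_ext_level(uint32_t eax)
{
	return ((eax & 0xffff0000) == 0x80000000) ? eax : 0;
}

int main(void)
{
	printf("%#x\n", sanitize_ext_level(0x80000008));	/* kept */
	printf("%#x\n", sanitize_ext_level(0x12345678));	/* zeroed */
	return 0;
}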
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 079f046ee26d19..e8021d3e58824a 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -696,6 +696,8 @@ static int load_late_locked(void)
+ 		return load_late_stop_cpus(true);
+ 	case UCODE_NFOUND:
+ 		return -ENOENT;
++	case UCODE_OK:
++		return 0;
+ 	default:
+ 		return -EBADFD;
+ 	}
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index e2c6b471d2302a..8c18327eb10bbb 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -593,7 +593,7 @@ static void get_fixed_ranges(mtrr_type *frs)
+ 
+ void mtrr_save_fixed_ranges(void *info)
+ {
+-	if (boot_cpu_has(X86_FEATURE_MTRR))
++	if (mtrr_state.have_fixed)
+ 		get_fixed_ranges(mtrr_state.fixed_ranges);
+ }
+ 
+diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
+index 6290dd120f5e45..ff40f09ad9116c 100644
+--- a/arch/x86/kernel/ioport.c
++++ b/arch/x86/kernel/ioport.c
+@@ -33,8 +33,9 @@ void io_bitmap_share(struct task_struct *tsk)
+ 	set_tsk_thread_flag(tsk, TIF_IO_BITMAP);
+ }
+ 
+-static void task_update_io_bitmap(struct task_struct *tsk)
++static void task_update_io_bitmap(void)
+ {
++	struct task_struct *tsk = current;
+ 	struct thread_struct *t = &tsk->thread;
+ 
+ 	if (t->iopl_emul == 3 || t->io_bitmap) {
+@@ -54,7 +55,12 @@ void io_bitmap_exit(struct task_struct *tsk)
+ 	struct io_bitmap *iobm = tsk->thread.io_bitmap;
+ 
+ 	tsk->thread.io_bitmap = NULL;
+-	task_update_io_bitmap(tsk);
++	/*
++	 * Don't touch the TSS when invoked on a failed fork(). TSS
++	 * reflects the state of @current and not the state of @tsk.
++	 */
++	if (tsk == current)
++		task_update_io_bitmap();
+ 	if (iobm && refcount_dec_and_test(&iobm->refcnt))
+ 		kfree(iobm);
+ }
+@@ -192,8 +198,7 @@ SYSCALL_DEFINE1(iopl, unsigned int, level)
+ 	}
+ 
+ 	t->iopl_emul = level;
+-	task_update_io_bitmap(current);
+-
++	task_update_io_bitmap();
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 81f9b78e0f7baa..6cd5d2d6c58af6 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -419,7 +419,7 @@ static __always_inline bool handle_pending_pir(u64 *pir, struct pt_regs *regs)
+ 	bool handled = false;
+ 
+ 	for (i = 0; i < 4; i++)
+-		pir_copy[i] = pir[i];
++		pir_copy[i] = READ_ONCE(pir[i]);
+ 
+ 	for (i = 0; i < 4; i++) {
+ 		if (!pir_copy[i])
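READ_ONCE() above forces exactly one load per PIR word, so the compiler cannot re-read a word that hardware may be concurrently updating between the snapshot and the scan. A minimal sketch of the same snapshot pattern, with C11 relaxed atomics standing in for the kernel macro:

#include <stdatomic.h>
#include <stdint.h>

/* Snapshot four concurrently-written words, one load each. */
static void snapshot_pir(const _Atomic uint64_t pir[4], uint64_t copy[4])
{
	for (int i = 0; i < 4; i++)
		copy[i] = atomic_load_explicit(&pir[i], memory_order_relaxed);
}

int main(void)
{
	_Atomic uint64_t pir[4] = {1, 0, 0, 2};
	uint64_t copy[4];

	snapshot_pir(pir, copy);
	return (int)copy[0];
}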
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 962c3ce39323e7..4940fcd409251c 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -181,6 +181,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
+ 	frame->ret_addr = (unsigned long) ret_from_fork_asm;
+ 	p->thread.sp = (unsigned long) fork_frame;
+ 	p->thread.io_bitmap = NULL;
++	clear_tsk_thread_flag(p, TIF_IO_BITMAP);
+ 	p->thread.iopl_warn = 0;
+ 	memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
+ 
+@@ -469,6 +470,11 @@ void native_tss_update_io_bitmap(void)
+ 	} else {
+ 		struct io_bitmap *iobm = t->io_bitmap;
+ 
++		if (WARN_ON_ONCE(!iobm)) {
++			clear_thread_flag(TIF_IO_BITMAP);
++			native_tss_invalidate_io_bitmap();
++			return;
++		}
++
+ 		/*
+ 		 * Only copy bitmap data when the sequence number differs. The
+ 		 * update time is accounted to the incoming task.
+@@ -907,13 +913,10 @@ static __init bool prefer_mwait_c1_over_halt(void)
+ static __cpuidle void mwait_idle(void)
+ {
+ 	if (!current_set_polling_and_test()) {
+-		if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
+-			mb(); /* quirk */
+-			clflush((void *)&current_thread_info()->flags);
+-			mb(); /* quirk */
+-		}
++		const void *addr = &current_thread_info()->flags;
+ 
+-		__monitor((void *)&current_thread_info()->flags, 0, 0);
++		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
++		__monitor(addr, 0, 0);
+ 		if (!need_resched()) {
+ 			__sti_mwait(0, 0);
+ 			raw_local_irq_disable();
+diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
+index 98123ff10506c6..42bbc42bd3503c 100644
+--- a/arch/x86/kernel/signal_32.c
++++ b/arch/x86/kernel/signal_32.c
+@@ -152,6 +152,8 @@ SYSCALL32_DEFINE0(sigreturn)
+ 	struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8);
+ 	sigset_t set;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	if (!access_ok(frame, sizeof(*frame)))
+ 		goto badframe;
+ 	if (__get_user(set.sig[0], &frame->sc.oldmask)
+@@ -175,6 +177,8 @@ SYSCALL32_DEFINE0(rt_sigreturn)
+ 	struct rt_sigframe_ia32 __user *frame;
+ 	sigset_t set;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4);
+ 
+ 	if (!access_ok(frame, sizeof(*frame)))
+diff --git a/arch/x86/kernel/signal_64.c b/arch/x86/kernel/signal_64.c
+index ee9453891901b7..d483b585c6c604 100644
+--- a/arch/x86/kernel/signal_64.c
++++ b/arch/x86/kernel/signal_64.c
+@@ -250,6 +250,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
+ 	sigset_t set;
+ 	unsigned long uc_flags;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	frame = (struct rt_sigframe __user *)(regs->sp - sizeof(long));
+ 	if (!access_ok(frame, sizeof(*frame)))
+ 		goto badframe;
+@@ -366,6 +368,8 @@ COMPAT_SYSCALL_DEFINE0(x32_rt_sigreturn)
+ 	sigset_t set;
+ 	unsigned long uc_flags;
+ 
++	prevent_single_step_upon_eretu(regs);
++
+ 	frame = (struct rt_sigframe_x32 __user *)(regs->sp - 8);
+ 
+ 	if (!access_ok(frame, sizeof(*frame)))
+diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
+index f5dd84eb55dcda..cd3fd5155f6ece 100644
+--- a/arch/x86/lib/x86-opcode-map.txt
++++ b/arch/x86/lib/x86-opcode-map.txt
+@@ -35,7 +35,7 @@
+ #  - (!F3) : the last prefix is not 0xF3 (including non-last prefix case)
+ #  - (66&F2): Both 0x66 and 0xF2 prefixes are specified.
+ #
+-# REX2 Prefix
++# REX2 Prefix Superscripts
+ #  - (!REX2): REX2 is not allowed
+ #  - (REX2): REX2 variant e.g. JMPABS
+ 
+@@ -286,10 +286,10 @@ df: ESC
+ # Note: "forced64" is Intel CPU behavior: they ignore 0x66 prefix
+ # in 64-bit mode. AMD CPUs accept 0x66 prefix, it causes RIP truncation
+ # to 16 bits. In 32-bit mode, 0x66 is accepted by both Intel and AMD.
+-e0: LOOPNE/LOOPNZ Jb (f64) (!REX2)
+-e1: LOOPE/LOOPZ Jb (f64) (!REX2)
+-e2: LOOP Jb (f64) (!REX2)
+-e3: JrCXZ Jb (f64) (!REX2)
++e0: LOOPNE/LOOPNZ Jb (f64),(!REX2)
++e1: LOOPE/LOOPZ Jb (f64),(!REX2)
++e2: LOOP Jb (f64),(!REX2)
++e3: JrCXZ Jb (f64),(!REX2)
+ e4: IN AL,Ib (!REX2)
+ e5: IN eAX,Ib (!REX2)
+ e6: OUT Ib,AL (!REX2)
+@@ -298,10 +298,10 @@ e7: OUT Ib,eAX (!REX2)
+ # in "near" jumps and calls is 16-bit. For CALL,
+ # push of return address is 16-bit wide, RSP is decremented by 2
+ # but is not truncated to 16 bits, unlike RIP.
+-e8: CALL Jz (f64) (!REX2)
+-e9: JMP-near Jz (f64) (!REX2)
+-ea: JMP-far Ap (i64) (!REX2)
+-eb: JMP-short Jb (f64) (!REX2)
++e8: CALL Jz (f64),(!REX2)
++e9: JMP-near Jz (f64),(!REX2)
++ea: JMP-far Ap (i64),(!REX2)
++eb: JMP-short Jb (f64),(!REX2)
+ ec: IN AL,DX (!REX2)
+ ed: IN eAX,DX (!REX2)
+ ee: OUT DX,AL (!REX2)
+@@ -478,22 +478,22 @@ AVXcode: 1
+ 7f: movq Qq,Pq | vmovdqa Wx,Vx (66) | vmovdqa32/64 Wx,Vx (66),(evo) | vmovdqu Wx,Vx (F3) | vmovdqu32/64 Wx,Vx (F3),(evo) | vmovdqu8/16 Wx,Vx (F2),(ev)
+ # 0x0f 0x80-0x8f
+ # Note: "forced64" is Intel CPU behavior (see comment about CALL insn).
+-80: JO Jz (f64) (!REX2)
+-81: JNO Jz (f64) (!REX2)
+-82: JB/JC/JNAE Jz (f64) (!REX2)
+-83: JAE/JNB/JNC Jz (f64) (!REX2)
+-84: JE/JZ Jz (f64) (!REX2)
+-85: JNE/JNZ Jz (f64) (!REX2)
+-86: JBE/JNA Jz (f64) (!REX2)
+-87: JA/JNBE Jz (f64) (!REX2)
+-88: JS Jz (f64) (!REX2)
+-89: JNS Jz (f64) (!REX2)
+-8a: JP/JPE Jz (f64) (!REX2)
+-8b: JNP/JPO Jz (f64) (!REX2)
+-8c: JL/JNGE Jz (f64) (!REX2)
+-8d: JNL/JGE Jz (f64) (!REX2)
+-8e: JLE/JNG Jz (f64) (!REX2)
+-8f: JNLE/JG Jz (f64) (!REX2)
++80: JO Jz (f64),(!REX2)
++81: JNO Jz (f64),(!REX2)
++82: JB/JC/JNAE Jz (f64),(!REX2)
++83: JAE/JNB/JNC Jz (f64),(!REX2)
++84: JE/JZ Jz (f64),(!REX2)
++85: JNE/JNZ Jz (f64),(!REX2)
++86: JBE/JNA Jz (f64),(!REX2)
++87: JA/JNBE Jz (f64),(!REX2)
++88: JS Jz (f64),(!REX2)
++89: JNS Jz (f64),(!REX2)
++8a: JP/JPE Jz (f64),(!REX2)
++8b: JNP/JPO Jz (f64),(!REX2)
++8c: JL/JNGE Jz (f64),(!REX2)
++8d: JNL/JGE Jz (f64),(!REX2)
++8e: JLE/JNG Jz (f64),(!REX2)
++8f: JNLE/JG Jz (f64),(!REX2)
+ # 0x0f 0x90-0x9f
+ 90: SETO Eb | kmovw/q Vk,Wk | kmovb/d Vk,Wk (66)
+ 91: SETNO Eb | kmovw/q Mv,Vk | kmovb/d Mv,Vk (66)
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index a1678f0a9f81f9..e4e2567061f9db 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -117,13 +117,8 @@ int blk_rq_integrity_map_user(struct request *rq, void __user *ubuf,
+ {
+ 	int ret;
+ 	struct iov_iter iter;
+-	unsigned int direction;
+ 
+-	if (op_is_write(req_op(rq)))
+-		direction = ITER_DEST;
+-	else
+-		direction = ITER_SOURCE;
+-	iov_iter_ubuf(&iter, direction, ubuf, bytes);
++	iov_iter_ubuf(&iter, rq_data_dir(rq), ubuf, bytes);
+ 	ret = bio_integrity_map_user(rq->bio, &iter);
+ 	if (ret)
+ 		return ret;
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index d6dd2e04787491..7437de947120ed 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -644,6 +644,18 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
+ 	u64 bps_limit = tg_bps_limit(tg, rw);
+ 	u32 iops_limit = tg_iops_limit(tg, rw);
+ 
++	/*
++	 * If the queue is empty, carryover handling is not needed. In such cases,
++	 * tg->[bytes/io]_disp should be reset to 0 to avoid impacting the dispatch
++	 * of subsequent bios. The same handling applies when the previous BPS/IOPS
++	 * limit was set to max.
++	 */
++	if (tg->service_queue.nr_queued[rw] == 0) {
++		tg->bytes_disp[rw] = 0;
++		tg->io_disp[rw] = 0;
++		return;
++	}
++
+ 	/*
+ 	 * If config is updated while bios are still throttled, calculate and
+ 	 * accumulate how many bytes/ios are waited across changes. And
+@@ -656,8 +668,8 @@ static void __tg_update_carryover(struct throtl_grp *tg, bool rw,
+ 	if (iops_limit != UINT_MAX)
+ 		*ios = calculate_io_allowed(iops_limit, jiffy_elapsed) -
+ 			tg->io_disp[rw];
+-	tg->bytes_disp[rw] -= *bytes;
+-	tg->io_disp[rw] -= *ios;
++	tg->bytes_disp[rw] = -*bytes;
++	tg->io_disp[rw] = -*ios;
+ }
+ 
+ static void tg_update_carryover(struct throtl_grp *tg)
+@@ -665,10 +677,8 @@ static void tg_update_carryover(struct throtl_grp *tg)
+ 	long long bytes[2] = {0};
+ 	int ios[2] = {0};
+ 
+-	if (tg->service_queue.nr_queued[READ])
+-		__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
+-	if (tg->service_queue.nr_queued[WRITE])
+-		__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
++	__tg_update_carryover(tg, READ, &bytes[READ], &ios[READ]);
++	__tg_update_carryover(tg, WRITE, &bytes[WRITE], &ios[WRITE]);
+ 
+ 	/* see comments in struct throtl_grp for meaning of these fields. */
+ 	throtl_log(&tg->service_queue, "%s: %lld %lld %d %d\n", __func__,
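The carryover above is now assigned rather than subtracted: on a config change, bytes_disp/io_disp are replaced by the negative of the budget that went unused under the old limit. A small worked example of how the two forms diverge (numbers hypothetical):

#include <stdio.h>

int main(void)
{
	/* Old limit allowed 100 MiB over the elapsed window; only 30 MiB
	 * were actually dispatched before the limit changed.
	 */
	long long allowed = 100LL << 20;	/* calculate_bytes_allowed() */
	long long disp = 30LL << 20;		/* bytes_disp so far */
	long long carry = allowed - disp;	/* 70 MiB of unused budget */

	printf("new: %lld MiB\n", -carry >> 20);	 /* -70: assigned */
	printf("old: %lld MiB\n", (disp - carry) >> 20); /* -40: subtracted */
	return 0;
}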
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 8f15d1aa6eb89a..45c91016cef38a 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1306,7 +1306,6 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ 	spin_unlock_irqrestore(&zwplug->lock, flags);
+ 
+ 	bdev = bio->bi_bdev;
+-	submit_bio_noacct_nocheck(bio);
+ 
+ 	/*
+ 	 * blk-mq devices will reuse the extra reference on the request queue
+@@ -1314,8 +1313,12 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ 	 * path for BIO-based devices will not do that. So drop this extra
+ 	 * reference here.
+ 	 */
+-	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO))
++	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
++		bdev->bd_disk->fops->submit_bio(bio);
+ 		blk_queue_exit(bdev->bd_disk->queue);
++	} else {
++		blk_mq_submit_bio(bio);
++	}
+ 
+ put_zwplug:
+ 	/* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
+diff --git a/block/elevator.c b/block/elevator.c
+index b4d08026b02cef..dc4cadef728e55 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -744,7 +744,6 @@ ssize_t elv_iosched_store(struct gendisk *disk, const char *buf,
+ ssize_t elv_iosched_show(struct gendisk *disk, char *name)
+ {
+ 	struct request_queue *q = disk->queue;
+-	struct elevator_queue *eq = q->elevator;
+ 	struct elevator_type *cur = NULL, *e;
+ 	int len = 0;
+ 
+@@ -753,7 +752,7 @@ ssize_t elv_iosched_show(struct gendisk *disk, char *name)
+ 		len += sprintf(name+len, "[none] ");
+ 	} else {
+ 		len += sprintf(name+len, "none ");
+-		cur = eq->type;
++		cur = q->elevator->type;
+ 	}
+ 
+ 	spin_lock(&elv_list_lock);
+diff --git a/crypto/api.c b/crypto/api.c
+index 3416e98128a059..8592d3dccc64e6 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -220,10 +220,19 @@ static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
+ 		if (crypto_is_test_larval(larval))
+ 			crypto_larval_kill(larval);
+ 		alg = ERR_PTR(-ETIMEDOUT);
+-	} else if (!alg) {
++	} else if (!alg || PTR_ERR(alg) == -EEXIST) {
++		int err = alg ? -EEXIST : -EAGAIN;
++
++		/*
++		 * EEXIST is expected because two probes can be scheduled
++		 * at the same time with one using alg_name and the other
++		 * using driver_name.  Do a re-lookup but do not retry in
++		 * case we hit a quirk like gcm_base(ctr(aes),...) which
++		 * will never match.
++		 */
+ 		alg = &larval->alg;
+ 		alg = crypto_alg_lookup(alg->cra_name, type, mask) ?:
+-		      ERR_PTR(-EAGAIN);
++		      ERR_PTR(err);
+ 	} else if (IS_ERR(alg))
+ 		;
+ 	else if (crypto_is_test_larval(larval) &&
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index bf165d321440d5..89dc887d2c5c7e 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -188,6 +188,8 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 	ptr = pkey_pack_u32(ptr, pkey->paramlen);
+ 	memcpy(ptr, pkey->params, pkey->paramlen);
+ 
++	memset(info, 0, sizeof(*info));
++
+ 	if (issig) {
+ 		sig = crypto_alloc_sig(alg_name, 0, 0);
+ 		if (IS_ERR(sig)) {
+@@ -203,6 +205,7 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 			goto error_free_tfm;
+ 
+ 		len = crypto_sig_keysize(sig);
++		info->key_size = len;
+ 		info->max_sig_size = crypto_sig_maxsize(sig);
+ 		info->max_data_size = crypto_sig_digestsize(sig);
+ 
+@@ -211,6 +214,9 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 			info->supported_ops |= KEYCTL_SUPPORTS_SIGN;
+ 
+ 		if (strcmp(params->encoding, "pkcs1") == 0) {
++			info->max_enc_size = len / BITS_PER_BYTE;
++			info->max_dec_size = len / BITS_PER_BYTE;
++
+ 			info->supported_ops |= KEYCTL_SUPPORTS_ENCRYPT;
+ 			if (pkey->key_is_private)
+ 				info->supported_ops |= KEYCTL_SUPPORTS_DECRYPT;
+@@ -230,18 +236,17 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ 			goto error_free_tfm;
+ 
+ 		len = crypto_akcipher_maxsize(tfm);
++		info->key_size = len * BITS_PER_BYTE;
+ 		info->max_sig_size = len;
+ 		info->max_data_size = len;
++		info->max_enc_size = len;
++		info->max_dec_size = len;
+ 
+ 		info->supported_ops = KEYCTL_SUPPORTS_ENCRYPT;
+ 		if (pkey->key_is_private)
+ 			info->supported_ops |= KEYCTL_SUPPORTS_DECRYPT;
+ 	}
+ 
+-	info->key_size = len * 8;
+-	info->max_enc_size = len;
+-	info->max_dec_size = len;
+-
+ 	ret = 0;
+ 
+ error_free_tfm:
+diff --git a/crypto/ecdsa-p1363.c b/crypto/ecdsa-p1363.c
+index 4454f1f8f33f58..e0c55c64711c83 100644
+--- a/crypto/ecdsa-p1363.c
++++ b/crypto/ecdsa-p1363.c
+@@ -21,7 +21,8 @@ static int ecdsa_p1363_verify(struct crypto_sig *tfm,
+ 			      const void *digest, unsigned int dlen)
+ {
+ 	struct ecdsa_p1363_ctx *ctx = crypto_sig_ctx(tfm);
+-	unsigned int keylen = crypto_sig_keysize(ctx->child);
++	unsigned int keylen = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++						BITS_PER_BYTE);
+ 	unsigned int ndigits = DIV_ROUND_UP_POW2(keylen, sizeof(u64));
+ 	struct ecdsa_raw_sig sig;
+ 
+@@ -45,7 +46,8 @@ static unsigned int ecdsa_p1363_max_size(struct crypto_sig *tfm)
+ {
+ 	struct ecdsa_p1363_ctx *ctx = crypto_sig_ctx(tfm);
+ 
+-	return 2 * crypto_sig_keysize(ctx->child);
++	return 2 * DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++				     BITS_PER_BYTE);
+ }
+ 
+ static unsigned int ecdsa_p1363_digest_size(struct crypto_sig *tfm)
+diff --git a/crypto/ecdsa-x962.c b/crypto/ecdsa-x962.c
+index 90a04f4b9a2f55..ee71594d10a069 100644
+--- a/crypto/ecdsa-x962.c
++++ b/crypto/ecdsa-x962.c
+@@ -82,7 +82,7 @@ static int ecdsa_x962_verify(struct crypto_sig *tfm,
+ 	int err;
+ 
+ 	sig_ctx.ndigits = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
+-					    sizeof(u64));
++					    sizeof(u64) * BITS_PER_BYTE);
+ 
+ 	err = asn1_ber_decoder(&ecdsasignature_decoder, &sig_ctx, src, slen);
+ 	if (err < 0)
+@@ -103,7 +103,8 @@ static unsigned int ecdsa_x962_max_size(struct crypto_sig *tfm)
+ {
+ 	struct ecdsa_x962_ctx *ctx = crypto_sig_ctx(tfm);
+ 	struct sig_alg *alg = crypto_sig_alg(ctx->child);
+-	int slen = crypto_sig_keysize(ctx->child);
++	int slen = DIV_ROUND_UP_POW2(crypto_sig_keysize(ctx->child),
++				     BITS_PER_BYTE);
+ 
+ 	/*
+ 	 * Verify takes ECDSA-Sig-Value (described in RFC 5480) as input,
+diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c
+index 117526d15ddebf..a70b60a90a3c76 100644
+--- a/crypto/ecdsa.c
++++ b/crypto/ecdsa.c
+@@ -167,7 +167,7 @@ static unsigned int ecdsa_key_size(struct crypto_sig *tfm)
+ {
+ 	struct ecc_ctx *ctx = crypto_sig_ctx(tfm);
+ 
+-	return DIV_ROUND_UP(ctx->curve->nbits, 8);
++	return ctx->curve->nbits;
+ }
+ 
+ static unsigned int ecdsa_digest_size(struct crypto_sig *tfm)
+diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c
+index b3dd8a3ddeb796..2c0602f0cd406f 100644
+--- a/crypto/ecrdsa.c
++++ b/crypto/ecrdsa.c
+@@ -249,7 +249,7 @@ static unsigned int ecrdsa_key_size(struct crypto_sig *tfm)
+ 	 * Verify doesn't need any output, so it's just informational
+ 	 * for keyctl to determine the key bit size.
+ 	 */
+-	return ctx->pub_key.ndigits * sizeof(u64);
++	return ctx->pub_key.ndigits * sizeof(u64) * BITS_PER_BYTE;
+ }
+ 
+ static unsigned int ecrdsa_max_size(struct crypto_sig *tfm)
+diff --git a/crypto/krb5/rfc3961_simplified.c b/crypto/krb5/rfc3961_simplified.c
+index 79180d28baa9fb..e49cbdec7c404d 100644
+--- a/crypto/krb5/rfc3961_simplified.c
++++ b/crypto/krb5/rfc3961_simplified.c
+@@ -89,6 +89,7 @@ int crypto_shash_update_sg(struct shash_desc *desc, struct scatterlist *sg,
+ 
+ 	sg_miter_start(&miter, sg, sg_nents(sg),
+ 		       SG_MITER_FROM_SG | SG_MITER_LOCAL);
++	sg_miter_skip(&miter, offset);
+ 	for (i = 0; i < len; i += n) {
+ 		sg_miter_next(&miter);
+ 		n = min(miter.length, len - i);
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 391ae0f7641ff9..15f579a768614d 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -322,7 +322,7 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 
+ 	err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
+ 				   cipher_name, 0, mask);
+-	if (err == -ENOENT) {
++	if (err == -ENOENT && memcmp(cipher_name, "ecb(", 4)) {
+ 		err = -ENAMETOOLONG;
+ 		if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ 			     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+@@ -356,7 +356,7 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	/* Alas we screwed up the naming so we have to mangle the
+ 	 * cipher name.
+ 	 */
+-	if (!strncmp(cipher_name, "ecb(", 4)) {
++	if (!memcmp(cipher_name, "ecb(", 4)) {
+ 		int len;
+ 
+ 		len = strscpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
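Both this hunk and the matching one in xts.c below switch the fixed four-byte "ecb(" prefix test from strncmp() to memcmp(). A stand-alone note on the equivalence (memcmp() can read past a terminator on strings shorter than the prefix, so this relies on the buffer size):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *name = "ecb(aes)";

	/* For a 4-byte prefix with no embedded NUL, the two agree;
	 * memcmp() skips the per-byte NUL checks strncmp() performs.
	 */
	printf("%d %d\n", !memcmp(name, "ecb(", 4), !strncmp(name, "ecb(", 4));
	return 0;
}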
+diff --git a/crypto/rsassa-pkcs1.c b/crypto/rsassa-pkcs1.c
+index d01ac75635e008..94fa5e9600e79d 100644
+--- a/crypto/rsassa-pkcs1.c
++++ b/crypto/rsassa-pkcs1.c
+@@ -301,7 +301,7 @@ static unsigned int rsassa_pkcs1_key_size(struct crypto_sig *tfm)
+ {
+ 	struct rsassa_pkcs1_ctx *ctx = crypto_sig_ctx(tfm);
+ 
+-	return ctx->key_size;
++	return ctx->key_size * BITS_PER_BYTE;
+ }
+ 
+ static int rsassa_pkcs1_set_pub_key(struct crypto_sig *tfm,
+diff --git a/crypto/sig.c b/crypto/sig.c
+index dfc7cae9080282..53a3dd6fbe3fe6 100644
+--- a/crypto/sig.c
++++ b/crypto/sig.c
+@@ -102,6 +102,11 @@ static int sig_default_set_key(struct crypto_sig *tfm,
+ 	return -ENOSYS;
+ }
+ 
++static unsigned int sig_default_size(struct crypto_sig *tfm)
++{
++	return DIV_ROUND_UP_POW2(crypto_sig_keysize(tfm), BITS_PER_BYTE);
++}
++
+ static int sig_prepare_alg(struct sig_alg *alg)
+ {
+ 	struct crypto_alg *base = &alg->base;
+@@ -117,9 +122,9 @@ static int sig_prepare_alg(struct sig_alg *alg)
+ 	if (!alg->key_size)
+ 		return -EINVAL;
+ 	if (!alg->max_size)
+-		alg->max_size = alg->key_size;
++		alg->max_size = sig_default_size;
+ 	if (!alg->digest_size)
+-		alg->digest_size = alg->key_size;
++		alg->digest_size = sig_default_size;
+ 
+ 	base->cra_type = &crypto_sig_type;
+ 	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
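With crypto_sig_keysize() now reporting bits, the byte-oriented callers in this series convert with DIV_ROUND_UP_POW2(..., BITS_PER_BYTE). A stand-alone sketch of that conversion as plain round-up division (not the kernel macro itself):

#include <stdio.h>

/* Round-up division by a power-of-two divisor, used above with
 * d = BITS_PER_BYTE = 8 to turn a key size in bits into bytes.
 */
static unsigned int div_round_up_pow2(unsigned int n, unsigned int d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	/* A NIST P-521 key: 521 bits round up to 66 bytes, not 65. */
	printf("%u\n", div_round_up_pow2(521, 8));
	return 0;
}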
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 31529c9ef08f8f..46b7c70ea54bbf 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -363,7 +363,7 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 
+ 	err = crypto_grab_skcipher(&ctx->spawn, skcipher_crypto_instance(inst),
+ 				   cipher_name, 0, mask);
+-	if (err == -ENOENT) {
++	if (err == -ENOENT && memcmp(cipher_name, "ecb(", 4)) {
+ 		err = -ENAMETOOLONG;
+ 		if (snprintf(name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ 			     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+@@ -397,7 +397,7 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	/* Alas we screwed up the naming so we have to mangle the
+ 	 * cipher name.
+ 	 */
+-	if (!strncmp(cipher_name, "ecb(", 4)) {
++	if (!memcmp(cipher_name, "ecb(", 4)) {
+ 		int len;
+ 
+ 		len = strscpy(name, cipher_name + 4, sizeof(name));
+diff --git a/drivers/accel/amdxdna/aie2_message.c b/drivers/accel/amdxdna/aie2_message.c
+index bf4219e32cc19d..82412eec9a4b8b 100644
+--- a/drivers/accel/amdxdna/aie2_message.c
++++ b/drivers/accel/amdxdna/aie2_message.c
+@@ -525,7 +525,7 @@ aie2_cmdlist_fill_one_slot_cf(void *cmd_buf, u32 offset,
+ 	if (!payload)
+ 		return -EINVAL;
+ 
+-	if (!slot_cf_has_space(offset, payload_len))
++	if (!slot_has_space(*buf, offset, payload_len))
+ 		return -ENOSPC;
+ 
+ 	buf->cu_idx = cu_idx;
+@@ -558,7 +558,7 @@ aie2_cmdlist_fill_one_slot_dpu(void *cmd_buf, u32 offset,
+ 	if (payload_len < sizeof(*sn) || arg_sz > MAX_DPU_ARGS_SIZE)
+ 		return -EINVAL;
+ 
+-	if (!slot_dpu_has_space(offset, arg_sz))
++	if (!slot_has_space(*buf, offset, arg_sz))
+ 		return -ENOSPC;
+ 
+ 	buf->inst_buf_addr = sn->buffer;
+@@ -569,7 +569,7 @@ aie2_cmdlist_fill_one_slot_dpu(void *cmd_buf, u32 offset,
+ 	memcpy(buf->args, sn->prop_args, arg_sz);
+ 
+ 	/* Accurate buf size to hint firmware to do necessary copy */
+-	*size += sizeof(*buf) + arg_sz;
++	*size = sizeof(*buf) + arg_sz;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/accel/amdxdna/aie2_msg_priv.h b/drivers/accel/amdxdna/aie2_msg_priv.h
+index 4e02e744b470eb..6df9065b13f685 100644
+--- a/drivers/accel/amdxdna/aie2_msg_priv.h
++++ b/drivers/accel/amdxdna/aie2_msg_priv.h
+@@ -319,18 +319,16 @@ struct async_event_msg_resp {
+ } __packed;
+ 
+ #define MAX_CHAIN_CMDBUF_SIZE SZ_4K
+-#define slot_cf_has_space(offset, payload_size) \
+-	(MAX_CHAIN_CMDBUF_SIZE - ((offset) + (payload_size)) > \
+-	 offsetof(struct cmd_chain_slot_execbuf_cf, args[0]))
++#define slot_has_space(slot, offset, payload_size)		\
++	(MAX_CHAIN_CMDBUF_SIZE >= (offset) + (payload_size) +	\
++	 sizeof(typeof(slot)))
++
+ struct cmd_chain_slot_execbuf_cf {
+ 	__u32 cu_idx;
+ 	__u32 arg_cnt;
+ 	__u32 args[] __counted_by(arg_cnt);
+ };
+ 
+-#define slot_dpu_has_space(offset, payload_size) \
+-	(MAX_CHAIN_CMDBUF_SIZE - ((offset) + (payload_size)) > \
+-	 offsetof(struct cmd_chain_slot_dpu, args[0]))
+ struct cmd_chain_slot_dpu {
+ 	__u64 inst_buf_addr;
+ 	__u32 inst_size;
+diff --git a/drivers/accel/amdxdna/aie2_psp.c b/drivers/accel/amdxdna/aie2_psp.c
+index dc3a072ce3b6df..f28a060a88109b 100644
+--- a/drivers/accel/amdxdna/aie2_psp.c
++++ b/drivers/accel/amdxdna/aie2_psp.c
+@@ -126,8 +126,8 @@ struct psp_device *aie2m_psp_create(struct drm_device *ddev, struct psp_config *
+ 	psp->ddev = ddev;
+ 	memcpy(psp->psp_regs, conf->psp_regs, sizeof(psp->psp_regs));
+ 
+-	psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN) + PSP_FW_ALIGN;
+-	psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz, GFP_KERNEL);
++	psp->fw_buf_sz = ALIGN(conf->fw_size, PSP_FW_ALIGN);
++	psp->fw_buffer = drmm_kmalloc(ddev, psp->fw_buf_sz + PSP_FW_ALIGN, GFP_KERNEL);
+ 	if (!psp->fw_buffer) {
+ 		drm_err(ddev, "no memory for fw buffer");
+ 		return NULL;
+diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c
+index b28da35c30b675..1c8e283ad98542 100644
+--- a/drivers/accel/ivpu/ivpu_job.c
++++ b/drivers/accel/ivpu/ivpu_job.c
+@@ -247,6 +247,10 @@ static int ivpu_cmdq_unregister(struct ivpu_file_priv *file_priv, struct ivpu_cm
+ 	if (!cmdq->db_id)
+ 		return 0;
+ 
++	ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
++	if (!ret)
++		ivpu_dbg(vdev, JOB, "DB %d unregistered\n", cmdq->db_id);
++
+ 	if (vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW) {
+ 		ret = ivpu_jsm_hws_destroy_cmdq(vdev, file_priv->ctx.id, cmdq->id);
+ 		if (!ret)
+@@ -254,10 +258,6 @@ static int ivpu_cmdq_unregister(struct ivpu_file_priv *file_priv, struct ivpu_cm
+ 				 cmdq->id, file_priv->ctx.id);
+ 	}
+ 
+-	ret = ivpu_jsm_unregister_db(vdev, cmdq->db_id);
+-	if (!ret)
+-		ivpu_dbg(vdev, JOB, "DB %d unregistered\n", cmdq->db_id);
+-
+ 	xa_erase(&file_priv->vdev->db_xa, cmdq->db_id);
+ 	cmdq->db_id = 0;
+ 
+diff --git a/drivers/acpi/acpica/exserial.c b/drivers/acpi/acpica/exserial.c
+index 5241f4c01c7655..89a4ac447a2bea 100644
+--- a/drivers/acpi/acpica/exserial.c
++++ b/drivers/acpi/acpica/exserial.c
+@@ -201,6 +201,12 @@ acpi_ex_read_serial_bus(union acpi_operand_object *obj_desc,
+ 		function = ACPI_READ;
+ 		break;
+ 
++	case ACPI_ADR_SPACE_FIXED_HARDWARE:
++
++		buffer_length = ACPI_FFH_INPUT_BUFFER_SIZE;
++		function = ACPI_READ;
++		break;
++
+ 	default:
+ 		return_ACPI_STATUS(AE_AML_INVALID_SPACE_ID);
+ 	}
+diff --git a/drivers/acpi/apei/Kconfig b/drivers/acpi/apei/Kconfig
+index 3cfe7e7475f2fd..070c07d68dfb2f 100644
+--- a/drivers/acpi/apei/Kconfig
++++ b/drivers/acpi/apei/Kconfig
+@@ -23,6 +23,7 @@ config ACPI_APEI_GHES
+ 	select ACPI_HED
+ 	select IRQ_WORK
+ 	select GENERIC_ALLOCATOR
++	select ARM_SDE_INTERFACE if ARM64
+ 	help
+ 	  Generic Hardware Error Source provides a way to report
+ 	  platform hardware errors (such as that from chipset). It
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 289e365f84b249..0f3c663c1b0a33 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -1715,7 +1715,7 @@ void __init acpi_ghes_init(void)
+ {
+ 	int rc;
+ 
+-	sdei_init();
++	acpi_sdei_init();
+ 
+ 	if (acpi_disabled)
+ 		return;
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index f193e713825ac2..ff23b6edb2df37 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -463,7 +463,7 @@ bool cppc_allow_fast_switch(void)
+ 	struct cpc_desc *cpc_ptr;
+ 	int cpu;
+ 
+-	for_each_possible_cpu(cpu) {
++	for_each_present_cpu(cpu) {
+ 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
+ 		desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
+ 		if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
+diff --git a/drivers/acpi/osi.c b/drivers/acpi/osi.c
+index df9328c850bd33..f2c943b934be0a 100644
+--- a/drivers/acpi/osi.c
++++ b/drivers/acpi/osi.c
+@@ -42,7 +42,6 @@ static struct acpi_osi_entry
+ osi_setup_entries[OSI_STRING_ENTRIES_MAX] __initdata = {
+ 	{"Module Device", true},
+ 	{"Processor Device", true},
+-	{"3.0 _SCP Extensions", true},
+ 	{"Processor Aggregator Device", true},
+ };
+ 
+diff --git a/drivers/acpi/platform_profile.c b/drivers/acpi/platform_profile.c
+index ffbfd32f4cf1ba..b43f4459a4f61e 100644
+--- a/drivers/acpi/platform_profile.c
++++ b/drivers/acpi/platform_profile.c
+@@ -688,6 +688,9 @@ static int __init platform_profile_init(void)
+ {
+ 	int err;
+ 
++	if (acpi_disabled)
++		return -EOPNOTSUPP;
++
+ 	err = class_register(&platform_profile_class);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 14c7bac4100b46..7d59c6c9185fc1 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -534,7 +534,7 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+  */
+ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ 	{
+-		/* MECHREV Jiaolong17KS Series GM7XG0M */
++		/* MECHREVO Jiaolong17KS Series GM7XG0M */
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "GM7XG0M"),
+ 		},
+diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c
+index 0c874186f8aed4..5c2defe55898f1 100644
+--- a/drivers/acpi/thermal.c
++++ b/drivers/acpi/thermal.c
+@@ -803,6 +803,12 @@ static int acpi_thermal_add(struct acpi_device *device)
+ 
+ 	acpi_thermal_aml_dependency_fix(tz);
+ 
++	/*
++	 * Set the cooling mode [_SCP] to active cooling. This needs to happen before
++	 * we retrieve the trip point values.
++	 */
++	acpi_execute_simple_method(tz->device->handle, "_SCP", ACPI_THERMAL_MODE_ACTIVE);
++
+ 	/* Get trip points [_ACi, _PSV, etc.] (required). */
+ 	acpi_thermal_get_trip_points(tz);
+ 
+@@ -814,10 +820,6 @@ static int acpi_thermal_add(struct acpi_device *device)
+ 	if (result)
+ 		goto free_memory;
+ 
+-	/* Set the cooling mode [_SCP] to active cooling. */
+-	acpi_execute_simple_method(tz->device->handle, "_SCP",
+-				   ACPI_THERMAL_MODE_ACTIVE);
+-
+ 	/* Determine the default polling frequency [_TZP]. */
+ 	if (tzp)
+ 		tz->polling_frequency = tzp;
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index c8b0a9e29ed843..1926454c7a7e8c 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -941,6 +941,8 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ 	if (!dev->power.is_suspended)
+ 		goto Complete;
+ 
++	dev->power.is_suspended = false;
++
+ 	if (dev->power.direct_complete) {
+ 		/*
+ 		 * Allow new children to be added under the device after this
+@@ -1003,7 +1005,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ 
+  End:
+ 	error = dpm_run_callback(callback, dev, state, info);
+-	dev->power.is_suspended = false;
+ 
+ 	device_unlock(dev);
+ 	dpm_watchdog_clear(&wd);
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 0e127b0329c00c..205a4f8828b0ac 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1568,6 +1568,32 @@ void pm_runtime_enable(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(pm_runtime_enable);
+ 
++static void pm_runtime_set_suspended_action(void *data)
++{
++	pm_runtime_set_suspended(data);
++}
++
++/**
++ * devm_pm_runtime_set_active_enabled - set_active version of devm_pm_runtime_enable.
++ *
++ * @dev: Device to handle.
++ */
++int devm_pm_runtime_set_active_enabled(struct device *dev)
++{
++	int err;
++
++	err = pm_runtime_set_active(dev);
++	if (err)
++		return err;
++
++	err = devm_add_action_or_reset(dev, pm_runtime_set_suspended_action, dev);
++	if (err)
++		return err;
++
++	return devm_pm_runtime_enable(dev);
++}
++EXPORT_SYMBOL_GPL(devm_pm_runtime_set_active_enabled);
++
+ static void pm_runtime_disable_action(void *data)
+ {
+ 	pm_runtime_dont_use_autosuspend(data);
+@@ -1590,6 +1616,24 @@ int devm_pm_runtime_enable(struct device *dev)
+ }
+ EXPORT_SYMBOL_GPL(devm_pm_runtime_enable);
+ 
++static void pm_runtime_put_noidle_action(void *data)
++{
++	pm_runtime_put_noidle(data);
++}
++
++/**
++ * devm_pm_runtime_get_noresume - devres-enabled version of pm_runtime_get_noresume.
++ *
++ * @dev: Device to handle.
++ */
++int devm_pm_runtime_get_noresume(struct device *dev)
++{
++	pm_runtime_get_noresume(dev);
++
++	return devm_add_action_or_reset(dev, pm_runtime_put_noidle_action, dev);
++}
++EXPORT_SYMBOL_GPL(devm_pm_runtime_get_noresume);
++
+ /**
+  * pm_runtime_forbid - Block runtime PM of a device.
+  * @dev: Device to handle.
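The two devres helpers added above let a driver hand the entire runtime-PM lifecycle to devres. A minimal sketch of a probe function using them (the foo_ driver name is hypothetical, the helpers are the ones introduced by this hunk):

static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Mark the device active and enable runtime PM; on unbind,
	 * devres disables it and resets the state with
	 * pm_runtime_set_suspended(). */
	ret = devm_pm_runtime_set_active_enabled(dev);
	if (ret)
		return ret;

	/* Hold a usage reference for the device's lifetime; devres
	 * drops it with pm_runtime_put_noidle() on unbind. */
	return devm_pm_runtime_get_noresume(dev);
}

Since devres actions run in reverse order of registration, the usage count is dropped before runtime PM is disabled and the state is reset, which is the required ordering.
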
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index 292f127cae0abe..02fa8106ef549f 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -224,19 +224,22 @@ static int brd_do_bvec(struct brd_device *brd, struct page *page,
+ 
+ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
+ {
+-	sector_t aligned_sector = (sector + PAGE_SECTORS) & ~PAGE_SECTORS;
++	sector_t aligned_sector = round_up(sector, PAGE_SECTORS);
++	sector_t aligned_end = round_down(
++			sector + (size >> SECTOR_SHIFT), PAGE_SECTORS);
+ 	struct page *page;
+ 
+-	size -= (aligned_sector - sector) * SECTOR_SIZE;
++	if (aligned_end <= aligned_sector)
++		return;
++
+ 	xa_lock(&brd->brd_pages);
+-	while (size >= PAGE_SIZE && aligned_sector < rd_size * 2) {
++	while (aligned_sector < aligned_end && aligned_sector < rd_size * 2) {
+ 		page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT);
+ 		if (page) {
+ 			__free_page(page);
+ 			brd->brd_nr_pages--;
+ 		}
+ 		aligned_sector += PAGE_SECTORS;
+-		size -= PAGE_SIZE;
+ 	}
+ 	xa_unlock(&brd->brd_pages);
+ }
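The rewritten brd_do_discard() trims the request to whole pages: the start is rounded up and the end rounded down to PAGE_SECTORS, and nothing is freed unless at least one page is fully covered. A worked instance of the arithmetic, assuming 4 KiB pages and 512-byte sectors (PAGE_SECTORS == 8):

/* discard covering sectors [5, 21), i.e. sector 5, 16 sectors long */
sector_t start = round_up(5, PAGE_SECTORS);        /* -> 8  */
sector_t end   = round_down(5 + 16, PAGE_SECTORS); /* -> 16 */
/* only the fully covered page [8, 16) is freed; the partial pages
 * at both ends keep their possibly live data */

The old expression (sector + PAGE_SECTORS) & ~PAGE_SECTORS was not a round-up at all, since ~PAGE_SECTORS clears only a single bit rather than masking off the low bits, and the subsequent size adjustment could underflow for small requests.
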
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index e2b1f377f58563..f8d136684109aa 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -308,11 +308,14 @@ static void lo_complete_rq(struct request *rq)
+ static void lo_rw_aio_do_completion(struct loop_cmd *cmd)
+ {
+ 	struct request *rq = blk_mq_rq_from_pdu(cmd);
++	struct loop_device *lo = rq->q->queuedata;
+ 
+ 	if (!atomic_dec_and_test(&cmd->ref))
+ 		return;
+ 	kfree(cmd->bvec);
+ 	cmd->bvec = NULL;
++	if (req_op(rq) == REQ_OP_WRITE)
++		file_end_write(lo->lo_backing_file);
+ 	if (likely(!blk_should_fake_timeout(rq->q)))
+ 		blk_mq_complete_request(rq);
+ }
+@@ -387,9 +390,10 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+ 		cmd->iocb.ki_flags = 0;
+ 	}
+ 
+-	if (rw == ITER_SOURCE)
++	if (rw == ITER_SOURCE) {
++		file_start_write(lo->lo_backing_file);
+ 		ret = file->f_op->write_iter(&cmd->iocb, &iter);
+-	else
++	} else
+ 		ret = file->f_op->read_iter(&cmd->iocb, &iter);
+ 
+ 	lo_rw_aio_do_completion(cmd);
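The loop change pairs the write-freeze protection across the async boundary: file_start_write() is taken just before the write is issued, and the matching file_end_write() runs in lo_rw_aio_do_completion() once the last reference on the command is dropped, in whichever context that happens. Reduced to its shape (refcounting and error handling elided):

bool writing = (req_op(rq) == REQ_OP_WRITE);

/* submission */
if (writing)
	file_start_write(file);
ret = file->f_op->write_iter(&iocb, &iter);

/* completion, after the final atomic_dec_and_test(&cmd->ref) */
if (writing)
	file_end_write(file);	/* balances file_start_write() */

This keeps the backing filesystem's freeze counter held for exactly as long as the AIO is in flight.
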
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 48e2f400957bc9..46d9bbd8e411b3 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2719,7 +2719,7 @@ static int btintel_uefi_get_dsbr(u32 *dsbr_var)
+ 	} __packed data;
+ 
+ 	efi_status_t status;
+-	unsigned long data_size = 0;
++	unsigned long data_size = sizeof(data);
+ 	efi_guid_t guid = EFI_GUID(0xe65d8884, 0xd4af, 0x4b20, 0x8d, 0x03,
+ 				   0x77, 0x2e, 0xcc, 0x3d, 0xa5, 0x31);
+ 
+@@ -2729,16 +2729,10 @@ static int btintel_uefi_get_dsbr(u32 *dsbr_var)
+ 	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		return -EOPNOTSUPP;
+ 
+-	status = efi.get_variable(BTINTEL_EFI_DSBR, &guid, NULL, &data_size,
+-				  NULL);
+-
+-	if (status != EFI_BUFFER_TOO_SMALL || !data_size)
+-		return -EIO;
+-
+ 	status = efi.get_variable(BTINTEL_EFI_DSBR, &guid, NULL, &data_size,
+ 				  &data);
+ 
+-	if (status != EFI_SUCCESS)
++	if (status != EFI_SUCCESS || data_size != sizeof(data))
+ 		return -ENXIO;
+ 
+ 	*dsbr_var = data.dsbr;
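The btintel fix drops the size-probing call and reads straight into a fixed-size buffer, treating any size mismatch as absence. The single-call pattern, assuming a variable with a known fixed layout:

unsigned long size = sizeof(data);  /* in: buffer size; out: bytes stored */
efi_status_t status;

status = efi.get_variable(name, &guid, NULL, &size, &data);
if (status != EFI_SUCCESS || size != sizeof(data))
	return -ENXIO;		/* absent, or not the layout we expect */

The removed two-call sequence (NULL buffer first, expecting EFI_BUFFER_TOO_SMALL) costs an extra firmware round-trip and relies on the firmware honoring that contract; a pre-sized single read does not.
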
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 0a759ea26fd38f..385e29367dd1df 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -303,8 +303,13 @@ static int btintel_pcie_submit_rx(struct btintel_pcie_data *data)
+ static int btintel_pcie_start_rx(struct btintel_pcie_data *data)
+ {
+ 	int i, ret;
++	struct rxq *rxq = &data->rxq;
++
++	/* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to work around a
++	 * hardware issue that leads to a race condition in the firmware.
++	 * hardware issue that leads to a race condition in the firmware.
++	 */
+ 
+-	for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) {
++	for (i = 0; i < rxq->count - 3; i++) {
+ 		ret = btintel_pcie_submit_rx(data);
+ 		if (ret)
+ 			return ret;
+@@ -1664,8 +1669,8 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
+ 	 *  + size of index * Number of queues(2) * type of index array(4)
+ 	 *  + size of context information
+ 	 */
+-	total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd)
+-		+ sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT;
++	total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT;
++	total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT;
+ 
+ 	/* Add the sum of size of index array and size of ci struct */
+ 	total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info);
+@@ -1690,36 +1695,36 @@ static int btintel_pcie_alloc(struct btintel_pcie_data *data)
+ 	data->dma_v_addr = v_addr;
+ 
+ 	/* Setup descriptor count */
+-	data->txq.count = BTINTEL_DESCS_COUNT;
+-	data->rxq.count = BTINTEL_DESCS_COUNT;
++	data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT;
++	data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT;
+ 
+ 	/* Setup tfds */
+ 	data->txq.tfds_p_addr = p_addr;
+ 	data->txq.tfds = v_addr;
+ 
+-	p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
++	v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
+ 
+ 	/* Setup urbd0 */
+ 	data->txq.urbd0s_p_addr = p_addr;
+ 	data->txq.urbd0s = v_addr;
+ 
+-	p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
++	v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
+ 
+ 	/* Setup FRBD*/
+ 	data->rxq.frbds_p_addr = p_addr;
+ 	data->rxq.frbds = v_addr;
+ 
+-	p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
++	v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
+ 
+ 	/* Setup urbd1 */
+ 	data->rxq.urbd1s_p_addr = p_addr;
+ 	data->rxq.urbd1s = v_addr;
+ 
+-	p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
+-	v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
++	p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
++	v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
+ 
+ 	/* Setup data buffers for txq */
+ 	err = btintel_pcie_setup_txq_bufs(data, &data->txq);
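The allocation above is one coherent DMA block carved into per-queue descriptor arrays, so the physical and virtual cursors must advance by identical offsets. A reduced sketch of the carving idiom, with a hypothetical two-array layout:

total = sizeof(struct tfd) * tx_count + sizeof(struct frbd) * rx_count;
v_addr = dmam_alloc_coherent(dev, total, &p_addr, GFP_KERNEL);
if (!v_addr)
	return -ENOMEM;

txq->tfds_p_addr = p_addr;	txq->tfds  = v_addr;
p_addr += sizeof(struct tfd) * tx_count;	/* advance both cursors */
v_addr += sizeof(struct tfd) * tx_count;	/* by the same amount   */

rxq->frbds_p_addr = p_addr;	rxq->frbds = v_addr;

The point of the fix is that TX and RX previously shared one BTINTEL_DESCS_COUNT; sizing them independently requires every cursor advance to use the matching queue's count, as the hunk does.
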
+diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h
+index 873178019cad09..a94910ccd5d3c2 100644
+--- a/drivers/bluetooth/btintel_pcie.h
++++ b/drivers/bluetooth/btintel_pcie.h
+@@ -135,8 +135,11 @@ enum btintel_pcie_tlv_type {
+ /* Default interrupt timeout in msec */
+ #define BTINTEL_DEFAULT_INTR_TIMEOUT_MS	3000
+ 
+-/* The number of descriptors in TX/RX queues */
+-#define BTINTEL_DESCS_COUNT	16
++/* The number of descriptors in TX queues */
++#define BTINTEL_PCIE_TX_DESCS_COUNT	32
++
++/* The number of descriptors in RX queues */
++#define BTINTEL_PCIE_RX_DESCS_COUNT	64
+ 
+ /* Number of Queue for TX and RX
+  * It indicates the index of the IA(Index Array)
+@@ -158,9 +161,6 @@ enum {
+ /* Doorbell vector for TFD */
+ #define BTINTEL_PCIE_TX_DB_VEC	0
+ 
+-/* Number of pending RX requests for downlink */
+-#define BTINTEL_PCIE_RX_MAX_QUEUE	6
+-
+ /* Doorbell vector for FRBD */
+ #define BTINTEL_PCIE_RX_DB_VEC	513
+ 
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index a8be8cf246fb6f..7671bd15854551 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -139,9 +139,9 @@ static int fsl_mc_bus_uevent(const struct device *dev, struct kobj_uevent_env *e
+ 
+ static int fsl_mc_dma_configure(struct device *dev)
+ {
++	const struct device_driver *drv = READ_ONCE(dev->driver);
+ 	struct device *dma_dev = dev;
+ 	struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
+-	struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver);
+ 	u32 input_id = mc_dev->icid;
+ 	int ret;
+ 
+@@ -153,8 +153,8 @@ static int fsl_mc_dma_configure(struct device *dev)
+ 	else
+ 		ret = acpi_dma_configure_id(dev, DEV_DMA_COHERENT, &input_id);
+ 
+-	/* @mc_drv may not be valid when we're called from the IOMMU layer */
+-	if (!ret && dev->driver && !mc_drv->driver_managed_dma) {
++	/* @drv may not be valid when we're called from the IOMMU layer */
++	if (!ret && drv && !to_fsl_mc_driver(drv)->driver_managed_dma) {
+ 		ret = iommu_device_use_default_domain(dev);
+ 		if (ret)
+ 			arch_teardown_dma_ops(dev);
+@@ -906,8 +906,10 @@ int fsl_mc_device_add(struct fsl_mc_obj_desc *obj_desc,
+ 
+ error_cleanup_dev:
+ 	kfree(mc_dev->regions);
+-	kfree(mc_bus);
+-	kfree(mc_dev);
++	if (mc_bus)
++		kfree(mc_bus);
++	else
++		kfree(mc_dev);
+ 
+ 	return error;
+ }
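The fsl-mc fix applies the snapshot-then-test pattern for fields that can change underneath the caller: dev->driver is loaded once with READ_ONCE(), and both the NULL check and the later dereference use that snapshot. The shape, with the to_foo_mc_driver() cast hypothetical:

const struct device_driver *drv = READ_ONCE(dev->driver);

/* a second load of dev->driver here could observe NULL (or a
 * different driver) if the device is being unbound concurrently */
if (drv && !to_foo_mc_driver(drv)->driver_managed_dma)
	ret = iommu_device_use_default_domain(dev);

The kfree() change in the same file fixes a double free: when mc_bus is allocated, mc_dev points into it, so exactly one of the two objects may be freed.
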
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 8fb33c90482f79..ae61967605563c 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -404,7 +404,7 @@ config TELCLOCK
+ 	  configuration of the telecom clock configuration settings.  This
+ 	  device is used for hardware synchronization across the ATCA backplane
+ 	  fabric.  Upon loading, the driver exports a sysfs directory,
+-	  /sys/devices/platform/telco_clock, with a number of files for
++	  /sys/devices/faux/telco_clock, with a number of files for
+ 	  controlling the behavior of this hardware.
+ 
+ source "drivers/s390/char/Kconfig"
+diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c
+index 0e1fe3759530a4..720acc10f8aa45 100644
+--- a/drivers/clk/bcm/clk-raspberrypi.c
++++ b/drivers/clk/bcm/clk-raspberrypi.c
+@@ -286,6 +286,8 @@ static struct clk_hw *raspberrypi_clk_register(struct raspberrypi_clk *rpi,
+ 	init.name = devm_kasprintf(rpi->dev, GFP_KERNEL,
+ 				   "fw-clk-%s",
+ 				   rpi_firmware_clk_names[id]);
++	if (!init.name)
++		return ERR_PTR(-ENOMEM);
+ 	init.ops = &raspberrypi_firmware_clk_ops;
+ 	init.flags = CLK_GET_RATE_NOCACHE;
+ 
+diff --git a/drivers/clk/qcom/camcc-sm6350.c b/drivers/clk/qcom/camcc-sm6350.c
+index 1871970fb046d7..8aac97d29ce3ff 100644
+--- a/drivers/clk/qcom/camcc-sm6350.c
++++ b/drivers/clk/qcom/camcc-sm6350.c
+@@ -1695,6 +1695,9 @@ static struct clk_branch camcc_sys_tmr_clk = {
+ 
+ static struct gdsc bps_gdsc = {
+ 	.gdscr = 0x6004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "bps_gdsc",
+ 	},
+@@ -1704,6 +1707,9 @@ static struct gdsc bps_gdsc = {
+ 
+ static struct gdsc ipe_0_gdsc = {
+ 	.gdscr = 0x7004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ipe_0_gdsc",
+ 	},
+@@ -1713,6 +1719,9 @@ static struct gdsc ipe_0_gdsc = {
+ 
+ static struct gdsc ife_0_gdsc = {
+ 	.gdscr = 0x9004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ife_0_gdsc",
+ 	},
+@@ -1721,6 +1730,9 @@ static struct gdsc ife_0_gdsc = {
+ 
+ static struct gdsc ife_1_gdsc = {
+ 	.gdscr = 0xa004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ife_1_gdsc",
+ 	},
+@@ -1729,6 +1741,9 @@ static struct gdsc ife_1_gdsc = {
+ 
+ static struct gdsc ife_2_gdsc = {
+ 	.gdscr = 0xb004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ife_2_gdsc",
+ 	},
+@@ -1737,6 +1752,9 @@ static struct gdsc ife_2_gdsc = {
+ 
+ static struct gdsc titan_top_gdsc = {
+ 	.gdscr = 0x14004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "titan_top_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/dispcc-sm6350.c b/drivers/clk/qcom/dispcc-sm6350.c
+index e703ecf00e4404..b0bd163a449ccd 100644
+--- a/drivers/clk/qcom/dispcc-sm6350.c
++++ b/drivers/clk/qcom/dispcc-sm6350.c
+@@ -681,6 +681,9 @@ static struct clk_branch disp_cc_xo_clk = {
+ 
+ static struct gdsc mdss_gdsc = {
+ 	.gdscr = 0x1004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "mdss_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/gcc-msm8939.c b/drivers/clk/qcom/gcc-msm8939.c
+index 7431c9a65044f8..45193b3d714bab 100644
+--- a/drivers/clk/qcom/gcc-msm8939.c
++++ b/drivers/clk/qcom/gcc-msm8939.c
+@@ -432,7 +432,7 @@ static const struct parent_map gcc_xo_gpll0_gpll1a_gpll6_sleep_map[] = {
+ 	{ P_XO, 0 },
+ 	{ P_GPLL0, 1 },
+ 	{ P_GPLL1_AUX, 2 },
+-	{ P_GPLL6, 2 },
++	{ P_GPLL6, 3 },
+ 	{ P_SLEEP_CLK, 6 },
+ };
+ 
+@@ -1113,7 +1113,7 @@ static struct clk_rcg2 jpeg0_clk_src = {
+ };
+ 
+ static const struct freq_tbl ftbl_gcc_camss_mclk0_1_clk[] = {
+-	F(24000000, P_GPLL0, 1, 1, 45),
++	F(24000000, P_GPLL6, 1, 1, 45),
+ 	F(66670000, P_GPLL0, 12, 0, 0),
+ 	{ }
+ };
+diff --git a/drivers/clk/qcom/gcc-sm6350.c b/drivers/clk/qcom/gcc-sm6350.c
+index 74346dc026068a..a4d6dff9d0f7f1 100644
+--- a/drivers/clk/qcom/gcc-sm6350.c
++++ b/drivers/clk/qcom/gcc-sm6350.c
+@@ -2320,6 +2320,9 @@ static struct clk_branch gcc_video_xo_clk = {
+ 
+ static struct gdsc usb30_prim_gdsc = {
+ 	.gdscr = 0x1a004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "usb30_prim_gdsc",
+ 	},
+@@ -2328,6 +2331,9 @@ static struct gdsc usb30_prim_gdsc = {
+ 
+ static struct gdsc ufs_phy_gdsc = {
+ 	.gdscr = 0x3a004,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0xf,
+ 	.pd = {
+ 		.name = "ufs_phy_gdsc",
+ 	},
+diff --git a/drivers/clk/qcom/gpucc-sm6350.c b/drivers/clk/qcom/gpucc-sm6350.c
+index 35ed0500bc5931..ee89c42413f885 100644
+--- a/drivers/clk/qcom/gpucc-sm6350.c
++++ b/drivers/clk/qcom/gpucc-sm6350.c
+@@ -413,6 +413,9 @@ static struct clk_branch gpu_cc_gx_vsense_clk = {
+ static struct gdsc gpu_cx_gdsc = {
+ 	.gdscr = 0x106c,
+ 	.gds_hw_ctrl = 0x1540,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0x8,
+ 	.pd = {
+ 		.name = "gpu_cx_gdsc",
+ 	},
+@@ -423,6 +426,9 @@ static struct gdsc gpu_cx_gdsc = {
+ static struct gdsc gpu_gx_gdsc = {
+ 	.gdscr = 0x100c,
+ 	.clamp_io_ctrl = 0x1508,
++	.en_rest_wait_val = 0x2,
++	.en_few_wait_val = 0x2,
++	.clk_dis_wait_val = 0x2,
+ 	.pd = {
+ 		.name = "gpu_gx_gdsc",
+ 		.power_on = gdsc_gx_do_nothing_enable,
+diff --git a/drivers/counter/interrupt-cnt.c b/drivers/counter/interrupt-cnt.c
+index 949598d51575a1..d83848d0fe2af5 100644
+--- a/drivers/counter/interrupt-cnt.c
++++ b/drivers/counter/interrupt-cnt.c
+@@ -3,12 +3,14 @@
+  * Copyright (c) 2021 Pengutronix, Oleksij Rempel <kernel@pengutronix.de>
+  */
+ 
++#include <linux/cleanup.h>
+ #include <linux/counter.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_device.h>
+ #include <linux/types.h>
+ 
+@@ -19,6 +21,7 @@ struct interrupt_cnt_priv {
+ 	struct gpio_desc *gpio;
+ 	int irq;
+ 	bool enabled;
++	struct mutex lock;
+ 	struct counter_signal signals;
+ 	struct counter_synapse synapses;
+ 	struct counter_count cnts;
+@@ -41,6 +44,8 @@ static int interrupt_cnt_enable_read(struct counter_device *counter,
+ {
+ 	struct interrupt_cnt_priv *priv = counter_priv(counter);
+ 
++	guard(mutex)(&priv->lock);
++
+ 	*enable = priv->enabled;
+ 
+ 	return 0;
+@@ -51,6 +56,8 @@ static int interrupt_cnt_enable_write(struct counter_device *counter,
+ {
+ 	struct interrupt_cnt_priv *priv = counter_priv(counter);
+ 
++	guard(mutex)(&priv->lock);
++
+ 	if (priv->enabled == enable)
+ 		return 0;
+ 
+@@ -227,6 +234,8 @@ static int interrupt_cnt_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	mutex_init(&priv->lock);
++
+ 	ret = devm_counter_add(dev, counter);
+ 	if (ret < 0)
+ 		return dev_err_probe(dev, ret, "Failed to add counter\n");
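The counter driver now guards its enabled flag with a cleanup.h scope lock, which pairs the unlock with every return path automatically. A minimal sketch, assuming a priv structure whose mutex was set up with mutex_init() in probe:

#include <linux/cleanup.h>
#include <linux/mutex.h>

static int foo_enable_write(struct foo_priv *priv, u8 enable)
{
	guard(mutex)(&priv->lock);	/* released when the scope exits */

	if (priv->enabled == enable)
		return 0;		/* early return still unlocks */

	priv->enabled = enable;
	return 0;
}
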
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 19b7fb4a93e86c..05f67661553c9a 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -275,13 +275,16 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
+ 	} else {
+ 		if (nr_sgs > 0)
+ 			dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+-		dma_unmap_sg(ce->dev, areq->dst, nd, DMA_FROM_DEVICE);
++
++		if (nr_sgd > 0)
++			dma_unmap_sg(ce->dev, areq->dst, nd, DMA_FROM_DEVICE);
+ 	}
+ 
+ theend_iv:
+ 	if (areq->iv && ivsize > 0) {
+-		if (rctx->addr_iv)
++		if (!dma_mapping_error(ce->dev, rctx->addr_iv))
+ 			dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
++
+ 		offset = areq->cryptlen - ivsize;
+ 		if (rctx->op_dir & CE_DECRYPTION) {
+ 			memcpy(areq->iv, chan->backup_iv, ivsize);
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+index ec1ffda9ea32e0..658f520cee0caa 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
+@@ -832,13 +832,12 @@ static int sun8i_ce_pm_init(struct sun8i_ce_dev *ce)
+ 	err = pm_runtime_set_suspended(ce->dev);
+ 	if (err)
+ 		return err;
+-	pm_runtime_enable(ce->dev);
+-	return err;
+-}
+ 
+-static void sun8i_ce_pm_exit(struct sun8i_ce_dev *ce)
+-{
+-	pm_runtime_disable(ce->dev);
++	err = devm_pm_runtime_enable(ce->dev);
++	if (err)
++		return err;
++
++	return 0;
+ }
+ 
+ static int sun8i_ce_get_clks(struct sun8i_ce_dev *ce)
+@@ -1041,7 +1040,7 @@ static int sun8i_ce_probe(struct platform_device *pdev)
+ 			       "sun8i-ce-ns", ce);
+ 	if (err) {
+ 		dev_err(ce->dev, "Cannot request CryptoEngine Non-secure IRQ (err=%d)\n", err);
+-		goto error_irq;
++		goto error_pm;
+ 	}
+ 
+ 	err = sun8i_ce_register_algs(ce);
+@@ -1082,8 +1081,6 @@ static int sun8i_ce_probe(struct platform_device *pdev)
+ 	return 0;
+ error_alg:
+ 	sun8i_ce_unregister_algs(ce);
+-error_irq:
+-	sun8i_ce_pm_exit(ce);
+ error_pm:
+ 	sun8i_ce_free_chanlist(ce, MAXFLOW - 1);
+ 	return err;
+@@ -1104,8 +1101,6 @@ static void sun8i_ce_remove(struct platform_device *pdev)
+ #endif
+ 
+ 	sun8i_ce_free_chanlist(ce, MAXFLOW - 1);
+-
+-	sun8i_ce_pm_exit(ce);
+ }
+ 
+ static const struct of_device_id sun8i_ce_crypto_of_match_table[] = {
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+index 6072dd9f390b40..3f9d79ea01aaa6 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+@@ -343,9 +343,8 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	u32 common;
+ 	u64 byte_count;
+ 	__le32 *bf;
+-	void *buf = NULL;
++	void *buf, *result;
+ 	int j, i, todo;
+-	void *result = NULL;
+ 	u64 bs;
+ 	int digestsize;
+ 	dma_addr_t addr_res, addr_pad;
+@@ -365,14 +364,14 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	buf = kcalloc(2, bs, GFP_KERNEL | GFP_DMA);
+ 	if (!buf) {
+ 		err = -ENOMEM;
+-		goto theend;
++		goto err_out;
+ 	}
+ 	bf = (__le32 *)buf;
+ 
+ 	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
+ 	if (!result) {
+ 		err = -ENOMEM;
+-		goto theend;
++		goto err_free_buf;
+ 	}
+ 
+ 	flow = rctx->flow;
+@@ -398,7 +397,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
+ 		dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_free_result;
+ 	}
+ 
+ 	len = areq->nbytes;
+@@ -411,7 +410,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (len > 0) {
+ 		dev_err(ce->dev, "remaining len %d\n", len);
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_src;
+ 	}
+ 	addr_res = dma_map_single(ce->dev, result, digestsize, DMA_FROM_DEVICE);
+ 	cet->t_dst[0].addr = desc_addr_val_le32(ce, addr_res);
+@@ -419,7 +418,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (dma_mapping_error(ce->dev, addr_res)) {
+ 		dev_err(ce->dev, "DMA map dest\n");
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_src;
+ 	}
+ 
+ 	byte_count = areq->nbytes;
+@@ -441,7 +440,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	}
+ 	if (!j) {
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_result;
+ 	}
+ 
+ 	addr_pad = dma_map_single(ce->dev, buf, j * 4, DMA_TO_DEVICE);
+@@ -450,7 +449,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	if (dma_mapping_error(ce->dev, addr_pad)) {
+ 		dev_err(ce->dev, "DMA error on padding SG\n");
+ 		err = -EINVAL;
+-		goto theend;
++		goto err_unmap_result;
+ 	}
+ 
+ 	if (ce->variant->hash_t_dlen_in_bits)
+@@ -463,16 +462,25 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ 	err = sun8i_ce_run_task(ce, flow, crypto_ahash_alg_name(tfm));
+ 
+ 	dma_unmap_single(ce->dev, addr_pad, j * 4, DMA_TO_DEVICE);
+-	dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
++
++err_unmap_result:
+ 	dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
++	if (!err)
++		memcpy(areq->result, result, algt->alg.hash.base.halg.digestsize);
+ 
++err_unmap_src:
++	dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);
+ 
+-	memcpy(areq->result, result, algt->alg.hash.base.halg.digestsize);
+-theend:
+-	kfree(buf);
++err_free_result:
+ 	kfree(result);
++
++err_free_buf:
++	kfree(buf);
++
++err_out:
+ 	local_bh_disable();
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	local_bh_enable();
++
+ 	return 0;
+ }
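The hash path above replaces one catch-all theend label with an ordered unwind ladder, the standard kernel shape in which the success path falls through the same labels so every resource is released exactly once. Reduced to two allocations (foo_do_work() and the foo types are hypothetical):

static int foo_run(struct foo_dev *fd)
{
	void *buf, *result;
	int err = -ENOMEM;

	buf = kzalloc(SZ_64, GFP_KERNEL);
	if (!buf)
		goto err_out;

	result = kzalloc(SZ_64, GFP_KERNEL);
	if (!result)
		goto err_free_buf;

	err = foo_do_work(fd, buf, result);

	kfree(result);		/* success unwinds through the same frees */
err_free_buf:
	kfree(buf);
err_out:
	return err;
}

Each label undoes only the steps that had already succeeded, which is what the old single label got wrong: it freed and unmapped resources that were never set up.
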
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
+index 3b5c2af013d0da..83df4d71905318 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce.h
+@@ -308,8 +308,8 @@ struct sun8i_ce_hash_tfm_ctx {
+  * @flow:	the flow to use for this request
+  */
+ struct sun8i_ce_hash_reqctx {
+-	struct ahash_request fallback_req;
+ 	int flow;
++	struct ahash_request fallback_req; // keep at the end
+ };
+ 
+ /*
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 9b9605ce8ee629..8831bcb230c2d4 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -141,7 +141,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
+ 
+ 	/* we need to copy all IVs from source in case DMA is bi-directionnal */
+ 	while (sg && len) {
+-		if (sg_dma_len(sg) == 0) {
++		if (sg->length == 0) {
+ 			sg = sg_next(sg);
+ 			continue;
+ 		}
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index 09d9589f2d681d..33a285981dfd45 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -1187,8 +1187,7 @@ static int iaa_compress(struct crypto_tfm *tfm,	struct acomp_req *req,
+ 			" src_addr %llx, dst_addr %llx\n", __func__,
+ 			active_compression_mode->name,
+ 			src_addr, dst_addr);
+-	} else if (ctx->async_mode)
+-		req->base.data = idxd_desc;
++	}
+ 
+ 	dev_dbg(dev, "%s: compression mode %s,"
+ 		" desc->src1_addr %llx, desc->src1_size %d,"
+@@ -1425,8 +1424,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
+ 			" src_addr %llx, dst_addr %llx\n", __func__,
+ 			active_compression_mode->name,
+ 			src_addr, dst_addr);
+-	} else if (ctx->async_mode && !disable_async)
+-		req->base.data = idxd_desc;
++	}
+ 
+ 	dev_dbg(dev, "%s: decompression mode %s,"
+ 		" desc->src1_addr %llx, desc->src1_size %d,"
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index cf62db50f95858..48c5c8ea8c43ec 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -459,6 +459,9 @@ static int mv_cesa_skcipher_queue_req(struct skcipher_request *req,
+ 	struct mv_cesa_skcipher_req *creq = skcipher_request_ctx(req);
+ 	struct mv_cesa_engine *engine;
+ 
++	if (!req->cryptlen)
++		return 0;
++
+ 	ret = mv_cesa_skcipher_req_init(req, tmpl);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/crypto/marvell/cesa/hash.c b/drivers/crypto/marvell/cesa/hash.c
+index f150861ceaf695..6815eddc906812 100644
+--- a/drivers/crypto/marvell/cesa/hash.c
++++ b/drivers/crypto/marvell/cesa/hash.c
+@@ -663,7 +663,7 @@ static int mv_cesa_ahash_dma_req_init(struct ahash_request *req)
+ 	if (ret)
+ 		goto err_free_tdma;
+ 
+-	if (iter.src.sg) {
++	if (iter.base.len > iter.src.op_offset) {
+ 		/*
+ 		 * Add all the new data, inserting an operation block and
+ 		 * launch command between each full SRAM block-worth of
+diff --git a/drivers/crypto/xilinx/zynqmp-sha.c b/drivers/crypto/xilinx/zynqmp-sha.c
+index 580649f9bff81f..0edf8eb264b55f 100644
+--- a/drivers/crypto/xilinx/zynqmp-sha.c
++++ b/drivers/crypto/xilinx/zynqmp-sha.c
+@@ -3,18 +3,19 @@
+  * Xilinx ZynqMP SHA Driver.
+  * Copyright (c) 2022 Xilinx Inc.
+  */
+-#include <linux/cacheflush.h>
+ #include <crypto/hash.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/sha3.h>
+-#include <linux/crypto.h>
++#include <linux/cacheflush.h>
++#include <linux/cleanup.h>
+ #include <linux/device.h>
+ #include <linux/dma-mapping.h>
++#include <linux/err.h>
+ #include <linux/firmware/xlnx-zynqmp.h>
+-#include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/spinlock.h>
+ #include <linux/platform_device.h>
+ 
+ #define ZYNQMP_DMA_BIT_MASK		32U
+@@ -43,6 +44,8 @@ struct zynqmp_sha_desc_ctx {
+ static dma_addr_t update_dma_addr, final_dma_addr;
+ static char *ubuf, *fbuf;
+ 
++static DEFINE_SPINLOCK(zynqmp_sha_lock);
++
+ static int zynqmp_sha_init_tfm(struct crypto_shash *hash)
+ {
+ 	const char *fallback_driver_name = crypto_shash_alg_name(hash);
+@@ -124,7 +127,8 @@ static int zynqmp_sha_export(struct shash_desc *desc, void *out)
+ 	return crypto_shash_export(&dctx->fbk_req, out);
+ }
+ 
+-static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out)
++static int __zynqmp_sha_digest(struct shash_desc *desc, const u8 *data,
++			       unsigned int len, u8 *out)
+ {
+ 	unsigned int remaining_len = len;
+ 	int update_size;
+@@ -159,6 +163,12 @@ static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned i
+ 	return ret;
+ }
+ 
++static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out)
++{
++	scoped_guard(spinlock_bh, &zynqmp_sha_lock)
++		return __zynqmp_sha_digest(desc, data, len, out);
++}
++
+ static struct zynqmp_sha_drv_ctx sha3_drv_ctx = {
+ 	.sha3_384 = {
+ 		.init = zynqmp_sha_init,
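zynqmp-sha serialises its single set of DMA bounce buffers with a file-scope spinlock taken through scoped_guard(), the statement-scoped variant of guard(); the _bh lock class also disables bottom halves while held. The shape, with the foo names hypothetical:

static DEFINE_SPINLOCK(foo_lock);

static int foo_digest(const u8 *data, unsigned int len, u8 *out)
{
	/* lock held (BHs off) exactly for the wrapped statement;
	 * released automatically even on the return inside it */
	scoped_guard(spinlock_bh, &foo_lock)
		return __foo_digest(data, len, out);
}

Splitting the worker into a __-prefixed unlocked helper keeps the locking decision at a single entry point, which is the structure the hunk adopts.
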
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index b6255c0601bb29..aa2dc762140f6e 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -5624,7 +5624,8 @@ static int udma_probe(struct platform_device *pdev)
+ 		uc->config.dir = DMA_MEM_TO_MEM;
+ 		uc->name = devm_kasprintf(dev, GFP_KERNEL, "%s chan%d",
+ 					  dev_name(dev), i);
+-
++		if (!uc->name)
++			return -ENOMEM;
+ 		vchan_init(&uc->vc, &ud->ddev);
+ 		/* Use custom vchan completion handling */
+ 		tasklet_setup(&uc->vc.task, udma_vchan_complete);
+diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c
+index 4942a240c30f25..ae3bb7afa103eb 100644
+--- a/drivers/edac/bluefield_edac.c
++++ b/drivers/edac/bluefield_edac.c
+@@ -199,8 +199,10 @@ static void bluefield_gather_report_ecc(struct mem_ctl_info *mci,
+ 	 * error without the detailed information.
+ 	 */
+ 	err = bluefield_edac_readl(priv, MLXBF_SYNDROM, &dram_syndrom);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "DRAM syndrom read failed.\n");
++		return;
++	}
+ 
+ 	serr = FIELD_GET(MLXBF_SYNDROM__SERR, dram_syndrom);
+ 	derr = FIELD_GET(MLXBF_SYNDROM__DERR, dram_syndrom);
+@@ -213,20 +215,26 @@ static void bluefield_gather_report_ecc(struct mem_ctl_info *mci,
+ 	}
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ADD_INFO, &dram_additional_info);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "DRAM additional info read failed.\n");
++		return;
++	}
+ 
+ 	err_prank = FIELD_GET(MLXBF_ADD_INFO__ERR_PRANK, dram_additional_info);
+ 
+ 	ecc_dimm = (err_prank >= 2 && priv->dimm_ranks[0] <= 2) ? 1 : 0;
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ERR_ADDR_0, &edea0);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "Error addr 0 read failed.\n");
++		return;
++	}
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ERR_ADDR_1, &edea1);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "Error addr 1 read failed.\n");
++		return;
++	}
+ 
+ 	ecc_dimm_addr = ((u64)edea1 << 32) | edea0;
+ 
+@@ -250,8 +258,10 @@ static void bluefield_edac_check(struct mem_ctl_info *mci)
+ 		return;
+ 
+ 	err = bluefield_edac_readl(priv, MLXBF_ECC_CNT, &ecc_count);
+-	if (err)
++	if (err) {
+ 		dev_err(priv->dev, "ECC count read failed.\n");
++		return;
++	}
+ 
+ 	single_error_count = FIELD_GET(MLXBF_ECC_CNT__SERR_CNT, ecc_count);
+ 	double_error_count = FIELD_GET(MLXBF_ECC_CNT__DERR_CNT, ecc_count);
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 355a977019e944..355b527d839e78 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -95,7 +95,7 @@ static u32 offsets_demand2_spr[] = {0x22c70, 0x22d80, 0x22f18, 0x22d58, 0x22c64,
+ static u32 offsets_demand_spr_hbm0[] = {0x2a54, 0x2a60, 0x2b10, 0x2a58, 0x2a5c, 0x0ee0};
+ static u32 offsets_demand_spr_hbm1[] = {0x2e54, 0x2e60, 0x2f10, 0x2e58, 0x2e5c, 0x0fb0};
+ 
+-static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable,
++static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable, u32 *rrl_ctl,
+ 				      u32 *offsets_scrub, u32 *offsets_demand,
+ 				      u32 *offsets_demand2)
+ {
+@@ -108,10 +108,10 @@ static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable
+ 
+ 	if (enable) {
+ 		/* Save default configurations */
+-		imc->chan[chan].retry_rd_err_log_s = s;
+-		imc->chan[chan].retry_rd_err_log_d = d;
++		rrl_ctl[0] = s;
++		rrl_ctl[1] = d;
+ 		if (offsets_demand2)
+-			imc->chan[chan].retry_rd_err_log_d2 = d2;
++			rrl_ctl[2] = d2;
+ 
+ 		s &= ~RETRY_RD_ERR_LOG_NOOVER_UC;
+ 		s |=  RETRY_RD_ERR_LOG_EN;
+@@ -125,25 +125,25 @@ static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable
+ 		}
+ 	} else {
+ 		/* Restore default configurations */
+-		if (imc->chan[chan].retry_rd_err_log_s & RETRY_RD_ERR_LOG_UC)
++		if (rrl_ctl[0] & RETRY_RD_ERR_LOG_UC)
+ 			s |=  RETRY_RD_ERR_LOG_UC;
+-		if (imc->chan[chan].retry_rd_err_log_s & RETRY_RD_ERR_LOG_NOOVER)
++		if (rrl_ctl[0] & RETRY_RD_ERR_LOG_NOOVER)
+ 			s |=  RETRY_RD_ERR_LOG_NOOVER;
+-		if (!(imc->chan[chan].retry_rd_err_log_s & RETRY_RD_ERR_LOG_EN))
++		if (!(rrl_ctl[0] & RETRY_RD_ERR_LOG_EN))
+ 			s &= ~RETRY_RD_ERR_LOG_EN;
+-		if (imc->chan[chan].retry_rd_err_log_d & RETRY_RD_ERR_LOG_UC)
++		if (rrl_ctl[1] & RETRY_RD_ERR_LOG_UC)
+ 			d |=  RETRY_RD_ERR_LOG_UC;
+-		if (imc->chan[chan].retry_rd_err_log_d & RETRY_RD_ERR_LOG_NOOVER)
++		if (rrl_ctl[1] & RETRY_RD_ERR_LOG_NOOVER)
+ 			d |=  RETRY_RD_ERR_LOG_NOOVER;
+-		if (!(imc->chan[chan].retry_rd_err_log_d & RETRY_RD_ERR_LOG_EN))
++		if (!(rrl_ctl[1] & RETRY_RD_ERR_LOG_EN))
+ 			d &= ~RETRY_RD_ERR_LOG_EN;
+ 
+ 		if (offsets_demand2) {
+-			if (imc->chan[chan].retry_rd_err_log_d2 & RETRY_RD_ERR_LOG_UC)
++			if (rrl_ctl[2] & RETRY_RD_ERR_LOG_UC)
+ 				d2 |=  RETRY_RD_ERR_LOG_UC;
+-			if (!(imc->chan[chan].retry_rd_err_log_d2 & RETRY_RD_ERR_LOG_NOOVER))
++			if (!(rrl_ctl[2] & RETRY_RD_ERR_LOG_NOOVER))
+ 				d2 &=  ~RETRY_RD_ERR_LOG_NOOVER;
+-			if (!(imc->chan[chan].retry_rd_err_log_d2 & RETRY_RD_ERR_LOG_EN))
++			if (!(rrl_ctl[2] & RETRY_RD_ERR_LOG_EN))
+ 				d2 &= ~RETRY_RD_ERR_LOG_EN;
+ 		}
+ 	}
+@@ -157,6 +157,7 @@ static void __enable_retry_rd_err_log(struct skx_imc *imc, int chan, bool enable
+ static void enable_retry_rd_err_log(bool enable)
+ {
+ 	int i, j, imc_num, chan_num;
++	struct skx_channel *chan;
+ 	struct skx_imc *imc;
+ 	struct skx_dev *d;
+ 
+@@ -171,8 +172,9 @@ static void enable_retry_rd_err_log(bool enable)
+ 			if (!imc->mbase)
+ 				continue;
+ 
++			chan = d->imc[i].chan;
+ 			for (j = 0; j < chan_num; j++)
+-				__enable_retry_rd_err_log(imc, j, enable,
++				__enable_retry_rd_err_log(imc, j, enable, chan[j].rrl_ctl[0],
+ 							  res_cfg->offsets_scrub,
+ 							  res_cfg->offsets_demand,
+ 							  res_cfg->offsets_demand2);
+@@ -186,12 +188,13 @@ static void enable_retry_rd_err_log(bool enable)
+ 			if (!imc->mbase || !imc->hbm_mc)
+ 				continue;
+ 
++			chan = d->imc[i].chan;
+ 			for (j = 0; j < chan_num; j++) {
+-				__enable_retry_rd_err_log(imc, j, enable,
++				__enable_retry_rd_err_log(imc, j, enable, chan[j].rrl_ctl[0],
+ 							  res_cfg->offsets_scrub_hbm0,
+ 							  res_cfg->offsets_demand_hbm0,
+ 							  NULL);
+-				__enable_retry_rd_err_log(imc, j, enable,
++				__enable_retry_rd_err_log(imc, j, enable, chan[j].rrl_ctl[1],
+ 							  res_cfg->offsets_scrub_hbm1,
+ 							  res_cfg->offsets_demand_hbm1,
+ 							  NULL);
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index fa5b442b184499..c9ade45c1a99f3 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -116,6 +116,7 @@ EXPORT_SYMBOL_GPL(skx_adxl_get);
+ 
+ void skx_adxl_put(void)
+ {
++	adxl_component_count = 0;
+ 	kfree(adxl_values);
+ 	kfree(adxl_msg);
+ }
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index ca5408803f8787..5afd425f3b4ff1 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -79,6 +79,9 @@
+  */
+ #define MCACOD_EXT_MEM_ERR	0x280
+ 
++/* Max RRL register sets per {,sub-,pseudo-}channel. */
++#define NUM_RRL_SET		3
++
+ /*
+  * Each cpu socket contains some pci devices that provide global
+  * information, and also some that are local to each of the two
+@@ -117,9 +120,11 @@ struct skx_dev {
+ 		struct skx_channel {
+ 			struct pci_dev	*cdev;
+ 			struct pci_dev	*edev;
+-			u32 retry_rd_err_log_s;
+-			u32 retry_rd_err_log_d;
+-			u32 retry_rd_err_log_d2;
++			/*
++			 * Two groups of RRL control registers per channel to save default RRL
++			 * settings of two {sub-,pseudo-}channels in Linux RRL control mode.
++			 */
++			u32 rrl_ctl[2][NUM_RRL_SET];
+ 			struct skx_dimm {
+ 				u8 close_pg;
+ 				u8 bank_xor_enable;
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index aadc395ee16813..7df19d82aa689e 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -31,7 +31,6 @@ config ARM_SCPI_PROTOCOL
+ config ARM_SDE_INTERFACE
+ 	bool "ARM Software Delegated Exception Interface (SDEI)"
+ 	depends on ARM64
+-	depends on ACPI_APEI_GHES
+ 	help
+ 	  The Software Delegated Exception Interface (SDEI) is an ARM
+ 	  standard for registering callbacks from the platform firmware
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index 3e8051fe829657..71e2a9a89f6ada 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -1062,13 +1062,12 @@ static bool __init sdei_present_acpi(void)
+ 	return true;
+ }
+ 
+-void __init sdei_init(void)
++void __init acpi_sdei_init(void)
+ {
+ 	struct platform_device *pdev;
+ 	int ret;
+ 
+-	ret = platform_driver_register(&sdei_driver);
+-	if (ret || !sdei_present_acpi())
++	if (!sdei_present_acpi())
+ 		return;
+ 
+ 	pdev = platform_device_register_simple(sdei_driver.driver.name,
+@@ -1081,6 +1080,12 @@ void __init sdei_init(void)
+ 	}
+ }
+ 
++static int __init sdei_init(void)
++{
++	return platform_driver_register(&sdei_driver);
++}
++arch_initcall(sdei_init);
++
+ int sdei_event_handler(struct pt_regs *regs,
+ 		       struct sdei_registered_event *arg)
+ {
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index fd6dc790c5a89d..7aa2f9ad293562 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -601,6 +601,7 @@ efi_status_t efi_load_initrd_cmdline(efi_loaded_image_t *image,
+  * @image:	EFI loaded image protocol
+  * @soft_limit:	preferred address for loading the initrd
+  * @hard_limit:	upper limit address for loading the initrd
++ * @out:	pointer to store the address of the initrd table
+  *
+  * Return:	status code
+  */
+diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
+index a1ebbe9b73b136..38ca190d4a22d6 100644
+--- a/drivers/firmware/psci/psci.c
++++ b/drivers/firmware/psci/psci.c
+@@ -804,8 +804,10 @@ int __init psci_dt_init(void)
+ 
+ 	np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np);
+ 
+-	if (!np || !of_device_is_available(np))
++	if (!np || !of_device_is_available(np)) {
++		of_node_put(np);
+ 		return -ENODEV;
++	}
+ 
+ 	init_fn = (psci_initcall_t)matched_np->data;
+ 	ret = init_fn(np);
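The psci fix follows the usual OF refcount rule: of_find_matching_node_and_match() returns the node with its refcount raised, so every exit path, including the not-available one, owes an of_node_put(). The pattern:

np = of_find_matching_node_and_match(NULL, matches, &match);
if (!np || !of_device_is_available(np)) {
	of_node_put(np);	/* of_node_put(NULL) is a harmless no-op */
	return -ENODEV;
}

Because of_node_put(NULL) is a no-op, the error branch can drop the reference unconditionally instead of distinguishing the two failure cases.
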
+diff --git a/drivers/firmware/samsung/exynos-acpm-pmic.c b/drivers/firmware/samsung/exynos-acpm-pmic.c
+index 85e90d236da21e..39b33a356ebd24 100644
+--- a/drivers/firmware/samsung/exynos-acpm-pmic.c
++++ b/drivers/firmware/samsung/exynos-acpm-pmic.c
+@@ -43,13 +43,13 @@ static inline u32 acpm_pmic_get_bulk(u32 data, unsigned int i)
+ 	return (data >> (ACPM_PMIC_BULK_SHIFT * i)) & ACPM_PMIC_BULK_MASK;
+ }
+ 
+-static void acpm_pmic_set_xfer(struct acpm_xfer *xfer, u32 *cmd,
++static void acpm_pmic_set_xfer(struct acpm_xfer *xfer, u32 *cmd, size_t cmdlen,
+ 			       unsigned int acpm_chan_id)
+ {
+ 	xfer->txd = cmd;
+ 	xfer->rxd = cmd;
+-	xfer->txlen = sizeof(cmd);
+-	xfer->rxlen = sizeof(cmd);
++	xfer->txlen = cmdlen;
++	xfer->rxlen = cmdlen;
+ 	xfer->acpm_chan_id = acpm_chan_id;
+ }
+ 
+@@ -71,7 +71,7 @@ int acpm_pmic_read_reg(const struct acpm_handle *handle,
+ 	int ret;
+ 
+ 	acpm_pmic_init_read_cmd(cmd, type, reg, chan);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -104,7 +104,7 @@ int acpm_pmic_bulk_read(const struct acpm_handle *handle,
+ 		return -EINVAL;
+ 
+ 	acpm_pmic_init_bulk_read_cmd(cmd, type, reg, chan, count);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -144,7 +144,7 @@ int acpm_pmic_write_reg(const struct acpm_handle *handle,
+ 	int ret;
+ 
+ 	acpm_pmic_init_write_cmd(cmd, type, reg, chan, value);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -184,7 +184,7 @@ int acpm_pmic_bulk_write(const struct acpm_handle *handle,
+ 		return -EINVAL;
+ 
+ 	acpm_pmic_init_bulk_write_cmd(cmd, type, reg, chan, count, buf);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
+@@ -214,7 +214,7 @@ int acpm_pmic_update_reg(const struct acpm_handle *handle,
+ 	int ret;
+ 
+ 	acpm_pmic_init_update_cmd(cmd, type, reg, chan, value, mask);
+-	acpm_pmic_set_xfer(&xfer, cmd, acpm_chan_id);
++	acpm_pmic_set_xfer(&xfer, cmd, sizeof(cmd), acpm_chan_id);
+ 
+ 	ret = acpm_do_xfer(handle, &xfer);
+ 	if (ret)
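The acpm-pmic bug is the classic sizeof-on-a-parameter mistake: inside acpm_pmic_set_xfer() the cmd argument has decayed to u32 *, so sizeof(cmd) yields the pointer size (8 on a 64-bit kernel), not the caller's array size. The fix threads the length through from the scope where the array type is still visible. A self-contained illustration:

static size_t xfer_len;		/* stand-in for xfer->txlen */

static void set_xfer(u32 *cmd, size_t cmdlen)
{
	/* sizeof(cmd) here would be sizeof(u32 *) == 8 */
	xfer_len = cmdlen;
}

static void caller(void)
{
	u32 cmd[4];

	set_xfer(cmd, sizeof(cmd));	/* 16 bytes, as intended */
}
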
+diff --git a/drivers/firmware/samsung/exynos-acpm.c b/drivers/firmware/samsung/exynos-acpm.c
+index 15e991b99f5a38..e80cb7a8da8f23 100644
+--- a/drivers/firmware/samsung/exynos-acpm.c
++++ b/drivers/firmware/samsung/exynos-acpm.c
+@@ -696,24 +696,17 @@ static const struct acpm_handle *acpm_get_by_phandle(struct device *dev,
+ 		return ERR_PTR(-ENODEV);
+ 
+ 	pdev = of_find_device_by_node(acpm_np);
+-	if (!pdev) {
+-		dev_err(dev, "Cannot find device node %s\n", acpm_np->name);
+-		of_node_put(acpm_np);
+-		return ERR_PTR(-EPROBE_DEFER);
+-	}
+-
+ 	of_node_put(acpm_np);
++	if (!pdev)
++		return ERR_PTR(-EPROBE_DEFER);
+ 
+ 	acpm = platform_get_drvdata(pdev);
+ 	if (!acpm) {
+-		dev_err(dev, "Cannot get drvdata from %s\n",
+-			dev_name(&pdev->dev));
+ 		platform_device_put(pdev);
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
+ 
+ 	if (!try_module_get(pdev->dev.driver->owner)) {
+-		dev_err(dev, "Cannot get module reference.\n");
+ 		platform_device_put(pdev);
+ 		return ERR_PTR(-EPROBE_DEFER);
+ 	}
+diff --git a/drivers/fpga/tests/fpga-mgr-test.c b/drivers/fpga/tests/fpga-mgr-test.c
+index 8748babb050458..62975a39ee14e4 100644
+--- a/drivers/fpga/tests/fpga-mgr-test.c
++++ b/drivers/fpga/tests/fpga-mgr-test.c
+@@ -263,6 +263,7 @@ static void fpga_mgr_test_img_load_sgt(struct kunit *test)
+ 	img_buf = init_test_buffer(test, IMAGE_SIZE);
+ 
+ 	sgt = kunit_kzalloc(test, sizeof(*sgt), GFP_KERNEL);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
+ 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 	sg_init_one(sgt->sgl, img_buf, IMAGE_SIZE);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 23e6a05359c24b..c68c2e2f4d61aa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4800,7 +4800,7 @@ static int gfx_v10_0_sw_init(struct amdgpu_ip_block *ip_block)
+ 		adev->gfx.cleaner_shader_size = sizeof(gfx_10_1_10_cleaner_shader_hex);
+ 		if (adev->gfx.me_fw_version >= 101 &&
+ 		    adev->gfx.pfp_fw_version  >= 158 &&
+-		    adev->gfx.mec_fw_version >= 152) {
++		    adev->gfx.mec_fw_version >= 151) {
+ 			adev->gfx.enable_cleaner_shader = true;
+ 			r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
+ 			if (r) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
+index 5255378af53c0a..f67569ccf9f609 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0_cleaner_shader.h
+@@ -43,9 +43,9 @@ static const u32 gfx_10_1_10_cleaner_shader_hex[] = {
+ 	0xd70f6a01, 0x000202ff,
+ 	0x00000400, 0x80828102,
+ 	0xbf84fff7, 0xbefc03ff,
+-	0x00000068, 0xbe803080,
+-	0xbe813080, 0xbe823080,
+-	0xbe833080, 0x80fc847c,
++	0x00000068, 0xbe803000,
++	0xbe813000, 0xbe823000,
++	0xbe833000, 0x80fc847c,
+ 	0xbf84fffa, 0xbeea0480,
+ 	0xbeec0480, 0xbeee0480,
+ 	0xbef00480, 0xbef20480,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm b/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
+index 9ba3359253c95d..54f7ed9e2801c5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_1_10_cleaner_shader.asm
+@@ -40,7 +40,6 @@ shader main
+   type(CS)
+   wave_size(32)
+ // Note: original source code from SQ team
+-
+ //
+ // Create 32 waves in a threadgroup (CS waves)
+ // Each allocates 64 VGPRs
+@@ -71,8 +70,8 @@ label_0005:
+   s_sub_u32     s2, s2, 8
+   s_cbranch_scc0  label_0005
+   //
+-  s_mov_b32     s2, 0x80000000                     // Bit31 is first_wave
+-  s_and_b32     s2, s2, s0                                  // sgpr0 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
++  s_mov_b32     s2, 0x80000000                       // Bit31 is first_wave
++  s_and_b32     s2, s2, s1                           // sgpr1 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
+   s_cbranch_scc0  label_0023                         // Clean LDS if its first wave of ThreadGroup/WorkGroup
+   // CLEAR LDS
+   //
+@@ -99,10 +98,10 @@ label_001F:
+ label_0023:
+   s_mov_b32     m0, 0x00000068  // Loop 108/4=27 times  (loop unrolled for performance)
+ label_sgpr_loop:
+-  s_movreld_b32     s0, 0
+-  s_movreld_b32     s1, 0
+-  s_movreld_b32     s2, 0
+-  s_movreld_b32     s3, 0
++  s_movreld_b32     s0, s0
++  s_movreld_b32     s1, s0
++  s_movreld_b32     s2, s0
++  s_movreld_b32     s3, s0
+   s_sub_u32         m0, m0, 4
+   s_cbranch_scc0  label_sgpr_loop
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c b/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c
+index 88d3f9d7dd556a..452206b5095eb0 100644
+--- a/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c
++++ b/drivers/gpu/drm/amd/display/dc/basics/fixpt31_32.c
+@@ -51,8 +51,6 @@ static inline unsigned long long complete_integer_division_u64(
+ {
+ 	unsigned long long result;
+ 
+-	ASSERT(divisor);
+-
+ 	result = div64_u64_rem(dividend, divisor, remainder);
+ 
+ 	return result;
+@@ -213,9 +211,6 @@ struct fixed31_32 dc_fixpt_recip(struct fixed31_32 arg)
+ 	 * @note
+ 	 * Good idea to use Newton's method
+ 	 */
+-
+-	ASSERT(arg.value);
+-
+ 	return dc_fixpt_from_fraction(
+ 		dc_fixpt_one.value,
+ 		arg.value);
+diff --git a/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c b/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c
+index 52d97918a3bd21..ebf0287417e0eb 100644
+--- a/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c
++++ b/drivers/gpu/drm/amd/display/dc/sspl/spl_fixpt31_32.c
+@@ -29,8 +29,6 @@ static inline unsigned long long spl_complete_integer_division_u64(
+ {
+ 	unsigned long long result;
+ 
+-	SPL_ASSERT(divisor);
+-
+ 	result = spl_div64_u64_rem(dividend, divisor, remainder);
+ 
+ 	return result;
+@@ -196,8 +194,6 @@ struct spl_fixed31_32 spl_fixpt_recip(struct spl_fixed31_32 arg)
+ 	 * Good idea to use Newton's method
+ 	 */
+ 
+-	SPL_ASSERT(arg.value);
+-
+ 	return spl_fixpt_from_fraction(
+ 		spl_fixpt_one.value,
+ 		arg.value);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+index 4bd92fd782be6a..8d40ed0f0e8383 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+@@ -143,6 +143,10 @@ int atomctrl_initialize_mc_reg_table(
+ 	vram_info = (ATOM_VRAM_INFO_HEADER_V2_1 *)
+ 		smu_atom_get_data_table(hwmgr->adev,
+ 				GetIndexIntoMasterTable(DATA, VRAM_Info), &size, &frev, &crev);
++	if (!vram_info) {
++		pr_err("Could not retrieve the VramInfo table!");
++		return -EINVAL;
++	}
+ 
+ 	if (module_index >= vram_info->ucNumOfVRAMModule) {
+ 		pr_err("Invalid VramInfo table.");
+@@ -180,6 +184,10 @@ int atomctrl_initialize_mc_reg_table_v2_2(
+ 	vram_info = (ATOM_VRAM_INFO_HEADER_V2_2 *)
+ 		smu_atom_get_data_table(hwmgr->adev,
+ 				GetIndexIntoMasterTable(DATA, VRAM_Info), &size, &frev, &crev);
++	if (!vram_info) {
++		pr_err("Could not retrieve the VramInfo table!");
++		return -EINVAL;
++	}
+ 
+ 	if (module_index >= vram_info->ucNumOfVRAMModule) {
+ 		pr_err("Invalid VramInfo table.");
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 071168aa0c3bda..5222b1e9f533d0 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1597,10 +1597,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	}
+ 
+ 	dp->reg_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(dp->reg_base)) {
+-		ret = PTR_ERR(dp->reg_base);
+-		goto err_disable_clk;
+-	}
++	if (IS_ERR(dp->reg_base))
++		return ERR_CAST(dp->reg_base);
+ 
+ 	dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd");
+ 
+@@ -1612,8 +1610,7 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	if (IS_ERR(dp->hpd_gpiod)) {
+ 		dev_err(dev, "error getting HDP GPIO: %ld\n",
+ 			PTR_ERR(dp->hpd_gpiod));
+-		ret = PTR_ERR(dp->hpd_gpiod);
+-		goto err_disable_clk;
++		return ERR_CAST(dp->hpd_gpiod);
+ 	}
+ 
+ 	if (dp->hpd_gpiod) {
+@@ -1633,8 +1630,7 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 
+ 	if (dp->irq == -ENXIO) {
+ 		dev_err(&pdev->dev, "failed to get irq\n");
+-		ret = -ENODEV;
+-		goto err_disable_clk;
++		return ERR_PTR(-ENODEV);
+ 	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, dp->irq,
+@@ -1643,15 +1639,22 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 					irq_flags, "analogix-dp", dp);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to request irq\n");
+-		goto err_disable_clk;
++		return ERR_PTR(ret);
+ 	}
+ 	disable_irq(dp->irq);
+ 
+-	return dp;
++	dp->aux.name = "DP-AUX";
++	dp->aux.transfer = analogix_dpaux_transfer;
++	dp->aux.dev = dp->dev;
++	drm_dp_aux_init(&dp->aux);
+ 
+-err_disable_clk:
+-	clk_disable_unprepare(dp->clock);
+-	return ERR_PTR(ret);
++	pm_runtime_use_autosuspend(dp->dev);
++	pm_runtime_set_autosuspend_delay(dp->dev, 100);
++	ret = devm_pm_runtime_enable(dp->dev);
++	if (ret)
++		return ERR_PTR(ret);
++
++	return dp;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_probe);
+ 
+@@ -1696,25 +1699,12 @@ int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev)
+ 	dp->drm_dev = drm_dev;
+ 	dp->encoder = dp->plat_data->encoder;
+ 
+-	if (IS_ENABLED(CONFIG_PM)) {
+-		pm_runtime_use_autosuspend(dp->dev);
+-		pm_runtime_set_autosuspend_delay(dp->dev, 100);
+-		pm_runtime_enable(dp->dev);
+-	} else {
+-		ret = analogix_dp_resume(dp);
+-		if (ret)
+-			return ret;
+-	}
+-
+-	dp->aux.name = "DP-AUX";
+-	dp->aux.transfer = analogix_dpaux_transfer;
+-	dp->aux.dev = dp->dev;
+ 	dp->aux.drm_dev = drm_dev;
+ 
+ 	ret = drm_dp_aux_register(&dp->aux);
+ 	if (ret) {
+ 		DRM_ERROR("failed to register AUX (%d)\n", ret);
+-		goto err_disable_pm_runtime;
++		return ret;
+ 	}
+ 
+ 	ret = analogix_dp_create_bridge(drm_dev, dp);
+@@ -1727,13 +1717,6 @@ int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev)
+ 
+ err_unregister_aux:
+ 	drm_dp_aux_unregister(&dp->aux);
+-err_disable_pm_runtime:
+-	if (IS_ENABLED(CONFIG_PM)) {
+-		pm_runtime_dont_use_autosuspend(dp->dev);
+-		pm_runtime_disable(dp->dev);
+-	} else {
+-		analogix_dp_suspend(dp);
+-	}
+ 
+ 	return ret;
+ }
+@@ -1750,13 +1733,6 @@ void analogix_dp_unbind(struct analogix_dp_device *dp)
+ 	}
+ 
+ 	drm_dp_aux_unregister(&dp->aux);
+-
+-	if (IS_ENABLED(CONFIG_PM)) {
+-		pm_runtime_dont_use_autosuspend(dp->dev);
+-		pm_runtime_disable(dp->dev);
+-	} else {
+-		analogix_dp_suspend(dp);
+-	}
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_unbind);
+ 
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+index f4c3ff1fdc6923..f6e714feeea54c 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611uxc.c
+@@ -880,7 +880,11 @@ static int lt9611uxc_probe(struct i2c_client *client)
+ 		}
+ 	}
+ 
+-	return lt9611uxc_audio_init(dev, lt9611uxc);
++	ret = lt9611uxc_audio_init(dev, lt9611uxc);
++	if (ret)
++		goto err_remove_bridge;
++
++	return 0;
+ 
+ err_remove_bridge:
+ 	free_irq(client->irq, lt9611uxc);
+diff --git a/drivers/gpu/drm/ci/gitlab-ci.yml b/drivers/gpu/drm/ci/gitlab-ci.yml
+index f04aabe8327c6b..b06b9e7d3d09bf 100644
+--- a/drivers/gpu/drm/ci/gitlab-ci.yml
++++ b/drivers/gpu/drm/ci/gitlab-ci.yml
+@@ -143,11 +143,11 @@ stages:
+     # Pre-merge pipeline
+     - if: &is-pre-merge $CI_PIPELINE_SOURCE == "merge_request_event"
+     # Push to a branch on a fork
+-    - if: &is-fork-push $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
++    - if: &is-fork-push $CI_PIPELINE_SOURCE == "push"
+     # nightly pipeline
+     - if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
+     # pipeline for direct pushes that bypassed the CI
+-    - if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
++    - if: &is-direct-push $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
+ 
+ 
+ # Rules applied to every job in the pipeline
+@@ -170,26 +170,15 @@ stages:
+     - !reference [.disable-farm-mr-rules, rules]
+     # Never run immediately after merging, as we just ran everything
+     - !reference [.never-post-merge-rules, rules]
+-    # Build everything in merge pipelines, if any files affecting the pipeline
+-    # were changed
++    # Build everything in merge pipelines
+     - if: *is-merge-attempt
+-      changes: &all_paths
+-      - drivers/gpu/drm/ci/**/*
+       when: on_success
+     # Same as above, but for pre-merge pipelines
+     - if: *is-pre-merge
+-      changes:
+-        *all_paths
+       when: manual
+-    # Skip everything for pre-merge and merge pipelines which don't change
+-    # anything in the build
+-    - if: *is-merge-attempt
+-      when: never
+-    - if: *is-pre-merge
+-      when: never
+     # Build everything after someone bypassed the CI
+     - if: *is-direct-push
+-      when: on_success
++      when: manual
+     # Build everything in scheduled pipelines
+     - if: *is-scheduled-pipeline
+       when: on_success
+diff --git a/drivers/gpu/drm/display/drm_hdmi_audio_helper.c b/drivers/gpu/drm/display/drm_hdmi_audio_helper.c
+index 05afc9f0bdd6b6..ae8a0cf595fc6f 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_audio_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_audio_helper.c
+@@ -103,7 +103,8 @@ static int drm_connector_hdmi_audio_hook_plugged_cb(struct device *dev,
+ 	connector->hdmi_audio.plugged_cb = fn;
+ 	connector->hdmi_audio.plugged_cb_dev = codec_dev;
+ 
+-	fn(codec_dev, connector->hdmi_audio.last_state);
++	if (fn)
++		fn(codec_dev, connector->hdmi_audio.last_state);
+ 
+ 	mutex_unlock(&connector->hdmi_audio.lock);
+ 
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index f2a99681b99858..de2ddf5dbbd3f1 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -366,8 +366,48 @@ fn iter(&self) -> SegmentIterator<'_> {
+         SegmentIterator {
+             segment: self,
+             offset: 0,
+-            carry: 0,
+-            carry_len: 0,
++            decfifo: Default::default(),
++        }
++    }
++}
++
++/// Max fifo size is 17 (max push) + 2 (max remaining)
++const MAX_FIFO_SIZE: usize = 19;
++
++/// A simple Decimal digit FIFO
++#[derive(Default)]
++struct DecFifo {
++    decimals: [u8; MAX_FIFO_SIZE],
++    len: usize,
++}
++
++impl DecFifo {
++    fn push(&mut self, data: u64, len: usize) {
++        let mut chunk = data;
++        for i in (0..self.len).rev() {
++            self.decimals[i + len] = self.decimals[i];
++        }
++        for i in 0..len {
++            self.decimals[i] = (chunk % 10) as u8;
++            chunk /= 10;
++        }
++        self.len += len;
++    }
++
++    /// Pop 3 decimal digits from the FIFO
++    fn pop3(&mut self) -> Option<(u16, usize)> {
++        if self.len == 0 {
++            None
++        } else {
++            let poplen = 3.min(self.len);
++            self.len -= poplen;
++            let mut out = 0;
++            let mut exp = 1;
++            for i in 0..poplen {
++                out += self.decimals[self.len + i] as u16 * exp;
++                exp *= 10;
++            }
++            Some((out, NUM_CHARS_BITS[poplen]))
+         }
+     }
+ }
+@@ -375,8 +415,7 @@ fn iter(&self) -> SegmentIterator<'_> {
+ struct SegmentIterator<'a> {
+     segment: &'a Segment<'a>,
+     offset: usize,
+-    carry: u64,
+-    carry_len: usize,
++    decfifo: DecFifo,
+ }
+ 
+ impl Iterator for SegmentIterator<'_> {
+@@ -394,31 +433,17 @@ fn next(&mut self) -> Option<Self::Item> {
+                 }
+             }
+             Segment::Numeric(data) => {
+-                if self.carry_len < 3 && self.offset < data.len() {
+-                    // If there are less than 3 decimal digits in the carry,
+-                    // take the next 7 bytes of input, and add them to the carry.
++                if self.decfifo.len < 3 && self.offset < data.len() {
++                    // If there are less than 3 decimal digits in the fifo,
++                    // take the next 7 bytes of input, and push them to the fifo.
+                     let mut buf = [0u8; 8];
+                     let len = 7.min(data.len() - self.offset);
+                     buf[..len].copy_from_slice(&data[self.offset..self.offset + len]);
+                     let chunk = u64::from_le_bytes(buf);
+-                    let pow = u64::pow(10, BYTES_TO_DIGITS[len] as u32);
+-                    self.carry = chunk + self.carry * pow;
++                    self.decfifo.push(chunk, BYTES_TO_DIGITS[len]);
+                     self.offset += len;
+-                    self.carry_len += BYTES_TO_DIGITS[len];
+-                }
+-                match self.carry_len {
+-                    0 => None,
+-                    len => {
+-                        // take the next 3 decimal digits of the carry
+-                        // and return 10bits of numeric data.
+-                        let out_len = 3.min(len);
+-                        self.carry_len -= out_len;
+-                        let pow = u64::pow(10, self.carry_len as u32);
+-                        let out = (self.carry / pow) as u16;
+-                        self.carry = self.carry % pow;
+-                        Some((out, NUM_CHARS_BITS[out_len]))
+-                    }
+                 }
++                self.decfifo.pop3()
+             }
+         }
+     }
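
[Editor's note] The hunk above replaces the numeric encoder's u64 carry, whose `carry * pow + chunk` bookkeeping can overflow once up to 19 decimal digits accumulate, with a small decimal-digit FIFO. A runnable C sketch of the same push/pop-3 logic, assuming the QR numeric-mode bit counts (0/4/7/10 bits for 0-3 digits); the demo harness is illustrative:

	#include <stdio.h>
	#include <stdint.h>
	#include <stddef.h>

	#define MAX_FIFO_SIZE 19 /* 17 (max push) + 2 (max remaining), as in the patch */

	/* Bits needed to encode 0, 1, 2 or 3 digits in QR numeric mode. */
	static const size_t num_chars_bits[] = { 0, 4, 7, 10 };

	struct dec_fifo {
		uint8_t decimals[MAX_FIFO_SIZE]; /* least significant digit first */
		size_t len;
	};

	/* Push the 'len' low decimal digits of 'data'; older digits shift up. */
	static void dec_fifo_push(struct dec_fifo *f, uint64_t data, size_t len)
	{
		for (size_t i = f->len; i-- > 0;)
			f->decimals[i + len] = f->decimals[i];
		for (size_t i = 0; i < len; i++) {
			f->decimals[i] = data % 10;
			data /= 10;
		}
		f->len += len;
	}

	/* Pop up to 3 digits from the top (oldest side); -1 when empty. */
	static int dec_fifo_pop3(struct dec_fifo *f, size_t *bits)
	{
		if (!f->len)
			return -1;
		size_t poplen = f->len < 3 ? f->len : 3;
		f->len -= poplen;
		int out = 0, exp = 1;
		for (size_t i = 0; i < poplen; i++) {
			out += f->decimals[f->len + i] * exp;
			exp *= 10;
		}
		*bits = num_chars_bits[poplen];
		return out;
	}

	int main(void)
	{
		struct dec_fifo f = { { 0 }, 0 };
		size_t bits;
		int v;

		dec_fifo_push(&f, 12345, 5);
		while ((v = dec_fifo_pop3(&f, &bits)) >= 0)
			printf("%d (%zu bits)\n", v, bits); /* 123 (10 bits), 45 (7 bits) */
		return 0;
	}
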
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 392c3653d0d738..cd8f728d5fddc4 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2523,6 +2523,7 @@ intel_dp_dsc_compute_pipe_bpp_limits(struct intel_dp *intel_dp,
+ 
+ bool
+ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
++			       struct intel_connector *connector,
+ 			       struct intel_crtc_state *crtc_state,
+ 			       bool respect_downstream_limits,
+ 			       bool dsc,
+@@ -2576,7 +2577,7 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
+ 	intel_dp_test_compute_config(intel_dp, crtc_state, limits);
+ 
+ 	return intel_dp_compute_config_link_bpp_limits(intel_dp,
+-						       intel_dp->attached_connector,
++						       connector,
+ 						       crtc_state,
+ 						       dsc,
+ 						       limits);
+@@ -2637,7 +2638,7 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 	joiner_needs_dsc = intel_dp_joiner_needs_dsc(display, num_joined_pipes);
+ 
+ 	dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en ||
+-		     !intel_dp_compute_config_limits(intel_dp, pipe_config,
++		     !intel_dp_compute_config_limits(intel_dp, connector, pipe_config,
+ 						     respect_downstream_limits,
+ 						     false,
+ 						     &limits);
+@@ -2671,7 +2672,7 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 			    str_yes_no(ret), str_yes_no(joiner_needs_dsc),
+ 			    str_yes_no(intel_dp->force_dsc_en));
+ 
+-		if (!intel_dp_compute_config_limits(intel_dp, pipe_config,
++		if (!intel_dp_compute_config_limits(intel_dp, connector, pipe_config,
+ 						    respect_downstream_limits,
+ 						    true,
+ 						    &limits))
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
+index 9189db4c25946a..98f90955fdb1db 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.h
++++ b/drivers/gpu/drm/i915/display/intel_dp.h
+@@ -194,6 +194,7 @@ void intel_dp_wait_source_oui(struct intel_dp *intel_dp);
+ int intel_dp_output_bpp(enum intel_output_format output_format, int bpp);
+ 
+ bool intel_dp_compute_config_limits(struct intel_dp *intel_dp,
++				    struct intel_connector *connector,
+ 				    struct intel_crtc_state *crtc_state,
+ 				    bool respect_downstream_limits,
+ 				    bool dsc,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 6dc2d31ccb5a53..fe685f098ba9a2 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -590,12 +590,13 @@ adjust_limits_for_dsc_hblank_expansion_quirk(struct intel_dp *intel_dp,
+ 
+ static bool
+ mst_stream_compute_config_limits(struct intel_dp *intel_dp,
+-				 const struct intel_connector *connector,
++				 struct intel_connector *connector,
+ 				 struct intel_crtc_state *crtc_state,
+ 				 bool dsc,
+ 				 struct link_config_limits *limits)
+ {
+-	if (!intel_dp_compute_config_limits(intel_dp, crtc_state, false, dsc,
++	if (!intel_dp_compute_config_limits(intel_dp, connector,
++					    crtc_state, false, dsc,
+ 					    limits))
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_psr_regs.h b/drivers/gpu/drm/i915/display/intel_psr_regs.h
+index 795e6b9cc575c8..248136456048e3 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr_regs.h
++++ b/drivers/gpu/drm/i915/display/intel_psr_regs.h
+@@ -325,8 +325,8 @@
+ #define  PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK	REG_GENMASK(20, 16)
+ #define  PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val)
+ #define  PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION_MASK	REG_GENMASK(12, 8)
+-#define  PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val)
++#define  PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_FIRST_LFPS_HALF_CYCLE_DURATION_MASK, val)
+ #define  PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK	REG_GENMASK(4, 0)
+-#define  PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LFPS_HALF_CYCLE_DURATION_MASK, val)
++#define  PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION(val)	REG_FIELD_PREP(PORT_ALPM_LFPS_CTL_LAST_LFPS_HALF_CYCLE_DURATION_MASK, val)
+ 
+ #endif /* __INTEL_PSR_REGS_H__ */
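
[Editor's note] The two-line fix above is a classic copy-paste bug: the FIRST/LAST helpers shifted their values with the middle field's mask, so the duration always landed in bits 20:16. A minimal userspace sketch of the GENMASK/REG_FIELD_PREP pattern; the macros are simplified 32-bit stand-ins for the kernel's, using the GCC/Clang ctz builtin:

	#include <stdio.h>

	/* Simplified stand-ins; assume 32-bit unsigned int. */
	#define GENMASK(h, l)       (((~0u) << (l)) & (~0u >> (31 - (h))))
	#define FIELD_PREP(mask, v) (((v) << __builtin_ctz(mask)) & (mask))

	#define FIRST_DURATION_MASK GENMASK(12, 8)
	#define FIRST_DURATION(v)   FIELD_PREP(FIRST_DURATION_MASK, v)

	int main(void)
	{
		/* With the wrong mask, 0x1f would have landed in bits 20:16. */
		printf("0x%x\n", FIRST_DURATION(0x1f)); /* 0x1f00 */
		return 0;
	}
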
+diff --git a/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c b/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
+index c6321dafef4f3c..74bb3bedf30f5d 100644
+--- a/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
++++ b/drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
+@@ -41,12 +41,12 @@ static s64 interp(s64 x, s64 x1, s64 x2, s64 y1, s64 y2)
+ {
+ 	s64 dydx;
+ 
+-	dydx = DIV_ROUND_UP_ULL((y2 - y1) * 100000, (x2 - x1));
++	dydx = DIV64_U64_ROUND_UP((y2 - y1) * 100000, (x2 - x1));
+ 
+-	return (y1 + DIV_ROUND_UP_ULL(dydx * (x - x1), 100000));
++	return (y1 + DIV64_U64_ROUND_UP(dydx * (x - x1), 100000));
+ }
+ 
+-static void get_ana_cp_int_prop(u32 vco_clk,
++static void get_ana_cp_int_prop(u64 vco_clk,
+ 				u32 refclk_postscalar,
+ 				int mpll_ana_v2i,
+ 				int c, int a,
+@@ -115,16 +115,16 @@ static void get_ana_cp_int_prop(u32 vco_clk,
+ 								      CURVE0_MULTIPLIER));
+ 
+ 	scaled_interpolated_sqrt =
+-			int_sqrt(DIV_ROUND_UP_ULL(interpolated_product, vco_div_refclk_float) *
++			int_sqrt(DIV64_U64_ROUND_UP(interpolated_product, vco_div_refclk_float) *
+ 			DIV_ROUND_DOWN_ULL(1000000000000ULL, 55));
+ 
+ 	/* Scale vco_div_refclk for ana_cp_int */
+ 	scaled_vco_div_refclk2 = DIV_ROUND_UP_ULL(vco_div_refclk_float, 1000000);
+-	adjusted_vco_clk2 = 1460281 * DIV_ROUND_UP_ULL(scaled_interpolated_sqrt *
++	adjusted_vco_clk2 = 1460281 * DIV64_U64_ROUND_UP(scaled_interpolated_sqrt *
+ 						       scaled_vco_div_refclk2,
+ 						       curve_1_interpolated);
+ 
+-	*ana_cp_prop = DIV_ROUND_UP_ULL(adjusted_vco_clk2, curve_2_scaled2);
++	*ana_cp_prop = DIV64_U64_ROUND_UP(adjusted_vco_clk2, curve_2_scaled2);
+ 	*ana_cp_prop = max(1, min(*ana_cp_prop, 127));
+ }
+ 
+@@ -165,10 +165,10 @@ static void compute_hdmi_tmds_pll(u64 pixel_clock, u32 refclk,
+ 	/* Select appropriate v2i point */
+ 	if (datarate <= INTEL_SNPS_PHY_HDMI_9999MHZ) {
+ 		mpll_ana_v2i = 2;
+-		tx_clk_div = ilog2(DIV_ROUND_DOWN_ULL(INTEL_SNPS_PHY_HDMI_9999MHZ, datarate));
++		tx_clk_div = ilog2(div64_u64(INTEL_SNPS_PHY_HDMI_9999MHZ, datarate));
+ 	} else {
+ 		mpll_ana_v2i = 3;
+-		tx_clk_div = ilog2(DIV_ROUND_DOWN_ULL(INTEL_SNPS_PHY_HDMI_16GHZ, datarate));
++		tx_clk_div = ilog2(div64_u64(INTEL_SNPS_PHY_HDMI_16GHZ, datarate));
+ 	}
+ 	vco_clk = (datarate << tx_clk_div) >> 1;
+ 
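
[Editor's note] The arithmetic fixes above swap DIV_ROUND_UP_ULL, whose divisor is effectively 32-bit in the kernel (it goes through do_div()), for DIV64_U64_ROUND_UP wherever the divisor can exceed 32 bits, as it can once vco_clk is a u64. A tiny runnable sketch of the round-up semantics with a 64-bit divisor:

	#include <stdio.h>
	#include <stdint.h>

	/* Round-up division where both operands are full 64-bit values. */
	static uint64_t div64_u64_round_up(uint64_t n, uint64_t d)
	{
		return (n + d - 1) / d;
	}

	int main(void)
	{
		uint64_t d = 6ULL * 1000 * 1000 * 1000; /* divisor > 32 bits */

		printf("%llu\n",
		       (unsigned long long)div64_u64_round_up(d + 1, d)); /* 2 */
		return 0;
	}
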
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index f8cb7c630d5b83..127316d2c8aa99 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -633,7 +633,7 @@ static int guc_submission_send_busy_loop(struct intel_guc *guc,
+ 		atomic_inc(&guc->outstanding_submission_g2h);
+ 
+ 	ret = intel_guc_send_busy_loop(guc, action, len, g2h_len_dw, loop);
+-	if (ret)
++	if (ret && g2h_len_dw)
+ 		atomic_dec(&guc->outstanding_submission_g2h);
+ 
+ 	return ret;
+@@ -3443,18 +3443,29 @@ static inline int guc_lrc_desc_unpin(struct intel_context *ce)
+ 	 * GuC is active, lets destroy this context, but at this point we can still be racing
+ 	 * with suspend, so we undo everything if the H2G fails in deregister_context so
+ 	 * that GuC reset will find this context during clean up.
++	 *
++	 * There is a race condition where the reset code could have altered
++	 * this context's state and done a wakeref put before we try to
++	 * deregister it here. So check if the context is still set to be
++	 * destroyed before undoing earlier changes, to avoid two wakeref puts
++	 * on the same context.
+ 	 */
+ 	ret = deregister_context(ce, ce->guc_id.id);
+ 	if (ret) {
++		bool pending_destroyed;
+ 		spin_lock_irqsave(&ce->guc_state.lock, flags);
+-		set_context_registered(ce);
+-		clr_context_destroyed(ce);
++		pending_destroyed = context_destroyed(ce);
++		if (pending_destroyed) {
++			set_context_registered(ce);
++			clr_context_destroyed(ce);
++		}
+ 		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+ 		/*
+ 		 * As gt-pm is awake at function entry, intel_wakeref_put_async merely decrements
+ 		 * the wakeref immediately but per function spec usage call this after unlock.
+ 		 */
+-		intel_wakeref_put_async(&gt->wakeref);
++		if (pending_destroyed)
++			intel_wakeref_put_async(&gt->wakeref);
+ 	}
+ 
+ 	return ret;
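
[Editor's note] The guc_lrc_desc_unpin() hunk only rolls back the registered/destroyed flags, and only drops the extra wakeref, if the context is still marked destroyed once the lock is held, since a concurrent reset may already have undone the destruction. A runnable userspace sketch of that revalidate-under-lock idiom; all names are hypothetical:

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct ctx {
		pthread_mutex_t lock;
		bool destroyed;   /* set while teardown is pending */
		int refcount;
	};

	/* Undo a failed teardown, but only if nobody else already undid it. */
	static void undo_teardown(struct ctx *c)
	{
		bool still_pending;

		pthread_mutex_lock(&c->lock);
		still_pending = c->destroyed;
		if (still_pending)
			c->destroyed = false;  /* roll back the state change */
		pthread_mutex_unlock(&c->lock);

		/* Drop the teardown reference at most once across all racers. */
		if (still_pending)
			c->refcount--;
	}

	int main(void)
	{
		struct ctx c = { PTHREAD_MUTEX_INITIALIZER, true, 2 };

		undo_teardown(&c);
		undo_teardown(&c); /* second undo is a no-op: no double put */
		printf("refcount=%d destroyed=%d\n", c.refcount, c.destroyed); /* 1 0 */
		return 0;
	}
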
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 74158b9d65035b..7c0c12dde48859 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -470,7 +470,7 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 
+ 	ret = drmm_mode_config_init(drm);
+ 	if (ret)
+-		goto put_mutex_dev;
++		return ret;
+ 
+ 	drm->mode_config.min_width = 64;
+ 	drm->mode_config.min_height = 64;
+@@ -488,8 +488,11 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ 	for (i = 0; i < private->data->mmsys_dev_num; i++) {
+ 		drm->dev_private = private->all_drm_private[i];
+ 		ret = component_bind_all(private->all_drm_private[i]->dev, drm);
+-		if (ret)
+-			goto put_mutex_dev;
++		if (ret) {
++			while (--i >= 0)
++				component_unbind_all(private->all_drm_private[i]->dev, drm);
++			return ret;
++		}
+ 	}
+ 
+ 	/*
+@@ -582,9 +585,6 @@ static int mtk_drm_kms_init(struct drm_device *drm)
+ err_component_unbind:
+ 	for (i = 0; i < private->data->mmsys_dev_num; i++)
+ 		component_unbind_all(private->all_drm_private[i]->dev, drm);
+-put_mutex_dev:
+-	for (i = 0; i < private->data->mmsys_dev_num; i++)
+-		put_device(private->all_drm_private[i]->mutex_dev);
+ 
+ 	return ret;
+ }
+@@ -655,8 +655,10 @@ static int mtk_drm_bind(struct device *dev)
+ 		return 0;
+ 
+ 	drm = drm_dev_alloc(&mtk_drm_driver, dev);
+-	if (IS_ERR(drm))
+-		return PTR_ERR(drm);
++	if (IS_ERR(drm)) {
++		ret = PTR_ERR(drm);
++		goto err_put_dev;
++	}
+ 
+ 	private->drm_master = true;
+ 	drm->dev_private = private;
+@@ -682,18 +684,31 @@ static int mtk_drm_bind(struct device *dev)
+ 	drm_dev_put(drm);
+ 	for (i = 0; i < private->data->mmsys_dev_num; i++)
+ 		private->all_drm_private[i]->drm = NULL;
++err_put_dev:
++	for (i = 0; i < private->data->mmsys_dev_num; i++) {
++		/* For device_find_child in mtk_drm_get_all_priv() */
++		put_device(private->all_drm_private[i]->dev);
++	}
++	put_device(private->mutex_dev);
+ 	return ret;
+ }
+ 
+ static void mtk_drm_unbind(struct device *dev)
+ {
+ 	struct mtk_drm_private *private = dev_get_drvdata(dev);
++	int i;
+ 
+ 	/* for multi mmsys dev, unregister drm dev in mmsys master */
+ 	if (private->drm_master) {
+ 		drm_dev_unregister(private->drm);
+ 		mtk_drm_kms_deinit(private->drm);
+ 		drm_dev_put(private->drm);
++
++		for (i = 0; i < private->data->mmsys_dev_num; i++) {
++			/* For device_find_child in mtk_drm_get_all_priv() */
++			put_device(private->all_drm_private[i]->dev);
++		}
++		put_device(private->mutex_dev);
+ 	}
+ 	private->mtk_drm_bound = false;
+ 	private->drm_master = false;
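
[Editor's note] Two distinct fixes above: a drmm_mode_config_init() failure no longer jumps to a label that put devices never taken, and a failing component_bind_all() now unbinds only the instances bound so far. The latter is the standard partial-rollback loop; a runnable sketch with illustrative names:

	#include <stdio.h>

	#define N 4

	static int acquire(int i)  { return (i == 2) ? -1 : 0; } /* 3rd fails */
	static void release(int i) { printf("release %d\n", i); }

	static int acquire_all(void)
	{
		int i, ret;

		for (i = 0; i < N; i++) {
			ret = acquire(i);
			if (ret) {
				/* Unwind only what this loop actually acquired. */
				while (--i >= 0)
					release(i);
				return ret;
			}
		}
		return 0;
	}

	int main(void)
	{
		return acquire_all() ? 1 : 0; /* prints "release 1", "release 0" */
	}
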
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index c08fa93e50a30e..2bccda1e52a17d 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -108,7 +108,7 @@ static void meson_encoder_hdmi_set_vclk(struct meson_encoder_hdmi *encoder_hdmi,
+ 		venc_freq /= 2;
+ 
+ 	dev_dbg(priv->dev,
+-		"vclk:%lluHz phy=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
++		"phy:%lluHz vclk=%lluHz venc=%lluHz hdmi=%lluHz enci=%d\n",
+ 		phy_freq, vclk_freq, venc_freq, hdmi_freq,
+ 		priv->venc.hdmi_use_enci);
+ 
+diff --git a/drivers/gpu/drm/meson/meson_vclk.c b/drivers/gpu/drm/meson/meson_vclk.c
+index 3325580d885d0a..dfe0c28a0f054c 100644
+--- a/drivers/gpu/drm/meson/meson_vclk.c
++++ b/drivers/gpu/drm/meson/meson_vclk.c
+@@ -110,10 +110,7 @@
+ #define HDMI_PLL_LOCK		BIT(31)
+ #define HDMI_PLL_LOCK_G12A	(3 << 30)
+ 
+-#define PIXEL_FREQ_1000_1001(_freq)	\
+-	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
+-#define PHY_FREQ_1000_1001(_freq)	\
+-	(PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
++#define FREQ_1000_1001(_freq)	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
+ 
+ /* VID PLL Dividers */
+ enum {
+@@ -772,6 +769,36 @@ static void meson_hdmi_pll_generic_set(struct meson_drm *priv,
+ 		  pll_freq);
+ }
+ 
++static bool meson_vclk_freqs_are_matching_param(unsigned int idx,
++						unsigned long long phy_freq,
++						unsigned long long vclk_freq)
++{
++	DRM_DEBUG_DRIVER("i = %d vclk_freq = %lluHz alt = %lluHz\n",
++			 idx, params[idx].vclk_freq,
++			 FREQ_1000_1001(params[idx].vclk_freq));
++	DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
++			 idx, params[idx].phy_freq,
++			 FREQ_1000_1001(params[idx].phy_freq));
++
++	/* Match strict frequency */
++	if (phy_freq == params[idx].phy_freq &&
++	    vclk_freq == params[idx].vclk_freq)
++		return true;
++
++	/* Match 1000/1001 variant: vclk deviation has to be less than 1kHz
++	 * (drm EDID is defined in 1kHz steps, so everything smaller must be
++	 * rounding error) and the PHY freq deviation has to be less than
++	 * 10kHz (as the TMDS clock is 10 times the pixel clock, so anything
++	 * smaller must be rounding error as well).
++	 */
++	if (abs(vclk_freq - FREQ_1000_1001(params[idx].vclk_freq)) < 1000 &&
++	    abs(phy_freq - FREQ_1000_1001(params[idx].phy_freq)) < 10000)
++		return true;
++
++	/* no match */
++	return false;
++}
++
+ enum drm_mode_status
+ meson_vclk_vic_supported_freq(struct meson_drm *priv,
+ 			      unsigned long long phy_freq,
+@@ -790,19 +817,7 @@ meson_vclk_vic_supported_freq(struct meson_drm *priv,
+ 	}
+ 
+ 	for (i = 0 ; params[i].pixel_freq ; ++i) {
+-		DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",
+-				 i, params[i].pixel_freq,
+-				 PIXEL_FREQ_1000_1001(params[i].pixel_freq));
+-		DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
+-				 i, params[i].phy_freq,
+-				 PHY_FREQ_1000_1001(params[i].phy_freq));
+-		/* Match strict frequency */
+-		if (phy_freq == params[i].phy_freq &&
+-		    vclk_freq == params[i].vclk_freq)
+-			return MODE_OK;
+-		/* Match 1000/1001 variant */
+-		if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&
+-		    vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))
++		if (meson_vclk_freqs_are_matching_param(i, phy_freq, vclk_freq))
+ 			return MODE_OK;
+ 	}
+ 
+@@ -1075,10 +1090,8 @@ void meson_vclk_setup(struct meson_drm *priv, unsigned int target,
+ 	}
+ 
+ 	for (freq = 0 ; params[freq].pixel_freq ; ++freq) {
+-		if ((phy_freq == params[freq].phy_freq ||
+-		     phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&
+-		    (vclk_freq == params[freq].vclk_freq ||
+-		     vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {
++		if (meson_vclk_freqs_are_matching_param(freq, phy_freq,
++							vclk_freq)) {
+ 			if (vclk_freq != params[freq].vclk_freq)
+ 				vic_alternate_clock = true;
+ 			else
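
[Editor's note] The helper introduced above replaces exact matching against precomputed 1000/1001 frequencies with a tolerance check: vclk within 1 kHz (EDID granularity) and PHY within 10 kHz (the TMDS clock is 10x the pixel clock). A standalone C sketch of the same comparison; the frequencies in main() are the 4k60 pair and its 59.94 Hz variant, shown for illustration:

	#include <stdio.h>
	#include <stdint.h>
	#include <stdbool.h>
	#include <stdlib.h>

	/* Closest-rounded f * 1000 / 1001, as FREQ_1000_1001() in the patch. */
	static uint64_t freq_1000_1001(uint64_t f)
	{
		return (f * 1000ULL + 500ULL) / 1001ULL;
	}

	static bool freqs_match(uint64_t phy, uint64_t vclk,
				uint64_t param_phy, uint64_t param_vclk)
	{
		if (phy == param_phy && vclk == param_vclk)
			return true; /* strict match */

		/* 1000/1001 variant, within rounding tolerance. */
		return llabs((long long)(vclk - freq_1000_1001(param_vclk))) < 1000 &&
		       llabs((long long)(phy - freq_1000_1001(param_phy))) < 10000;
	}

	int main(void)
	{
		/* 4k59.94: 593.4 MHz vclk, 5.934 GHz TMDS, vs. the 4k60 params. */
		printf("%d\n", freqs_match(5934065934ULL, 593406593ULL,
					   5940000000ULL, 594000000ULL)); /* 1 */
		return 0;
	}
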
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 242d02d48c0cd0..90991ba5a4ae10 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -655,7 +655,6 @@ static void a6xx_calc_ubwc_config(struct adreno_gpu *gpu)
+ 	if (adreno_is_7c3(gpu)) {
+ 		gpu->ubwc_config.highest_bank_bit = 14;
+ 		gpu->ubwc_config.amsbc = 1;
+-		gpu->ubwc_config.rgb565_predicator = 1;
+ 		gpu->ubwc_config.uavflagprd_inv = 2;
+ 		gpu->ubwc_config.macrotile_mode = 1;
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
+index ad60089f18ea6c..39027a21c6feec 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
+@@ -100,14 +100,12 @@ static const struct dpu_pingpong_cfg msm8937_pp[] = {
+ 	{
+ 		.name = "pingpong_0", .id = PINGPONG_0,
+ 		.base = 0x70000, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+ 	}, {
+ 		.name = "pingpong_1", .id = PINGPONG_1,
+ 		.base = 0x70800, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
+index a1cf89a0a42d5f..8d1b43ea1663cf 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
+@@ -93,7 +93,6 @@ static const struct dpu_pingpong_cfg msm8917_pp[] = {
+ 	{
+ 		.name = "pingpong_0", .id = PINGPONG_0,
+ 		.base = 0x70000, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
+index eea9b80e2287a8..16c12499b24bb4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
+@@ -100,14 +100,12 @@ static const struct dpu_pingpong_cfg msm8953_pp[] = {
+ 	{
+ 		.name = "pingpong_0", .id = PINGPONG_0,
+ 		.base = 0x70000, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12),
+ 	}, {
+ 		.name = "pingpong_1", .id = PINGPONG_1,
+ 		.base = 0x70800, .len = 0xd4,
+-		.features = PINGPONG_MSM8996_MASK,
+ 		.sblk = &msm8996_pp_sblk,
+ 		.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 		.intr_rdptr = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 979527d98fbcb1..8e23dbfeef3543 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -76,7 +76,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	{
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+@@ -84,7 +84,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+@@ -92,7 +92,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+@@ -100,7 +100,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+@@ -108,7 +108,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_8", .id = SSPP_DMA0,
+ 		.base = 0x24000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 1,
+ 		.type = SSPP_TYPE_DMA,
+@@ -116,7 +116,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_9", .id = SSPP_DMA1,
+ 		.base = 0x26000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 5,
+ 		.type = SSPP_TYPE_DMA,
+@@ -124,7 +124,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_10", .id = SSPP_DMA2,
+ 		.base = 0x28000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 9,
+ 		.type = SSPP_TYPE_DMA,
+@@ -132,7 +132,7 @@ static const struct dpu_sspp_cfg sm8150_sspp[] = {
+ 	}, {
+ 		.name = "sspp_11", .id = SSPP_DMA3,
+ 		.base = 0x2a000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 13,
+ 		.type = SSPP_TYPE_DMA,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index d76b8992a6c18c..e736eb73a7e615 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -75,7 +75,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	{
+ 		.name = "sspp_0", .id = SSPP_VIG0,
+ 		.base = 0x4000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 0,
+ 		.type = SSPP_TYPE_VIG,
+@@ -83,7 +83,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_1", .id = SSPP_VIG1,
+ 		.base = 0x6000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 4,
+ 		.type = SSPP_TYPE_VIG,
+@@ -91,7 +91,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_2", .id = SSPP_VIG2,
+ 		.base = 0x8000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 8,
+ 		.type = SSPP_TYPE_VIG,
+@@ -99,7 +99,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_3", .id = SSPP_VIG3,
+ 		.base = 0xa000, .len = 0x1f0,
+-		.features = VIG_SDM845_MASK,
++		.features = VIG_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_vig_sblk_qseed3_1_4,
+ 		.xin_id = 12,
+ 		.type = SSPP_TYPE_VIG,
+@@ -107,7 +107,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_8", .id = SSPP_DMA0,
+ 		.base = 0x24000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 1,
+ 		.type = SSPP_TYPE_DMA,
+@@ -115,7 +115,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_9", .id = SSPP_DMA1,
+ 		.base = 0x26000, .len = 0x1f0,
+-		.features = DMA_SDM845_MASK,
++		.features = DMA_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 5,
+ 		.type = SSPP_TYPE_DMA,
+@@ -123,7 +123,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_10", .id = SSPP_DMA2,
+ 		.base = 0x28000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 9,
+ 		.type = SSPP_TYPE_DMA,
+@@ -131,7 +131,7 @@ static const struct dpu_sspp_cfg sc8180x_sspp[] = {
+ 	}, {
+ 		.name = "sspp_11", .id = SSPP_DMA3,
+ 		.base = 0x2a000, .len = 0x1f0,
+-		.features = DMA_CURSOR_SDM845_MASK,
++		.features = DMA_CURSOR_SDM845_MASK_SDMA,
+ 		.sblk = &dpu_dma_sblk,
+ 		.xin_id = 13,
+ 		.type = SSPP_TYPE_DMA,
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index bbc47d86ae9e67..ab8c1f19dcb42d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -367,17 +367,21 @@ static int msm_dp_display_send_hpd_notification(struct msm_dp_display_private *d
+ 	return 0;
+ }
+ 
+-static void msm_dp_display_lttpr_init(struct msm_dp_display_private *dp)
++static int msm_dp_display_lttpr_init(struct msm_dp_display_private *dp, u8 *dpcd)
+ {
+-	u8 lttpr_caps[DP_LTTPR_COMMON_CAP_SIZE];
+-	int rc;
++	int rc, lttpr_count;
+ 
+-	if (drm_dp_read_lttpr_common_caps(dp->aux, dp->panel->dpcd, lttpr_caps))
+-		return;
++	if (drm_dp_read_lttpr_common_caps(dp->aux, dpcd, dp->link->lttpr_common_caps))
++		return 0;
+ 
+-	rc = drm_dp_lttpr_init(dp->aux, drm_dp_lttpr_count(lttpr_caps));
+-	if (rc)
++	lttpr_count = drm_dp_lttpr_count(dp->link->lttpr_common_caps);
++	rc = drm_dp_lttpr_init(dp->aux, lttpr_count);
++	if (rc) {
+ 		DRM_ERROR("failed to set LTTPRs transparency mode, rc=%d\n", rc);
++		return 0;
++	}
++
++	return lttpr_count;
+ }
+ 
+ static int msm_dp_display_process_hpd_high(struct msm_dp_display_private *dp)
+@@ -385,12 +389,17 @@ static int msm_dp_display_process_hpd_high(struct msm_dp_display_private *dp)
+ 	struct drm_connector *connector = dp->msm_dp_display.connector;
+ 	const struct drm_display_info *info = &connector->display_info;
+ 	int rc = 0;
++	u8 dpcd[DP_RECEIVER_CAP_SIZE];
+ 
+-	rc = msm_dp_panel_read_sink_caps(dp->panel, connector);
++	rc = drm_dp_read_dpcd_caps(dp->aux, dpcd);
+ 	if (rc)
+ 		goto end;
+ 
+-	msm_dp_display_lttpr_init(dp);
++	dp->link->lttpr_count = msm_dp_display_lttpr_init(dp, dpcd);
++
++	rc = msm_dp_panel_read_sink_caps(dp->panel, connector);
++	if (rc)
++		goto end;
+ 
+ 	msm_dp_link_process_request(dp->link);
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_link.h b/drivers/gpu/drm/msm/dp/dp_link.h
+index 8db5d5698a97cf..ba47c6d19fbfac 100644
+--- a/drivers/gpu/drm/msm/dp/dp_link.h
++++ b/drivers/gpu/drm/msm/dp/dp_link.h
+@@ -7,6 +7,7 @@
+ #define _DP_LINK_H_
+ 
+ #include "dp_aux.h"
++#include <drm/display/drm_dp_helper.h>
+ 
+ #define DS_PORT_STATUS_CHANGED 0x200
+ #define DP_TEST_BIT_DEPTH_UNKNOWN 0xFFFFFFFF
+@@ -60,6 +61,9 @@ struct msm_dp_link_phy_params {
+ };
+ 
+ struct msm_dp_link {
++	u8 lttpr_common_caps[DP_LTTPR_COMMON_CAP_SIZE];
++	int lttpr_count;
++
+ 	u32 sink_request;
+ 	u32 test_response;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 92415bf8aa1665..4e8ab75c771b1e 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -47,7 +47,7 @@ static void msm_dp_panel_read_psr_cap(struct msm_dp_panel_private *panel)
+ 
+ static int msm_dp_panel_read_dpcd(struct msm_dp_panel *msm_dp_panel)
+ {
+-	int rc;
++	int rc, max_lttpr_lanes, max_lttpr_rate;
+ 	struct msm_dp_panel_private *panel;
+ 	struct msm_dp_link_info *link_info;
+ 	u8 *dpcd, major, minor;
+@@ -75,6 +75,16 @@ static int msm_dp_panel_read_dpcd(struct msm_dp_panel *msm_dp_panel)
+ 	if (link_info->rate > msm_dp_panel->max_dp_link_rate)
+ 		link_info->rate = msm_dp_panel->max_dp_link_rate;
+ 
++	/* Limit data lanes from LTTPR capabilities, if any */
++	max_lttpr_lanes = drm_dp_lttpr_max_lane_count(panel->link->lttpr_common_caps);
++	if (max_lttpr_lanes && max_lttpr_lanes < link_info->num_lanes)
++		link_info->num_lanes = max_lttpr_lanes;
++
++	/* Limit link rate from LTTPR capabilities, if any */
++	max_lttpr_rate = drm_dp_lttpr_max_link_rate(panel->link->lttpr_common_caps);
++	if (max_lttpr_rate && max_lttpr_rate < link_info->rate)
++		link_info->rate = max_lttpr_rate;
++
+ 	drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
+ 	drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
+ 	drm_dbg_dp(panel->drm_dev, "lane_count=%d\n", link_info->num_lanes);
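
[Editor's note] The two msm/dp hunks work as a pair: LTTPR common caps cached during hpd-high are consumed when sink caps are parsed, so the reordering in dp_display.c is what makes the clamping in dp_panel.c effective. A toy runnable model of that ordering; all names and values are hypothetical:

	#include <stdio.h>

	static int lttpr_max_lanes;  /* cached from LTTPR common caps */
	static int lttpr_max_rate;   /* kHz, 0 if no LTTPR */

	static void lttpr_init(void)
	{
		lttpr_max_lanes = 2;
		lttpr_max_rate = 270000; /* HBR */
	}

	/* Clamp the sink's advertised link parameters against LTTPR caps. */
	static void read_sink_caps(int *lanes, int *rate)
	{
		if (lttpr_max_lanes && lttpr_max_lanes < *lanes)
			*lanes = lttpr_max_lanes;
		if (lttpr_max_rate && lttpr_max_rate < *rate)
			*rate = lttpr_max_rate;
	}

	int main(void)
	{
		int lanes = 4, rate = 540000;

		lttpr_init();                 /* must run before sink-cap parsing */
		read_sink_caps(&lanes, &rate);
		printf("lanes=%d rate=%d\n", lanes, rate); /* lanes=2 rate=270000 */
		return 0;
	}
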
+diff --git a/drivers/gpu/drm/panel/panel-samsung-sofef00.c b/drivers/gpu/drm/panel/panel-samsung-sofef00.c
+index 04ce925b3d9dbd..49cfa84b34f0ca 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-sofef00.c
++++ b/drivers/gpu/drm/panel/panel-samsung-sofef00.c
+@@ -22,7 +22,6 @@ struct sofef00_panel {
+ 	struct mipi_dsi_device *dsi;
+ 	struct regulator *supply;
+ 	struct gpio_desc *reset_gpio;
+-	const struct drm_display_mode *mode;
+ };
+ 
+ static inline
+@@ -159,26 +158,11 @@ static const struct drm_display_mode enchilada_panel_mode = {
+ 	.height_mm = 145,
+ };
+ 
+-static const struct drm_display_mode fajita_panel_mode = {
+-	.clock = (1080 + 72 + 16 + 36) * (2340 + 32 + 4 + 18) * 60 / 1000,
+-	.hdisplay = 1080,
+-	.hsync_start = 1080 + 72,
+-	.hsync_end = 1080 + 72 + 16,
+-	.htotal = 1080 + 72 + 16 + 36,
+-	.vdisplay = 2340,
+-	.vsync_start = 2340 + 32,
+-	.vsync_end = 2340 + 32 + 4,
+-	.vtotal = 2340 + 32 + 4 + 18,
+-	.width_mm = 68,
+-	.height_mm = 145,
+-};
+-
+ static int sofef00_panel_get_modes(struct drm_panel *panel, struct drm_connector *connector)
+ {
+ 	struct drm_display_mode *mode;
+-	struct sofef00_panel *ctx = to_sofef00_panel(panel);
+ 
+-	mode = drm_mode_duplicate(connector->dev, ctx->mode);
++	mode = drm_mode_duplicate(connector->dev, &enchilada_panel_mode);
+ 	if (!mode)
+ 		return -ENOMEM;
+ 
+@@ -239,13 +223,6 @@ static int sofef00_panel_probe(struct mipi_dsi_device *dsi)
+ 	if (!ctx)
+ 		return -ENOMEM;
+ 
+-	ctx->mode = of_device_get_match_data(dev);
+-
+-	if (!ctx->mode) {
+-		dev_err(dev, "Missing device mode\n");
+-		return -ENODEV;
+-	}
+-
+ 	ctx->supply = devm_regulator_get(dev, "vddio");
+ 	if (IS_ERR(ctx->supply))
+ 		return dev_err_probe(dev, PTR_ERR(ctx->supply),
+@@ -295,14 +272,7 @@ static void sofef00_panel_remove(struct mipi_dsi_device *dsi)
+ }
+ 
+ static const struct of_device_id sofef00_panel_of_match[] = {
+-	{ // OnePlus 6 / enchilada
+-		.compatible = "samsung,sofef00",
+-		.data = &enchilada_panel_mode,
+-	},
+-	{ // OnePlus 6T / fajita
+-		.compatible = "samsung,s6e3fc2x01",
+-		.data = &fajita_panel_mode,
+-	},
++	{ .compatible = "samsung,sofef00" },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, sofef00_panel_of_match);
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 33a37539de574e..3aaac96c0bfbf5 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -2199,13 +2199,14 @@ static const struct display_timing evervision_vgg644804_timing = {
+ static const struct panel_desc evervision_vgg644804 = {
+ 	.timings = &evervision_vgg644804_timing,
+ 	.num_timings = 1,
+-	.bpc = 8,
++	.bpc = 6,
+ 	.size = {
+ 		.width = 115,
+ 		.height = 86,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
+-	.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
++	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+ static const struct display_timing evervision_vgg804821_timing = {
+diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
+index a9da1d1eeb7071..1e8811c6716dfa 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.c
++++ b/drivers/gpu/drm/panthor/panthor_device.c
+@@ -171,10 +171,6 @@ int panthor_device_init(struct panthor_device *ptdev)
+ 	struct page *p;
+ 	int ret;
+ 
+-	ret = panthor_gpu_coherency_init(ptdev);
+-	if (ret)
+-		return ret;
+-
+ 	init_completion(&ptdev->unplug.done);
+ 	ret = drmm_mutex_init(&ptdev->base, &ptdev->unplug.lock);
+ 	if (ret)
+@@ -247,6 +243,10 @@ int panthor_device_init(struct panthor_device *ptdev)
+ 	if (ret)
+ 		goto err_rpm_put;
+ 
++	ret = panthor_gpu_coherency_init(ptdev);
++	if (ret)
++		goto err_unplug_gpu;
++
+ 	ret = panthor_mmu_init(ptdev);
+ 	if (ret)
+ 		goto err_unplug_gpu;
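
[Editor's note] Moving an init step later in the sequence, as coherency init moves after the GPU block setup above, means its failure path must unwind everything that now precedes it. A minimal runnable sketch of the goto-unwind shape; the step names are illustrative:

	#include <stdio.h>

	static int  map_regs(void)   { puts("map regs");   return 0; }
	static int  coherency(void)  { return -1; } /* fails for the demo */
	static void unmap_regs(void) { puts("unmap regs"); }

	static int device_init(void)
	{
		int ret = map_regs();
		if (ret)
			return ret;

		ret = coherency();
		if (ret)
			goto err_unmap; /* unwind the step it now depends on */

		return 0;

	err_unmap:
		unmap_regs();
		return ret;
	}

	int main(void) { return device_init() ? 1 : 0; }
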
+diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
+index 12a02e28f50fd8..7cca97d298ea10 100644
+--- a/drivers/gpu/drm/panthor/panthor_mmu.c
++++ b/drivers/gpu/drm/panthor/panthor_mmu.c
+@@ -781,6 +781,7 @@ int panthor_vm_active(struct panthor_vm *vm)
+ 	if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) {
+ 		gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as));
+ 		ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as);
++		ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as);
+ 		gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/panthor/panthor_regs.h b/drivers/gpu/drm/panthor/panthor_regs.h
+index b7b3b3add16627..a7a323dc5cf92a 100644
+--- a/drivers/gpu/drm/panthor/panthor_regs.h
++++ b/drivers/gpu/drm/panthor/panthor_regs.h
+@@ -133,8 +133,8 @@
+ #define GPU_COHERENCY_PROT_BIT(name)			BIT(GPU_COHERENCY_  ## name)
+ 
+ #define GPU_COHERENCY_PROTOCOL				0x304
+-#define   GPU_COHERENCY_ACE				0
+-#define   GPU_COHERENCY_ACE_LITE			1
++#define   GPU_COHERENCY_ACE_LITE			0
++#define   GPU_COHERENCY_ACE				1
+ #define   GPU_COHERENCY_NONE				31
+ 
+ #define MCU_CONTROL					0x700
+diff --git a/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c
+index 70d8ad065bfa1d..4c8fe83dd6101b 100644
+--- a/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/renesas/rcar-du/rcar_du_kms.c
+@@ -705,7 +705,7 @@ static int rcar_du_vsps_init(struct rcar_du_device *rcdu)
+ 		ret = of_parse_phandle_with_fixed_args(np, vsps_prop_name,
+ 						       cells, i, &args);
+ 		if (ret < 0)
+-			goto error;
++			goto done;
+ 
+ 		/*
+ 		 * Add the VSP to the list or update the corresponding existing
+@@ -743,13 +743,11 @@ static int rcar_du_vsps_init(struct rcar_du_device *rcdu)
+ 		vsp->dev = rcdu;
+ 
+ 		ret = rcar_du_vsp_init(vsp, vsps[i].np, vsps[i].crtcs_mask);
+-		if (ret < 0)
+-			goto error;
++		if (ret)
++			goto done;
+ 	}
+ 
+-	return 0;
+-
+-error:
++done:
+ 	for (i = 0; i < ARRAY_SIZE(vsps); ++i)
+ 		of_node_put(vsps[i].np);
+ 
+diff --git a/drivers/gpu/drm/tegra/rgb.c b/drivers/gpu/drm/tegra/rgb.c
+index 1e8ec50b759e46..ff5a749710db3a 100644
+--- a/drivers/gpu/drm/tegra/rgb.c
++++ b/drivers/gpu/drm/tegra/rgb.c
+@@ -200,6 +200,11 @@ static const struct drm_encoder_helper_funcs tegra_rgb_encoder_helper_funcs = {
+ 	.atomic_check = tegra_rgb_encoder_atomic_check,
+ };
+ 
++static void tegra_dc_of_node_put(void *data)
++{
++	of_node_put(data);
++}
++
+ int tegra_dc_rgb_probe(struct tegra_dc *dc)
+ {
+ 	struct device_node *np;
+@@ -207,7 +212,14 @@ int tegra_dc_rgb_probe(struct tegra_dc *dc)
+ 	int err;
+ 
+ 	np = of_get_child_by_name(dc->dev->of_node, "rgb");
+-	if (!np || !of_device_is_available(np))
++	if (!np)
++		return -ENODEV;
++
++	err = devm_add_action_or_reset(dc->dev, tegra_dc_of_node_put, np);
++	if (err < 0)
++		return err;
++
++	if (!of_device_is_available(np))
+ 		return -ENODEV;
+ 
+ 	rgb = devm_kzalloc(dc->dev, sizeof(*rgb), GFP_KERNEL);
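
[Editor's note] The fix above ties the of_node's lifetime to the device via devm_add_action_or_reset(), so the reference taken by of_get_child_by_name() is dropped on every exit path, including the early -ENODEV returns. As a userspace analog of that "register the cleanup right after the acquire" idiom, a GCC/Clang cleanup-attribute sketch; names are illustrative:

	#include <stdio.h>

	static void put_node(const char **np)
	{
		if (*np)
			printf("of_node_put(%s)\n", *np);
	}

	static int probe(int available)
	{
		/* Cleanup registered at acquire time, runs on every return. */
		__attribute__((cleanup(put_node))) const char *np = "rgb";

		if (!available)
			return -19; /* -ENODEV; np is still released */

		puts("probe ok");
		return 0;
	}

	int main(void)
	{
		probe(0);
		probe(1);
		return 0;
	}
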
+diff --git a/drivers/gpu/drm/v3d/v3d_debugfs.c b/drivers/gpu/drm/v3d/v3d_debugfs.c
+index 76816f2551c100..7e789e181af0ac 100644
+--- a/drivers/gpu/drm/v3d/v3d_debugfs.c
++++ b/drivers/gpu/drm/v3d/v3d_debugfs.c
+@@ -21,74 +21,74 @@ struct v3d_reg_def {
+ };
+ 
+ static const struct v3d_reg_def v3d_hub_reg_defs[] = {
+-	REGDEF(33, 42, V3D_HUB_AXICFG),
+-	REGDEF(33, 71, V3D_HUB_UIFCFG),
+-	REGDEF(33, 71, V3D_HUB_IDENT0),
+-	REGDEF(33, 71, V3D_HUB_IDENT1),
+-	REGDEF(33, 71, V3D_HUB_IDENT2),
+-	REGDEF(33, 71, V3D_HUB_IDENT3),
+-	REGDEF(33, 71, V3D_HUB_INT_STS),
+-	REGDEF(33, 71, V3D_HUB_INT_MSK_STS),
+-
+-	REGDEF(33, 71, V3D_MMU_CTL),
+-	REGDEF(33, 71, V3D_MMU_VIO_ADDR),
+-	REGDEF(33, 71, V3D_MMU_VIO_ID),
+-	REGDEF(33, 71, V3D_MMU_DEBUG_INFO),
+-
+-	REGDEF(71, 71, V3D_GMP_STATUS(71)),
+-	REGDEF(71, 71, V3D_GMP_CFG(71)),
+-	REGDEF(71, 71, V3D_GMP_VIO_ADDR(71)),
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_HUB_AXICFG),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_UIFCFG),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT0),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT1),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT2),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_IDENT3),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_INT_STS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_HUB_INT_MSK_STS),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_CTL),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_VIO_ADDR),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_VIO_ID),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_MMU_DEBUG_INFO),
++
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_GMP_STATUS(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_GMP_CFG(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_GMP_VIO_ADDR(71)),
+ };
+ 
+ static const struct v3d_reg_def v3d_gca_reg_defs[] = {
+-	REGDEF(33, 33, V3D_GCA_SAFE_SHUTDOWN),
+-	REGDEF(33, 33, V3D_GCA_SAFE_SHUTDOWN_ACK),
++	REGDEF(V3D_GEN_33, V3D_GEN_33, V3D_GCA_SAFE_SHUTDOWN),
++	REGDEF(V3D_GEN_33, V3D_GEN_33, V3D_GCA_SAFE_SHUTDOWN_ACK),
+ };
+ 
+ static const struct v3d_reg_def v3d_core_reg_defs[] = {
+-	REGDEF(33, 71, V3D_CTL_IDENT0),
+-	REGDEF(33, 71, V3D_CTL_IDENT1),
+-	REGDEF(33, 71, V3D_CTL_IDENT2),
+-	REGDEF(33, 71, V3D_CTL_MISCCFG),
+-	REGDEF(33, 71, V3D_CTL_INT_STS),
+-	REGDEF(33, 71, V3D_CTL_INT_MSK_STS),
+-	REGDEF(33, 71, V3D_CLE_CT0CS),
+-	REGDEF(33, 71, V3D_CLE_CT0CA),
+-	REGDEF(33, 71, V3D_CLE_CT0EA),
+-	REGDEF(33, 71, V3D_CLE_CT1CS),
+-	REGDEF(33, 71, V3D_CLE_CT1CA),
+-	REGDEF(33, 71, V3D_CLE_CT1EA),
+-
+-	REGDEF(33, 71, V3D_PTB_BPCA),
+-	REGDEF(33, 71, V3D_PTB_BPCS),
+-
+-	REGDEF(33, 42, V3D_GMP_STATUS(33)),
+-	REGDEF(33, 42, V3D_GMP_CFG(33)),
+-	REGDEF(33, 42, V3D_GMP_VIO_ADDR(33)),
+-
+-	REGDEF(33, 71, V3D_ERR_FDBGO),
+-	REGDEF(33, 71, V3D_ERR_FDBGB),
+-	REGDEF(33, 71, V3D_ERR_FDBGS),
+-	REGDEF(33, 71, V3D_ERR_STAT),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_IDENT0),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_IDENT1),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_IDENT2),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_MISCCFG),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_INT_STS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CTL_INT_MSK_STS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT0CS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT0CA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT0EA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT1CS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT1CA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_CLE_CT1EA),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_PTB_BPCA),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_PTB_BPCS),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_GMP_STATUS(33)),
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_GMP_CFG(33)),
++	REGDEF(V3D_GEN_33, V3D_GEN_42, V3D_GMP_VIO_ADDR(33)),
++
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_FDBGO),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_FDBGB),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_FDBGS),
++	REGDEF(V3D_GEN_33, V3D_GEN_71, V3D_ERR_STAT),
+ };
+ 
+ static const struct v3d_reg_def v3d_csd_reg_defs[] = {
+-	REGDEF(41, 71, V3D_CSD_STATUS),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG0(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG1(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG2(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG3(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG4(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG5(41)),
+-	REGDEF(41, 42, V3D_CSD_CURRENT_CFG6(41)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG0(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG1(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG2(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG3(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG4(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG5(71)),
+-	REGDEF(71, 71, V3D_CSD_CURRENT_CFG6(71)),
+-	REGDEF(71, 71, V3D_V7_CSD_CURRENT_CFG7),
++	REGDEF(V3D_GEN_41, V3D_GEN_71, V3D_CSD_STATUS),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG0(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG1(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG2(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG3(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG4(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG5(41)),
++	REGDEF(V3D_GEN_41, V3D_GEN_42, V3D_CSD_CURRENT_CFG6(41)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG0(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG1(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG2(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG3(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG4(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG5(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_CSD_CURRENT_CFG6(71)),
++	REGDEF(V3D_GEN_71, V3D_GEN_71, V3D_V7_CSD_CURRENT_CFG7),
+ };
+ 
+ static int v3d_v3d_debugfs_regs(struct seq_file *m, void *unused)
+@@ -164,7 +164,7 @@ static int v3d_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ 		   str_yes_no(ident2 & V3D_HUB_IDENT2_WITH_MMU));
+ 	seq_printf(m, "TFU:        %s\n",
+ 		   str_yes_no(ident1 & V3D_HUB_IDENT1_WITH_TFU));
+-	if (v3d->ver <= 42) {
++	if (v3d->ver <= V3D_GEN_42) {
+ 		seq_printf(m, "TSY:        %s\n",
+ 			   str_yes_no(ident1 & V3D_HUB_IDENT1_WITH_TSY));
+ 	}
+@@ -196,11 +196,11 @@ static int v3d_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ 		seq_printf(m, "  QPUs:         %d\n", nslc * qups);
+ 		seq_printf(m, "  Semaphores:   %d\n",
+ 			   V3D_GET_FIELD(ident1, V3D_IDENT1_NSEM));
+-		if (v3d->ver <= 42) {
++		if (v3d->ver <= V3D_GEN_42) {
+ 			seq_printf(m, "  BCG int:      %d\n",
+ 				   (ident2 & V3D_IDENT2_BCG_INT) != 0);
+ 		}
+-		if (v3d->ver < 40) {
++		if (v3d->ver < V3D_GEN_41) {
+ 			seq_printf(m, "  Override TMU: %d\n",
+ 				   (misccfg & V3D_MISCCFG_OVRTMUOUT) != 0);
+ 		}
+@@ -234,7 +234,7 @@ static int v3d_measure_clock(struct seq_file *m, void *unused)
+ 	int core = 0;
+ 	int measure_ms = 1000;
+ 
+-	if (v3d->ver >= 40) {
++	if (v3d->ver >= V3D_GEN_41) {
+ 		int cycle_count_reg = V3D_PCTR_CYCLE_COUNT(v3d->ver);
+ 		V3D_CORE_WRITE(core, V3D_V4_PCTR_0_SRC_0_3,
+ 			       V3D_SET_FIELD_VER(cycle_count_reg,
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index 852015214e971c..aa68be8fe86b71 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -17,6 +17,7 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
++#include <linux/of.h>
+ #include <linux/of_platform.h>
+ #include <linux/platform_device.h>
+ #include <linux/sched/clock.h>
+@@ -92,7 +93,7 @@ static int v3d_get_param_ioctl(struct drm_device *dev, void *data,
+ 		args->value = 1;
+ 		return 0;
+ 	case DRM_V3D_PARAM_SUPPORTS_PERFMON:
+-		args->value = (v3d->ver >= 40);
++		args->value = (v3d->ver >= V3D_GEN_41);
+ 		return 0;
+ 	case DRM_V3D_PARAM_SUPPORTS_MULTISYNC_EXT:
+ 		args->value = 1;
+@@ -254,10 +255,10 @@ static const struct drm_driver v3d_drm_driver = {
+ };
+ 
+ static const struct of_device_id v3d_of_match[] = {
+-	{ .compatible = "brcm,2711-v3d" },
+-	{ .compatible = "brcm,2712-v3d" },
+-	{ .compatible = "brcm,7268-v3d" },
+-	{ .compatible = "brcm,7278-v3d" },
++	{ .compatible = "brcm,2711-v3d", .data = (void *)V3D_GEN_42 },
++	{ .compatible = "brcm,2712-v3d", .data = (void *)V3D_GEN_71 },
++	{ .compatible = "brcm,7268-v3d", .data = (void *)V3D_GEN_33 },
++	{ .compatible = "brcm,7278-v3d", .data = (void *)V3D_GEN_41 },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, v3d_of_match);
+@@ -274,6 +275,7 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct drm_device *drm;
+ 	struct v3d_dev *v3d;
++	enum v3d_gen gen;
+ 	int ret;
+ 	u32 mmu_debug;
+ 	u32 ident1, ident3;
+@@ -287,6 +289,9 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, drm);
+ 
++	gen = (uintptr_t)of_device_get_match_data(dev);
++	v3d->ver = gen;
++
+ 	ret = map_regs(v3d, &v3d->hub_regs, "hub");
+ 	if (ret)
+ 		return ret;
+@@ -316,6 +321,11 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	ident1 = V3D_READ(V3D_HUB_IDENT1);
+ 	v3d->ver = (V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_TVER) * 10 +
+ 		    V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_REV));
++	/* Make sure that the V3D tech version retrieved from the HW is equal
++	 * to the one advertised by the device tree.
++	 */
++	WARN_ON(v3d->ver != gen);
++
+ 	v3d->cores = V3D_GET_FIELD(ident1, V3D_HUB_IDENT1_NCORES);
+ 	WARN_ON(v3d->cores > 1); /* multicore not yet implemented */
+ 
+@@ -340,7 +350,7 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	if (v3d->ver < 41) {
++	if (v3d->ver < V3D_GEN_41) {
+ 		ret = map_regs(v3d, &v3d->gca_regs, "gca");
+ 		if (ret)
+ 			goto clk_disable;
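
[Editor's note] The probe change above encodes the V3D generation in the OF match table's .data pointer and reads it back with of_device_get_match_data(), casting through uintptr_t since an enum is smuggled in a void *. A runnable sketch of the idiom; the table mirrors the patch, the lookup harness is a stand-in:

	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>

	enum v3d_gen {
		V3D_GEN_33 = 33,
		V3D_GEN_41 = 41,
		V3D_GEN_42 = 42,
		V3D_GEN_71 = 71,
	};

	struct of_match { const char *compatible; const void *data; };

	static const struct of_match v3d_of_match[] = {
		{ "brcm,2711-v3d", (void *)(uintptr_t)V3D_GEN_42 },
		{ "brcm,2712-v3d", (void *)(uintptr_t)V3D_GEN_71 },
		{ "brcm,7268-v3d", (void *)(uintptr_t)V3D_GEN_33 },
		{ "brcm,7278-v3d", (void *)(uintptr_t)V3D_GEN_41 },
		{ 0 },
	};

	/* Stand-in for of_device_get_match_data(). */
	static const void *match_data(const char *compat)
	{
		for (const struct of_match *m = v3d_of_match; m->compatible; m++)
			if (!strcmp(m->compatible, compat))
				return m->data;
		return NULL;
	}

	int main(void)
	{
		enum v3d_gen gen =
			(enum v3d_gen)(uintptr_t)match_data("brcm,2712-v3d");

		printf("gen=%d\n", gen); /* gen=71 */
		return 0;
	}
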
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index 9deaefa0f95b71..de4a9e18f6a903 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -94,11 +94,18 @@ struct v3d_perfmon {
+ 	u64 values[] __counted_by(ncounters);
+ };
+ 
++enum v3d_gen {
++	V3D_GEN_33 = 33,
++	V3D_GEN_41 = 41,
++	V3D_GEN_42 = 42,
++	V3D_GEN_71 = 71,
++};
++
+ struct v3d_dev {
+ 	struct drm_device drm;
+ 
+ 	/* Short representation (e.g. 33, 41) of the V3D tech version */
+-	int ver;
++	enum v3d_gen ver;
+ 
+ 	/* Short representation (e.g. 5, 6) of the V3D tech revision */
+ 	int rev;
+@@ -199,7 +206,7 @@ to_v3d_dev(struct drm_device *dev)
+ static inline bool
+ v3d_has_csd(struct v3d_dev *v3d)
+ {
+-	return v3d->ver >= 41;
++	return v3d->ver >= V3D_GEN_41;
+ }
+ 
+ #define v3d_to_pdev(v3d) to_platform_device((v3d)->drm.dev)
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index b1e681630ded09..1ea6d3832c2212 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -25,7 +25,7 @@ v3d_init_core(struct v3d_dev *v3d, int core)
+ 	 * type.  If you want the default behavior, you can still put
+ 	 * "2" in the indirect texture state's output_type field.
+ 	 */
+-	if (v3d->ver < 40)
++	if (v3d->ver < V3D_GEN_41)
+ 		V3D_CORE_WRITE(core, V3D_CTL_MISCCFG, V3D_MISCCFG_OVRTMUOUT);
+ 
+ 	/* Whenever we flush the L2T cache, we always want to flush
+@@ -58,7 +58,7 @@ v3d_idle_axi(struct v3d_dev *v3d, int core)
+ static void
+ v3d_idle_gca(struct v3d_dev *v3d)
+ {
+-	if (v3d->ver >= 41)
++	if (v3d->ver >= V3D_GEN_41)
+ 		return;
+ 
+ 	V3D_GCA_WRITE(V3D_GCA_SAFE_SHUTDOWN, V3D_GCA_SAFE_SHUTDOWN_EN);
+@@ -132,13 +132,13 @@ v3d_reset(struct v3d_dev *v3d)
+ static void
+ v3d_flush_l3(struct v3d_dev *v3d)
+ {
+-	if (v3d->ver < 41) {
++	if (v3d->ver < V3D_GEN_41) {
+ 		u32 gca_ctrl = V3D_GCA_READ(V3D_GCA_CACHE_CTRL);
+ 
+ 		V3D_GCA_WRITE(V3D_GCA_CACHE_CTRL,
+ 			      gca_ctrl | V3D_GCA_CACHE_CTRL_FLUSH);
+ 
+-		if (v3d->ver < 33) {
++		if (v3d->ver < V3D_GEN_33) {
+ 			V3D_GCA_WRITE(V3D_GCA_CACHE_CTRL,
+ 				      gca_ctrl & ~V3D_GCA_CACHE_CTRL_FLUSH);
+ 		}
+@@ -151,7 +151,7 @@ v3d_flush_l3(struct v3d_dev *v3d)
+ static void
+ v3d_invalidate_l2c(struct v3d_dev *v3d, int core)
+ {
+-	if (v3d->ver > 32)
++	if (v3d->ver >= V3D_GEN_33)
+ 		return;
+ 
+ 	V3D_CORE_WRITE(core, V3D_CTL_L2CACTL,
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 72b6a119412fa7..2cca5d3a26a22c 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -143,7 +143,7 @@ v3d_irq(int irq, void *arg)
+ 	/* We shouldn't be triggering these if we have GMP in
+ 	 * always-allowed mode.
+ 	 */
+-	if (v3d->ver < 71 && (intsts & V3D_INT_GMPV))
++	if (v3d->ver < V3D_GEN_71 && (intsts & V3D_INT_GMPV))
+ 		dev_err(v3d->drm.dev, "GMP violation\n");
+ 
+ 	/* V3D 4.2 wires the hub and core IRQs together, so if we &
+@@ -186,27 +186,59 @@ v3d_hub_irq(int irq, void *arg)
+ 		u32 axi_id = V3D_READ(V3D_MMU_VIO_ID);
+ 		u64 vio_addr = ((u64)V3D_READ(V3D_MMU_VIO_ADDR) <<
+ 				(v3d->va_width - 32));
+-		static const char *const v3d41_axi_ids[] = {
+-			"L2T",
+-			"PTB",
+-			"PSE",
+-			"TLB",
+-			"CLE",
+-			"TFU",
+-			"MMU",
+-			"GMP",
++		static const struct {
++			u32 begin;
++			u32 end;
++			const char *client;
++		} v3d41_axi_ids[] = {
++			{0x00, 0x20, "L2T"},
++			{0x20, 0x21, "PTB"},
++			{0x40, 0x41, "PSE"},
++			{0x60, 0x80, "TLB"},
++			{0x80, 0x88, "CLE"},
++			{0xA0, 0xA1, "TFU"},
++			{0xC0, 0xE0, "MMU"},
++			{0xE0, 0xE1, "GMP"},
++		}, v3d71_axi_ids[] = {
++			{0x00, 0x30, "L2T"},
++			{0x30, 0x38, "CLE"},
++			{0x38, 0x39, "PTB"},
++			{0x39, 0x3A, "PSE"},
++			{0x3A, 0x3B, "CSD"},
++			{0x40, 0x60, "TLB"},
++			{0x60, 0x70, "MMU"},
++			{0x7C, 0x7E, "TFU"},
++			{0x7F, 0x80, "GMP"},
+ 		};
+ 		const char *client = "?";
+ 
+ 		V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL));
+ 
+-		if (v3d->ver >= 41) {
+-			axi_id = axi_id >> 5;
+-			if (axi_id < ARRAY_SIZE(v3d41_axi_ids))
+-				client = v3d41_axi_ids[axi_id];
++		if (v3d->ver >= V3D_GEN_71) {
++			size_t i;
++
++			axi_id = axi_id & 0x7F;
++			for (i = 0; i < ARRAY_SIZE(v3d71_axi_ids); i++) {
++				if (axi_id >= v3d71_axi_ids[i].begin &&
++				    axi_id < v3d71_axi_ids[i].end) {
++					client = v3d71_axi_ids[i].client;
++					break;
++				}
++			}
++		} else if (v3d->ver >= V3D_GEN_41) {
++			size_t i;
++
++			axi_id = axi_id & 0xFF;
++			for (i = 0; i < ARRAY_SIZE(v3d41_axi_ids); i++) {
++				if (axi_id >= v3d41_axi_ids[i].begin &&
++				    axi_id < v3d41_axi_ids[i].end) {
++					client = v3d41_axi_ids[i].client;
++					break;
++				}
++			}
+ 		}
+ 
+-		dev_err(v3d->drm.dev, "MMU error from client %s (%d) at 0x%llx%s%s%s\n",
++		dev_err(v3d->drm.dev, "MMU error from client %s (0x%x) at 0x%llx%s%s%s\n",
+ 			client, axi_id, (long long)vio_addr,
+ 			((intsts & V3D_HUB_INT_MMU_WRV) ?
+ 			 ", write violation" : ""),
+@@ -217,7 +249,7 @@ v3d_hub_irq(int irq, void *arg)
+ 		status = IRQ_HANDLED;
+ 	}
+ 
+-	if (v3d->ver >= 71 && (intsts & V3D_V7_HUB_INT_GMPV)) {
++	if (v3d->ver >= V3D_GEN_71 && (intsts & V3D_V7_HUB_INT_GMPV)) {
+ 		dev_err(v3d->drm.dev, "GMP Violation\n");
+ 		status = IRQ_HANDLED;
+ 	}
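
[Editor's note] The MMU fault decoder above maps AXI IDs to clients via [begin, end) ranges rather than dividing the ID by 32, since on V3D 7.1 (and in the finer-grained 4.1 table) clients own unevenly sized ID windows. A runnable C sketch of the lookup, with the ranges copied from the 7.1 table in the patch:

	#include <stdio.h>
	#include <stdint.h>
	#include <stddef.h>

	struct axi_range { uint32_t begin, end; const char *client; };

	static const struct axi_range v3d71_axi_ids[] = {
		{0x00, 0x30, "L2T"}, {0x30, 0x38, "CLE"}, {0x38, 0x39, "PTB"},
		{0x39, 0x3A, "PSE"}, {0x3A, 0x3B, "CSD"}, {0x40, 0x60, "TLB"},
		{0x60, 0x70, "MMU"}, {0x7C, 0x7E, "TFU"}, {0x7F, 0x80, "GMP"},
	};

	static const char *axi_client(uint32_t axi_id)
	{
		axi_id &= 0x7F;
		for (size_t i = 0;
		     i < sizeof(v3d71_axi_ids) / sizeof(*v3d71_axi_ids); i++)
			if (axi_id >= v3d71_axi_ids[i].begin &&
			    axi_id < v3d71_axi_ids[i].end)
				return v3d71_axi_ids[i].client;
		return "?";
	}

	int main(void)
	{
		printf("%s %s %s\n", axi_client(0x3A), axi_client(0x65),
		       axi_client(0x3B)); /* CSD MMU ? -- 0x3B..0x40 unmapped */
		return 0;
	}
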
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 3ebda2fa46fc47..9a3fe52558746e 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -200,10 +200,10 @@ void v3d_perfmon_init(struct v3d_dev *v3d)
+ 	const struct v3d_perf_counter_desc *counters = NULL;
+ 	unsigned int max = 0;
+ 
+-	if (v3d->ver >= 71) {
++	if (v3d->ver >= V3D_GEN_71) {
+ 		counters = v3d_v71_performance_counters;
+ 		max = ARRAY_SIZE(v3d_v71_performance_counters);
+-	} else if (v3d->ver >= 42) {
++	} else if (v3d->ver >= V3D_GEN_42) {
+ 		counters = v3d_v42_performance_counters;
+ 		max = ARRAY_SIZE(v3d_v42_performance_counters);
+ 	}
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index eb35482f6fb577..35f131a46d0701 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -357,11 +357,11 @@ v3d_tfu_job_run(struct drm_sched_job *sched_job)
+ 	V3D_WRITE(V3D_TFU_ICA(v3d->ver), job->args.ica);
+ 	V3D_WRITE(V3D_TFU_IUA(v3d->ver), job->args.iua);
+ 	V3D_WRITE(V3D_TFU_IOA(v3d->ver), job->args.ioa);
+-	if (v3d->ver >= 71)
++	if (v3d->ver >= V3D_GEN_71)
+ 		V3D_WRITE(V3D_V7_TFU_IOC, job->args.v71.ioc);
+ 	V3D_WRITE(V3D_TFU_IOS(v3d->ver), job->args.ios);
+ 	V3D_WRITE(V3D_TFU_COEF0(v3d->ver), job->args.coef[0]);
+-	if (v3d->ver >= 71 || (job->args.coef[0] & V3D_TFU_COEF0_USECOEF)) {
++	if (v3d->ver >= V3D_GEN_71 || (job->args.coef[0] & V3D_TFU_COEF0_USECOEF)) {
+ 		V3D_WRITE(V3D_TFU_COEF1(v3d->ver), job->args.coef[1]);
+ 		V3D_WRITE(V3D_TFU_COEF2(v3d->ver), job->args.coef[2]);
+ 		V3D_WRITE(V3D_TFU_COEF3(v3d->ver), job->args.coef[3]);
+@@ -412,7 +412,7 @@ v3d_csd_job_run(struct drm_sched_job *sched_job)
+ 	 *
+ 	 * XXX: Set the CFG7 register
+ 	 */
+-	if (v3d->ver >= 71)
++	if (v3d->ver >= V3D_GEN_71)
+ 		V3D_CORE_WRITE(0, V3D_V7_CSD_QUEUED_CFG7, 0);
+ 
+ 	/* CFG0 write kicks off the job. */
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_mock_output.c b/drivers/gpu/drm/vc4/tests/vc4_mock_output.c
+index e70d7c3076acf1..f0ddc223c1f839 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_mock_output.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_mock_output.c
+@@ -75,24 +75,30 @@ int vc4_mock_atomic_add_output(struct kunit *test,
+ 	int ret;
+ 
+ 	encoder = vc4_find_encoder_by_type(drm, type);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, encoder);
++	if (!encoder)
++		return -ENODEV;
+ 
+ 	crtc = vc4_find_crtc_for_encoder(test, drm, encoder);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc);
++	if (!crtc)
++		return -ENODEV;
+ 
+ 	output = encoder_to_vc4_dummy_output(encoder);
+ 	conn = &output->connector;
+ 	conn_state = drm_atomic_get_connector_state(state, conn);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, conn_state);
++	if (IS_ERR(conn_state))
++		return PTR_ERR(conn_state);
+ 
+ 	ret = drm_atomic_set_crtc_for_connector(conn_state, crtc);
+-	KUNIT_EXPECT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	crtc_state = drm_atomic_get_crtc_state(state, crtc);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc_state);
++	if (IS_ERR(crtc_state))
++		return PTR_ERR(crtc_state);
+ 
+ 	ret = drm_atomic_set_mode_for_crtc(crtc_state, &default_mode);
+-	KUNIT_EXPECT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	crtc_state->active = true;
+ 
+@@ -113,26 +119,32 @@ int vc4_mock_atomic_del_output(struct kunit *test,
+ 	int ret;
+ 
+ 	encoder = vc4_find_encoder_by_type(drm, type);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, encoder);
++	if (!encoder)
++		return -ENODEV;
+ 
+ 	crtc = vc4_find_crtc_for_encoder(test, drm, encoder);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc);
++	if (!crtc)
++		return -ENODEV;
+ 
+ 	crtc_state = drm_atomic_get_crtc_state(state, crtc);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, crtc_state);
++	if (IS_ERR(crtc_state))
++		return PTR_ERR(crtc_state);
+ 
+ 	crtc_state->active = false;
+ 
+ 	ret = drm_atomic_set_mode_for_crtc(crtc_state, NULL);
+-	KUNIT_ASSERT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	output = encoder_to_vc4_dummy_output(encoder);
+ 	conn = &output->connector;
+ 	conn_state = drm_atomic_get_connector_state(state, conn);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, conn_state);
++	if (IS_ERR(conn_state))
++		return PTR_ERR(conn_state);
+ 
+ 	ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
+-	KUNIT_ASSERT_EQ(test, ret, 0);
++	if (ret)
++		return ret;
+ 
+ 	return 0;
+ }
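
[Editor's note] Converting the mock helpers from KUNIT_ASSERT_* to plain error returns is what lets callers see -EDEADLK and drive the drm_modeset_backoff() retry loops added in the next file. A distilled, runnable C sketch of that retry pattern; locking is simulated and all names are hypothetical:

	#include <stdio.h>

	#define EDEADLK 35

	static int attempts;

	/* Fails with -EDEADLK once, then succeeds -- stands in for an
	 * atomic check hitting modeset-lock contention. */
	static int try_commit(void)
	{
		return (attempts++ == 0) ? -EDEADLK : 0;
	}

	static int backoff(void)
	{
		puts("backoff: drop and reacquire locks");
		return 0;
	}

	int main(void)
	{
		int ret;
	retry:
		ret = try_commit();
		if (ret == -EDEADLK) {
			/* drm_atomic_state_clear(state) would go here */
			if (!backoff())
				goto retry;
		}
		printf("ret=%d after %d attempts\n", ret, attempts); /* 0, 2 */
		return ret;
	}
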
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c b/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c
+index 992e8f5c5c6ea8..d1f694029169ad 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.c
+@@ -20,7 +20,6 @@
+ 
+ struct pv_muxing_priv {
+ 	struct vc4_dev *vc4;
+-	struct drm_atomic_state *state;
+ };
+ 
+ static bool check_fifo_conflict(struct kunit *test,
+@@ -677,18 +676,41 @@ static void drm_vc4_test_pv_muxing(struct kunit *test)
+ {
+ 	const struct pv_muxing_param *params = test->param_value;
+ 	const struct pv_muxing_priv *priv = test->priv;
+-	struct drm_atomic_state *state = priv->state;
++	struct drm_modeset_acquire_ctx ctx;
++	struct drm_atomic_state *state;
++	struct drm_device *drm;
++	struct vc4_dev *vc4;
+ 	unsigned int i;
+ 	int ret;
+ 
++	drm_modeset_acquire_init(&ctx, 0);
++
++	vc4 = priv->vc4;
++	drm = &vc4->base;
++
++retry:
++	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 	for (i = 0; i < params->nencoders; i++) {
+ 		enum vc4_encoder_type enc_type = params->encoders[i];
+ 
+ 		ret = vc4_mock_atomic_add_output(test, state, enc_type);
++		if (ret == -EDEADLK) {
++			drm_atomic_state_clear(state);
++			ret = drm_modeset_backoff(&ctx);
++			if (!ret)
++				goto retry;
++		}
+ 		KUNIT_ASSERT_EQ(test, ret, 0);
+ 	}
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry;
++	}
+ 	KUNIT_EXPECT_EQ(test, ret, 0);
+ 
+ 	KUNIT_EXPECT_TRUE(test,
+@@ -700,33 +722,61 @@ static void drm_vc4_test_pv_muxing(struct kunit *test)
+ 		KUNIT_EXPECT_TRUE(test, check_channel_for_encoder(test, state, enc_type,
+ 								  params->check_fn));
+ 	}
++
++	drm_modeset_drop_locks(&ctx);
++	drm_modeset_acquire_fini(&ctx);
+ }
+ 
+ static void drm_vc4_test_pv_muxing_invalid(struct kunit *test)
+ {
+ 	const struct pv_muxing_param *params = test->param_value;
+ 	const struct pv_muxing_priv *priv = test->priv;
+-	struct drm_atomic_state *state = priv->state;
++	struct drm_modeset_acquire_ctx ctx;
++	struct drm_atomic_state *state;
++	struct drm_device *drm;
++	struct vc4_dev *vc4;
+ 	unsigned int i;
+ 	int ret;
+ 
++	drm_modeset_acquire_init(&ctx, 0);
++
++	vc4 = priv->vc4;
++	drm = &vc4->base;
++
++retry:
++	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
++
+ 	for (i = 0; i < params->nencoders; i++) {
+ 		enum vc4_encoder_type enc_type = params->encoders[i];
+ 
+ 		ret = vc4_mock_atomic_add_output(test, state, enc_type);
++		if (ret == -EDEADLK) {
++			drm_atomic_state_clear(state);
++			ret = drm_modeset_backoff(&ctx);
++			if (!ret)
++				goto retry;
++		}
+ 		KUNIT_ASSERT_EQ(test, ret, 0);
+ 	}
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry;
++	}
+ 	KUNIT_EXPECT_LT(test, ret, 0);
++
++	drm_modeset_drop_locks(&ctx);
++	drm_modeset_acquire_fini(&ctx);
+ }
+ 
+ static int vc4_pv_muxing_test_init(struct kunit *test)
+ {
+ 	const struct pv_muxing_param *params = test->param_value;
+-	struct drm_modeset_acquire_ctx ctx;
+ 	struct pv_muxing_priv *priv;
+-	struct drm_device *drm;
+ 	struct vc4_dev *vc4;
+ 
+ 	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
+@@ -737,15 +787,6 @@ static int vc4_pv_muxing_test_init(struct kunit *test)
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4);
+ 	priv->vc4 = vc4;
+ 
+-	drm_modeset_acquire_init(&ctx, 0);
+-
+-	drm = &vc4->base;
+-	priv->state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv->state);
+-
+-	drm_modeset_drop_locks(&ctx);
+-	drm_modeset_acquire_fini(&ctx);
+-
+ 	return 0;
+ }
+ 
+@@ -800,13 +841,26 @@ static void drm_test_vc5_pv_muxing_bugs_subsequent_crtc_enable(struct kunit *tes
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 
+ 	drm = &vc4->base;
++retry_first:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -823,13 +877,26 @@ static void drm_test_vc5_pv_muxing_bugs_subsequent_crtc_enable(struct kunit *tes
+ 	ret = drm_atomic_helper_swap_state(state, false);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
++retry_second:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI1);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -874,16 +941,35 @@ static void drm_test_vc5_pv_muxing_bugs_stable_fifo(struct kunit *test)
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 
+ 	drm = &vc4->base;
++retry_first:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI1);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -908,13 +994,26 @@ static void drm_test_vc5_pv_muxing_bugs_stable_fifo(struct kunit *test)
+ 	ret = drm_atomic_helper_swap_state(state, false);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
++retry_second:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_del_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_hvs_state = vc4_hvs_get_new_global_state(state);
+@@ -968,25 +1067,50 @@ drm_test_vc5_pv_muxing_bugs_subsequent_crtc_enable_too_many_crtc_state(struct ku
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 
+ 	drm = &vc4->base;
++retry_first:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI0);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_first;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+-
+ 	ret = drm_atomic_helper_swap_state(state, false);
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
++retry_second:
+ 	state = drm_kunit_helper_atomic_state_alloc(test, drm, &ctx);
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+ 
+ 	ret = vc4_mock_atomic_add_output(test, state, VC4_ENCODER_TYPE_HDMI1);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	ret = drm_atomic_check_only(state);
++	if (ret == -EDEADLK) {
++		drm_atomic_state_clear(state);
++		ret = drm_modeset_backoff(&ctx);
++		if (!ret)
++			goto retry_second;
++	}
+ 	KUNIT_ASSERT_EQ(test, ret, 0);
+ 
+ 	new_vc4_crtc_state = get_vc4_crtc_state_for_encoder(test, state,
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 37238a12baa58a..176aba27b03d36 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -372,13 +372,13 @@ static void vc4_hdmi_handle_hotplug(struct vc4_hdmi *vc4_hdmi,
+ 	 * the lock for now.
+ 	 */
+ 
++	drm_atomic_helper_connector_hdmi_hotplug(connector, status);
++
+ 	if (status == connector_status_disconnected) {
+ 		cec_phys_addr_invalidate(vc4_hdmi->cec_adap);
+ 		return;
+ 	}
+ 
+-	drm_atomic_helper_connector_hdmi_hotplug(connector, status);
+-
+ 	cec_s_phys_addr(vc4_hdmi->cec_adap,
+ 			connector->display_info.source_physical_address, false);
+ 
+@@ -559,12 +559,6 @@ static int vc4_hdmi_connector_init(struct drm_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = drm_connector_hdmi_audio_init(connector, dev->dev,
+-					    &vc4_hdmi_audio_funcs,
+-					    8, false, -1);
+-	if (ret)
+-		return ret;
+-
+ 	drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs);
+ 
+ 	/*
+@@ -2274,6 +2268,12 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi)
+ 		return ret;
+ 	}
+ 
++	ret = drm_connector_hdmi_audio_init(&vc4_hdmi->connector, dev,
++					    &vc4_hdmi_audio_funcs, 8, false,
++					    -1);
++	if (ret)
++		return ret;
++
+ 	dai_link->cpus		= &vc4_hdmi->audio.cpu;
+ 	dai_link->codecs	= &vc4_hdmi->audio.codec;
+ 	dai_link->platforms	= &vc4_hdmi->audio.platform;
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index 12034ec1202990..8c9898b9055d4c 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -194,7 +194,7 @@ static int vkms_crtc_atomic_check(struct drm_crtc *crtc,
+ 		i++;
+ 	}
+ 
+-	vkms_state->active_planes = kcalloc(i, sizeof(plane), GFP_KERNEL);
++	vkms_state->active_planes = kcalloc(i, sizeof(*vkms_state->active_planes), GFP_KERNEL);
+ 	if (!vkms_state->active_planes)
+ 		return -ENOMEM;
+ 	vkms_state->num_active_planes = i;
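
The vkms fix above replaces sizeof(plane), the size of a loop variable that only happens to match, with sizeof(*vkms_state->active_planes), which derives the element size from the destination array itself. A short sketch of the general idiom, with placeholder names (struct my_item and alloc_items are illustrative):

#include <linux/slab.h>

struct my_item;	/* placeholder element type */

static struct my_item **alloc_items(unsigned int n)
{
	struct my_item **items;

	/*
	 * sizeof(*items) follows the destination's element type, so the
	 * allocation stays correct if that type ever changes; sizing by an
	 * unrelated variable only works by coincidence.
	 */
	items = kcalloc(n, sizeof(*items), GFP_KERNEL);
	return items;
}
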
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index 9b5b8c1f063bb7..f30df3dc871fd1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -51,11 +51,13 @@ static void vmw_bo_release(struct vmw_bo *vbo)
+ 			mutex_lock(&res->dev_priv->cmdbuf_mutex);
+ 			(void)vmw_resource_reserve(res, false, true);
+ 			vmw_resource_mob_detach(res);
++			if (res->dirty)
++				res->func->dirty_free(res);
+ 			if (res->coherent)
+ 				vmw_bo_dirty_release(res->guest_memory_bo);
+ 			res->guest_memory_bo = NULL;
+ 			res->guest_memory_offset = 0;
+-			vmw_resource_unreserve(res, false, false, false, NULL,
++			vmw_resource_unreserve(res, true, false, false, NULL,
+ 					       0);
+ 			mutex_unlock(&res->dev_priv->cmdbuf_mutex);
+ 		}
+@@ -73,9 +75,9 @@ static void vmw_bo_free(struct ttm_buffer_object *bo)
+ {
+ 	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
+ 
+-	WARN_ON(vbo->dirty);
+ 	WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree));
+ 	vmw_bo_release(vbo);
++	WARN_ON(vbo->dirty);
+ 	kfree(vbo);
+ }
+ 
+@@ -848,9 +850,9 @@ void vmw_bo_placement_set_default_accelerated(struct vmw_bo *bo)
+ 	vmw_bo_placement_set(bo, domain, domain);
+ }
+ 
+-void vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
++int vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
+ {
+-	xa_store(&vbo->detached_resources, (unsigned long)res, res, GFP_KERNEL);
++	return xa_err(xa_store(&vbo->detached_resources, (unsigned long)res, res, GFP_KERNEL));
+ }
+ 
+ void vmw_bo_del_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res)
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+index 11e330c7c7f52b..51790a11fe6494 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+@@ -141,7 +141,7 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
+ 			struct ttm_resource *mem);
+ void vmw_bo_swap_notify(struct ttm_buffer_object *bo);
+ 
+-void vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
++int vmw_bo_add_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
+ void vmw_bo_del_detached_resource(struct vmw_bo *vbo, struct vmw_resource *res);
+ struct vmw_surface *vmw_bo_surface(struct vmw_bo *vbo);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 2e52d73eba4840..ea741bc4ac3fc7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -4086,6 +4086,23 @@ static int vmw_execbuf_tie_context(struct vmw_private *dev_priv,
+ 	return 0;
+ }
+ 
++/*
++ * DMA fence callback to remove a seqno_waiter
++ */
++struct seqno_waiter_rm_context {
++	struct dma_fence_cb base;
++	struct vmw_private *dev_priv;
++};
++
++static void seqno_waiter_rm_cb(struct dma_fence *f, struct dma_fence_cb *cb)
++{
++	struct seqno_waiter_rm_context *ctx =
++		container_of(cb, struct seqno_waiter_rm_context, base);
++
++	vmw_seqno_waiter_remove(ctx->dev_priv);
++	kfree(ctx);
++}
++
+ int vmw_execbuf_process(struct drm_file *file_priv,
+ 			struct vmw_private *dev_priv,
+ 			void __user *user_commands, void *kernel_commands,
+@@ -4266,6 +4283,15 @@ int vmw_execbuf_process(struct drm_file *file_priv,
+ 		} else {
+ 			/* Link the fence with the FD created earlier */
+ 			fd_install(out_fence_fd, sync_file->file);
++			struct seqno_waiter_rm_context *ctx =
++				kmalloc(sizeof(*ctx), GFP_KERNEL);
++			ctx->dev_priv = dev_priv;
++			vmw_seqno_waiter_add(dev_priv);
++			if (dma_fence_add_callback(&fence->base, &ctx->base,
++						   seqno_waiter_rm_cb) < 0) {
++				vmw_seqno_waiter_remove(dev_priv);
++				kfree(ctx);
++			}
+ 		}
+ 	}
+ 
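
The execbuf hunk above relies on a dma-fence contract worth spelling out: dma_fence_add_callback() returns a negative error (-ENOENT) when the fence has already signaled, and in that case the callback will never run, so the caller must undo its own side effects. A sketch of the pattern under those assumptions (my_dev, my_waiter_add and my_waiter_remove are placeholders; note the sketch also checks the allocation):

#include <linux/dma-fence.h>
#include <linux/slab.h>

struct my_ctx {
	struct dma_fence_cb base;
	struct my_dev *dev;	/* per-callback payload, placeholder type */
};

static void my_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
	struct my_ctx *ctx = container_of(cb, struct my_ctx, base);

	my_waiter_remove(ctx->dev);	/* drop what was taken when arming */
	kfree(ctx);
}

static void arm_waiter_drop_on_signal(struct dma_fence *fence, struct my_dev *dev)
{
	struct my_ctx *ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return;
	ctx->dev = dev;
	my_waiter_add(dev);
	if (dma_fence_add_callback(fence, &ctx->base, my_cb) < 0) {
		/* -ENOENT: fence already signaled, my_cb() will never run. */
		my_waiter_remove(dev);
		kfree(ctx);
	}
}
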
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index a73af8a355fbf5..c4d5fe5f330f98 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -273,7 +273,7 @@ int vmw_user_resource_lookup_handle(struct vmw_private *dev_priv,
+ 		goto out_bad_resource;
+ 
+ 	res = converter->base_obj_to_res(base);
+-	kref_get(&res->kref);
++	vmw_resource_reference(res);
+ 
+ 	*p_res = res;
+ 	ret = 0;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 5721c74da3e0b9..d7a8070330ba54 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -658,7 +658,7 @@ static void vmw_user_surface_free(struct vmw_resource *res)
+ 	struct vmw_user_surface *user_srf =
+ 	    container_of(srf, struct vmw_user_surface, srf);
+ 
+-	WARN_ON_ONCE(res->dirty);
++	WARN_ON(res->dirty);
+ 	if (user_srf->master)
+ 		drm_master_put(&user_srf->master);
+ 	kfree(srf->offsets);
+@@ -689,8 +689,7 @@ static void vmw_user_surface_base_release(struct ttm_base_object **p_base)
+ 	 * Dumb buffers own the resource and they'll unref the
+ 	 * resource themselves
+ 	 */
+-	if (res && res->guest_memory_bo && res->guest_memory_bo->is_dumb)
+-		return;
++	WARN_ON(res && res->guest_memory_bo && res->guest_memory_bo->is_dumb);
+ 
+ 	vmw_resource_unreference(&res);
+ }
+@@ -871,7 +870,12 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 			vmw_resource_unreference(&res);
+ 			goto out_unlock;
+ 		}
+-		vmw_bo_add_detached_resource(res->guest_memory_bo, res);
++
++		ret = vmw_bo_add_detached_resource(res->guest_memory_bo, res);
++		if (unlikely(ret != 0)) {
++			vmw_resource_unreference(&res);
++			goto out_unlock;
++		}
+ 	}
+ 
+ 	tmp = vmw_resource_reference(&srf->res);
+@@ -1670,6 +1674,14 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 
+ 	}
+ 
++	if (res->guest_memory_bo) {
++		ret = vmw_bo_add_detached_resource(res->guest_memory_bo, res);
++		if (unlikely(ret != 0)) {
++			vmw_resource_unreference(&res);
++			goto out_unlock;
++		}
++	}
++
+ 	tmp = vmw_resource_reference(res);
+ 	ret = ttm_prime_object_init(tfile, res->guest_memory_size, &user_srf->prime,
+ 				    VMW_RES_SURFACE,
+@@ -1684,7 +1696,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 	rep->handle      = user_srf->prime.base.handle;
+ 	rep->backup_size = res->guest_memory_size;
+ 	if (res->guest_memory_bo) {
+-		vmw_bo_add_detached_resource(res->guest_memory_bo, res);
+ 		rep->buffer_map_handle =
+ 			drm_vma_node_offset_addr(&res->guest_memory_bo->tbo.base.vma_node);
+ 		rep->buffer_size = res->guest_memory_bo->tbo.base.size;
+@@ -2358,12 +2369,19 @@ int vmw_dumb_create(struct drm_file *file_priv,
+ 	vbo = res->guest_memory_bo;
+ 	vbo->is_dumb = true;
+ 	vbo->dumb_surface = vmw_res_to_srf(res);
+-
++	drm_gem_object_put(&vbo->tbo.base);
++	/*
++	 * Unset the user surface dtor since this is not actually exposed
++	 * to userspace. The surface is owned via the dumb_buffer's GEM handle
++	 */
++	struct vmw_user_surface *usurf = container_of(vbo->dumb_surface,
++						struct vmw_user_surface, srf);
++	usurf->prime.base.refcount_release = NULL;
+ err:
+ 	if (res)
+ 		vmw_resource_unreference(&res);
+-	if (ret)
+-		ttm_ref_object_base_unref(tfile, arg.rep.handle);
++
++	ttm_ref_object_base_unref(tfile, arg.rep.handle);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
+index 5c2f459a2925a4..9a46aafcb33bae 100644
+--- a/drivers/gpu/drm/xe/Kconfig
++++ b/drivers/gpu/drm/xe/Kconfig
+@@ -2,6 +2,8 @@
+ config DRM_XE
+ 	tristate "Intel Xe Graphics"
+ 	depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
++	depends on INTEL_VSEC || !INTEL_VSEC
++	depends on X86_PLATFORM_DEVICES || !(X86 && ACPI)
+ 	select INTERVAL_TREE
+ 	# we need shmfs for the swappable backing store, and in particular
+ 	# the shmem_readpage() which depends upon tmpfs
+@@ -27,7 +29,6 @@ config DRM_XE
+ 	select BACKLIGHT_CLASS_DEVICE if ACPI
+ 	select INPUT if ACPI
+ 	select ACPI_VIDEO if X86 && ACPI
+-	select X86_PLATFORM_DEVICES if X86 && ACPI
+ 	select ACPI_WMI if X86 && ACPI
+ 	select SYNC_FILE
+ 	select IOSF_MBI
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 64f9c936eea063..5922302c3e00cc 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -816,21 +816,6 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ 		goto out;
+ 	}
+ 
+-	/* Reject BO eviction if BO is bound to current VM. */
+-	if (evict && ctx->resv) {
+-		struct drm_gpuvm_bo *vm_bo;
+-
+-		drm_gem_for_each_gpuvm_bo(vm_bo, &bo->ttm.base) {
+-			struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
+-
+-			if (xe_vm_resv(vm) == ctx->resv &&
+-			    xe_vm_in_preempt_fence_mode(vm)) {
+-				ret = -EBUSY;
+-				goto out;
+-			}
+-		}
+-	}
+-
+ 	/*
+ 	 * Failed multi-hop where the old_mem is still marked as
+ 	 * TTM_PL_FLAG_TEMPORARY, should just be a dummy move.
+@@ -1023,6 +1008,25 @@ static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
+ 	return lret;
+ }
+ 
++static bool
++xe_bo_eviction_valuable(struct ttm_buffer_object *bo, const struct ttm_place *place)
++{
++	struct drm_gpuvm_bo *vm_bo;
++
++	if (!ttm_bo_eviction_valuable(bo, place))
++		return false;
++
++	if (!xe_bo_is_xe_bo(bo))
++		return true;
++
++	drm_gem_for_each_gpuvm_bo(vm_bo, &bo->base) {
++		if (xe_vm_is_validating(gpuvm_to_vm(vm_bo->vm)))
++			return false;
++	}
++
++	return true;
++}
++
+ /**
+  * xe_bo_shrink() - Try to shrink an xe bo.
+  * @ctx: The struct ttm_operation_ctx used for shrinking.
+@@ -1057,7 +1061,7 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
+ 	    (flags.purge && !xe_tt->purgeable))
+ 		return -EBUSY;
+ 
+-	if (!ttm_bo_eviction_valuable(bo, &place))
++	if (!xe_bo_eviction_valuable(bo, &place))
+ 		return -EBUSY;
+ 
+ 	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
+@@ -1418,7 +1422,7 @@ const struct ttm_device_funcs xe_ttm_funcs = {
+ 	.io_mem_pfn = xe_ttm_io_mem_pfn,
+ 	.access_memory = xe_ttm_access_memory,
+ 	.release_notify = xe_ttm_bo_release_notify,
+-	.eviction_valuable = ttm_bo_eviction_valuable,
++	.eviction_valuable = xe_bo_eviction_valuable,
+ 	.delete_mem_notify = xe_ttm_bo_delete_mem_notify,
+ 	.swap_notify = xe_ttm_bo_swap_notify,
+ };
+@@ -2260,6 +2264,8 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 		.no_wait_gpu = false,
+ 		.gfp_retry_mayfail = true,
+ 	};
++	struct pin_cookie cookie;
++	int ret;
+ 
+ 	if (vm) {
+ 		lockdep_assert_held(&vm->lock);
+@@ -2269,8 +2275,12 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ 		ctx.resv = xe_vm_resv(vm);
+ 	}
+ 
++	cookie = xe_vm_set_validating(vm, allow_res_evict);
+ 	trace_xe_bo_validate(bo);
+-	return ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
++	ret = ttm_bo_validate(&bo->ttm, &bo->placement, &ctx);
++	xe_vm_clear_validating(vm, allow_res_evict, cookie);
++
++	return ret;
+ }
+ 
+ bool xe_bo_is_xe_bo(struct ttm_buffer_object *bo)
+@@ -2386,7 +2396,7 @@ typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
+ 					     u64 value);
+ 
+ static const xe_gem_create_set_property_fn gem_create_set_property_funcs[] = {
+-	[DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_set_pxp_type,
++	[DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE] = gem_create_set_pxp_type,
+ };
+ 
+ static int gem_create_user_ext_set_property(struct xe_device *xe,
+diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
+index 604bdc7c817360..552ac92496a408 100644
+--- a/drivers/gpu/drm/xe/xe_gt_freq.c
++++ b/drivers/gpu/drm/xe/xe_gt_freq.c
+@@ -32,13 +32,18 @@
+  * Xe's Freq provides a sysfs API for frequency management:
+  *
+  * device/tile#/gt#/freq0/<item>_freq *read-only* files:
++ *
+  * - act_freq: The actual resolved frequency decided by PCODE.
+  * - cur_freq: The current one requested by GuC PC to the PCODE.
+  * - rpn_freq: The Render Performance (RP) N level, which is the minimal one.
+ * - rpa_freq: The Render Performance (RP) A level, which is the achievable one.
+ *   Calculated by PCODE at runtime based on multiple running conditions.
+ * - rpe_freq: The Render Performance (RP) E level, which is the efficient one.
+ *   Calculated by PCODE at runtime based on multiple running conditions.
+  * - rp0_freq: The Render Performance (RP) 0 level, which is the maximum one.
+  *
+  * device/tile#/gt#/freq0/<item>_freq *read-write* files:
++ *
+  * - min_freq: Min frequency request.
+  * - max_freq: Max frequency request.
+  *             If max <= min, then freq_min becomes a fixed frequency request.
+diff --git a/drivers/gpu/drm/xe/xe_guc_debugfs.c b/drivers/gpu/drm/xe/xe_guc_debugfs.c
+index c569ff456e7416..0b102ab46c4df2 100644
+--- a/drivers/gpu/drm/xe/xe_guc_debugfs.c
++++ b/drivers/gpu/drm/xe/xe_guc_debugfs.c
+@@ -17,101 +17,130 @@
+ #include "xe_macros.h"
+ #include "xe_pm.h"
+ 
+-static struct xe_guc *node_to_guc(struct drm_info_node *node)
+-{
+-	return node->info_ent->data;
+-}
+-
+-static int guc_info(struct seq_file *m, void *data)
++/*
++ * guc_debugfs_show - A show callback for struct drm_info_list
++ * @m: the &seq_file
++ * @data: data used by the drm debugfs helpers
++ *
++ * This callback can be used in struct drm_info_list to describe debugfs
++ * files that are &xe_guc specific, in a similar way to how we handle &xe_gt
++ * specific files using &xe_gt_debugfs_simple_show.
++ *
++ * It is assumed that those debugfs files will be created on a directory
++ * entry whose grandparent struct dentry's d_inode->i_private points to &xe_gt.
++ *
++ *      /sys/kernel/debug/dri/0/
++ *      ├── gt0			# dent->d_parent->d_parent (d_inode->i_private == gt)
++ *      │   ├── uc		# dent->d_parent
++ *      │   │   ├── guc_info	# dent
++ *      │   │   ├── guc_...
++ *
++ * This function assumes that &m->private will be set to the &struct
++ * drm_info_node corresponding to the instance of the info on a given &struct
++ * drm_minor (see struct drm_info_list.show for details).
++ *
++ * This function also assumes that struct drm_info_list.data will point to the
++ * function that will actually print the file content::
++ *
++ *    int (*print)(struct xe_guc *, struct drm_printer *)
++ *
++ * Example::
++ *
++ *    int foo(struct xe_guc *guc, struct drm_printer *p)
++ *    {
++ *        drm_printf(p, "enabled %d\n", guc->submission_state.enabled);
++ *        return 0;
++ *    }
++ *
++ *    static const struct drm_info_list bar[] = {
++ *        { name = "foo", .show = guc_debugfs_show, .data = foo },
++ *    };
++ *
++ *    parent = debugfs_create_dir("uc", gtdir);
++ *    drm_debugfs_create_files(bar, ARRAY_SIZE(bar), parent, minor);
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++static int guc_debugfs_show(struct seq_file *m, void *data)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+ 	struct drm_printer p = drm_seq_file_printer(m);
++	struct drm_info_node *node = m->private;
++	struct dentry *parent = node->dent->d_parent;
++	struct dentry *grandparent = parent->d_parent;
++	struct xe_gt *gt = grandparent->d_inode->i_private;
++	struct xe_device *xe = gt_to_xe(gt);
++	int (*print)(struct xe_guc *, struct drm_printer *) = node->info_ent->data;
++	int ret;
+ 
+ 	xe_pm_runtime_get(xe);
+-	xe_guc_print_info(guc, &p);
++	ret = print(&gt->uc.guc, &p);
+ 	xe_pm_runtime_put(xe);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+-static int guc_log(struct seq_file *m, void *data)
++static int guc_log(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-	struct drm_printer p = drm_seq_file_printer(m);
+-
+-	xe_pm_runtime_get(xe);
+-	xe_guc_log_print(&guc->log, &p);
+-	xe_pm_runtime_put(xe);
+-
++	xe_guc_log_print(&guc->log, p);
+ 	return 0;
+ }
+ 
+-static int guc_log_dmesg(struct seq_file *m, void *data)
++static int guc_log_dmesg(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-
+-	xe_pm_runtime_get(xe);
+ 	xe_guc_log_print_dmesg(&guc->log);
+-	xe_pm_runtime_put(xe);
+-
+ 	return 0;
+ }
+ 
+-static int guc_ctb(struct seq_file *m, void *data)
++static int guc_ctb(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-	struct drm_printer p = drm_seq_file_printer(m);
+-
+-	xe_pm_runtime_get(xe);
+-	xe_guc_ct_print(&guc->ct, &p, true);
+-	xe_pm_runtime_put(xe);
+-
++	xe_guc_ct_print(&guc->ct, p, true);
+ 	return 0;
+ }
+ 
+-static int guc_pc(struct seq_file *m, void *data)
++static int guc_pc(struct xe_guc *guc, struct drm_printer *p)
+ {
+-	struct xe_guc *guc = node_to_guc(m->private);
+-	struct xe_device *xe = guc_to_xe(guc);
+-	struct drm_printer p = drm_seq_file_printer(m);
+-
+-	xe_pm_runtime_get(xe);
+-	xe_guc_pc_print(&guc->pc, &p);
+-	xe_pm_runtime_put(xe);
+-
++	xe_guc_pc_print(&guc->pc, p);
+ 	return 0;
+ }
+ 
+-static const struct drm_info_list debugfs_list[] = {
+-	{"guc_info", guc_info, 0},
+-	{"guc_log", guc_log, 0},
+-	{"guc_log_dmesg", guc_log_dmesg, 0},
+-	{"guc_ctb", guc_ctb, 0},
+-	{"guc_pc", guc_pc, 0},
++/*
++ * only for GuC debugfs files which can be safely used on the VF as well:
++ * - without access to the GuC privileged registers
++ * - without access to the PF specific GuC objects
++ */
++static const struct drm_info_list vf_safe_debugfs_list[] = {
++	{ "guc_info", .show = guc_debugfs_show, .data = xe_guc_print_info },
++	{ "guc_ctb", .show = guc_debugfs_show, .data = guc_ctb },
++};
++
++/* For GuC debugfs files that require SLPC support */
++static const struct drm_info_list slpc_debugfs_list[] = {
++	{ "guc_pc", .show = guc_debugfs_show, .data = guc_pc },
++};
++
++/* everything else should be added here */
++static const struct drm_info_list pf_only_debugfs_list[] = {
++	{ "guc_log", .show = guc_debugfs_show, .data = guc_log },
++	{ "guc_log_dmesg", .show = guc_debugfs_show, .data = guc_log_dmesg },
+ };
+ 
+ void xe_guc_debugfs_register(struct xe_guc *guc, struct dentry *parent)
+ {
+-	struct drm_minor *minor = guc_to_xe(guc)->drm.primary;
+-	struct drm_info_list *local;
+-	int i;
+-
+-#define DEBUGFS_SIZE	(ARRAY_SIZE(debugfs_list) * sizeof(struct drm_info_list))
+-	local = drmm_kmalloc(&guc_to_xe(guc)->drm, DEBUGFS_SIZE, GFP_KERNEL);
+-	if (!local)
+-		return;
++	struct xe_device *xe = guc_to_xe(guc);
++	struct drm_minor *minor = xe->drm.primary;
+ 
+-	memcpy(local, debugfs_list, DEBUGFS_SIZE);
+-#undef DEBUGFS_SIZE
++	drm_debugfs_create_files(vf_safe_debugfs_list,
++				 ARRAY_SIZE(vf_safe_debugfs_list),
++				 parent, minor);
+ 
+-	for (i = 0; i < ARRAY_SIZE(debugfs_list); ++i)
+-		local[i].data = guc;
++	if (!IS_SRIOV_VF(xe)) {
++		drm_debugfs_create_files(pf_only_debugfs_list,
++					 ARRAY_SIZE(pf_only_debugfs_list),
++					 parent, minor);
+ 
+-	drm_debugfs_create_files(local,
+-				 ARRAY_SIZE(debugfs_list),
+-				 parent, minor);
++		if (!xe->info.skip_guc_pc)
++			drm_debugfs_create_files(slpc_debugfs_list,
++						 ARRAY_SIZE(slpc_debugfs_list),
++						 parent, minor);
++	}
+ }
+diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
+index 03bfba696b3789..16e20b5ad325f9 100644
+--- a/drivers/gpu/drm/xe/xe_lrc.c
++++ b/drivers/gpu/drm/xe/xe_lrc.c
+@@ -941,11 +941,18 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
+  * store it in the PPHSWP.
+  */
+ #define CONTEXT_ACTIVE 1ULL
+-static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
++static int xe_lrc_setup_utilization(struct xe_lrc *lrc)
+ {
+-	u32 *cmd;
++	u32 *cmd, *buf = NULL;
+ 
+-	cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
++	if (lrc->bb_per_ctx_bo->vmap.is_iomem) {
++		buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL);
++		if (!buf)
++			return -ENOMEM;
++		cmd = buf;
++	} else {
++		cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
++	}
+ 
+ 	*cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
+ 	*cmd++ = ENGINE_ID(0).addr;
+@@ -966,9 +973,16 @@ static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
+ 
+ 	*cmd++ = MI_BATCH_BUFFER_END;
+ 
++	if (buf) {
++		xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0,
++				 buf, (cmd - buf) * sizeof(*cmd));
++		kfree(buf);
++	}
++
+ 	xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR,
+ 			     xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1);
+ 
++	return 0;
+ }
+ 
+ #define PVC_CTX_ASID		(0x2e + 1)
+@@ -1123,7 +1137,9 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
+ 	map = __xe_lrc_start_seqno_map(lrc);
+ 	xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1);
+ 
+-	xe_lrc_setup_utilization(lrc);
++	err = xe_lrc_setup_utilization(lrc);
++	if (err)
++		goto err_lrc_finish;
+ 
+ 	return 0;
+ 
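
The xe_lrc change above follows a general rule: plain CPU stores through a vmap that may be I/O memory are not portable, so the batch-buffer commands are staged in a kmalloc'd buffer and copied out with the map-aware helper. A generic sketch of the same pattern using memcpy_toio() (fill_payload is a placeholder for the real contents):

#include <linux/io.h>
#include <linux/slab.h>

static int write_payload(void __iomem *dst, size_t size)
{
	u32 *buf;

	/* Build in system memory, where plain stores are always valid... */
	buf = kmalloc(size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	fill_payload(buf, size);

	/* ...then copy out with the I/O-memory-aware helper. */
	memcpy_toio(dst, buf, size);
	kfree(buf);
	return 0;
}
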
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index f4d108dc49b1b1..30f7ce06c89690 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -922,6 +922,7 @@ static int xe_pci_suspend(struct device *dev)
+ 
+ 	pci_save_state(pdev);
+ 	pci_disable_device(pdev);
++	pci_set_power_state(pdev, PCI_D3cold);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
+index 454ea7dc08ac83..b5bc15f436fa2d 100644
+--- a/drivers/gpu/drm/xe/xe_pxp.c
++++ b/drivers/gpu/drm/xe/xe_pxp.c
+@@ -541,10 +541,14 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
+ 	 */
+ 	xe_pm_runtime_get(pxp->xe);
+ 
+-	if (!pxp_prerequisites_done(pxp)) {
+-		ret = -EBUSY;
++	/* get_readiness_status() returns 0 for in-progress and 1 for done */
++	ret = xe_pxp_get_readiness_status(pxp);
++	if (ret <= 0) {
++		if (!ret)
++			ret = -EBUSY;
+ 		goto out;
+ 	}
++	ret = 0;
+ 
+ wait_for_idle:
+ 	/*
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 367c84b90e9ef7..737172013a8f9e 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -1681,10 +1681,16 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	if (flags & XE_VM_FLAG_LR_MODE)
+ 		xe_pm_runtime_get_noresume(xe);
+ 
++	if (flags & XE_VM_FLAG_FAULT_MODE) {
++		err = xe_svm_init(vm);
++		if (err)
++			goto err_no_resv;
++	}
++
+ 	vm_resv_obj = drm_gpuvm_resv_object_alloc(&xe->drm);
+ 	if (!vm_resv_obj) {
+ 		err = -ENOMEM;
+-		goto err_no_resv;
++		goto err_svm_fini;
+ 	}
+ 
+ 	drm_gpuvm_init(&vm->gpuvm, "Xe VM", DRM_GPUVM_RESV_PROTECTED, &xe->drm,
+@@ -1757,12 +1763,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 		}
+ 	}
+ 
+-	if (flags & XE_VM_FLAG_FAULT_MODE) {
+-		err = xe_svm_init(vm);
+-		if (err)
+-			goto err_close;
+-	}
+-
+ 	if (number_tiles > 1)
+ 		vm->composite_fence_ctx = dma_fence_context_alloc(1);
+ 
+@@ -1776,6 +1776,11 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
+ 	xe_vm_close_and_put(vm);
+ 	return ERR_PTR(err);
+ 
++err_svm_fini:
++	if (flags & XE_VM_FLAG_FAULT_MODE) {
++		vm->size = 0; /* close the vm */
++		xe_svm_fini(vm);
++	}
+ err_no_resv:
+ 	mutex_destroy(&vm->snap_mutex);
+ 	for_each_tile(tile, xe, id)
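
The xe_vm_create() reordering above moves xe_svm_init() before the reservation-object allocation so that a failure can still be unwound through plain error labels instead of the much heavier xe_vm_close_and_put() path. The underlying idiom is the kernel's label-per-stage unwind, sketched here with placeholder stages (struct thing, init_a/fini_a and friends are illustrative):

static int setup(struct thing *t)
{
	int err;

	err = init_a(t);
	if (err)
		return err;

	err = init_b(t);
	if (err)
		goto err_a;

	err = init_c(t);
	if (err)
		goto err_b;

	return 0;

	/* Unwind in strict reverse order of initialization. */
err_b:
	fini_b(t);
err_a:
	fini_a(t);
	return err;
}
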
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index 0ef811fc2bdeee..494af6bdc646b4 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -301,6 +301,75 @@ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
+ void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
+ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
+ 
++/**
++ * xe_vm_set_validating() - Register this task as currently making bos resident
++ * @vm: Pointer to the vm or NULL.
++ * @allow_res_evict: Allow eviction of buffer objects bound to @vm when
++ * validating.
++ *
++ * Register this task as currently making bos resident for the vm. Intended
++ * to avoid eviction by the same task of shared bos bound to the vm.
++ * Call with the vm's resv lock held.
++ *
++ * Return: A pin cookie that should be used for xe_vm_clear_validating().
++ */
++static inline struct pin_cookie xe_vm_set_validating(struct xe_vm *vm,
++						     bool allow_res_evict)
++{
++	struct pin_cookie cookie = {};
++
++	if (vm && !allow_res_evict) {
++		xe_vm_assert_held(vm);
++		cookie = lockdep_pin_lock(&xe_vm_resv(vm)->lock.base);
++		/* Pairs with READ_ONCE in xe_vm_is_validating() */
++		WRITE_ONCE(vm->validating, current);
++	}
++
++	return cookie;
++}
++
++/**
++ * xe_vm_clear_validating() - Unregister this task as currently making bos resident
++ * @vm: Pointer to the vm or NULL.
++ * @allow_res_evict: Eviction from @vm was allowed. Must be set to the same
++ * value as for xe_vm_set_validating().
++ * @cookie: Cookie obtained from xe_vm_set_validating().
++ *
++ * Unregister this task as currently making bos resident for the vm. Intended
++ * to avoid eviction by the same task of shared bos bound to the vm.
++ * Call with the vm's resv lock held.
++ */
++static inline void xe_vm_clear_validating(struct xe_vm *vm, bool allow_res_evict,
++					  struct pin_cookie cookie)
++{
++	if (vm && !allow_res_evict) {
++		lockdep_unpin_lock(&xe_vm_resv(vm)->lock.base, cookie);
++		/* Pairs with READ_ONCE in xe_vm_is_validating() */
++		WRITE_ONCE(vm->validating, NULL);
++	}
++}
++
++/**
++ * xe_vm_is_validating() - Whether bos bound to the vm are currently being made resident
++ * by the current task.
++ * @vm: Pointer to the vm.
++ *
++ * If this function returns %true, we should be in a vm resv locked region, since
++ * the current process is the same task that called xe_vm_set_validating().
++ * The function asserts that that's indeed the case.
++ *
++ * Return: %true if the task is currently making bos resident, %false otherwise.
++ */
++static inline bool xe_vm_is_validating(struct xe_vm *vm)
++{
++	/* Pairs with WRITE_ONCE in xe_vm_set_validating() */
++	if (READ_ONCE(vm->validating) == current) {
++		xe_vm_assert_held(vm);
++		return true;
++	}
++	return false;
++}
++
+ #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+ void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
+ #else
+diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
+index 84fa41b9fa20f3..0882674ce1cbab 100644
+--- a/drivers/gpu/drm/xe/xe_vm_types.h
++++ b/drivers/gpu/drm/xe/xe_vm_types.h
+@@ -310,6 +310,14 @@ struct xe_vm {
+ 	 * protected by the vm resv.
+ 	 */
+ 	u64 tlb_flush_seqno;
++	/**
++	 * @validating: The task that is currently making bos resident for this vm.
++	 * Protected by the VM's resv for writing. Opportunistic reading can be done
++	 * using READ_ONCE. Note: This is a workaround for the
++	 * TTM eviction_valuable() callback not being passed a struct
++	 * ttm_operation_ctx. Future work might want to address this.
++	 */
++	struct task_struct *validating;
+ 	/** @batch_invalidate_tlb: Always invalidate TLB before batch start */
+ 	bool batch_invalidate_tlb;
+ 	/** @xef: XE file handle for tracking this VM's drm client */
+diff --git a/drivers/gpu/drm/xlnx/Kconfig b/drivers/gpu/drm/xlnx/Kconfig
+index dbecca9bdd544f..cfabf5e2a0bb0a 100644
+--- a/drivers/gpu/drm/xlnx/Kconfig
++++ b/drivers/gpu/drm/xlnx/Kconfig
+@@ -22,6 +22,7 @@ config DRM_ZYNQMP_DPSUB_AUDIO
+ 	bool "ZynqMP DisplayPort Audio Support"
+ 	depends on DRM_ZYNQMP_DPSUB
+ 	depends on SND && SND_SOC
++	depends on SND_SOC=y || DRM_ZYNQMP_DPSUB=m
+ 	select SND_SOC_GENERIC_DMAENGINE_PCM
+ 	help
+ 	  Choose this option to enable DisplayPort audio support in the ZynqMP
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index a503252702b7b4..43859fc757470c 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -151,6 +151,7 @@ config HID_APPLEIR
+ config HID_APPLETB_BL
+ 	tristate "Apple Touch Bar Backlight"
+ 	depends on BACKLIGHT_CLASS_DEVICE
++	depends on X86 || COMPILE_TEST
+ 	help
+ 	  Say Y here if you want support for the backlight of Touch Bars on x86
+ 	  MacBook Pros.
+@@ -163,6 +164,7 @@ config HID_APPLETB_KBD
+ 	depends on USB_HID
+ 	depends on BACKLIGHT_CLASS_DEVICE
+ 	depends on INPUT
++	depends on X86 || COMPILE_TEST
+ 	select INPUT_SPARSEKMAP
+ 	select HID_APPLETB_BL
+ 	help
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index 0fb210e40a4127..9eafff0b6ea4c3 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -192,7 +192,7 @@ static void mousevsc_on_receive_device_info(struct mousevsc_dev *input_device,
+ 		goto cleanup;
+ 
+ 	input_device->report_desc_size = le16_to_cpu(
+-					desc->desc[0].wDescriptorLength);
++					desc->rpt_desc.wDescriptorLength);
+ 	if (input_device->report_desc_size == 0) {
+ 		input_device->dev_info_status = -EINVAL;
+ 		goto cleanup;
+@@ -210,7 +210,7 @@ static void mousevsc_on_receive_device_info(struct mousevsc_dev *input_device,
+ 
+ 	memcpy(input_device->report_desc,
+ 	       ((unsigned char *)desc) + desc->bLength,
+-	       le16_to_cpu(desc->desc[0].wDescriptorLength));
++	       le16_to_cpu(desc->rpt_desc.wDescriptorLength));
+ 
+ 	/* Send the ack */
+ 	memset(&ack, 0, sizeof(struct mousevsc_prt_msg));
+diff --git a/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c b/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
+index fa51155ebe3937..8a8c4a46f92700 100644
+--- a/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
++++ b/drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
+@@ -82,15 +82,10 @@ static int quicki2c_acpi_get_dsd_property(struct acpi_device *adev, acpi_string
+ {
+ 	acpi_handle handle = acpi_device_handle(adev);
+ 	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+-	union acpi_object obj = { .type = type };
+-	struct acpi_object_list arg_list = {
+-		.count = 1,
+-		.pointer = &obj,
+-	};
+ 	union acpi_object *ret_obj;
+ 	acpi_status status;
+ 
+-	status = acpi_evaluate_object(handle, dsd_method_name, &arg_list, &buffer);
++	status = acpi_evaluate_object(handle, dsd_method_name, NULL, &buffer);
+ 	if (ACPI_FAILURE(status)) {
+ 		acpi_handle_err(handle,
+ 				"Can't evaluate %s method: %d\n", dsd_method_name, status);
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index 7d9297fad90ea7..d4cbecc668ec02 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -984,12 +984,11 @@ static int usbhid_parse(struct hid_device *hid)
+ 	struct usb_host_interface *interface = intf->cur_altsetting;
+ 	struct usb_device *dev = interface_to_usbdev (intf);
+ 	struct hid_descriptor *hdesc;
++	struct hid_class_descriptor *hcdesc;
+ 	u32 quirks = 0;
+ 	unsigned int rsize = 0;
+ 	char *rdesc;
+-	int ret, n;
+-	int num_descriptors;
+-	size_t offset = offsetof(struct hid_descriptor, desc);
++	int ret;
+ 
+ 	quirks = hid_lookup_quirk(hid);
+ 
+@@ -1011,20 +1010,19 @@ static int usbhid_parse(struct hid_device *hid)
+ 		return -ENODEV;
+ 	}
+ 
+-	if (hdesc->bLength < sizeof(struct hid_descriptor)) {
+-		dbg_hid("hid descriptor is too short\n");
++	if (!hdesc->bNumDescriptors ||
++	    hdesc->bLength != sizeof(*hdesc) +
++			      (hdesc->bNumDescriptors - 1) * sizeof(*hcdesc)) {
++		dbg_hid("hid descriptor invalid, bLen=%hhu bNum=%hhu\n",
++			hdesc->bLength, hdesc->bNumDescriptors);
+ 		return -EINVAL;
+ 	}
+ 
+ 	hid->version = le16_to_cpu(hdesc->bcdHID);
+ 	hid->country = hdesc->bCountryCode;
+ 
+-	num_descriptors = min_t(int, hdesc->bNumDescriptors,
+-	       (hdesc->bLength - offset) / sizeof(struct hid_class_descriptor));
+-
+-	for (n = 0; n < num_descriptors; n++)
+-		if (hdesc->desc[n].bDescriptorType == HID_DT_REPORT)
+-			rsize = le16_to_cpu(hdesc->desc[n].wDescriptorLength);
++	if (hdesc->rpt_desc.bDescriptorType == HID_DT_REPORT)
++		rsize = le16_to_cpu(hdesc->rpt_desc.wDescriptorLength);
+ 
+ 	if (!rsize || rsize > HID_MAX_DESCRIPTOR_SIZE) {
+ 		dbg_hid("weird size of report descriptor (%u)\n", rsize);
+@@ -1052,6 +1050,11 @@ static int usbhid_parse(struct hid_device *hid)
+ 		goto err;
+ 	}
+ 
++	if (hdesc->bNumDescriptors > 1)
++		hid_warn(intf,
++			"%u unsupported optional hid class descriptors\n",
++			(int)(hdesc->bNumDescriptors - 1));
++
+ 	hid->quirks |= quirks;
+ 
+ 	return 0;
+diff --git a/drivers/hwmon/asus-ec-sensors.c b/drivers/hwmon/asus-ec-sensors.c
+index 006ced5ab6e6ad..c7c02a1f55d459 100644
+--- a/drivers/hwmon/asus-ec-sensors.c
++++ b/drivers/hwmon/asus-ec-sensors.c
+@@ -933,6 +933,10 @@ static int asus_ec_hwmon_read_string(struct device *dev,
+ {
+ 	struct ec_sensors_data *state = dev_get_drvdata(dev);
+ 	int sensor_index = find_ec_sensor_index(state, type, channel);
++
++	if (sensor_index < 0)
++		return sensor_index;
++
+ 	*str = get_sensor_info(state, sensor_index)->label;
+ 
+ 	return 0;
+diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
+index fa170c966bc3be..d4e2e175e07700 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.c
++++ b/drivers/hwtracing/coresight/coresight-catu.c
+@@ -458,12 +458,17 @@ static int catu_enable_hw(struct catu_drvdata *drvdata, enum cs_mode cs_mode,
+ static int catu_enable(struct coresight_device *csdev, enum cs_mode mode,
+ 		       void *data)
+ {
+-	int rc;
++	int rc = 0;
+ 	struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev);
+ 
+-	CS_UNLOCK(catu_drvdata->base);
+-	rc = catu_enable_hw(catu_drvdata, mode, data);
+-	CS_LOCK(catu_drvdata->base);
++	guard(raw_spinlock_irqsave)(&catu_drvdata->spinlock);
++	if (csdev->refcnt == 0) {
++		CS_UNLOCK(catu_drvdata->base);
++		rc = catu_enable_hw(catu_drvdata, mode, data);
++		CS_LOCK(catu_drvdata->base);
++	}
++	if (!rc)
++		csdev->refcnt++;
+ 	return rc;
+ }
+ 
+@@ -486,12 +491,15 @@ static int catu_disable_hw(struct catu_drvdata *drvdata)
+ 
+ static int catu_disable(struct coresight_device *csdev, void *__unused)
+ {
+-	int rc;
++	int rc = 0;
+ 	struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev);
+ 
+-	CS_UNLOCK(catu_drvdata->base);
+-	rc = catu_disable_hw(catu_drvdata);
+-	CS_LOCK(catu_drvdata->base);
++	guard(raw_spinlock_irqsave)(&catu_drvdata->spinlock);
++	if (--csdev->refcnt == 0) {
++		CS_UNLOCK(catu_drvdata->base);
++		rc = catu_disable_hw(catu_drvdata);
++		CS_LOCK(catu_drvdata->base);
++	}
+ 	return rc;
+ }
+ 
+@@ -550,6 +558,7 @@ static int __catu_probe(struct device *dev, struct resource *res)
+ 	dev->platform_data = pdata;
+ 
+ 	drvdata->base = base;
++	raw_spin_lock_init(&drvdata->spinlock);
+ 	catu_desc.access = CSDEV_ACCESS_IOMEM(base);
+ 	catu_desc.pdata = pdata;
+ 	catu_desc.dev = dev;
+@@ -702,7 +711,7 @@ static int __init catu_init(void)
+ {
+ 	int ret;
+ 
+-	ret = coresight_init_driver("catu", &catu_driver, &catu_platform_driver);
++	ret = coresight_init_driver("catu", &catu_driver, &catu_platform_driver, THIS_MODULE);
+ 	tmc_etr_set_catu_ops(&etr_catu_buf_ops);
+ 	return ret;
+ }
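
The CATU hunks above combine two idioms: scope-based locking with guard() from <linux/cleanup.h>, which releases the lock automatically at every exit from the scope, and a reference count so that only the first enable and the last disable actually touch the hardware. A compact sketch under those assumptions (struct my_dev and the *_hw helpers are placeholders):

#include <linux/cleanup.h>
#include <linux/spinlock.h>

struct my_dev {
	raw_spinlock_t lock;
	int refcnt;
};

static int my_enable(struct my_dev *d)
{
	int rc = 0;

	/* The lock is dropped automatically on every return below. */
	guard(raw_spinlock_irqsave)(&d->lock);
	if (d->refcnt == 0)
		rc = my_enable_hw(d);	/* first user programs the hardware */
	if (!rc)
		d->refcnt++;
	return rc;
}

static int my_disable(struct my_dev *d)
{
	int rc = 0;

	guard(raw_spinlock_irqsave)(&d->lock);
	if (--d->refcnt == 0)
		rc = my_disable_hw(d);	/* last user shuts it down */
	return rc;
}
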
+diff --git a/drivers/hwtracing/coresight/coresight-catu.h b/drivers/hwtracing/coresight/coresight-catu.h
+index 141feac1c14b08..755776cd19c5bb 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.h
++++ b/drivers/hwtracing/coresight/coresight-catu.h
+@@ -65,6 +65,7 @@ struct catu_drvdata {
+ 	void __iomem *base;
+ 	struct coresight_device *csdev;
+ 	int irq;
++	raw_spinlock_t spinlock;
+ };
+ 
+ #define CATU_REG32(name, offset)					\
+diff --git a/drivers/hwtracing/coresight/coresight-config.h b/drivers/hwtracing/coresight/coresight-config.h
+index b9ebc9fcfb7f20..90fd937d3bd837 100644
+--- a/drivers/hwtracing/coresight/coresight-config.h
++++ b/drivers/hwtracing/coresight/coresight-config.h
+@@ -228,7 +228,7 @@ struct cscfg_feature_csdev {
+  * @feats_csdev:references to the device features to enable.
+  */
+ struct cscfg_config_csdev {
+-	const struct cscfg_config_desc *config_desc;
++	struct cscfg_config_desc *config_desc;
+ 	struct coresight_device *csdev;
+ 	bool enabled;
+ 	struct list_head node;
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index fb43ef6a3b1f0d..d3523f0262af82 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -465,7 +465,7 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
+ 		/* Enable all helpers adjacent to the path first */
+ 		ret = coresight_enable_helpers(csdev, mode, path);
+ 		if (ret)
+-			goto err;
++			goto err_disable_path;
+ 		/*
+ 		 * ETF devices are tricky... They can be a link or a sink,
+ 		 * depending on how they are configured.  If an ETF has been
+@@ -486,8 +486,10 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
+ 			 * that need disabling. Disabling the path here
+ 			 * would mean we could disrupt an existing session.
+ 			 */
+-			if (ret)
++			if (ret) {
++				coresight_disable_helpers(csdev, path);
+ 				goto out;
++			}
+ 			break;
+ 		case CORESIGHT_DEV_TYPE_SOURCE:
+ 			/* sources are enabled from either sysFS or Perf */
+@@ -497,16 +499,19 @@ int coresight_enable_path(struct coresight_path *path, enum cs_mode mode,
+ 			child = list_next_entry(nd, link)->csdev;
+ 			ret = coresight_enable_link(csdev, parent, child, source);
+ 			if (ret)
+-				goto err;
++				goto err_disable_helpers;
+ 			break;
+ 		default:
+-			goto err;
++			ret = -EINVAL;
++			goto err_disable_helpers;
+ 		}
+ 	}
+ 
+ out:
+ 	return ret;
+-err:
++err_disable_helpers:
++	coresight_disable_helpers(csdev, path);
++err_disable_path:
+ 	coresight_disable_path_from(path, nd);
+ 	goto out;
+ }
+@@ -1585,17 +1590,17 @@ module_init(coresight_init);
+ module_exit(coresight_exit);
+ 
+ int coresight_init_driver(const char *drv, struct amba_driver *amba_drv,
+-			  struct platform_driver *pdev_drv)
++			  struct platform_driver *pdev_drv, struct module *owner)
+ {
+ 	int ret;
+ 
+-	ret = amba_driver_register(amba_drv);
++	ret = __amba_driver_register(amba_drv, owner);
+ 	if (ret) {
+ 		pr_err("%s: error registering AMBA driver\n", drv);
+ 		return ret;
+ 	}
+ 
+-	ret = platform_driver_register(pdev_drv);
++	ret = __platform_driver_register(pdev_drv, owner);
+ 	if (!ret)
+ 		return 0;
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+index 342c3aaf414dd8..a871d997330b09 100644
+--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+@@ -774,7 +774,8 @@ static struct platform_driver debug_platform_driver = {
+ 
+ static int __init debug_init(void)
+ {
+-	return coresight_init_driver("debug", &debug_driver, &debug_platform_driver);
++	return coresight_init_driver("debug", &debug_driver, &debug_platform_driver,
++				     THIS_MODULE);
+ }
+ 
+ static void __exit debug_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 2b8f1046384020..88ef381ee6dd9b 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -1020,6 +1020,9 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
+ 	smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
+ 
+ 	raw_spin_unlock(&drvdata->spinlock);
++
++	cscfg_csdev_disable_active_config(csdev);
++
+ 	cpus_read_unlock();
+ 
+ 	/*
+@@ -1176,7 +1179,7 @@ static void cpu_detect_trace_filtering(struct etmv4_drvdata *drvdata)
+ 	 * tracing at the kernel EL and EL0, forcing to use the
+ 	 * virtual time as the timestamp.
+ 	 */
+-	trfcr = (TRFCR_EL1_TS_VIRTUAL |
++	trfcr = (FIELD_PREP(TRFCR_EL1_TS_MASK, TRFCR_EL1_TS_VIRTUAL) |
+ 		 TRFCR_EL1_ExTRE |
+ 		 TRFCR_EL1_E0TRE);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index fdd0956fecb36d..c3ca904de584d7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -2320,11 +2320,11 @@ static ssize_t ts_source_show(struct device *dev,
+ 		goto out;
+ 	}
+ 
+-	switch (drvdata->trfcr & TRFCR_EL1_TS_MASK) {
++	val = FIELD_GET(TRFCR_EL1_TS_MASK, drvdata->trfcr);
++	switch (val) {
+ 	case TRFCR_EL1_TS_VIRTUAL:
+ 	case TRFCR_EL1_TS_GUEST_PHYSICAL:
+ 	case TRFCR_EL1_TS_PHYSICAL:
+-		val = FIELD_GET(TRFCR_EL1_TS_MASK, drvdata->trfcr);
+ 		break;
+ 	default:
+ 		val = -1;
+diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
+index 0541712b2bcb69..124fc2e26cfb1a 100644
+--- a/drivers/hwtracing/coresight/coresight-funnel.c
++++ b/drivers/hwtracing/coresight/coresight-funnel.c
+@@ -433,7 +433,8 @@ static struct amba_driver dynamic_funnel_driver = {
+ 
+ static int __init funnel_init(void)
+ {
+-	return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver);
++	return coresight_init_driver("funnel", &dynamic_funnel_driver, &funnel_driver,
++				     THIS_MODULE);
+ }
+ 
+ static void __exit funnel_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
+index ee7ee79f6cf775..572dcd2bac16d9 100644
+--- a/drivers/hwtracing/coresight/coresight-replicator.c
++++ b/drivers/hwtracing/coresight/coresight-replicator.c
+@@ -438,7 +438,8 @@ static struct amba_driver dynamic_replicator_driver = {
+ 
+ static int __init replicator_init(void)
+ {
+-	return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver);
++	return coresight_init_driver("replicator", &dynamic_replicator_driver, &replicator_driver,
++				     THIS_MODULE);
+ }
+ 
+ static void __exit replicator_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
+index 26f9339f38b938..527347e4d16c5d 100644
+--- a/drivers/hwtracing/coresight/coresight-stm.c
++++ b/drivers/hwtracing/coresight/coresight-stm.c
+@@ -1058,7 +1058,7 @@ static struct platform_driver stm_platform_driver = {
+ 
+ static int __init stm_init(void)
+ {
+-	return coresight_init_driver("stm", &stm_driver, &stm_platform_driver);
++	return coresight_init_driver("stm", &stm_driver, &stm_platform_driver, THIS_MODULE);
+ }
+ 
+ static void __exit stm_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
+index a70c1454b4106c..83dad24e0116d4 100644
+--- a/drivers/hwtracing/coresight/coresight-syscfg.c
++++ b/drivers/hwtracing/coresight/coresight-syscfg.c
+@@ -395,6 +395,8 @@ static void cscfg_remove_owned_csdev_configs(struct coresight_device *csdev, voi
+ 	if (list_empty(&csdev->config_csdev_list))
+ 		return;
+ 
++  guard(raw_spinlock_irqsave)(&csdev->cscfg_csdev_lock);
++
+ 	list_for_each_entry_safe(config_csdev, tmp, &csdev->config_csdev_list, node) {
+ 		if (config_csdev->config_desc->load_owner == load_owner)
+ 			list_del(&config_csdev->node);
+@@ -867,6 +869,25 @@ void cscfg_csdev_reset_feats(struct coresight_device *csdev)
+ }
+ EXPORT_SYMBOL_GPL(cscfg_csdev_reset_feats);
+ 
++static bool cscfg_config_desc_get(struct cscfg_config_desc *config_desc)
++{
++	if (!atomic_fetch_inc(&config_desc->active_cnt)) {
++		/* must ensure that config cannot be unloaded in use */
++		if (unlikely(cscfg_owner_get(config_desc->load_owner))) {
++			atomic_dec(&config_desc->active_cnt);
++			return false;
++		}
++	}
++
++	return true;
++}
++
++static void cscfg_config_desc_put(struct cscfg_config_desc *config_desc)
++{
++	if (!atomic_dec_return(&config_desc->active_cnt))
++		cscfg_owner_put(config_desc->load_owner);
++}
++
+ /*
+  * This activate configuration for either perf or sysfs. Perf can have multiple
+  * active configs, selected per event, sysfs is limited to one.
+@@ -890,22 +911,17 @@ static int _cscfg_activate_config(unsigned long cfg_hash)
+ 			if (config_desc->available == false)
+ 				return -EBUSY;
+ 
+-			/* must ensure that config cannot be unloaded in use */
+-			err = cscfg_owner_get(config_desc->load_owner);
+-			if (err)
++			if (!cscfg_config_desc_get(config_desc)) {
++				err = -EINVAL;
+ 				break;
++			}
++
+ 			/*
+ 			 * increment the global active count - control changes to
+ 			 * active configurations
+ 			 */
+ 			atomic_inc(&cscfg_mgr->sys_active_cnt);
+ 
+-			/*
+-			 * mark the descriptor as active so enable config on a
+-			 * device instance will use it
+-			 */
+-			atomic_inc(&config_desc->active_cnt);
+-
+ 			err = 0;
+ 			dev_dbg(cscfg_device(), "Activate config %s.\n", config_desc->name);
+ 			break;
+@@ -920,9 +936,8 @@ static void _cscfg_deactivate_config(unsigned long cfg_hash)
+ 
+ 	list_for_each_entry(config_desc, &cscfg_mgr->config_desc_list, item) {
+ 		if ((unsigned long)config_desc->event_ea->var == cfg_hash) {
+-			atomic_dec(&config_desc->active_cnt);
+ 			atomic_dec(&cscfg_mgr->sys_active_cnt);
+-			cscfg_owner_put(config_desc->load_owner);
++			cscfg_config_desc_put(config_desc);
+ 			dev_dbg(cscfg_device(), "Deactivate config %s.\n", config_desc->name);
+ 			break;
+ 		}
+@@ -1047,7 +1062,7 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
+ 				     unsigned long cfg_hash, int preset)
+ {
+ 	struct cscfg_config_csdev *config_csdev_active = NULL, *config_csdev_item;
+-	const struct cscfg_config_desc *config_desc;
++	struct cscfg_config_desc *config_desc;
+ 	unsigned long flags;
+ 	int err = 0;
+ 
+@@ -1062,8 +1077,8 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
+ 	raw_spin_lock_irqsave(&csdev->cscfg_csdev_lock, flags);
+ 	list_for_each_entry(config_csdev_item, &csdev->config_csdev_list, node) {
+ 		config_desc = config_csdev_item->config_desc;
+-		if ((atomic_read(&config_desc->active_cnt)) &&
+-		    ((unsigned long)config_desc->event_ea->var == cfg_hash)) {
++		if (((unsigned long)config_desc->event_ea->var == cfg_hash) &&
++				cscfg_config_desc_get(config_desc)) {
+ 			config_csdev_active = config_csdev_item;
+ 			csdev->active_cscfg_ctxt = (void *)config_csdev_active;
+ 			break;
+@@ -1097,7 +1112,11 @@ int cscfg_csdev_enable_active_config(struct coresight_device *csdev,
+ 				err = -EBUSY;
+ 			raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
+ 		}
++
++		if (err)
++			cscfg_config_desc_put(config_desc);
+ 	}
++
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(cscfg_csdev_enable_active_config);
+@@ -1136,8 +1155,10 @@ void cscfg_csdev_disable_active_config(struct coresight_device *csdev)
+ 	raw_spin_unlock_irqrestore(&csdev->cscfg_csdev_lock, flags);
+ 
+ 	/* true if there was an enabled active config */
+-	if (config_csdev)
++	if (config_csdev) {
+ 		cscfg_csdev_disable_config(config_csdev);
++		cscfg_config_desc_put(config_csdev->config_desc);
++	}
+ }
+ EXPORT_SYMBOL_GPL(cscfg_csdev_disable_active_config);
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-core.c b/drivers/hwtracing/coresight/coresight-tmc-core.c
+index a7814e8e657b21..455b1c9b15682c 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-core.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-core.c
+@@ -1060,7 +1060,7 @@ static struct platform_driver tmc_platform_driver = {
+ 
+ static int __init tmc_init(void)
+ {
+-	return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver);
++	return coresight_init_driver("tmc", &tmc_driver, &tmc_platform_driver, THIS_MODULE);
+ }
+ 
+ static void __exit tmc_exit(void)
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index d858740001c27d..a922e3b709638d 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -747,7 +747,6 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+ 	char *buf = NULL;
+ 	enum tmc_mode mode;
+ 	unsigned long flags;
+-	int rc = 0;
+ 
+ 	/* config types are set a boot time and never change */
+ 	if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
+@@ -773,11 +772,11 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+ 		 * can't be NULL.
+ 		 */
+ 		memset(drvdata->buf, 0, drvdata->size);
+-		rc = __tmc_etb_enable_hw(drvdata);
+-		if (rc) {
+-			raw_spin_unlock_irqrestore(&drvdata->spinlock, flags);
+-			return rc;
+-		}
++		/*
++		 * Ignore failures to enable the TMC to make sure we don't
++		 * leave the TMC in a "reading" state.
++		 */
++		__tmc_etb_enable_hw(drvdata);
+ 	} else {
+ 		/*
+ 		 * The ETB/ETF is not tracing and the buffer was just read.
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 97ef36f03ec207..3e015928842808 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -318,7 +318,7 @@ static struct platform_driver tpiu_platform_driver = {
+ 
+ static int __init tpiu_init(void)
+ {
+-	return coresight_init_driver("tpiu", &tpiu_driver, &tpiu_platform_driver);
++	return coresight_init_driver("tpiu", &tpiu_driver, &tpiu_platform_driver, THIS_MODULE);
+ }
+ 
+ static void __exit tpiu_exit(void)
+diff --git a/drivers/iio/adc/ad4851.c b/drivers/iio/adc/ad4851.c
+index 98ebc853db7962..f1d2e2896f2a2d 100644
+--- a/drivers/iio/adc/ad4851.c
++++ b/drivers/iio/adc/ad4851.c
+@@ -1034,7 +1034,7 @@ static int ad4858_parse_channels(struct iio_dev *indio_dev)
+ 	struct device *dev = &st->spi->dev;
+ 	struct iio_chan_spec *ad4851_channels;
+ 	const struct iio_chan_spec ad4851_chan = AD4858_IIO_CHANNEL;
+-	int ret;
++	int ret, i = 0;
+ 
+ 	ret = ad4851_parse_channels_common(indio_dev, &ad4851_channels,
+ 					   ad4851_chan);
+@@ -1042,15 +1042,15 @@ static int ad4858_parse_channels(struct iio_dev *indio_dev)
+ 		return ret;
+ 
+ 	device_for_each_child_node_scoped(dev, child) {
+-		ad4851_channels->has_ext_scan_type = 1;
++		ad4851_channels[i].has_ext_scan_type = 1;
+ 		if (fwnode_property_read_bool(child, "bipolar")) {
+-			ad4851_channels->ext_scan_type = ad4851_scan_type_20_b;
+-			ad4851_channels->num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_b);
++			ad4851_channels[i].ext_scan_type = ad4851_scan_type_20_b;
++			ad4851_channels[i].num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_b);
+ 		} else {
+-			ad4851_channels->ext_scan_type = ad4851_scan_type_20_u;
+-			ad4851_channels->num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_u);
++			ad4851_channels[i].ext_scan_type = ad4851_scan_type_20_u;
++			ad4851_channels[i].num_ext_scan_type = ARRAY_SIZE(ad4851_scan_type_20_u);
+ 		}
+-		ad4851_channels++;
++		i++;
+ 	}
+ 
+ 	indio_dev->channels = ad4851_channels;
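
The ad4851 fix above is a classic walking-pointer bug: the loop advanced
ad4851_channels itself, so the final indio_dev->channels assignment published a
pointer one element past the end of the array. A small standalone illustration of
why indexing preserves the base pointer (the struct here is made up for the demo):

#include <stdio.h>

struct chan { int has_ext; };

int main(void)
{
    struct chan chans[4] = { { 0 } };
    struct chan *p = chans;
    int i;

    /* buggy pattern: walk the base pointer itself */
    for (i = 0; i < 4; i++) {
        p->has_ext = 1;
        p++;    /* p drifts away from chans */
    }
    /* here p == chans + 4; publishing p as the array start is wrong */

    /* fixed pattern: index from the untouched base */
    for (i = 0; i < 4; i++)
        chans[i].has_ext = 1;

    printf("base intact: chans[0].has_ext = %d, p - chans = %td\n",
           chans[0].has_ext, p - chans);
    return 0;
}
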
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 3ea81a98e45534..7d5d84a07cae1d 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -301,9 +301,9 @@ static int ad7124_get_3db_filter_freq(struct ad7124_state *st,
+ 
+ 	switch (st->channels[channel].cfg.filter_type) {
+ 	case AD7124_SINC3_FILTER:
+-		return DIV_ROUND_CLOSEST(fadc * 230, 1000);
++		return DIV_ROUND_CLOSEST(fadc * 272, 1000);
+ 	case AD7124_SINC4_FILTER:
+-		return DIV_ROUND_CLOSEST(fadc * 262, 1000);
++		return DIV_ROUND_CLOSEST(fadc * 230, 1000);
+ 	default:
+ 		return -EINVAL;
+ 	}
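
The ad7124 hunk swaps the sinc3/sinc4 -3 dB coefficients (0.272 and 0.230 of the
output data rate, expressed in thousandths). A tiny standalone sketch of the
fixed-point rounding arithmetic, mirroring what DIV_ROUND_CLOSEST() does; the
example rate is arbitrary:

#include <stdio.h>

/* round-to-nearest integer division, like the kernel's DIV_ROUND_CLOSEST() */
static unsigned int div_round_closest(unsigned int n, unsigned int d)
{
    return (n + d / 2) / d;
}

int main(void)
{
    unsigned int fadc = 9600;    /* example output data rate, Hz */

    /* coefficients in thousandths, as in the hunk above */
    printf("sinc3 f3db = %u Hz\n", div_round_closest(fadc * 272, 1000));
    printf("sinc4 f3db = %u Hz\n", div_round_closest(fadc * 230, 1000));
    return 0;
}
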
+diff --git a/drivers/iio/adc/mcp3911.c b/drivers/iio/adc/mcp3911.c
+index 6748b44d568db6..60a19c35807ab7 100644
+--- a/drivers/iio/adc/mcp3911.c
++++ b/drivers/iio/adc/mcp3911.c
+@@ -6,7 +6,7 @@
+  * Copyright (C) 2018 Kent Gustavsson <kent@minoris.se>
+  */
+ #include <linux/bitfield.h>
+-#include <linux/bits.h>
++#include <linux/bitops.h>
+ #include <linux/cleanup.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -79,6 +79,8 @@
+ #define MCP3910_CONFIG1_CLKEXT		BIT(6)
+ #define MCP3910_CONFIG1_VREFEXT		BIT(7)
+ 
++#define MCP3910_CHANNEL(ch)		(MCP3911_REG_CHANNEL0 + (ch))
++
+ #define MCP3910_REG_OFFCAL_CH0		0x0f
+ #define MCP3910_OFFCAL(ch)		(MCP3910_REG_OFFCAL_CH0 + (ch) * 6)
+ 
+@@ -110,6 +112,7 @@ struct mcp3911_chip_info {
+ 	int (*get_offset)(struct mcp3911 *adc, int channel, int *val);
+ 	int (*set_offset)(struct mcp3911 *adc, int channel, int val);
+ 	int (*set_scale)(struct mcp3911 *adc, int channel, u32 val);
++	int (*get_raw)(struct mcp3911 *adc, int channel, int *val);
+ };
+ 
+ struct mcp3911 {
+@@ -170,6 +173,18 @@ static int mcp3911_update(struct mcp3911 *adc, u8 reg, u32 mask, u32 val, u8 len
+ 	return mcp3911_write(adc, reg, val, len);
+ }
+ 
++static int mcp3911_read_s24(struct mcp3911 *const adc, u8 const reg, s32 *const val)
++{
++	u32 uval;
++	int const ret = mcp3911_read(adc, reg, &uval, 3);
++
++	if (ret)
++		return ret;
++
++	*val = sign_extend32(uval, 23);
++	return ret;
++}
++
+ static int mcp3910_enable_offset(struct mcp3911 *adc, bool enable)
+ {
+ 	unsigned int mask = MCP3910_CONFIG0_EN_OFFCAL;
+@@ -194,6 +209,11 @@ static int mcp3910_set_offset(struct mcp3911 *adc, int channel, int val)
+ 	return adc->chip->enable_offset(adc, 1);
+ }
+ 
++static int mcp3910_get_raw(struct mcp3911 *adc, int channel, s32 *val)
++{
++	return mcp3911_read_s24(adc, MCP3910_CHANNEL(channel), val);
++}
++
+ static int mcp3911_enable_offset(struct mcp3911 *adc, bool enable)
+ {
+ 	unsigned int mask = MCP3911_STATUSCOM_EN_OFFCAL;
+@@ -218,6 +238,11 @@ static int mcp3911_set_offset(struct mcp3911 *adc, int channel, int val)
+ 	return adc->chip->enable_offset(adc, 1);
+ }
+ 
++static int mcp3911_get_raw(struct mcp3911 *adc, int channel, s32 *val)
++{
++	return mcp3911_read_s24(adc, MCP3911_CHANNEL(channel), val);
++}
++
+ static int mcp3910_get_osr(struct mcp3911 *adc, u32 *val)
+ {
+ 	int ret;
+@@ -321,12 +346,9 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
+ 	guard(mutex)(&adc->lock);
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_RAW:
+-		ret = mcp3911_read(adc,
+-				   MCP3911_CHANNEL(channel->channel), val, 3);
++		ret = adc->chip->get_raw(adc, channel->channel, val);
+ 		if (ret)
+ 			return ret;
+-
+-		*val = sign_extend32(*val, 23);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_OFFSET:
+ 		ret = adc->chip->get_offset(adc, channel->channel, val);
+@@ -799,6 +821,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3911] = {
+ 		.channels = mcp3911_channels,
+@@ -810,6 +833,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3911_get_offset,
+ 		.set_offset = mcp3911_set_offset,
+ 		.set_scale = mcp3911_set_scale,
++		.get_raw = mcp3911_get_raw,
+ 	},
+ 	[MCP3912] = {
+ 		.channels = mcp3912_channels,
+@@ -821,6 +845,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3913] = {
+ 		.channels = mcp3913_channels,
+@@ -832,6 +857,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3914] = {
+ 		.channels = mcp3914_channels,
+@@ -843,6 +869,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3918] = {
+ 		.channels = mcp3918_channels,
+@@ -854,6 +881,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ 	[MCP3919] = {
+ 		.channels = mcp3919_channels,
+@@ -865,6 +893,7 @@ static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+ 		.get_offset = mcp3910_get_offset,
+ 		.set_offset = mcp3910_set_offset,
+ 		.set_scale = mcp3910_set_scale,
++		.get_raw = mcp3910_get_raw,
+ 	},
+ };
+ static const struct of_device_id mcp3911_dt_ids[] = {
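
The mcp3911 refactor centralizes reading a 24-bit two's-complement sample and
sign-extending it via sign_extend32(uval, 23). A standalone illustration of that
extension; the shift trick assumes arithmetic right shift of signed values, which
all mainstream compilers provide:

#include <stdint.h>
#include <stdio.h>

/* replicate bit 23 into bits 31..24, like sign_extend32(raw, 23) */
static int32_t sign_extend24(uint32_t raw)
{
    return (int32_t)(raw << 8) >> 8;
}

int main(void)
{
    printf("%d\n", sign_extend24(0x7fffff));    /* +8388607 */
    printf("%d\n", sign_extend24(0x800000));    /* -8388608 */
    printf("%d\n", sign_extend24(0xffffff));    /* -1 */
    return 0;
}
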
+diff --git a/drivers/iio/adc/pac1934.c b/drivers/iio/adc/pac1934.c
+index 20802b7f49ea84..09fe88eb3fb045 100644
+--- a/drivers/iio/adc/pac1934.c
++++ b/drivers/iio/adc/pac1934.c
+@@ -1081,7 +1081,7 @@ static int pac1934_chip_identify(struct pac1934_chip_info *info)
+ 
+ /*
+  * documentation related to the ACPI device definition
+- * https://ww1.microchip.com/downloads/aemDocuments/documents/OTH/ApplicationNotes/ApplicationNotes/PAC1934-Integration-Notes-for-Microsoft-Windows-10-and-Windows-11-Driver-Support-DS00002534.pdf
++ * https://ww1.microchip.com/downloads/aemDocuments/documents/OTH/ApplicationNotes/ApplicationNotes/PAC193X-Integration-Notes-for-Microsoft-Windows-10-and-Windows-11-Driver-Support-DS00002534.pdf
+  */
+ static int pac1934_acpi_parse_channel_config(struct i2c_client *client,
+ 					     struct pac1934_chip_info *info)
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index 892d770aec69c4..05b374e137d35d 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -707,6 +707,7 @@ static int axi_dac_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val,
+ {
+ 	struct axi_dac_state *st = iio_backend_get_priv(back);
+ 	int ret;
++	u32 ival;
+ 
+ 	guard(mutex)(&st->lock);
+ 
+@@ -719,6 +720,13 @@ static int axi_dac_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val,
+ 	if (ret)
+ 		return ret;
+ 
++	ret = regmap_read_poll_timeout(st->regmap,
++				AXI_DAC_UI_STATUS_REG, ival,
++				FIELD_GET(AXI_DAC_UI_STATUS_IF_BUSY, ival) == 0,
++				10, 100 * KILO);
++	if (ret)
++		return ret;
++
+ 	return regmap_read(st->regmap, AXI_DAC_CUSTOM_RD_REG, val);
+ }
+ 
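
The adi-axi-dac hunk inserts a regmap_read_poll_timeout() so reads wait for the
interface-busy flag to clear before touching the data register. A self-contained
userspace sketch of the same poll-until-idle loop; read_status() and the bit
position are fakes so the example runs:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define STATUS_IF_BUSY    (1u << 4)    /* illustrative bit position */

/* stand-in for the register read: reports busy for the first polls */
static uint32_t read_status(void)
{
    static int polls;

    return ++polls > 3 ? 0 : STATUS_IF_BUSY;
}

static int wait_not_busy(unsigned int sleep_us, unsigned int timeout_us)
{
    unsigned int waited = 0;

    while (read_status() & STATUS_IF_BUSY) {
        if (waited >= timeout_us)
            return -ETIMEDOUT;
        usleep(sleep_us);
        waited += sleep_us;
    }
    return 0;    /* idle: now safe to read the result register */
}

int main(void)
{
    printf("wait_not_busy: %d\n", wait_not_busy(10, 100000));
    return 0;
}
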
+diff --git a/drivers/iio/filter/admv8818.c b/drivers/iio/filter/admv8818.c
+index d85b7d3de86604..cc8ce0fe74e7c6 100644
+--- a/drivers/iio/filter/admv8818.c
++++ b/drivers/iio/filter/admv8818.c
+@@ -14,6 +14,7 @@
+ #include <linux/mod_devicetable.h>
+ #include <linux/mutex.h>
+ #include <linux/notifier.h>
++#include <linux/property.h>
+ #include <linux/regmap.h>
+ #include <linux/spi/spi.h>
+ #include <linux/units.h>
+@@ -70,6 +71,16 @@
+ #define ADMV8818_HPF_WR0_MSK			GENMASK(7, 4)
+ #define ADMV8818_LPF_WR0_MSK			GENMASK(3, 0)
+ 
++#define ADMV8818_BAND_BYPASS       0
++#define ADMV8818_BAND_MIN          1
++#define ADMV8818_BAND_MAX          4
++#define ADMV8818_BAND_CORNER_LOW   0
++#define ADMV8818_BAND_CORNER_HIGH  1
++
++#define ADMV8818_STATE_MIN   0
++#define ADMV8818_STATE_MAX   15
++#define ADMV8818_NUM_STATES  16
++
+ enum {
+ 	ADMV8818_BW_FREQ,
+ 	ADMV8818_CENTER_FREQ
+@@ -90,20 +101,24 @@ struct admv8818_state {
+ 	struct mutex		lock;
+ 	unsigned int		filter_mode;
+ 	u64			cf_hz;
++	u64			lpf_margin_hz;
++	u64			hpf_margin_hz;
+ };
+ 
+-static const unsigned long long freq_range_hpf[4][2] = {
++static const unsigned long long freq_range_hpf[5][2] = {
++	{0ULL, 0ULL}, /* bypass */
+ 	{1750000000ULL, 3550000000ULL},
+ 	{3400000000ULL, 7250000000ULL},
+ 	{6600000000, 12000000000},
+ 	{12500000000, 19900000000}
+ };
+ 
+-static const unsigned long long freq_range_lpf[4][2] = {
++static const unsigned long long freq_range_lpf[5][2] = {
++	{U64_MAX, U64_MAX}, /* bypass */
+ 	{2050000000ULL, 3850000000ULL},
+ 	{3350000000ULL, 7250000000ULL},
+ 	{7000000000, 13000000000},
+-	{12550000000, 18500000000}
++	{12550000000, 18850000000}
+ };
+ 
+ static const struct regmap_config admv8818_regmap_config = {
+@@ -121,44 +136,59 @@ static const char * const admv8818_modes[] = {
+ 
+ static int __admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+ {
+-	unsigned int hpf_step = 0, hpf_band = 0, i, j;
+-	u64 freq_step;
+-	int ret;
++	int band, state, ret;
++	unsigned int hpf_state = ADMV8818_STATE_MIN, hpf_band = ADMV8818_BAND_BYPASS;
++	u64 freq_error, min_freq_error, freq_corner, freq_step;
+ 
+-	if (freq < freq_range_hpf[0][0])
++	if (freq < freq_range_hpf[ADMV8818_BAND_MIN][ADMV8818_BAND_CORNER_LOW])
+ 		goto hpf_write;
+ 
+-	if (freq > freq_range_hpf[3][1]) {
+-		hpf_step = 15;
+-		hpf_band = 4;
+-
++	if (freq >= freq_range_hpf[ADMV8818_BAND_MAX][ADMV8818_BAND_CORNER_HIGH]) {
++		hpf_state = ADMV8818_STATE_MAX;
++		hpf_band = ADMV8818_BAND_MAX;
+ 		goto hpf_write;
+ 	}
+ 
+-	for (i = 0; i < 4; i++) {
+-		freq_step = div_u64((freq_range_hpf[i][1] -
+-			freq_range_hpf[i][0]), 15);
++	/* Close HPF frequency gap between 12 and 12.5 GHz */
++	if (freq >= 12000ULL * HZ_PER_MHZ && freq < 12500ULL * HZ_PER_MHZ) {
++		hpf_state = ADMV8818_STATE_MAX;
++		hpf_band = 3;
++		goto hpf_write;
++	}
+ 
+-		if (freq > freq_range_hpf[i][0] &&
+-		    (freq < freq_range_hpf[i][1] + freq_step)) {
+-			hpf_band = i + 1;
++	min_freq_error = U64_MAX;
++	for (band = ADMV8818_BAND_MIN; band <= ADMV8818_BAND_MAX; band++) {
++		/*
++		 * This band (and therefore all later bands) has a corner
++		 * frequency higher than the target frequency.
++		 */
++		if (freq_range_hpf[band][ADMV8818_BAND_CORNER_LOW] > freq)
++			break;
+ 
+-			for (j = 1; j <= 16; j++) {
+-				if (freq < (freq_range_hpf[i][0] + (freq_step * j))) {
+-					hpf_step = j - 1;
+-					break;
+-				}
++		freq_step = freq_range_hpf[band][ADMV8818_BAND_CORNER_HIGH] -
++			    freq_range_hpf[band][ADMV8818_BAND_CORNER_LOW];
++		freq_step = div_u64(freq_step, ADMV8818_NUM_STATES - 1);
++
++		for (state = ADMV8818_STATE_MIN; state <= ADMV8818_STATE_MAX; state++) {
++			freq_corner = freq_range_hpf[band][ADMV8818_BAND_CORNER_LOW] +
++				      freq_step * state;
++
++			/*
++			 * This state (and therefore all later states) has a corner
++			 * frequency higher than the target frequency.
++			 */
++			if (freq_corner > freq)
++				break;
++
++			freq_error = freq - freq_corner;
++			if (freq_error < min_freq_error) {
++				min_freq_error = freq_error;
++				hpf_state = state;
++				hpf_band = band;
+ 			}
+-			break;
+ 		}
+ 	}
+ 
+-	/* Close HPF frequency gap between 12 and 12.5 GHz */
+-	if (freq >= 12000 * HZ_PER_MHZ && freq <= 12500 * HZ_PER_MHZ) {
+-		hpf_band = 3;
+-		hpf_step = 15;
+-	}
+-
+ hpf_write:
+ 	ret = regmap_update_bits(st->regmap, ADMV8818_REG_WR0_SW,
+ 				 ADMV8818_SW_IN_SET_WR0_MSK |
+@@ -170,7 +200,7 @@ static int __admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+ 
+ 	return regmap_update_bits(st->regmap, ADMV8818_REG_WR0_FILTER,
+ 				  ADMV8818_HPF_WR0_MSK,
+-				  FIELD_PREP(ADMV8818_HPF_WR0_MSK, hpf_step));
++				  FIELD_PREP(ADMV8818_HPF_WR0_MSK, hpf_state));
+ }
+ 
+ static int admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+@@ -186,31 +216,52 @@ static int admv8818_hpf_select(struct admv8818_state *st, u64 freq)
+ 
+ static int __admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+ {
+-	unsigned int lpf_step = 0, lpf_band = 0, i, j;
+-	u64 freq_step;
+-	int ret;
++	int band, state, ret;
++	unsigned int lpf_state = ADMV8818_STATE_MIN, lpf_band = ADMV8818_BAND_BYPASS;
++	u64 freq_error, min_freq_error, freq_corner, freq_step;
+ 
+-	if (freq > freq_range_lpf[3][1])
++	if (freq > freq_range_lpf[ADMV8818_BAND_MAX][ADMV8818_BAND_CORNER_HIGH])
+ 		goto lpf_write;
+ 
+-	if (freq < freq_range_lpf[0][0]) {
+-		lpf_band = 1;
+-
++	if (freq < freq_range_lpf[ADMV8818_BAND_MIN][ADMV8818_BAND_CORNER_LOW]) {
++		lpf_state = ADMV8818_STATE_MIN;
++		lpf_band = ADMV8818_BAND_MIN;
+ 		goto lpf_write;
+ 	}
+ 
+-	for (i = 0; i < 4; i++) {
+-		if (freq > freq_range_lpf[i][0] && freq < freq_range_lpf[i][1]) {
+-			lpf_band = i + 1;
+-			freq_step = div_u64((freq_range_lpf[i][1] - freq_range_lpf[i][0]), 15);
++	min_freq_error = U64_MAX;
++	for (band = ADMV8818_BAND_MAX; band >= ADMV8818_BAND_MIN; --band) {
++		/*
++		 * At this point the highest corner frequency of
++		 * all remaining ranges is below the target.
++		 * LPF corner should be >= the target.
++		 */
++		if (freq > freq_range_lpf[band][ADMV8818_BAND_CORNER_HIGH])
++			break;
++
++		freq_step = freq_range_lpf[band][ADMV8818_BAND_CORNER_HIGH] -
++			    freq_range_lpf[band][ADMV8818_BAND_CORNER_LOW];
++		freq_step = div_u64(freq_step, ADMV8818_NUM_STATES - 1);
++
++		for (state = ADMV8818_STATE_MAX; state >= ADMV8818_STATE_MIN; --state) {
++			freq_corner = freq_range_lpf[band][ADMV8818_BAND_CORNER_LOW] +
++				      state * freq_step;
+ 
+-			for (j = 0; j <= 15; j++) {
+-				if (freq < (freq_range_lpf[i][0] + (freq_step * j))) {
+-					lpf_step = j;
+-					break;
+-				}
++			/*
++			 * At this point all other states in range will
++			 * place the corner frequency below the target.
++			 * The LPF corner should be >= the target.
++			 */
++			if (freq > freq_corner)
++				break;
++
++			freq_error = freq_corner - freq;
++			if (freq_error < min_freq_error) {
++				min_freq_error = freq_error;
++				lpf_state = state;
++				lpf_band = band;
+ 			}
+-			break;
+ 		}
+ 	}
+ 
+@@ -225,7 +276,7 @@ static int __admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+ 
+ 	return regmap_update_bits(st->regmap, ADMV8818_REG_WR0_FILTER,
+ 				  ADMV8818_LPF_WR0_MSK,
+-				  FIELD_PREP(ADMV8818_LPF_WR0_MSK, lpf_step));
++				  FIELD_PREP(ADMV8818_LPF_WR0_MSK, lpf_state));
+ }
+ 
+ static int admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+@@ -242,16 +293,28 @@ static int admv8818_lpf_select(struct admv8818_state *st, u64 freq)
+ static int admv8818_rfin_band_select(struct admv8818_state *st)
+ {
+ 	int ret;
++	u64 hpf_corner_target, lpf_corner_target;
+ 
+ 	st->cf_hz = clk_get_rate(st->clkin);
+ 
++	/* Check for underflow */
++	if (st->cf_hz > st->hpf_margin_hz)
++		hpf_corner_target = st->cf_hz - st->hpf_margin_hz;
++	else
++		hpf_corner_target = 0;
++
++	/* Check for overflow */
++	lpf_corner_target = st->cf_hz + st->lpf_margin_hz;
++	if (lpf_corner_target < st->cf_hz)
++		lpf_corner_target = U64_MAX;
++
+ 	mutex_lock(&st->lock);
+ 
+-	ret = __admv8818_hpf_select(st, st->cf_hz);
++	ret = __admv8818_hpf_select(st, hpf_corner_target);
+ 	if (ret)
+ 		goto exit;
+ 
+-	ret = __admv8818_lpf_select(st, st->cf_hz);
++	ret = __admv8818_lpf_select(st, lpf_corner_target);
+ exit:
+ 	mutex_unlock(&st->lock);
+ 	return ret;
+@@ -278,8 +341,11 @@ static int __admv8818_read_hpf_freq(struct admv8818_state *st, u64 *hpf_freq)
+ 
+ 	hpf_state = FIELD_GET(ADMV8818_HPF_WR0_MSK, data);
+ 
+-	*hpf_freq = div_u64(freq_range_hpf[hpf_band - 1][1] - freq_range_hpf[hpf_band - 1][0], 15);
+-	*hpf_freq = freq_range_hpf[hpf_band - 1][0] + (*hpf_freq * hpf_state);
++	*hpf_freq = freq_range_hpf[hpf_band][ADMV8818_BAND_CORNER_HIGH] -
++		    freq_range_hpf[hpf_band][ADMV8818_BAND_CORNER_LOW];
++	*hpf_freq = div_u64(*hpf_freq, ADMV8818_NUM_STATES - 1);
++	*hpf_freq = freq_range_hpf[hpf_band][ADMV8818_BAND_CORNER_LOW] +
++		    (*hpf_freq * hpf_state);
+ 
+ 	return ret;
+ }
+@@ -316,8 +382,11 @@ static int __admv8818_read_lpf_freq(struct admv8818_state *st, u64 *lpf_freq)
+ 
+ 	lpf_state = FIELD_GET(ADMV8818_LPF_WR0_MSK, data);
+ 
+-	*lpf_freq = div_u64(freq_range_lpf[lpf_band - 1][1] - freq_range_lpf[lpf_band - 1][0], 15);
+-	*lpf_freq = freq_range_lpf[lpf_band - 1][0] + (*lpf_freq * lpf_state);
++	*lpf_freq = freq_range_lpf[lpf_band][ADMV8818_BAND_CORNER_HIGH] -
++		    freq_range_lpf[lpf_band][ADMV8818_BAND_CORNER_LOW];
++	*lpf_freq = div_u64(*lpf_freq, ADMV8818_NUM_STATES - 1);
++	*lpf_freq = freq_range_lpf[lpf_band][ADMV8818_BAND_CORNER_LOW] +
++		    (*lpf_freq * lpf_state);
+ 
+ 	return ret;
+ }
+@@ -333,6 +402,19 @@ static int admv8818_read_lpf_freq(struct admv8818_state *st, u64 *lpf_freq)
+ 	return ret;
+ }
+ 
++static int admv8818_write_raw_get_fmt(struct iio_dev *indio_dev,
++				      struct iio_chan_spec const *chan,
++				      long mask)
++{
++	switch (mask) {
++	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
++	case IIO_CHAN_INFO_HIGH_PASS_FILTER_3DB_FREQUENCY:
++		return IIO_VAL_INT_64;
++	default:
++		return -EINVAL;
++	}
++}
++
+ static int admv8818_write_raw(struct iio_dev *indio_dev,
+ 			      struct iio_chan_spec const *chan,
+ 			      int val, int val2, long info)
+@@ -341,6 +423,9 @@ static int admv8818_write_raw(struct iio_dev *indio_dev,
+ 
+ 	u64 freq = ((u64)val2 << 32 | (u32)val);
+ 
++	if ((s64)freq < 0)
++		return -EINVAL;
++
+ 	switch (info) {
+ 	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
+ 		return admv8818_lpf_select(st, freq);
+@@ -502,6 +587,7 @@ static int admv8818_set_mode(struct iio_dev *indio_dev,
+ 
+ static const struct iio_info admv8818_info = {
+ 	.write_raw = admv8818_write_raw,
++	.write_raw_get_fmt = admv8818_write_raw_get_fmt,
+ 	.read_raw = admv8818_read_raw,
+ 	.debugfs_reg_access = &admv8818_reg_access,
+ };
+@@ -641,6 +727,32 @@ static int admv8818_clk_setup(struct admv8818_state *st)
+ 	return devm_add_action_or_reset(&spi->dev, admv8818_clk_notifier_unreg, st);
+ }
+ 
++static int admv8818_read_properties(struct admv8818_state *st)
++{
++	struct spi_device *spi = st->spi;
++	u32 mhz;
++	int ret;
++
++	ret = device_property_read_u32(&spi->dev, "adi,lpf-margin-mhz", &mhz);
++	if (ret == 0)
++		st->lpf_margin_hz = (u64)mhz * HZ_PER_MHZ;
++	else if (ret == -EINVAL)
++		st->lpf_margin_hz = 0;
++	else
++		return ret;
++
++	ret = device_property_read_u32(&spi->dev, "adi,hpf-margin-mhz", &mhz);
++	if (ret == 0)
++		st->hpf_margin_hz = (u64)mhz * HZ_PER_MHZ;
++	else if (ret == -EINVAL)
++		st->hpf_margin_hz = 0;
++	else if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int admv8818_probe(struct spi_device *spi)
+ {
+ 	struct iio_dev *indio_dev;
+@@ -672,6 +784,10 @@ static int admv8818_probe(struct spi_device *spi)
+ 
+ 	mutex_init(&st->lock);
+ 
++	ret = admv8818_read_properties(st);
++	if (ret)
++		return ret;
++
+ 	ret = admv8818_init(st);
+ 	if (ret)
+ 		return ret;
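
The rewritten admv8818 selection no longer picks the first band that brackets the
target; it scans every band/state corner at or below the target and keeps the one
with the smallest error (the LPF variant runs the same search from the other
direction). A runnable sketch of that nearest-corner search using the driver's HPF
corner table, with values reduced to MHz:

#include <stdint.h>
#include <stdio.h>

#define NUM_BANDS    4
#define NUM_STATES   16

int main(void)
{
    static const uint64_t lo[NUM_BANDS] = { 1750, 3400, 6600, 12500 };
    static const uint64_t hi[NUM_BANDS] = { 3550, 7250, 12000, 19900 };
    uint64_t target = 5000, best_err = UINT64_MAX;
    int b, s, best_b = -1, best_s = -1;

    for (b = 0; b < NUM_BANDS; b++) {
        uint64_t step = (hi[b] - lo[b]) / (NUM_STATES - 1);

        if (lo[b] > target)
            break;    /* this and all later bands overshoot */
        for (s = 0; s < NUM_STATES; s++) {
            uint64_t corner = lo[b] + step * s;

            if (corner > target)
                break;    /* later states only overshoot more */
            if (target - corner < best_err) {
                best_err = target - corner;
                best_b = b;
                best_s = s;
            }
        }
    }
    printf("band %d, state %d, error %llu MHz\n", best_b, best_s,
           (unsigned long long)best_err);
    return 0;
}
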
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 142170473e7536..e64cbd034a2a19 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -167,7 +167,7 @@ struct cm_port {
+ struct cm_device {
+ 	struct kref kref;
+ 	struct list_head list;
+-	spinlock_t mad_agent_lock;
++	rwlock_t mad_agent_lock;
+ 	struct ib_device *ib_device;
+ 	u8 ack_delay;
+ 	int going_down;
+@@ -285,7 +285,7 @@ static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private *cm_id_priv)
+ 	if (!cm_id_priv->av.port)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	spin_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	mad_agent = cm_id_priv->av.port->mad_agent;
+ 	if (!mad_agent) {
+ 		m = ERR_PTR(-EINVAL);
+@@ -311,7 +311,7 @@ static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private *cm_id_priv)
+ 	m->ah = ah;
+ 
+ out:
+-	spin_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	return m;
+ }
+ 
+@@ -1297,10 +1297,10 @@ static __be64 cm_form_tid(struct cm_id_private *cm_id_priv)
+ 	if (!cm_id_priv->av.port)
+ 		return cpu_to_be64(low_tid);
+ 
+-	spin_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	if (cm_id_priv->av.port->mad_agent)
+ 		hi_tid = ((u64)cm_id_priv->av.port->mad_agent->hi_tid) << 32;
+-	spin_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
++	read_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
+ 	return cpu_to_be64(hi_tid | low_tid);
+ }
+ 
+@@ -3786,7 +3786,8 @@ static void cm_process_send_error(struct cm_id_private *cm_id_priv,
+ 	spin_lock_irq(&cm_id_priv->lock);
+ 	if (msg != cm_id_priv->msg) {
+ 		spin_unlock_irq(&cm_id_priv->lock);
+-		cm_free_priv_msg(msg);
++		cm_free_msg(msg);
++		cm_deref_id(cm_id_priv);
+ 		return;
+ 	}
+ 	cm_free_priv_msg(msg);
+@@ -4378,7 +4379,7 @@ static int cm_add_one(struct ib_device *ib_device)
+ 		return -ENOMEM;
+ 
+ 	kref_init(&cm_dev->kref);
+-	spin_lock_init(&cm_dev->mad_agent_lock);
++	rwlock_init(&cm_dev->mad_agent_lock);
+ 	cm_dev->ib_device = ib_device;
+ 	cm_dev->ack_delay = ib_device->attrs.local_ca_ack_delay;
+ 	cm_dev->going_down = 0;
+@@ -4494,9 +4495,9 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ 		 * The above ensures no call paths from the work are running,
+ 		 * the remaining paths all take the mad_agent_lock.
+ 		 */
+-		spin_lock(&cm_dev->mad_agent_lock);
++		write_lock(&cm_dev->mad_agent_lock);
+ 		port->mad_agent = NULL;
+-		spin_unlock(&cm_dev->mad_agent_lock);
++		write_unlock(&cm_dev->mad_agent_lock);
+ 		ib_unregister_mad_agent(mad_agent);
+ 		ib_port_unregister_client_groups(ib_device, i,
+ 						 cm_counter_groups);
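
The ib_cm change converts mad_agent_lock from a spinlock to an rwlock: the hot send
paths only read the mad_agent pointer, so letting them run concurrently removes
contention, while device removal still takes the lock exclusively to clear the
pointer. A userspace analogue with POSIX rwlocks; as in the kernel code, all use of
the pointer stays inside the read-side critical section:

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

static pthread_rwlock_t agent_lock = PTHREAD_RWLOCK_INITIALIZER;
static int dummy_agent;
static void *mad_agent = &dummy_agent;    /* stand-in for the shared pointer */

static int agent_send(void)
{
    int ret = -1;

    pthread_rwlock_rdlock(&agent_lock);    /* shared: many senders at once */
    if (mad_agent)
        ret = 0;    /* real code builds and posts the MAD here */
    pthread_rwlock_unlock(&agent_lock);
    return ret;
}

static void agent_remove(void)
{
    pthread_rwlock_wrlock(&agent_lock);    /* exclusive: removal path */
    mad_agent = NULL;
    pthread_rwlock_unlock(&agent_lock);
}

int main(void)
{
    printf("send before removal: %d\n", agent_send());
    agent_remove();
    printf("send after removal:  %d\n", agent_send());
    return 0;
}
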
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index ab31eefa916b3e..274cfbd5aaba76 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -5245,7 +5245,8 @@ static int cma_netevent_callback(struct notifier_block *self,
+ 			   neigh->ha, ETH_ALEN))
+ 			continue;
+ 		cma_id_get(current_id);
+-		queue_work(cma_wq, &current_id->id.net_work);
++		if (!queue_work(cma_wq, &current_id->id.net_work))
++			cma_id_put(current_id);
+ 	}
+ out:
+ 	spin_unlock_irqrestore(&id_table_lock, flags);
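
The cma fix above addresses a reference leak: the reference is taken on behalf of
the queued work, but queue_work() returns false when the item is already pending,
so no new execution will ever drop that reference. A compact userspace model of the
take-ref-then-enqueue-or-undo pattern, with try_enqueue() standing in for
queue_work():

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool work_pending;

/* stand-in for queue_work(): false if the item was already queued */
static bool try_enqueue(void)
{
    if (work_pending)
        return false;
    work_pending = true;
    return true;
}

static void kick(atomic_int *refs)
{
    atomic_fetch_add(refs, 1);         /* ref owned by the queued work */
    if (!try_enqueue())
        atomic_fetch_sub(refs, 1);     /* nothing new queued: undo the ref */
}

int main(void)
{
    atomic_int refs;

    atomic_init(&refs, 1);
    kick(&refs);    /* queues: ref held until the work runs */
    kick(&refs);    /* already pending: ref must not leak */
    printf("refs = %d\n", atomic_load(&refs));    /* 2, not 3 */
    return 0;
}
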
+diff --git a/drivers/infiniband/hw/bnxt_re/debugfs.c b/drivers/infiniband/hw/bnxt_re/debugfs.c
+index af91d16c3c77f5..e632f1661b9295 100644
+--- a/drivers/infiniband/hw/bnxt_re/debugfs.c
++++ b/drivers/infiniband/hw/bnxt_re/debugfs.c
+@@ -170,6 +170,9 @@ static int map_cc_config_offset_gen0_ext0(u32 offset, struct bnxt_qplib_cc_param
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TCP_CP:
+ 		*val =  ccparam->tcp_cp;
+ 		break;
++	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP:
++		*val = ccparam->inact_th;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -203,7 +206,7 @@ static ssize_t bnxt_re_cc_config_get(struct file *filp, char __user *buffer,
+ 	return simple_read_from_buffer(buffer, usr_buf_len, ppos, (u8 *)(buf), rc);
+ }
+ 
+-static void bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offset, u32 val)
++static int bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offset, u32 val)
+ {
+ 	u32 modify_mask;
+ 
+@@ -247,7 +250,9 @@ static void bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offs
+ 		ccparam->tcp_cp = val;
+ 		break;
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TX_QUEUE:
++		return -EOPNOTSUPP;
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_INACTIVITY_CP:
++		ccparam->inact_th = val;
+ 		break;
+ 	case CMDQ_MODIFY_ROCE_CC_MODIFY_MASK_TIME_PER_PHASE:
+ 		ccparam->time_pph = val;
+@@ -258,17 +263,20 @@ static void bnxt_re_fill_gen0_ext0(struct bnxt_qplib_cc_param *ccparam, u32 offs
+ 	}
+ 
+ 	ccparam->mask = modify_mask;
++	return 0;
+ }
+ 
+ static int bnxt_re_configure_cc(struct bnxt_re_dev *rdev, u32 gen_ext, u32 offset, u32 val)
+ {
+ 	struct bnxt_qplib_cc_param ccparam = { };
++	int rc;
+ 
+-	/* Supporting only Gen 0 now */
+-	if (gen_ext == CC_CONFIG_GEN0_EXT0)
+-		bnxt_re_fill_gen0_ext0(&ccparam, offset, val);
+-	else
+-		return -EINVAL;
++	if (gen_ext != CC_CONFIG_GEN0_EXT0)
++		return -EOPNOTSUPP;
++
++	rc = bnxt_re_fill_gen0_ext0(&ccparam, offset, val);
++	if (rc)
++		return rc;
+ 
+ 	bnxt_qplib_modify_cc(&rdev->qplib_res, &ccparam);
+ 	return 0;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index 4fc5b9d5fea87e..307c35888b3003 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -33,7 +33,6 @@
+ #include <linux/pci.h>
+ #include <rdma/ib_addr.h>
+ #include <rdma/ib_cache.h>
+-#include "hnae3.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_hw_v2.h"
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 160e8927d364e1..59352d1b62099f 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -43,7 +43,6 @@
+ #include <rdma/ib_umem.h>
+ #include <rdma/uverbs_ioctl.h>
+ 
+-#include "hnae3.h"
+ #include "hns_roce_common.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_cmd.h"
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 91a5665465ffba..bc7466830eaf9d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -34,6 +34,7 @@
+ #define _HNS_ROCE_HW_V2_H
+ 
+ #include <linux/bitops.h>
++#include "hnae3.h"
+ 
+ #define HNS_ROCE_V2_MAX_RC_INL_INN_SZ		32
+ #define HNS_ROCE_V2_MTT_ENTRY_SZ		64
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 8d0b63d4b50a6c..e7a497cc125cc3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -37,7 +37,6 @@
+ #include <rdma/ib_smi.h>
+ #include <rdma/ib_user_verbs.h>
+ #include <rdma/ib_cache.h>
+-#include "hnae3.h"
+ #include "hns_roce_common.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_hem.h"
+diff --git a/drivers/infiniband/hw/hns/hns_roce_restrack.c b/drivers/infiniband/hw/hns/hns_roce_restrack.c
+index 356d9881694973..f637b73b946e44 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_restrack.c
++++ b/drivers/infiniband/hw/hns/hns_roce_restrack.c
+@@ -4,7 +4,6 @@
+ #include <rdma/rdma_cm.h>
+ #include <rdma/restrack.h>
+ #include <uapi/rdma/rdma_netlink.h>
+-#include "hnae3.h"
+ #include "hns_roce_common.h"
+ #include "hns_roce_device.h"
+ #include "hns_roce_hw_v2.h"
+diff --git a/drivers/infiniband/hw/mlx5/qpc.c b/drivers/infiniband/hw/mlx5/qpc.c
+index d3dcc272200afa..146d03ae40bd9f 100644
+--- a/drivers/infiniband/hw/mlx5/qpc.c
++++ b/drivers/infiniband/hw/mlx5/qpc.c
+@@ -21,8 +21,10 @@ mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn)
+ 	spin_lock_irqsave(&table->lock, flags);
+ 
+ 	common = radix_tree_lookup(&table->tree, rsn);
+-	if (common)
++	if (common && !common->invalid)
+ 		refcount_inc(&common->refcount);
++	else
++		common = NULL;
+ 
+ 	spin_unlock_irqrestore(&table->lock, flags);
+ 
+@@ -178,6 +180,18 @@ static int create_resource_common(struct mlx5_ib_dev *dev,
+ 	return 0;
+ }
+ 
++static void modify_resource_common_state(struct mlx5_ib_dev *dev,
++					 struct mlx5_core_qp *qp,
++					 bool invalid)
++{
++	struct mlx5_qp_table *table = &dev->qp_table;
++	unsigned long flags;
++
++	spin_lock_irqsave(&table->lock, flags);
++	qp->common.invalid = invalid;
++	spin_unlock_irqrestore(&table->lock, flags);
++}
++
+ static void destroy_resource_common(struct mlx5_ib_dev *dev,
+ 				    struct mlx5_core_qp *qp)
+ {
+@@ -609,8 +623,20 @@ int mlx5_core_create_rq_tracked(struct mlx5_ib_dev *dev, u32 *in, int inlen,
+ int mlx5_core_destroy_rq_tracked(struct mlx5_ib_dev *dev,
+ 				 struct mlx5_core_qp *rq)
+ {
++	int ret;
++
++	/* RQ destruction may be retried if it fails, so mark the common
++	 * resource as invalid first and only destroy the resources once
++	 * FW destruction has completed successfully.
++	 */
++	modify_resource_common_state(dev, rq, true);
++	ret = destroy_rq_tracked(dev, rq->qpn, rq->uid);
++	if (ret) {
++		modify_resource_common_state(dev, rq, false);
++		return ret;
++	}
+ 	destroy_resource_common(dev, rq);
+-	return destroy_rq_tracked(dev, rq->qpn, rq->uid);
++	return 0;
+ }
+ 
+ static void destroy_sq_tracked(struct mlx5_ib_dev *dev, u32 sqn, u16 uid)
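
The mlx5 hunk makes RQ teardown safe to retry: the resource is first marked invalid
under the table lock (so mlx5_get_rsc() stops handing out references), the firmware
destroy is attempted, and the flag is rolled back if firmware fails and the RQ must
stay usable. A runnable userspace sketch of that mark/try/rollback sequence, where
fw_destroy() fakes a transient firmware failure:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct rsc {
    pthread_mutex_t lock;
    bool invalid;
};

/* stand-in for the firmware command: fails once, then succeeds */
static int fw_destroy(struct rsc *r)
{
    static int calls;

    (void)r;
    return ++calls == 1 ? -1 : 0;
}

static void set_invalid(struct rsc *r, bool invalid)
{
    pthread_mutex_lock(&r->lock);
    r->invalid = invalid;    /* the lookup path checks this flag */
    pthread_mutex_unlock(&r->lock);
}

static int destroy_rsc(struct rsc *r)
{
    set_invalid(r, true);    /* stop new references before destroying */
    if (fw_destroy(r)) {
        set_invalid(r, false);    /* failed: resource stays usable */
        return -1;
    }
    /* success: nobody can look us up, bookkeeping is safe to free */
    return 0;
}

int main(void)
{
    struct rsc r = { PTHREAD_MUTEX_INITIALIZER, false };

    printf("first try:  %d\n", destroy_rsc(&r));    /* -1, rolled back */
    printf("second try: %d\n", destroy_rsc(&r));    /* 0 */
    return 0;
}
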
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 7975fb0e2782f0..f2af3e0aef35b5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -811,7 +811,12 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ 	spin_unlock_irqrestore(&qp->state_lock, flags);
+ 	qp->qp_timeout_jiffies = 0;
+ 
+-	if (qp_type(qp) == IB_QPT_RC) {
++	/* timer_setup() initializes .function. If .function is NULL,
++	 * timer_setup() was never called and the timers are not
++	 * initialized; otherwise they are and can be deleted safely.
++	 */
++	if (qp_type(qp) == IB_QPT_RC && qp->retrans_timer.function &&
++		qp->rnr_nak_timer.function) {
+ 		timer_delete_sync(&qp->retrans_timer);
+ 		timer_delete_sync(&qp->rnr_nak_timer);
+ 	}
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index cd750f512deee2..bad585b45a31df 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -199,7 +199,6 @@ source "drivers/iommu/riscv/Kconfig"
+ config IRQ_REMAP
+ 	bool "Support for Interrupt Remapping"
+ 	depends on X86_64 && X86_IO_APIC && PCI_MSI && ACPI
+-	select DMAR_TABLE if INTEL_IOMMU
+ 	help
+ 	  Supports Interrupt remapping for IO-APIC and MSI devices.
+ 	  To use x2apic mode in the CPU's which support x2APIC enhancements or
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 48d910399a1ba6..be8d0f7db617d0 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2953,7 +2953,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+ 	smmu = master->smmu;
+ 
+ 	if (smmu_domain->smmu != smmu)
+-		return ret;
++		return -EINVAL;
+ 
+ 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ 		cdptr = arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID);
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 7632c80edea63a..396c4f6f5a5bd9 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -13,6 +13,7 @@
+ #include <linux/bitops.h>
+ #include <linux/io-pgtable.h>
+ #include <linux/kernel.h>
++#include <linux/device/faux.h>
+ #include <linux/sizes.h>
+ #include <linux/slab.h>
+ #include <linux/types.h>
+@@ -1433,15 +1434,17 @@ static int __init arm_lpae_do_selftests(void)
+ 	};
+ 
+ 	int i, j, k, pass = 0, fail = 0;
+-	struct device dev;
++	struct faux_device *dev;
+ 	struct io_pgtable_cfg cfg = {
+ 		.tlb = &dummy_tlb_ops,
+ 		.coherent_walk = true,
+-		.iommu_dev = &dev,
+ 	};
+ 
+-	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
+-	set_dev_node(&dev, NUMA_NO_NODE);
++	dev = faux_device_create("io-pgtable-test", NULL, 0);
++	if (!dev)
++		return -ENOMEM;
++
++	cfg.iommu_dev = &dev->dev;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(pgsize); ++i) {
+ 		for (j = 0; j < ARRAY_SIZE(address_size); ++j) {
+@@ -1461,6 +1464,8 @@ static int __init arm_lpae_do_selftests(void)
+ 	}
+ 
+ 	pr_info("selftest: completed with %d PASS %d FAIL\n", pass, fail);
++	faux_device_destroy(dev);
++
+ 	return fail ? -EFAULT : 0;
+ }
+ subsys_initcall(arm_lpae_do_selftests);
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 5bc2fc969494f5..e4628d96216102 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -2399,6 +2399,7 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
+ 	unsigned int pgsize_idx, pgsize_idx_next;
+ 	unsigned long pgsizes;
+ 	size_t offset, pgsize, pgsize_next;
++	size_t offset_end;
+ 	unsigned long addr_merge = paddr | iova;
+ 
+ 	/* Page sizes supported by the hardware and small enough for @size */
+@@ -2439,7 +2440,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
+ 	 * If size is big enough to accommodate the larger page, reduce
+ 	 * the number of smaller pages.
+ 	 */
+-	if (offset + pgsize_next <= size)
++	if (!check_add_overflow(offset, pgsize_next, &offset_end) &&
++	    offset_end <= size)
+ 		size = offset;
+ 
+ out_set_count:
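
The iommu_pgsize() fix guards the "offset + pgsize_next <= size" test against
wraparound: with large enough operands the sum overflows and the comparison passes
spuriously. check_add_overflow() wraps the compiler's __builtin_add_overflow(),
demonstrated standalone here:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool fits(size_t offset, size_t pgsize_next, size_t size)
{
    size_t end;

    /* reports the wrap instead of silently computing a small sum */
    if (__builtin_add_overflow(offset, pgsize_next, &end))
        return false;
    return end <= size;
}

int main(void)
{
    printf("%d\n", fits(16, 32, 64));              /* 1: genuinely fits */
    printf("%d\n", fits((size_t)-16, 32, 64));     /* 0: sum would wrap */
    return 0;
}
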
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index e424b279a8cd9b..90341b24a81155 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1090,7 +1090,8 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 	if (mmu->features->has_cache_leaf_nodes && ipmmu_is_root(mmu))
+ 		return 0;
+ 
+-	ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, dev_name(&pdev->dev));
++	ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, "%s",
++				     dev_name(&pdev->dev));
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
+index ed52db272f4d05..e8445cda7c6182 100644
+--- a/drivers/mailbox/Kconfig
++++ b/drivers/mailbox/Kconfig
+@@ -191,8 +191,8 @@ config POLARFIRE_SOC_MAILBOX
+ 
+ config MCHP_SBI_IPC_MBOX
+ 	tristate "Microchip Inter-processor Communication (IPC) SBI driver"
+-	depends on RISCV_SBI || COMPILE_TEST
+-	depends on ARCH_MICROCHIP
++	depends on RISCV_SBI
++	depends on ARCH_MICROCHIP || COMPILE_TEST
+ 	help
+ 	  Mailbox implementation for Microchip devices with an
+ 	  Inter-process communication (IPC) controller.
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 6ef8338add0d61..6778afc64a048c 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -226,7 +226,7 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
+ {
+ 	u32 *arg = data;
+ 	u32 val;
+-	int ret;
++	int ret, count;
+ 
+ 	switch (cp->type) {
+ 	case IMX_MU_TYPE_TX:
+@@ -240,11 +240,20 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
+ 	case IMX_MU_TYPE_TXDB_V2:
+ 		imx_mu_write(priv, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx),
+ 			     priv->dcfg->xCR[IMX_MU_GCR]);
+-		ret = readl_poll_timeout(priv->base + priv->dcfg->xCR[IMX_MU_GCR], val,
+-					 !(val & IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx)),
+-					 0, 1000);
+-		if (ret)
+-			dev_warn_ratelimited(priv->dev, "channel type: %d failure\n", cp->type);
++		ret = -ETIMEDOUT;
++		count = 0;
++		while (ret && (count < 10)) {
++			ret =
++			readl_poll_timeout(priv->base + priv->dcfg->xCR[IMX_MU_GCR], val,
++					   !(val & IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx)),
++					   0, 10000);
++
++			if (ret) {
++				dev_warn_ratelimited(priv->dev,
++						     "channel type: %d timeout, %d times, retry\n",
++						     cp->type, ++count);
++			}
++		}
+ 		break;
+ 	default:
+ 		dev_warn_ratelimited(priv->dev, "Send data on wrong channel type: %d\n", cp->type);
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index d186865b8dce64..ab4e8d1954a16e 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -92,18 +92,6 @@ struct gce_plat {
+ 	u32 gce_num;
+ };
+ 
+-static void cmdq_sw_ddr_enable(struct cmdq *cmdq, bool enable)
+-{
+-	WARN_ON(clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks));
+-
+-	if (enable)
+-		writel(GCE_DDR_EN | GCE_CTRL_BY_SW, cmdq->base + GCE_GCTL_VALUE);
+-	else
+-		writel(GCE_CTRL_BY_SW, cmdq->base + GCE_GCTL_VALUE);
+-
+-	clk_bulk_disable(cmdq->pdata->gce_num, cmdq->clocks);
+-}
+-
+ u8 cmdq_get_shift_pa(struct mbox_chan *chan)
+ {
+ 	struct cmdq *cmdq = container_of(chan->mbox, struct cmdq, mbox);
+@@ -112,6 +100,19 @@ u8 cmdq_get_shift_pa(struct mbox_chan *chan)
+ }
+ EXPORT_SYMBOL(cmdq_get_shift_pa);
+ 
++static void cmdq_gctl_value_toggle(struct cmdq *cmdq, bool ddr_enable)
++{
++	u32 val = cmdq->pdata->control_by_sw ? GCE_CTRL_BY_SW : 0;
++
++	if (!cmdq->pdata->control_by_sw && !cmdq->pdata->sw_ddr_en)
++		return;
++
++	if (cmdq->pdata->sw_ddr_en && ddr_enable)
++		val |= GCE_DDR_EN;
++
++	writel(val, cmdq->base + GCE_GCTL_VALUE);
++}
++
+ static int cmdq_thread_suspend(struct cmdq *cmdq, struct cmdq_thread *thread)
+ {
+ 	u32 status;
+@@ -140,16 +141,10 @@ static void cmdq_thread_resume(struct cmdq_thread *thread)
+ static void cmdq_init(struct cmdq *cmdq)
+ {
+ 	int i;
+-	u32 gctl_regval = 0;
+ 
+ 	WARN_ON(clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks));
+-	if (cmdq->pdata->control_by_sw)
+-		gctl_regval = GCE_CTRL_BY_SW;
+-	if (cmdq->pdata->sw_ddr_en)
+-		gctl_regval |= GCE_DDR_EN;
+ 
+-	if (gctl_regval)
+-		writel(gctl_regval, cmdq->base + GCE_GCTL_VALUE);
++	cmdq_gctl_value_toggle(cmdq, true);
+ 
+ 	writel(CMDQ_THR_ACTIVE_SLOT_CYCLES, cmdq->base + CMDQ_THR_SLOT_CYCLES);
+ 	for (i = 0; i <= CMDQ_MAX_EVENT; i++)
+@@ -315,14 +310,21 @@ static irqreturn_t cmdq_irq_handler(int irq, void *dev)
+ static int cmdq_runtime_resume(struct device *dev)
+ {
+ 	struct cmdq *cmdq = dev_get_drvdata(dev);
++	int ret;
+ 
+-	return clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks);
++	ret = clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks);
++	if (ret)
++		return ret;
++
++	cmdq_gctl_value_toggle(cmdq, true);
++	return 0;
+ }
+ 
+ static int cmdq_runtime_suspend(struct device *dev)
+ {
+ 	struct cmdq *cmdq = dev_get_drvdata(dev);
+ 
++	cmdq_gctl_value_toggle(cmdq, false);
+ 	clk_bulk_disable(cmdq->pdata->gce_num, cmdq->clocks);
+ 	return 0;
+ }
+@@ -347,9 +349,6 @@ static int cmdq_suspend(struct device *dev)
+ 	if (task_running)
+ 		dev_warn(dev, "exist running task(s) in suspend\n");
+ 
+-	if (cmdq->pdata->sw_ddr_en)
+-		cmdq_sw_ddr_enable(cmdq, false);
+-
+ 	return pm_runtime_force_suspend(dev);
+ }
+ 
+@@ -360,9 +359,6 @@ static int cmdq_resume(struct device *dev)
+ 	WARN_ON(pm_runtime_force_resume(dev));
+ 	cmdq->suspended = false;
+ 
+-	if (cmdq->pdata->sw_ddr_en)
+-		cmdq_sw_ddr_enable(cmdq, true);
+-
+ 	return 0;
+ }
+ 
+@@ -370,9 +366,6 @@ static void cmdq_remove(struct platform_device *pdev)
+ {
+ 	struct cmdq *cmdq = platform_get_drvdata(pdev);
+ 
+-	if (cmdq->pdata->sw_ddr_en)
+-		cmdq_sw_ddr_enable(cmdq, false);
+-
+ 	if (!IS_ENABLED(CONFIG_PM))
+ 		cmdq_runtime_suspend(&pdev->dev);
+ 
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 3637761f35853c..f3a3f2ef632261 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -141,6 +141,7 @@ struct mapped_device {
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	unsigned int nr_zones;
+ 	void *zone_revalidate_map;
++	struct task_struct *revalidate_map_task;
+ #endif
+ 
+ #ifdef CONFIG_IMA
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index b690905ab89ffb..347881f323d5bc 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -47,14 +47,15 @@ enum feature_flag_bits {
+ };
+ 
+ struct per_bio_data {
+-	bool bio_submitted;
++	bool bio_can_corrupt;
++	struct bvec_iter saved_iter;
+ };
+ 
+ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 			  struct dm_target *ti)
+ {
+-	int r;
+-	unsigned int argc;
++	int r = 0;
++	unsigned int argc = 0;
+ 	const char *arg_name;
+ 
+ 	static const struct dm_arg _args[] = {
+@@ -65,14 +66,13 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 		{0, PROBABILITY_BASE, "Invalid random corrupt argument"},
+ 	};
+ 
+-	/* No feature arguments supplied. */
+-	if (!as->argc)
+-		return 0;
+-
+-	r = dm_read_arg_group(_args, as, &argc, &ti->error);
+-	if (r)
++	if (as->argc && (r = dm_read_arg_group(_args, as, &argc, &ti->error)))
+ 		return r;
+ 
++	/* No feature arguments supplied. */
++	if (!argc)
++		goto error_all_io;
++
+ 	while (argc) {
+ 		arg_name = dm_shift_arg(as);
+ 		argc--;
+@@ -217,6 +217,7 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
+ 	if (!fc->corrupt_bio_byte && !test_bit(ERROR_READS, &fc->flags) &&
+ 	    !test_bit(DROP_WRITES, &fc->flags) && !test_bit(ERROR_WRITES, &fc->flags) &&
+ 	    !fc->random_read_corrupt && !fc->random_write_corrupt) {
++error_all_io:
+ 		set_bit(ERROR_WRITES, &fc->flags);
+ 		set_bit(ERROR_READS, &fc->flags);
+ 	}
+@@ -339,7 +340,8 @@ static void flakey_map_bio(struct dm_target *ti, struct bio *bio)
+ }
+ 
+ static void corrupt_bio_common(struct bio *bio, unsigned int corrupt_bio_byte,
+-			       unsigned char corrupt_bio_value)
++			       unsigned char corrupt_bio_value,
++			       struct bvec_iter start)
+ {
+ 	struct bvec_iter iter;
+ 	struct bio_vec bvec;
+@@ -348,7 +350,7 @@ static void corrupt_bio_common(struct bio *bio, unsigned int corrupt_bio_byte,
+ 	 * Overwrite the Nth byte of the bio's data, on whichever page
+ 	 * it falls.
+ 	 */
+-	bio_for_each_segment(bvec, bio, iter) {
++	__bio_for_each_segment(bvec, bio, iter, start) {
+ 		if (bio_iter_len(bio, iter) > corrupt_bio_byte) {
+ 			unsigned char *segment = bvec_kmap_local(&bvec);
+ 			segment[corrupt_bio_byte] = corrupt_bio_value;
+@@ -357,36 +359,31 @@ static void corrupt_bio_common(struct bio *bio, unsigned int corrupt_bio_byte,
+ 				"(rw=%c bi_opf=%u bi_sector=%llu size=%u)\n",
+ 				bio, corrupt_bio_value, corrupt_bio_byte,
+ 				(bio_data_dir(bio) == WRITE) ? 'w' : 'r', bio->bi_opf,
+-				(unsigned long long)bio->bi_iter.bi_sector,
+-				bio->bi_iter.bi_size);
++				(unsigned long long)start.bi_sector,
++				start.bi_size);
+ 			break;
+ 		}
+ 		corrupt_bio_byte -= bio_iter_len(bio, iter);
+ 	}
+ }
+ 
+-static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc)
++static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc,
++			     struct bvec_iter start)
+ {
+ 	unsigned int corrupt_bio_byte = fc->corrupt_bio_byte - 1;
+ 
+-	if (!bio_has_data(bio))
+-		return;
+-
+-	corrupt_bio_common(bio, corrupt_bio_byte, fc->corrupt_bio_value);
++	corrupt_bio_common(bio, corrupt_bio_byte, fc->corrupt_bio_value, start);
+ }
+ 
+-static void corrupt_bio_random(struct bio *bio)
++static void corrupt_bio_random(struct bio *bio, struct bvec_iter start)
+ {
+ 	unsigned int corrupt_byte;
+ 	unsigned char corrupt_value;
+ 
+-	if (!bio_has_data(bio))
+-		return;
+-
+-	corrupt_byte = get_random_u32() % bio->bi_iter.bi_size;
++	corrupt_byte = get_random_u32() % start.bi_size;
+ 	corrupt_value = get_random_u8();
+ 
+-	corrupt_bio_common(bio, corrupt_byte, corrupt_value);
++	corrupt_bio_common(bio, corrupt_byte, corrupt_value, start);
+ }
+ 
+ static void clone_free(struct bio *clone)
+@@ -481,7 +478,7 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 	unsigned int elapsed;
+ 	struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
+ 
+-	pb->bio_submitted = false;
++	pb->bio_can_corrupt = false;
+ 
+ 	if (op_is_zone_mgmt(bio_op(bio)))
+ 		goto map_bio;
+@@ -490,10 +487,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 	elapsed = (jiffies - fc->start_time) / HZ;
+ 	if (elapsed % (fc->up_interval + fc->down_interval) >= fc->up_interval) {
+ 		bool corrupt_fixed, corrupt_random;
+-		/*
+-		 * Flag this bio as submitted while down.
+-		 */
+-		pb->bio_submitted = true;
++
++		if (bio_has_data(bio)) {
++			pb->bio_can_corrupt = true;
++			pb->saved_iter = bio->bi_iter;
++		}
+ 
+ 		/*
+ 		 * Error reads if neither corrupt_bio_byte or drop_writes or error_writes are set.
+@@ -516,6 +514,8 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 			return DM_MAPIO_SUBMITTED;
+ 		}
+ 
++		if (!pb->bio_can_corrupt)
++			goto map_bio;
+ 		/*
+ 		 * Corrupt matching writes.
+ 		 */
+@@ -535,9 +535,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
+ 			struct bio *clone = clone_bio(ti, fc, bio);
+ 			if (clone) {
+ 				if (corrupt_fixed)
+-					corrupt_bio_data(clone, fc);
++					corrupt_bio_data(clone, fc,
++							 clone->bi_iter);
+ 				if (corrupt_random)
+-					corrupt_bio_random(clone);
++					corrupt_bio_random(clone,
++							   clone->bi_iter);
+ 				submit_bio(clone);
+ 				return DM_MAPIO_SUBMITTED;
+ 			}
+@@ -559,21 +561,21 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio,
+ 	if (op_is_zone_mgmt(bio_op(bio)))
+ 		return DM_ENDIO_DONE;
+ 
+-	if (!*error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {
++	if (!*error && pb->bio_can_corrupt && (bio_data_dir(bio) == READ)) {
+ 		if (fc->corrupt_bio_byte) {
+ 			if ((fc->corrupt_bio_rw == READ) &&
+ 			    all_corrupt_bio_flags_match(bio, fc)) {
+ 				/*
+ 				 * Corrupt successful matching READs while in down state.
+ 				 */
+-				corrupt_bio_data(bio, fc);
++				corrupt_bio_data(bio, fc, pb->saved_iter);
+ 			}
+ 		}
+ 		if (fc->random_read_corrupt) {
+ 			u64 rnd = get_random_u64();
+ 			u32 rem = do_div(rnd, PROBABILITY_BASE);
+ 			if (rem < fc->random_read_corrupt)
+-				corrupt_bio_random(bio);
++				corrupt_bio_random(bio, pb->saved_iter);
+ 		}
+ 		if (test_bit(ERROR_READS, &fc->flags)) {
+ 			/*
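
The dm-flakey rework fixes corruption offsets for drivers that advance bi_iter
during submission: the map path snapshots the iterator before the bio is sent down,
and the end_io path walks the snapshot rather than the (by then consumed) live
iterator. A toy model of why the snapshot is needed:

#include <stdio.h>

struct iter { unsigned long pos; unsigned int size; };

/* models the driver consuming the iterator as the bio completes */
static void consume(struct iter *it)
{
    it->pos += it->size;
    it->size = 0;
}

int main(void)
{
    struct iter live = { 0, 4096 };
    struct iter saved = live;    /* snapshot taken before submission */

    consume(&live);
    /* corrupting from 'live' now covers nothing; the saved iterator
     * still describes the data that was actually transferred */
    printf("live covers %u bytes, saved covers %u bytes\n",
           live.size, saved.size);
    return 0;
}
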
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 6b23e777e10e7f..e009bba52d4c0c 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1490,6 +1490,18 @@ bool dm_table_has_no_data_devices(struct dm_table *t)
+ 	return true;
+ }
+ 
++bool dm_table_is_wildcard(struct dm_table *t)
++{
++	for (unsigned int i = 0; i < t->num_targets; i++) {
++		struct dm_target *ti = dm_table_get_target(t, i);
++
++		if (!dm_target_is_wildcard(ti->type))
++			return false;
++	}
++
++	return true;
++}
++
+ static int device_not_zoned(struct dm_target *ti, struct dm_dev *dev,
+ 			    sector_t start, sector_t len, void *data)
+ {
+@@ -1830,10 +1842,24 @@ static bool dm_table_supports_atomic_writes(struct dm_table *t)
+ 	return true;
+ }
+ 
++bool dm_table_supports_size_change(struct dm_table *t, sector_t old_size,
++				   sector_t new_size)
++{
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && dm_has_zone_plugs(t->md) &&
++	    old_size != new_size) {
++		DMWARN("%s: device has zone write plug resources. "
++		       "Cannot change size",
++		       dm_device_name(t->md));
++		return false;
++	}
++	return true;
++}
++
+ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 			      struct queue_limits *limits)
+ {
+ 	int r;
++	struct queue_limits old_limits;
+ 
+ 	if (!dm_table_supports_nowait(t))
+ 		limits->features &= ~BLK_FEAT_NOWAIT;
+@@ -1860,28 +1886,30 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	if (dm_table_supports_flush(t))
+ 		limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
+ 
+-	if (dm_table_supports_dax(t, device_not_dax_capable)) {
++	if (dm_table_supports_dax(t, device_not_dax_capable))
+ 		limits->features |= BLK_FEAT_DAX;
+-		if (dm_table_supports_dax(t, device_not_dax_synchronous_capable))
+-			set_dax_synchronous(t->md->dax_dev);
+-	} else
++	else
+ 		limits->features &= ~BLK_FEAT_DAX;
+ 
+-	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
+-		dax_write_cache(t->md->dax_dev, true);
+-
+ 	/* For a zoned table, setup the zone related queue attributes. */
+-	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+-	    (limits->features & BLK_FEAT_ZONED)) {
+-		r = dm_set_zones_restrictions(t, q, limits);
+-		if (r)
+-			return r;
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
++		if (limits->features & BLK_FEAT_ZONED) {
++			r = dm_set_zones_restrictions(t, q, limits);
++			if (r)
++				return r;
++		} else if (dm_has_zone_plugs(t->md)) {
++			DMWARN("%s: device has zone write plug resources. "
++			       "Cannot switch to non-zoned table.",
++			       dm_device_name(t->md));
++			return -EINVAL;
++		}
+ 	}
+ 
+ 	if (dm_table_supports_atomic_writes(t))
+ 		limits->features |= BLK_FEAT_ATOMIC_WRITES;
+ 
+-	r = queue_limits_set(q, limits);
++	old_limits = queue_limits_start_update(q);
++	r = queue_limits_commit_update(q, limits);
+ 	if (r)
+ 		return r;
+ 
+@@ -1892,10 +1920,21 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+ 	    (limits->features & BLK_FEAT_ZONED)) {
+ 		r = dm_revalidate_zones(t, q);
+-		if (r)
++		if (r) {
++			queue_limits_set(q, &old_limits);
+ 			return r;
++		}
+ 	}
+ 
++	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED))
++		dm_finalize_zone_settings(t, limits);
++
++	if (dm_table_supports_dax(t, device_not_dax_synchronous_capable))
++		set_dax_synchronous(t->md->dax_dev);
++
++	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
++		dax_write_cache(t->md->dax_dev, true);
++
+ 	dm_update_crypto_profile(q, t);
+ 	return 0;
+ }
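
dm_table_set_restrictions() now commits the new queue limits transactionally: it
snapshots the old limits, commits the new ones, and restores the snapshot if zone
revalidation fails afterwards. The shape of that pattern, reduced to a runnable
toy where revalidate() fails on the first call:

#include <stdio.h>

struct limits { unsigned int chunk_sectors; };

/* stand-in for dm_revalidate_zones(): fails on the first call */
static int revalidate(void)
{
    static int calls;

    return ++calls == 1 ? -1 : 0;
}

static int apply_limits(struct limits *q, const struct limits *new_lim)
{
    struct limits old = *q;    /* snapshot for rollback */

    *q = *new_lim;             /* commit the update */
    if (revalidate()) {
        *q = old;              /* failed: restore the previous state */
        return -1;
    }
    return 0;
}

int main(void)
{
    struct limits q = { 256 };
    struct limits new_lim = { 512 };

    printf("first apply:  %d (chunk %u)\n",
           apply_limits(&q, &new_lim), q.chunk_sectors);    /* -1, 256 */
    printf("second apply: %d (chunk %u)\n",
           apply_limits(&q, &new_lim), q.chunk_sectors);    /* 0, 512 */
    return 0;
}
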
+diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
+index 20edd3fabbabfe..4af78111d0b4dd 100644
+--- a/drivers/md/dm-zone.c
++++ b/drivers/md/dm-zone.c
+@@ -56,24 +56,31 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+ {
+ 	struct mapped_device *md = disk->private_data;
+ 	struct dm_table *map;
+-	int srcu_idx, ret;
++	struct dm_table *zone_revalidate_map = md->zone_revalidate_map;
++	int srcu_idx, ret = -EIO;
++	bool put_table = false;
+ 
+-	if (!md->zone_revalidate_map) {
+-		/* Regular user context */
++	if (!zone_revalidate_map || md->revalidate_map_task != current) {
++		/*
++		 * Regular user context, or zone revalidation during
++		 * __bind() is in progress but this call comes from a
++		 * different process.
++		 */
+ 		if (dm_suspended_md(md))
+ 			return -EAGAIN;
+ 
+ 		map = dm_get_live_table(md, &srcu_idx);
+-		if (!map)
+-			return -EIO;
++		put_table = true;
+ 	} else {
+ 		/* Zone revalidation during __bind() */
+-		map = md->zone_revalidate_map;
++		map = zone_revalidate_map;
+ 	}
+ 
+-	ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb, data);
++	if (map)
++		ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb,
++					     data);
+ 
+-	if (!md->zone_revalidate_map)
++	if (put_table)
+ 		dm_put_live_table(md, srcu_idx);
+ 
+ 	return ret;
+@@ -153,33 +160,36 @@ int dm_revalidate_zones(struct dm_table *t, struct request_queue *q)
+ {
+ 	struct mapped_device *md = t->md;
+ 	struct gendisk *disk = md->disk;
++	unsigned int nr_zones = disk->nr_zones;
+ 	int ret;
+ 
+ 	if (!get_capacity(disk))
+ 		return 0;
+ 
+-	/* Revalidate only if something changed. */
+-	if (!disk->nr_zones || disk->nr_zones != md->nr_zones) {
+-		DMINFO("%s using %s zone append",
+-		       disk->disk_name,
+-		       queue_emulates_zone_append(q) ? "emulated" : "native");
+-		md->nr_zones = 0;
+-	}
+-
+-	if (md->nr_zones)
++	/*
++	 * Do not revalidate if zone write plug resources have already
++	 * been allocated.
++	 */
++	if (dm_has_zone_plugs(md))
+ 		return 0;
+ 
++	DMINFO("%s using %s zone append", disk->disk_name,
++	       queue_emulates_zone_append(q) ? "emulated" : "native");
++
+ 	/*
+ 	 * Our table is not live yet. So the call to dm_get_live_table()
+ 	 * in dm_blk_report_zones() will fail. Set a temporary pointer to
+ 	 * our table for dm_blk_report_zones() to use directly.
+ 	 */
+ 	md->zone_revalidate_map = t;
++	md->revalidate_map_task = current;
+ 	ret = blk_revalidate_disk_zones(disk);
++	md->revalidate_map_task = NULL;
+ 	md->zone_revalidate_map = NULL;
+ 
+ 	if (ret) {
+ 		DMERR("Revalidate zones failed %d", ret);
++		disk->nr_zones = nr_zones;
+ 		return ret;
+ 	}
+ 
+@@ -340,12 +350,8 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 	 * mapped device queue as needing zone append emulation.
+ 	 */
+ 	WARN_ON_ONCE(queue_is_mq(q));
+-	if (dm_table_supports_zone_append(t)) {
+-		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
+-	} else {
+-		set_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++	if (!dm_table_supports_zone_append(t))
+ 		lim->max_hw_zone_append_sectors = 0;
+-	}
+ 
+ 	/*
+ 	 * Determine the max open and max active zone limits for the mapped
+@@ -380,15 +386,28 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		lim->max_open_zones = 0;
+ 		lim->max_active_zones = 0;
+ 		lim->max_hw_zone_append_sectors = 0;
++		lim->max_zone_append_sectors = 0;
+ 		lim->zone_write_granularity = 0;
+ 		lim->chunk_sectors = 0;
+ 		lim->features &= ~BLK_FEAT_ZONED;
+-		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
+-		md->nr_zones = 0;
+-		disk->nr_zones = 0;
+ 		return 0;
+ 	}
+ 
++	if (get_capacity(disk) && dm_has_zone_plugs(t->md)) {
++		if (q->limits.chunk_sectors != lim->chunk_sectors) {
++			DMWARN("%s: device has zone write plug resources. "
++			       "Cannot change zone size",
++			       disk->disk_name);
++			return -EINVAL;
++		}
++		if (lim->max_hw_zone_append_sectors != 0 &&
++		    !dm_table_is_wildcard(t)) {
++			DMWARN("%s: device has zone write plug resources. "
++			       "New table must emulate zone append",
++			       disk->disk_name);
++			return -EINVAL;
++		}
++	}
+ 	/*
+ 	 * Warn once (when the capacity is not yet set) if the mapped device is
+ 	 * partially using zone resources of the target devices as that leads to
+@@ -408,6 +427,23 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 	return 0;
+ }
+ 
++void dm_finalize_zone_settings(struct dm_table *t, struct queue_limits *lim)
++{
++	struct mapped_device *md = t->md;
++
++	if (lim->features & BLK_FEAT_ZONED) {
++		if (dm_table_supports_zone_append(t))
++			clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++		else
++			set_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++	} else {
++		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
++		md->nr_zones = 0;
++		md->disk->nr_zones = 0;
++	}
++}
++
+ /*
+  * IO completion callback called from clone_endio().
+  */
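
dm_blk_report_zones() now records which task installed the temporary revalidation
map (md->revalidate_map_task), so only that task uses it; a concurrent report-zones
call from another process falls back to the live table instead of dereferencing a
map mid-swap. A userspace analogue of that owner-task gate:

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

static void *temp_map;
static pthread_t temp_owner;

static void *pick_map(void *live_map)
{
    /* only the thread that installed the temporary map may use it */
    if (temp_map && pthread_equal(temp_owner, pthread_self()))
        return temp_map;
    return live_map;    /* everyone else takes the normal path */
}

int main(void)
{
    static int live, temp;

    temp_map = &temp;
    temp_owner = pthread_self();    /* we are the revalidator */
    printf("owner gets temp map: %s\n",
           pick_map(&live) == &temp ? "yes" : "no");
    return 0;
}
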
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 5ab7574c0c76ab..240f6dab8ddafb 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -2421,21 +2421,35 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ 			       struct queue_limits *limits)
+ {
+ 	struct dm_table *old_map;
+-	sector_t size;
++	sector_t size, old_size;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&md->suspend_lock);
+ 
+ 	size = dm_table_get_size(t);
+ 
++	old_size = dm_get_size(md);
++
++	if (!dm_table_supports_size_change(t, old_size, size)) {
++		old_map = ERR_PTR(-EINVAL);
++		goto out;
++	}
++
++	set_capacity(md->disk, size);
++
++	ret = dm_table_set_restrictions(t, md->queue, limits);
++	if (ret) {
++		set_capacity(md->disk, old_size);
++		old_map = ERR_PTR(ret);
++		goto out;
++	}
++
+ 	/*
+ 	 * Wipe any geometry if the size of the table changed.
+ 	 */
+-	if (size != dm_get_size(md))
++	if (size != old_size)
+ 		memset(&md->geometry, 0, sizeof(md->geometry));
+ 
+-	set_capacity(md->disk, size);
+-
+ 	dm_table_event_callback(t, event_callback, md);
+ 
+ 	if (dm_table_request_based(t)) {
+@@ -2453,10 +2467,10 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ 		 * requests in the queue may refer to bio from the old bioset,
+ 		 * so you must walk through the queue to unprep.
+ 		 */
+-		if (!md->mempools) {
++		if (!md->mempools)
+ 			md->mempools = t->mempools;
+-			t->mempools = NULL;
+-		}
++		else
++			dm_free_md_mempools(t->mempools);
+ 	} else {
+ 		/*
+ 		 * The md may already have mempools that need changing.
+@@ -2465,14 +2479,8 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
+ 		 */
+ 		dm_free_md_mempools(md->mempools);
+ 		md->mempools = t->mempools;
+-		t->mempools = NULL;
+-	}
+-
+-	ret = dm_table_set_restrictions(t, md->queue, limits);
+-	if (ret) {
+-		old_map = ERR_PTR(ret);
+-		goto out;
+ 	}
++	t->mempools = NULL;
+ 
+ 	old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
+ 	rcu_assign_pointer(md->map, (void *)t);
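
The reordering above turns __bind() into a validate-then-commit sequence: the
size change is checked first, the capacity is set tentatively, and it is
rolled back if applying the queue limits fails. A compilable sketch of that
shape, with stubbed-out helpers standing in for the dm internals:

#include <errno.h>
#include <stdbool.h>

typedef unsigned long long sector_t;
struct md_sketch { sector_t size; };

/* Stubs; the real checks live in dm-table.c and dm-zone.c. */
static bool supports_size_change(sector_t o, sector_t n) { (void)o; (void)n; return true; }
static int set_restrictions(struct md_sketch *md) { (void)md; return 0; }

static int bind_table(struct md_sketch *md, sector_t new_size)
{
	sector_t old_size = md->size;

	if (!supports_size_change(old_size, new_size))
		return -EINVAL;

	md->size = new_size;			/* tentative commit */
	if (set_restrictions(md) != 0) {
		md->size = old_size;		/* roll back on failure */
		return -EINVAL;
	}
	return 0;
}
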
+diff --git a/drivers/md/dm.h b/drivers/md/dm.h
+index a0a8ff11981580..245f52b592154d 100644
+--- a/drivers/md/dm.h
++++ b/drivers/md/dm.h
+@@ -58,6 +58,7 @@ void dm_table_event_callback(struct dm_table *t,
+ 			     void (*fn)(void *), void *context);
+ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector);
+ bool dm_table_has_no_data_devices(struct dm_table *table);
++bool dm_table_is_wildcard(struct dm_table *t);
+ int dm_calculate_queue_limits(struct dm_table *table,
+ 			      struct queue_limits *limits);
+ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+@@ -72,6 +73,8 @@ struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
+ struct dm_target *dm_table_get_immutable_target(struct dm_table *t);
+ struct dm_target *dm_table_get_wildcard_target(struct dm_table *t);
+ bool dm_table_request_based(struct dm_table *t);
++bool dm_table_supports_size_change(struct dm_table *t, sector_t old_size,
++				   sector_t new_size);
+ 
+ void dm_lock_md_type(struct mapped_device *md);
+ void dm_unlock_md_type(struct mapped_device *md);
+@@ -102,6 +105,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
+ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
+ 		struct queue_limits *lim);
+ int dm_revalidate_zones(struct dm_table *t, struct request_queue *q);
++void dm_finalize_zone_settings(struct dm_table *t, struct queue_limits *lim);
+ void dm_zone_endio(struct dm_io *io, struct bio *clone);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
+@@ -110,12 +114,14 @@ bool dm_is_zone_write(struct mapped_device *md, struct bio *bio);
+ int dm_zone_get_reset_bitmap(struct mapped_device *md, struct dm_table *t,
+ 			     sector_t sector, unsigned int nr_zones,
+ 			     unsigned long *need_reset);
++#define dm_has_zone_plugs(md) ((md)->disk->zone_wplugs_hash != NULL)
+ #else
+ #define dm_blk_report_zones	NULL
+ static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio)
+ {
+ 	return false;
+ }
++#define dm_has_zone_plugs(md) false
+ #endif
+ 
+ /*
+diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
+index c7efd8aab675cc..b8b3a90697012c 100644
+--- a/drivers/md/raid1-10.c
++++ b/drivers/md/raid1-10.c
+@@ -293,3 +293,13 @@ static inline bool raid1_should_read_first(struct mddev *mddev,
+ 
+ 	return false;
+ }
++
++/*
++ * bio with REQ_RAHEAD or REQ_NOWAIT can fail at anytime, before such IO is
++ * submitted to the underlying disks, hence don't record badblocks or retry
++ * in this case.
++ */
++static inline bool raid1_should_handle_error(struct bio *bio)
++{
++	return !(bio->bi_opf & (REQ_RAHEAD | REQ_NOWAIT));
++}
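
The helper above is small but carries the whole fix: bios marked REQ_RAHEAD or
REQ_NOWAIT are allowed to fail before ever reaching the disks, so their errors
must not trigger badblock recording or retries. A standalone illustration (the
REQ_* bit positions below are placeholders, not the kernel's blk_types.h
encoding):

#include <stdbool.h>
#include <stdio.h>

#define REQ_RAHEAD (1u << 0)
#define REQ_NOWAIT (1u << 1)

struct bio { unsigned int bi_opf; };

static bool raid1_should_handle_error(const struct bio *bio)
{
	return !(bio->bi_opf & (REQ_RAHEAD | REQ_NOWAIT));
}

int main(void)
{
	struct bio ra = { .bi_opf = REQ_RAHEAD };
	struct bio rw = { .bi_opf = 0 };

	/* Read-ahead and nowait failures are expected; skip recovery. */
	printf("readahead: handle=%d\n", raid1_should_handle_error(&ra)); /* 0 */
	printf("regular:   handle=%d\n", raid1_should_handle_error(&rw)); /* 1 */
	return 0;
}
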
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index de9bccbe7337b5..1fe645e6300121 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -373,14 +373,16 @@ static void raid1_end_read_request(struct bio *bio)
+ 	 */
+ 	update_head_pos(r1_bio->read_disk, r1_bio);
+ 
+-	if (uptodate)
++	if (uptodate) {
+ 		set_bit(R1BIO_Uptodate, &r1_bio->state);
+-	else if (test_bit(FailFast, &rdev->flags) &&
+-		 test_bit(R1BIO_FailFast, &r1_bio->state))
++	} else if (test_bit(FailFast, &rdev->flags) &&
++		 test_bit(R1BIO_FailFast, &r1_bio->state)) {
+ 		/* This was a fail-fast read so we definitely
+ 		 * want to retry */
+ 		;
+-	else {
++	} else if (!raid1_should_handle_error(bio)) {
++		uptodate = 1;
++	} else {
+ 		/* If all other devices have failed, we want to return
+ 		 * the error upwards rather than fail the last device.
+ 		 * Here we redefine "uptodate" to mean "Don't want to retry"
+@@ -451,16 +453,15 @@ static void raid1_end_write_request(struct bio *bio)
+ 	struct bio *to_put = NULL;
+ 	int mirror = find_bio_disk(r1_bio, bio);
+ 	struct md_rdev *rdev = conf->mirrors[mirror].rdev;
+-	bool discard_error;
+ 	sector_t lo = r1_bio->sector;
+ 	sector_t hi = r1_bio->sector + r1_bio->sectors;
+-
+-	discard_error = bio->bi_status && bio_op(bio) == REQ_OP_DISCARD;
++	bool ignore_error = !raid1_should_handle_error(bio) ||
++		(bio->bi_status && bio_op(bio) == REQ_OP_DISCARD);
+ 
+ 	/*
+ 	 * 'one mirror IO has finished' event handler:
+ 	 */
+-	if (bio->bi_status && !discard_error) {
++	if (bio->bi_status && !ignore_error) {
+ 		set_bit(WriteErrorSeen,	&rdev->flags);
+ 		if (!test_and_set_bit(WantReplacement, &rdev->flags))
+ 			set_bit(MD_RECOVERY_NEEDED, &
+@@ -511,7 +512,7 @@ static void raid1_end_write_request(struct bio *bio)
+ 
+ 		/* Maybe we can clear some bad blocks. */
+ 		if (rdev_has_badblock(rdev, r1_bio->sector, r1_bio->sectors) &&
+-		    !discard_error) {
++		    !ignore_error) {
+ 			r1_bio->bios[mirror] = IO_MADE_GOOD;
+ 			set_bit(R1BIO_MadeGood, &r1_bio->state);
+ 		}
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index ba32bac975b8d6..54320a887ecc50 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -399,6 +399,8 @@ static void raid10_end_read_request(struct bio *bio)
+ 		 * wait for the 'master' bio.
+ 		 */
+ 		set_bit(R10BIO_Uptodate, &r10_bio->state);
++	} else if (!raid1_should_handle_error(bio)) {
++		uptodate = 1;
+ 	} else {
+ 		/* If all other devices that store this block have
+ 		 * failed, we want to return the error upwards rather
+@@ -456,9 +458,8 @@ static void raid10_end_write_request(struct bio *bio)
+ 	int slot, repl;
+ 	struct md_rdev *rdev = NULL;
+ 	struct bio *to_put = NULL;
+-	bool discard_error;
+-
+-	discard_error = bio->bi_status && bio_op(bio) == REQ_OP_DISCARD;
++	bool ignore_error = !raid1_should_handle_error(bio) ||
++		(bio->bi_status && bio_op(bio) == REQ_OP_DISCARD);
+ 
+ 	dev = find_bio_disk(conf, r10_bio, bio, &slot, &repl);
+ 
+@@ -472,7 +473,7 @@ static void raid10_end_write_request(struct bio *bio)
+ 	/*
+ 	 * this branch is our 'one mirror IO has finished' event handler:
+ 	 */
+-	if (bio->bi_status && !discard_error) {
++	if (bio->bi_status && !ignore_error) {
+ 		if (repl)
+ 			/* Never record new bad blocks to replacement,
+ 			 * just fail it.
+@@ -527,7 +528,7 @@ static void raid10_end_write_request(struct bio *bio)
+ 		/* Maybe we can clear some bad blocks. */
+ 		if (rdev_has_badblock(rdev, r10_bio->devs[slot].addr,
+ 				      r10_bio->sectors) &&
+-		    !discard_error) {
++		    !ignore_error) {
+ 			bio_put(bio);
+ 			if (repl)
+ 				r10_bio->devs[slot].repl_bio = IO_MADE_GOOD;
+diff --git a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
+index 3d2913de9a86c6..7af6765532e332 100644
+--- a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
++++ b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c
+@@ -114,7 +114,7 @@ struct hdmirx_stream {
+ 	spinlock_t vbq_lock; /* to lock video buffer queue */
+ 	bool stopping;
+ 	wait_queue_head_t wq_stopped;
+-	u32 frame_idx;
++	u32 sequence;
+ 	u32 line_flag_int_cnt;
+ 	u32 irq_stat;
+ };
+@@ -1540,7 +1540,7 @@ static int hdmirx_start_streaming(struct vb2_queue *queue, unsigned int count)
+ 	int line_flag;
+ 
+ 	mutex_lock(&hdmirx_dev->stream_lock);
+-	stream->frame_idx = 0;
++	stream->sequence = 0;
+ 	stream->line_flag_int_cnt = 0;
+ 	stream->curr_buf = NULL;
+ 	stream->next_buf = NULL;
+@@ -1948,7 +1948,7 @@ static void dma_idle_int_handler(struct snps_hdmirx_dev *hdmirx_dev,
+ 
+ 			if (vb_done) {
+ 				vb_done->vb2_buf.timestamp = ktime_get_ns();
+-				vb_done->sequence = stream->frame_idx;
++				vb_done->sequence = stream->sequence;
+ 
+ 				if (bt->interlaced)
+ 					vb_done->field = V4L2_FIELD_INTERLACED_TB;
+@@ -1956,10 +1956,6 @@ static void dma_idle_int_handler(struct snps_hdmirx_dev *hdmirx_dev,
+ 					vb_done->field = V4L2_FIELD_NONE;
+ 
+ 				hdmirx_vb_done(stream, vb_done);
+-				stream->frame_idx++;
+-				if (stream->frame_idx == 30)
+-					v4l2_dbg(1, debug, v4l2_dev,
+-						 "rcv frames\n");
+ 			}
+ 
+ 			stream->curr_buf = NULL;
+@@ -1971,6 +1967,10 @@ static void dma_idle_int_handler(struct snps_hdmirx_dev *hdmirx_dev,
+ 			v4l2_dbg(3, debug, v4l2_dev,
+ 				 "%s: next_buf NULL, skip vb_done\n", __func__);
+ 		}
++
++		stream->sequence++;
++		if (stream->sequence == 30)
++			v4l2_dbg(1, debug, v4l2_dev, "rcv frames\n");
+ 	}
+ 
+ DMA_IDLE_OUT:
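
Moving the increment out of the vb_done branch changes the meaning of the
counter: it now advances once per frame period whether or not a buffer was
delivered, so userspace can detect dropped frames as gaps in the vb2 sequence
numbers. A toy model of the effect:

#include <stdio.h>

int main(void)
{
	unsigned int sequence = 0;
	int have_buf[6] = { 1, 1, 0, 1, 0, 1 };	/* 0 = no buffer queued */

	for (int i = 0; i < 6; i++) {
		if (have_buf[i])
			printf("delivered, sequence=%u\n", sequence);
		sequence++;			/* always advances */
	}
	return 0;	/* delivered sequences: 0, 1, 3, 5 — gaps show drops */
}
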
+diff --git a/drivers/media/platform/verisilicon/hantro_postproc.c b/drivers/media/platform/verisilicon/hantro_postproc.c
+index c435a393e0cb70..9f559a13d409bb 100644
+--- a/drivers/media/platform/verisilicon/hantro_postproc.c
++++ b/drivers/media/platform/verisilicon/hantro_postproc.c
+@@ -250,8 +250,10 @@ int hantro_postproc_init(struct hantro_ctx *ctx)
+ 
+ 	for (i = 0; i < num_buffers; i++) {
+ 		ret = hantro_postproc_alloc(ctx, i);
+-		if (ret)
++		if (ret) {
++			hantro_postproc_free(ctx);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
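
The fix above is the usual partial-allocation unwind: when buffer i fails to
allocate, everything allocated before it is released (the driver does this in
bulk via hantro_postproc_free()). The generic shape of the pattern, in
standalone C with illustrative names:

#include <stdlib.h>

static int alloc_all(void **bufs, int n, size_t size)
{
	for (int i = 0; i < n; i++) {
		bufs[i] = malloc(size);
		if (!bufs[i]) {
			while (i--)		/* unwind partial progress */
				free(bufs[i]);
			return -1;
		}
	}
	return 0;
}
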
+diff --git a/drivers/mfd/exynos-lpass.c b/drivers/mfd/exynos-lpass.c
+index 6a585173230b13..44797001a4322b 100644
+--- a/drivers/mfd/exynos-lpass.c
++++ b/drivers/mfd/exynos-lpass.c
+@@ -104,11 +104,22 @@ static const struct regmap_config exynos_lpass_reg_conf = {
+ 	.fast_io	= true,
+ };
+ 
++static void exynos_lpass_disable_lpass(void *data)
++{
++	struct platform_device *pdev = data;
++	struct exynos_lpass *lpass = platform_get_drvdata(pdev);
++
++	pm_runtime_disable(&pdev->dev);
++	if (!pm_runtime_status_suspended(&pdev->dev))
++		exynos_lpass_disable(lpass);
++}
++
+ static int exynos_lpass_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct exynos_lpass *lpass;
+ 	void __iomem *base_top;
++	int ret;
+ 
+ 	lpass = devm_kzalloc(dev, sizeof(*lpass), GFP_KERNEL);
+ 	if (!lpass)
+@@ -122,8 +133,8 @@ static int exynos_lpass_probe(struct platform_device *pdev)
+ 	if (IS_ERR(lpass->sfr0_clk))
+ 		return PTR_ERR(lpass->sfr0_clk);
+ 
+-	lpass->top = regmap_init_mmio(dev, base_top,
+-					&exynos_lpass_reg_conf);
++	lpass->top = devm_regmap_init_mmio(dev, base_top,
++					   &exynos_lpass_reg_conf);
+ 	if (IS_ERR(lpass->top)) {
+ 		dev_err(dev, "LPASS top regmap initialization failed\n");
+ 		return PTR_ERR(lpass->top);
+@@ -134,18 +145,11 @@ static int exynos_lpass_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	exynos_lpass_enable(lpass);
+ 
+-	return devm_of_platform_populate(dev);
+-}
+-
+-static void exynos_lpass_remove(struct platform_device *pdev)
+-{
+-	struct exynos_lpass *lpass = platform_get_drvdata(pdev);
++	ret = devm_add_action_or_reset(dev, exynos_lpass_disable_lpass, pdev);
++	if (ret)
++		return ret;
+ 
+-	exynos_lpass_disable(lpass);
+-	pm_runtime_disable(&pdev->dev);
+-	if (!pm_runtime_status_suspended(&pdev->dev))
+-		exynos_lpass_disable(lpass);
+-	regmap_exit(lpass->top);
++	return devm_of_platform_populate(dev);
+ }
+ 
+ static int __maybe_unused exynos_lpass_suspend(struct device *dev)
+@@ -185,7 +189,6 @@ static struct platform_driver exynos_lpass_driver = {
+ 		.of_match_table	= exynos_lpass_of_match,
+ 	},
+ 	.probe	= exynos_lpass_probe,
+-	.remove	= exynos_lpass_remove,
+ };
+ module_platform_driver(exynos_lpass_driver);
+ 
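
The remove() callback disappears because teardown is now registered as a
devres action during probe; if devm_add_action_or_reset() itself fails, the
action runs immediately, so the probe path needs no manual unwind. A
kernel-style sketch of the pattern — the devm calls are real API, the foo_*
names are illustrative and this fragment is not standalone:

static void foo_disable_action(void *data)
{
	struct foo *foo = data;

	foo_disable(foo);	/* runs on unbind and on probe failure */
}

static int foo_probe(struct platform_device *pdev)
{
	struct foo *foo = devm_kzalloc(&pdev->dev, sizeof(*foo), GFP_KERNEL);

	if (!foo)
		return -ENOMEM;
	foo_enable(foo);
	return devm_add_action_or_reset(&pdev->dev, foo_disable_action, foo);
}
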
+diff --git a/drivers/mfd/stmpe-spi.c b/drivers/mfd/stmpe-spi.c
+index 792236f56399af..b9cc85ea2c4019 100644
+--- a/drivers/mfd/stmpe-spi.c
++++ b/drivers/mfd/stmpe-spi.c
+@@ -129,7 +129,7 @@ static const struct spi_device_id stmpe_spi_id[] = {
+ 	{ "stmpe2403", STMPE2403 },
+ 	{ }
+ };
+-MODULE_DEVICE_TABLE(spi, stmpe_id);
++MODULE_DEVICE_TABLE(spi, stmpe_spi_id);
+ 
+ static struct spi_driver stmpe_spi_driver = {
+ 	.driver = {
+diff --git a/drivers/misc/lis3lv02d/Kconfig b/drivers/misc/lis3lv02d/Kconfig
+index bb2fec4b5880bf..56005243a230d5 100644
+--- a/drivers/misc/lis3lv02d/Kconfig
++++ b/drivers/misc/lis3lv02d/Kconfig
+@@ -10,7 +10,7 @@ config SENSORS_LIS3_SPI
+ 	help
+ 	  This driver provides support for the LIS3LV02Dx accelerometer connected
+ 	  via SPI. The accelerometer data is readable via
+-	  /sys/devices/platform/lis3lv02d.
++	  /sys/devices/faux/lis3lv02d.
+ 
+ 	  This driver also provides an absolute input class device, allowing
+ 	  the laptop to act as a pinball machine-esque joystick.
+@@ -26,7 +26,7 @@ config SENSORS_LIS3_I2C
+ 	help
+ 	  This driver provides support for the LIS3LV02Dx accelerometer connected
+ 	  via I2C. The accelerometer data is readable via
+-	  /sys/devices/platform/lis3lv02d.
++	  /sys/devices/faux/lis3lv02d.
+ 
+ 	  This driver also provides an absolute input class device, allowing
+ 	  the device to act as a pinball machine-esque joystick.
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index da26a080916c54..267d0de5fade83 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -324,7 +324,7 @@ int vsc_tp_rom_xfer(struct vsc_tp *tp, const void *obuf, void *ibuf, size_t len)
+ 	guard(mutex)(&tp->mutex);
+ 
+ 	/* rom xfer is big endian */
+-	cpu_to_be32_array((u32 *)tp->tx_buf, obuf, words);
++	cpu_to_be32_array((__be32 *)tp->tx_buf, obuf, words);
+ 
+ 	ret = read_poll_timeout(gpiod_get_value_cansleep, ret,
+ 				!ret, VSC_TP_ROM_XFER_POLL_DELAY_US,
+@@ -340,7 +340,7 @@ int vsc_tp_rom_xfer(struct vsc_tp *tp, const void *obuf, void *ibuf, size_t len)
+ 		return ret;
+ 
+ 	if (ibuf)
+-		be32_to_cpu_array(ibuf, (u32 *)tp->rx_buf, words);
++		be32_to_cpu_array(ibuf, (__be32 *)tp->rx_buf, words);
+ 
+ 	return ret;
+ }
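
The two casts above are annotation-only: no bytes move differently, but sparse
can now check that the buffers hold big-endian data. In userspace terms the
conversion looks like this, with htonl() standing in for cpu_to_be32() (whose
return type, __be32, is what the casts communicate):

#include <arpa/inet.h>
#include <stddef.h>
#include <stdint.h>

static void to_be32_array(uint32_t *dst, const uint32_t *src, size_t words)
{
	for (size_t i = 0; i < words; i++)
		dst[i] = htonl(src[i]);
}
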
+diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
+index abe79f6fd2a79b..b64944367ac533 100644
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -227,6 +227,7 @@ static int drv_cp_harray_to_user(void __user *user_buf_uva,
+ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ 				  unsigned long uva)
+ {
++	struct page *page;
+ 	int retval;
+ 
+ 	if (context->notify_page) {
+@@ -243,13 +244,11 @@ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ 	/*
+ 	 * Lock physical page backing a given user VA.
+ 	 */
+-	retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &context->notify_page);
+-	if (retval != 1) {
+-		context->notify_page = NULL;
++	retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &page);
++	if (retval != 1)
+ 		return VMCI_ERROR_GENERIC;
+-	}
+-	if (context->notify_page == NULL)
+-		return VMCI_ERROR_UNAVAILABLE;
++
++	context->notify_page = page;
+ 
+ 	/*
+ 	 * Map the locked page and set up notify pointer.
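
The rewrite follows the pin-into-a-local pattern: the page pointer is only
published to the shared context after get_user_pages_fast() succeeds, so
context->notify_page never holds a transient NULL. A kernel-style sketch —
the struct name and error code are illustrative (the driver returns
VMCI_ERROR_GENERIC):

struct vmci_ctx_sketch { struct page *notify_page; };

static int setup_notify(struct vmci_ctx_sketch *ctx, unsigned long uva)
{
	struct page *page;

	if (get_user_pages_fast(uva, 1, FOLL_WRITE, &page) != 1)
		return -EFAULT;		/* ctx->notify_page untouched */

	ctx->notify_page = page;	/* single committing store */
	return 0;
}
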
+diff --git a/drivers/mtd/nand/ecc-mxic.c b/drivers/mtd/nand/ecc-mxic.c
+index 56b56f726b9983..1bf9a5a64b87a4 100644
+--- a/drivers/mtd/nand/ecc-mxic.c
++++ b/drivers/mtd/nand/ecc-mxic.c
+@@ -614,7 +614,7 @@ static int mxic_ecc_finish_io_req_external(struct nand_device *nand,
+ {
+ 	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+ 	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+-	int nents, step, ret;
++	int nents, step, ret = 0;
+ 
+ 	if (req->mode == MTD_OPS_RAW)
+ 		return 0;
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 8ea183da8d5398..17ae4b819a5977 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -453,13 +453,14 @@ static struct net_device *bond_ipsec_dev(struct xfrm_state *xs)
+ 
+ /**
+  * bond_ipsec_add_sa - program device with a security association
++ * @bond_dev: pointer to the bond net device
+  * @xs: pointer to transformer state struct
+  * @extack: extack point to fill failure reason
+  **/
+-static int bond_ipsec_add_sa(struct xfrm_state *xs,
++static int bond_ipsec_add_sa(struct net_device *bond_dev,
++			     struct xfrm_state *xs,
+ 			     struct netlink_ext_ack *extack)
+ {
+-	struct net_device *bond_dev = xs->xso.dev;
+ 	struct net_device *real_dev;
+ 	netdevice_tracker tracker;
+ 	struct bond_ipsec *ipsec;
+@@ -495,9 +496,9 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
+ 		goto out;
+ 	}
+ 
+-	xs->xso.real_dev = real_dev;
+-	err = real_dev->xfrmdev_ops->xdo_dev_state_add(xs, extack);
++	err = real_dev->xfrmdev_ops->xdo_dev_state_add(real_dev, xs, extack);
+ 	if (!err) {
++		xs->xso.real_dev = real_dev;
+ 		ipsec->xs = xs;
+ 		INIT_LIST_HEAD(&ipsec->list);
+ 		mutex_lock(&bond->ipsec_lock);
+@@ -539,11 +540,25 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
+ 		if (ipsec->xs->xso.real_dev == real_dev)
+ 			continue;
+ 
+-		ipsec->xs->xso.real_dev = real_dev;
+-		if (real_dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
++		if (real_dev->xfrmdev_ops->xdo_dev_state_add(real_dev,
++							     ipsec->xs, NULL)) {
+ 			slave_warn(bond_dev, real_dev, "%s: failed to add SA\n", __func__);
+-			ipsec->xs->xso.real_dev = NULL;
++			continue;
+ 		}
++
++		spin_lock_bh(&ipsec->xs->lock);
++		/* xs might have been killed by the user during the migration
++		 * to the new dev, but bond_ipsec_del_sa() should have done
++		 * nothing, as xso.real_dev is NULL.
++		 * Delete it from the device we just added it to. The pending
++		 * bond_ipsec_free_sa() call will do the rest of the cleanup.
++		 */
++		if (ipsec->xs->km.state == XFRM_STATE_DEAD &&
++		    real_dev->xfrmdev_ops->xdo_dev_state_delete)
++			real_dev->xfrmdev_ops->xdo_dev_state_delete(real_dev,
++								    ipsec->xs);
++		ipsec->xs->xso.real_dev = real_dev;
++		spin_unlock_bh(&ipsec->xs->lock);
+ 	}
+ out:
+ 	mutex_unlock(&bond->ipsec_lock);
+@@ -551,54 +566,27 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
+ 
+ /**
+  * bond_ipsec_del_sa - clear out this specific SA
++ * @bond_dev: pointer to the bond net device
+  * @xs: pointer to transformer state struct
+  **/
+-static void bond_ipsec_del_sa(struct xfrm_state *xs)
++static void bond_ipsec_del_sa(struct net_device *bond_dev,
++			      struct xfrm_state *xs)
+ {
+-	struct net_device *bond_dev = xs->xso.dev;
+ 	struct net_device *real_dev;
+-	netdevice_tracker tracker;
+-	struct bond_ipsec *ipsec;
+-	struct bonding *bond;
+-	struct slave *slave;
+ 
+-	if (!bond_dev)
++	if (!bond_dev || !xs->xso.real_dev)
+ 		return;
+ 
+-	rcu_read_lock();
+-	bond = netdev_priv(bond_dev);
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	real_dev = slave ? slave->dev : NULL;
+-	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
+-	rcu_read_unlock();
+-
+-	if (!slave)
+-		goto out;
+-
+-	if (!xs->xso.real_dev)
+-		goto out;
+-
+-	WARN_ON(xs->xso.real_dev != real_dev);
++	real_dev = xs->xso.real_dev;
+ 
+ 	if (!real_dev->xfrmdev_ops ||
+ 	    !real_dev->xfrmdev_ops->xdo_dev_state_delete ||
+ 	    netif_is_bond_master(real_dev)) {
+ 		slave_warn(bond_dev, real_dev, "%s: no slave xdo_dev_state_delete\n", __func__);
+-		goto out;
++		return;
+ 	}
+ 
+-	real_dev->xfrmdev_ops->xdo_dev_state_delete(xs);
+-out:
+-	netdev_put(real_dev, &tracker);
+-	mutex_lock(&bond->ipsec_lock);
+-	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
+-		if (ipsec->xs == xs) {
+-			list_del(&ipsec->list);
+-			kfree(ipsec);
+-			break;
+-		}
+-	}
+-	mutex_unlock(&bond->ipsec_lock);
++	real_dev->xfrmdev_ops->xdo_dev_state_delete(real_dev, xs);
+ }
+ 
+ static void bond_ipsec_del_sa_all(struct bonding *bond)
+@@ -624,46 +612,55 @@ static void bond_ipsec_del_sa_all(struct bonding *bond)
+ 			slave_warn(bond_dev, real_dev,
+ 				   "%s: no slave xdo_dev_state_delete\n",
+ 				   __func__);
+-		} else {
+-			real_dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
+-			if (real_dev->xfrmdev_ops->xdo_dev_state_free)
+-				real_dev->xfrmdev_ops->xdo_dev_state_free(ipsec->xs);
++			continue;
+ 		}
++
++		spin_lock_bh(&ipsec->xs->lock);
++		ipsec->xs->xso.real_dev = NULL;
++		/* Don't double delete states killed by the user. */
++		if (ipsec->xs->km.state != XFRM_STATE_DEAD)
++			real_dev->xfrmdev_ops->xdo_dev_state_delete(real_dev,
++								    ipsec->xs);
++		spin_unlock_bh(&ipsec->xs->lock);
++
++		if (real_dev->xfrmdev_ops->xdo_dev_state_free)
++			real_dev->xfrmdev_ops->xdo_dev_state_free(real_dev,
++								  ipsec->xs);
+ 	}
+ 	mutex_unlock(&bond->ipsec_lock);
+ }
+ 
+-static void bond_ipsec_free_sa(struct xfrm_state *xs)
++static void bond_ipsec_free_sa(struct net_device *bond_dev,
++			       struct xfrm_state *xs)
+ {
+-	struct net_device *bond_dev = xs->xso.dev;
+ 	struct net_device *real_dev;
+-	netdevice_tracker tracker;
++	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+-	struct slave *slave;
+ 
+ 	if (!bond_dev)
+ 		return;
+ 
+-	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+-	slave = rcu_dereference(bond->curr_active_slave);
+-	real_dev = slave ? slave->dev : NULL;
+-	netdev_hold(real_dev, &tracker, GFP_ATOMIC);
+-	rcu_read_unlock();
+-
+-	if (!slave)
+-		goto out;
+ 
++	mutex_lock(&bond->ipsec_lock);
+ 	if (!xs->xso.real_dev)
+ 		goto out;
+ 
+-	WARN_ON(xs->xso.real_dev != real_dev);
++	real_dev = xs->xso.real_dev;
+ 
+-	if (real_dev && real_dev->xfrmdev_ops &&
++	xs->xso.real_dev = NULL;
++	if (real_dev->xfrmdev_ops &&
+ 	    real_dev->xfrmdev_ops->xdo_dev_state_free)
+-		real_dev->xfrmdev_ops->xdo_dev_state_free(xs);
++		real_dev->xfrmdev_ops->xdo_dev_state_free(real_dev, xs);
+ out:
+-	netdev_put(real_dev, &tracker);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		if (ipsec->xs == xs) {
++			list_del(&ipsec->list);
++			kfree(ipsec);
++			break;
++		}
++	}
++	mutex_unlock(&bond->ipsec_lock);
+ }
+ 
+ /**
+@@ -2118,15 +2115,26 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ 		 * set the master's mac address to that of the first slave
+ 		 */
+ 		memcpy(ss.__data, bond_dev->dev_addr, bond_dev->addr_len);
+-		ss.ss_family = slave_dev->type;
+-		res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss,
+-					  extack);
+-		if (res) {
+-			slave_err(bond_dev, slave_dev, "Error %d calling set_mac_address\n", res);
+-			goto err_restore_mtu;
+-		}
++	} else if (bond->params.fail_over_mac == BOND_FOM_FOLLOW &&
++		   BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP &&
++		   memcmp(slave_dev->dev_addr, bond_dev->dev_addr, bond_dev->addr_len) == 0) {
++		/* Set slave to random address to avoid duplicate mac
++		 * address in later fail over.
++		 */
++		eth_random_addr(ss.__data);
++	} else {
++		goto skip_mac_set;
+ 	}
+ 
++	ss.ss_family = slave_dev->type;
++	res = dev_set_mac_address(slave_dev, (struct sockaddr *)&ss, extack);
++	if (res) {
++		slave_err(bond_dev, slave_dev, "Error %d calling set_mac_address\n", res);
++		goto err_restore_mtu;
++	}
++
++skip_mac_set:
++
+ 	/* set no_addrconf flag before open to prevent IPv6 addrconf */
+ 	slave_dev->priv_flags |= IFF_NO_ADDRCONF;
+ 
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 7216eb8f949367..dc2f4adac9bc96 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -21,6 +21,8 @@
+ #include <linux/export.h>
+ #include <linux/gpio.h>
+ #include <linux/kernel.h>
++#include <linux/math.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/platform_data/b53.h>
+ #include <linux/phy.h>
+@@ -1202,6 +1204,10 @@ static int b53_setup(struct dsa_switch *ds)
+ 	 */
+ 	ds->untag_vlan_aware_bridge_pvid = true;
+ 
++	/* Ageing time is set in seconds */
++	ds->ageing_time_min = 1 * 1000;
++	ds->ageing_time_max = AGE_TIME_MAX * 1000;
++
+ 	ret = b53_reset_switch(dev);
+ 	if (ret) {
+ 		dev_err(ds->dev, "failed to reset switch\n");
+@@ -1317,41 +1323,17 @@ static void b53_adjust_63xx_rgmii(struct dsa_switch *ds, int port,
+ 				  phy_interface_t interface)
+ {
+ 	struct b53_device *dev = ds->priv;
+-	u8 rgmii_ctrl = 0, off;
+-
+-	if (port == dev->imp_port)
+-		off = B53_RGMII_CTRL_IMP;
+-	else
+-		off = B53_RGMII_CTRL_P(port);
+-
+-	b53_read8(dev, B53_CTRL_PAGE, off, &rgmii_ctrl);
++	u8 rgmii_ctrl = 0;
+ 
+-	switch (interface) {
+-	case PHY_INTERFACE_MODE_RGMII_ID:
+-		rgmii_ctrl |= (RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+-		break;
+-	case PHY_INTERFACE_MODE_RGMII_RXID:
+-		rgmii_ctrl &= ~(RGMII_CTRL_DLL_TXC);
+-		rgmii_ctrl |= RGMII_CTRL_DLL_RXC;
+-		break;
+-	case PHY_INTERFACE_MODE_RGMII_TXID:
+-		rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC);
+-		rgmii_ctrl |= RGMII_CTRL_DLL_TXC;
+-		break;
+-	case PHY_INTERFACE_MODE_RGMII:
+-	default:
+-		rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+-		break;
+-	}
++	b53_read8(dev, B53_CTRL_PAGE, B53_RGMII_CTRL_P(port), &rgmii_ctrl);
++	rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+ 
+-	if (port != dev->imp_port) {
+-		if (is63268(dev))
+-			rgmii_ctrl |= RGMII_CTRL_MII_OVERRIDE;
++	if (is63268(dev))
++		rgmii_ctrl |= RGMII_CTRL_MII_OVERRIDE;
+ 
+-		rgmii_ctrl |= RGMII_CTRL_ENABLE_GMII;
+-	}
++	rgmii_ctrl |= RGMII_CTRL_ENABLE_GMII;
+ 
+-	b53_write8(dev, B53_CTRL_PAGE, off, rgmii_ctrl);
++	b53_write8(dev, B53_CTRL_PAGE, B53_RGMII_CTRL_P(port), rgmii_ctrl);
+ 
+ 	dev_dbg(ds->dev, "Configured port %d for %s\n", port,
+ 		phy_modes(interface));
+@@ -1372,8 +1354,7 @@ static void b53_adjust_531x5_rgmii(struct dsa_switch *ds, int port,
+ 	 * tx_clk aligned timing (restoring to reset defaults)
+ 	 */
+ 	b53_read8(dev, B53_CTRL_PAGE, off, &rgmii_ctrl);
+-	rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC |
+-			RGMII_CTRL_TIMING_SEL);
++	rgmii_ctrl &= ~(RGMII_CTRL_DLL_RXC | RGMII_CTRL_DLL_TXC);
+ 
+ 	/* PHY_INTERFACE_MODE_RGMII_TXID means TX internal delay, make
+ 	 * sure that we enable the port TX clock internal delay to
+@@ -1393,7 +1374,10 @@ static void b53_adjust_531x5_rgmii(struct dsa_switch *ds, int port,
+ 		rgmii_ctrl |= RGMII_CTRL_DLL_TXC;
+ 	if (interface == PHY_INTERFACE_MODE_RGMII)
+ 		rgmii_ctrl |= RGMII_CTRL_DLL_TXC | RGMII_CTRL_DLL_RXC;
+-	rgmii_ctrl |= RGMII_CTRL_TIMING_SEL;
++
++	if (dev->chip_id != BCM53115_DEVICE_ID)
++		rgmii_ctrl |= RGMII_CTRL_TIMING_SEL;
++
+ 	b53_write8(dev, B53_CTRL_PAGE, off, rgmii_ctrl);
+ 
+ 	dev_info(ds->dev, "Configured port %d for %s\n", port,
+@@ -1457,6 +1441,10 @@ static void b53_phylink_get_caps(struct dsa_switch *ds, int port,
+ 	__set_bit(PHY_INTERFACE_MODE_MII, config->supported_interfaces);
+ 	__set_bit(PHY_INTERFACE_MODE_REVMII, config->supported_interfaces);
+ 
++	/* BCM63xx RGMII ports support RGMII */
++	if (is63xx(dev) && in_range(port, B53_63XX_RGMII0, 4))
++		phy_interface_set_rgmii(config->supported_interfaces);
++
+ 	config->mac_capabilities = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
+ 		MAC_10 | MAC_100;
+ 
+@@ -1496,7 +1484,7 @@ static void b53_phylink_mac_config(struct phylink_config *config,
+ 	struct b53_device *dev = ds->priv;
+ 	int port = dp->index;
+ 
+-	if (is63xx(dev) && port >= B53_63XX_RGMII0)
++	if (is63xx(dev) && in_range(port, B53_63XX_RGMII0, 4))
+ 		b53_adjust_63xx_rgmii(ds, port, interface);
+ 
+ 	if (mode == MLO_AN_FIXED) {
+@@ -2046,9 +2034,6 @@ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ 
+ 		b53_get_vlan_entry(dev, pvid, vl);
+ 		vl->members &= ~BIT(port);
+-		if (vl->members == BIT(cpu_port))
+-			vl->members &= ~BIT(cpu_port);
+-		vl->untag = vl->members;
+ 		b53_set_vlan_entry(dev, pvid, vl);
+ 	}
+ 
+@@ -2127,8 +2112,7 @@ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge)
+ 		}
+ 
+ 		b53_get_vlan_entry(dev, pvid, vl);
+-		vl->members |= BIT(port) | BIT(cpu_port);
+-		vl->untag |= BIT(port) | BIT(cpu_port);
++		vl->members |= BIT(port);
+ 		b53_set_vlan_entry(dev, pvid, vl);
+ 	}
+ }
+@@ -2348,6 +2332,9 @@ int b53_eee_init(struct dsa_switch *ds, int port, struct phy_device *phy)
+ {
+ 	int ret;
+ 
++	if (!b53_support_eee(ds, port))
++		return 0;
++
+ 	ret = phy_init_eee(phy, false);
+ 	if (ret)
+ 		return 0;
+@@ -2362,7 +2349,7 @@ bool b53_support_eee(struct dsa_switch *ds, int port)
+ {
+ 	struct b53_device *dev = ds->priv;
+ 
+-	return !is5325(dev) && !is5365(dev);
++	return !is5325(dev) && !is5365(dev) && !is63xx(dev);
+ }
+ EXPORT_SYMBOL(b53_support_eee);
+ 
+@@ -2406,6 +2393,28 @@ static int b53_get_max_mtu(struct dsa_switch *ds, int port)
+ 	return B53_MAX_MTU;
+ }
+ 
++int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
++{
++	struct b53_device *dev = ds->priv;
++	u32 atc;
++	int reg;
++
++	if (is63xx(dev))
++		reg = B53_AGING_TIME_CONTROL_63XX;
++	else
++		reg = B53_AGING_TIME_CONTROL;
++
++	atc = DIV_ROUND_CLOSEST(msecs, 1000);
++
++	if (!is5325(dev) && !is5365(dev))
++		atc |= AGE_CHANGE;
++
++	b53_write32(dev, B53_MGMT_PAGE, reg, atc);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(b53_set_ageing_time);
++
+ static const struct phylink_mac_ops b53_phylink_mac_ops = {
+ 	.mac_select_pcs	= b53_phylink_mac_select_pcs,
+ 	.mac_config	= b53_phylink_mac_config,
+@@ -2429,6 +2438,7 @@ static const struct dsa_switch_ops b53_switch_ops = {
+ 	.port_disable		= b53_disable_port,
+ 	.support_eee		= b53_support_eee,
+ 	.set_mac_eee		= b53_set_mac_eee,
++	.set_ageing_time	= b53_set_ageing_time,
+ 	.port_bridge_join	= b53_br_join,
+ 	.port_bridge_leave	= b53_br_leave,
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
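
The new b53_set_ageing_time() rounds the bridge-supplied milliseconds to
seconds and, on chips newer than 5325/5365, sets AGE_CHANGE to latch the
value. A standalone check of the arithmetic, reusing the constants added to
b53_regs.h above:

#include <stdint.h>
#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))
#define AGE_CHANGE (1u << 20)

static uint32_t ageing_reg_val(unsigned int msecs, int chip_is_5325_or_5365)
{
	uint32_t atc = DIV_ROUND_CLOSEST(msecs, 1000);	/* seconds */

	if (!chip_is_5325_or_5365)
		atc |= AGE_CHANGE;	/* latch bit on newer switches */
	return atc;
}

int main(void)
{
	/* Default bridge ageing of 300000 ms -> 300 s, plus AGE_CHANGE. */
	printf("0x%08x\n", ageing_reg_val(300000, 0));	/* 0x0010012c */
	return 0;
}
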
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 2cf3e6a81e3785..a5ef7071ba07b1 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -343,6 +343,7 @@ void b53_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+ void b53_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data);
+ int b53_get_sset_count(struct dsa_switch *ds, int port, int sset);
+ void b53_get_ethtool_phy_stats(struct dsa_switch *ds, int port, uint64_t *data);
++int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs);
+ int b53_br_join(struct dsa_switch *ds, int port, struct dsa_bridge bridge,
+ 		bool *tx_fwd_offload, struct netlink_ext_ack *extack);
+ void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge);
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index 5f7a0e5c5709d3..1fbc5a204bc721 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -220,6 +220,13 @@
+ #define   BRCM_HDR_P5_EN		BIT(1) /* Enable tagging on port 5 */
+ #define   BRCM_HDR_P7_EN		BIT(2) /* Enable tagging on port 7 */
+ 
++/* Aging Time control register (32 bit) */
++#define B53_AGING_TIME_CONTROL		0x06
++#define B53_AGING_TIME_CONTROL_63XX	0x08
++#define  AGE_CHANGE			BIT(20)
++#define  AGE_TIME_MASK			0x7ffff
++#define  AGE_TIME_MAX			1048575
++
+ /* Mirror capture control register (16 bit) */
+ #define B53_MIR_CAP_CTL			0x10
+ #define  CAP_PORT_MASK			0xf
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 454a8c7fd7eea5..960685596093b6 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1235,6 +1235,7 @@ static const struct dsa_switch_ops bcm_sf2_ops = {
+ 	.port_disable		= bcm_sf2_port_disable,
+ 	.support_eee		= b53_support_eee,
+ 	.set_mac_eee		= b53_set_mac_eee,
++	.set_ageing_time	= b53_set_ageing_time,
+ 	.port_bridge_join	= b53_br_join,
+ 	.port_bridge_leave	= b53_br_leave,
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
+diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c
+index 1e9ab65218ff14..af28a9300a15c7 100644
+--- a/drivers/net/ethernet/airoha/airoha_eth.c
++++ b/drivers/net/ethernet/airoha/airoha_eth.c
+@@ -67,15 +67,6 @@ static void airoha_qdma_irq_disable(struct airoha_qdma *qdma, int index,
+ 	airoha_qdma_set_irqmask(qdma, index, mask, 0);
+ }
+ 
+-static bool airhoa_is_lan_gdm_port(struct airoha_gdm_port *port)
+-{
+-	/* GDM1 port on EN7581 SoC is connected to the lan dsa switch.
+-	 * GDM{2,3,4} can be used as wan port connected to an external
+-	 * phy module.
+-	 */
+-	return port->id == 1;
+-}
+-
+ static void airoha_set_macaddr(struct airoha_gdm_port *port, const u8 *addr)
+ {
+ 	struct airoha_eth *eth = port->qdma->eth;
+@@ -89,6 +80,8 @@ static void airoha_set_macaddr(struct airoha_gdm_port *port, const u8 *addr)
+ 	val = (addr[3] << 16) | (addr[4] << 8) | addr[5];
+ 	airoha_fe_wr(eth, REG_FE_MAC_LMIN(reg), val);
+ 	airoha_fe_wr(eth, REG_FE_MAC_LMAX(reg), val);
++
++	airoha_ppe_init_upd_mem(port);
+ }
+ 
+ static void airoha_set_gdm_port_fwd_cfg(struct airoha_eth *eth, u32 addr,
+@@ -1068,7 +1061,7 @@ static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma)
+ 			LMGR_INIT_START | LMGR_SRAM_MODE_MASK |
+ 			HW_FWD_DESC_NUM_MASK,
+ 			FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) |
+-			LMGR_INIT_START);
++			LMGR_INIT_START | LMGR_SRAM_MODE_MASK);
+ 
+ 	return read_poll_timeout(airoha_qdma_rr, status,
+ 				 !(status & LMGR_INIT_START), USEC_PER_MSEC,
+@@ -2541,7 +2534,15 @@ static int airoha_alloc_gdm_port(struct airoha_eth *eth,
+ 	if (err)
+ 		return err;
+ 
+-	return register_netdev(dev);
++	err = register_netdev(dev);
++	if (err)
++		goto free_metadata_dst;
++
++	return 0;
++
++free_metadata_dst:
++	airoha_metadata_dst_free(port);
++	return err;
+ }
+ 
+ static int airoha_probe(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/airoha/airoha_eth.h b/drivers/net/ethernet/airoha/airoha_eth.h
+index ec8908f904c619..2bf6b1a2dd9b03 100644
+--- a/drivers/net/ethernet/airoha/airoha_eth.h
++++ b/drivers/net/ethernet/airoha/airoha_eth.h
+@@ -532,6 +532,15 @@ u32 airoha_rmw(void __iomem *base, u32 offset, u32 mask, u32 val);
+ #define airoha_qdma_clear(qdma, offset, val)			\
+ 	airoha_rmw((qdma)->regs, (offset), (val), 0)
+ 
++static inline bool airhoa_is_lan_gdm_port(struct airoha_gdm_port *port)
++{
++	/* GDM1 port on EN7581 SoC is connected to the lan dsa switch.
++	 * GDM{2,3,4} can be used as wan port connected to an external
++	 * phy module.
++	 */
++	return port->id == 1;
++}
++
+ bool airoha_is_valid_gdm_port(struct airoha_eth *eth,
+ 			      struct airoha_gdm_port *port);
+ 
+@@ -540,6 +549,7 @@ int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ 				 void *cb_priv);
+ int airoha_ppe_init(struct airoha_eth *eth);
+ void airoha_ppe_deinit(struct airoha_eth *eth);
++void airoha_ppe_init_upd_mem(struct airoha_gdm_port *port);
+ struct airoha_foe_entry *airoha_ppe_foe_get_entry(struct airoha_ppe *ppe,
+ 						  u32 hash);
+ 
+diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c
+index f10dab935cab6f..1b8f21f808890e 100644
+--- a/drivers/net/ethernet/airoha/airoha_ppe.c
++++ b/drivers/net/ethernet/airoha/airoha_ppe.c
+@@ -206,6 +206,7 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+ 	int dsa_port = airoha_get_dsa_port(&dev);
+ 	struct airoha_foe_mac_info_common *l2;
+ 	u32 qdata, ports_pad, val;
++	u8 smac_id = 0xf;
+ 
+ 	memset(hwe, 0, sizeof(*hwe));
+ 
+@@ -234,6 +235,14 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+ 		else
+ 			pse_port = 2; /* uplink relies on GDM2 loopback */
+ 		val |= FIELD_PREP(AIROHA_FOE_IB2_PSE_PORT, pse_port);
++
++		/* For downlink traffic consume SRAM memory for hw forwarding
++		 * descriptors queue.
++		 */
++		if (airhoa_is_lan_gdm_port(port))
++			val |= AIROHA_FOE_IB2_FAST_PATH;
++
++		smac_id = port->id;
+ 	}
+ 
+ 	if (is_multicast_ether_addr(data->eth.h_dest))
+@@ -274,7 +283,7 @@ static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+ 		hwe->ipv4.l2.src_mac_lo =
+ 			get_unaligned_be16(data->eth.h_source + 4);
+ 	} else {
+-		l2->src_mac_hi = FIELD_PREP(AIROHA_FOE_MAC_SMAC_ID, 0xf);
++		l2->src_mac_hi = FIELD_PREP(AIROHA_FOE_MAC_SMAC_ID, smac_id);
+ 	}
+ 
+ 	if (data->vlan.num) {
+@@ -862,6 +871,27 @@ void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash)
+ 	airoha_ppe_foe_insert_entry(ppe, hash);
+ }
+ 
++void airoha_ppe_init_upd_mem(struct airoha_gdm_port *port)
++{
++	struct airoha_eth *eth = port->qdma->eth;
++	struct net_device *dev = port->dev;
++	const u8 *addr = dev->dev_addr;
++	u32 val;
++
++	val = (addr[2] << 24) | (addr[3] << 16) | (addr[4] << 8) | addr[5];
++	airoha_fe_wr(eth, REG_UPDMEM_DATA(0), val);
++	airoha_fe_wr(eth, REG_UPDMEM_CTRL(0),
++		     FIELD_PREP(PPE_UPDMEM_ADDR_MASK, port->id) |
++		     PPE_UPDMEM_WR_MASK | PPE_UPDMEM_REQ_MASK);
++
++	val = (addr[0] << 8) | addr[1];
++	airoha_fe_wr(eth, REG_UPDMEM_DATA(0), val);
++	airoha_fe_wr(eth, REG_UPDMEM_CTRL(0),
++		     FIELD_PREP(PPE_UPDMEM_ADDR_MASK, port->id) |
++		     FIELD_PREP(PPE_UPDMEM_OFFSET_MASK, 1) |
++		     PPE_UPDMEM_WR_MASK | PPE_UPDMEM_REQ_MASK);
++}
++
+ int airoha_ppe_init(struct airoha_eth *eth)
+ {
+ 	struct airoha_ppe *ppe;
+diff --git a/drivers/net/ethernet/airoha/airoha_regs.h b/drivers/net/ethernet/airoha/airoha_regs.h
+index 8146cde4e8ba37..57bff8d2de276b 100644
+--- a/drivers/net/ethernet/airoha/airoha_regs.h
++++ b/drivers/net/ethernet/airoha/airoha_regs.h
+@@ -312,6 +312,16 @@
+ #define REG_PPE_RAM_BASE(_n)			(((_n) ? PPE2_BASE : PPE1_BASE) + 0x320)
+ #define REG_PPE_RAM_ENTRY(_m, _n)		(REG_PPE_RAM_BASE(_m) + ((_n) << 2))
+ 
++#define REG_UPDMEM_CTRL(_n)			(((_n) ? PPE2_BASE : PPE1_BASE) + 0x370)
++#define PPE_UPDMEM_ACK_MASK			BIT(31)
++#define PPE_UPDMEM_ADDR_MASK			GENMASK(11, 8)
++#define PPE_UPDMEM_OFFSET_MASK			GENMASK(7, 4)
++#define PPE_UPDMEM_SEL_MASK			GENMASK(3, 2)
++#define PPE_UPDMEM_WR_MASK			BIT(1)
++#define PPE_UPDMEM_REQ_MASK			BIT(0)
++
++#define REG_UPDMEM_DATA(_n)			(((_n) ? PPE2_BASE : PPE1_BASE) + 0x374)
++
+ #define REG_FE_GDM_TX_OK_PKT_CNT_H(_n)		(GDM_BASE(_n) + 0x280)
+ #define REG_FE_GDM_TX_OK_BYTE_CNT_H(_n)		(GDM_BASE(_n) + 0x284)
+ #define REG_FE_GDM_TX_ETH_PKT_CNT_H(_n)		(GDM_BASE(_n) + 0x288)
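
airoha_ppe_init_upd_mem() splits the six MAC-address bytes across two UPDMEM
data words, low four bytes first. A standalone check of the packing used
above, with a sample address:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint8_t addr[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	uint32_t w0 = (addr[2] << 24) | (addr[3] << 16) |
		      (addr[4] << 8) | addr[5];
	uint32_t w1 = (addr[0] << 8) | addr[1];

	printf("UPDMEM word 0: 0x%08x\n", w0);	/* 0x22334455 */
	printf("UPDMEM word 1: 0x%08x\n", w1);	/* 0x00000011 */
	return 0;
}
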
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 551c279dc14bed..51395c96b2e994 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -6480,10 +6480,11 @@ static const struct tlsdev_ops cxgb4_ktls_ops = {
+ 
+ #if IS_ENABLED(CONFIG_CHELSIO_IPSEC_INLINE)
+ 
+-static int cxgb4_xfrm_add_state(struct xfrm_state *x,
++static int cxgb4_xfrm_add_state(struct net_device *dev,
++				struct xfrm_state *x,
+ 				struct netlink_ext_ack *extack)
+ {
+-	struct adapter *adap = netdev2adap(x->xso.dev);
++	struct adapter *adap = netdev2adap(dev);
+ 	int ret;
+ 
+ 	if (!mutex_trylock(&uld_mutex)) {
+@@ -6494,7 +6495,8 @@ static int cxgb4_xfrm_add_state(struct xfrm_state *x,
+ 	if (ret)
+ 		goto out_unlock;
+ 
+-	ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(x, extack);
++	ret = adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_add(dev, x,
++									extack);
+ 
+ out_unlock:
+ 	mutex_unlock(&uld_mutex);
+@@ -6502,9 +6504,9 @@ static int cxgb4_xfrm_add_state(struct xfrm_state *x,
+ 	return ret;
+ }
+ 
+-static void cxgb4_xfrm_del_state(struct xfrm_state *x)
++static void cxgb4_xfrm_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+-	struct adapter *adap = netdev2adap(x->xso.dev);
++	struct adapter *adap = netdev2adap(dev);
+ 
+ 	if (!mutex_trylock(&uld_mutex)) {
+ 		dev_dbg(adap->pdev_dev,
+@@ -6514,15 +6516,15 @@ static void cxgb4_xfrm_del_state(struct xfrm_state *x)
+ 	if (chcr_offload_state(adap, CXGB4_XFRMDEV_OPS))
+ 		goto out_unlock;
+ 
+-	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_delete(x);
++	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_delete(dev, x);
+ 
+ out_unlock:
+ 	mutex_unlock(&uld_mutex);
+ }
+ 
+-static void cxgb4_xfrm_free_state(struct xfrm_state *x)
++static void cxgb4_xfrm_free_state(struct net_device *dev, struct xfrm_state *x)
+ {
+-	struct adapter *adap = netdev2adap(x->xso.dev);
++	struct adapter *adap = netdev2adap(dev);
+ 
+ 	if (!mutex_trylock(&uld_mutex)) {
+ 		dev_dbg(adap->pdev_dev,
+@@ -6532,7 +6534,7 @@ static void cxgb4_xfrm_free_state(struct xfrm_state *x)
+ 	if (chcr_offload_state(adap, CXGB4_XFRMDEV_OPS))
+ 		goto out_unlock;
+ 
+-	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_free(x);
++	adap->uld[CXGB4_ULD_IPSEC].xfrmdev_ops->xdo_dev_state_free(dev, x);
+ 
+ out_unlock:
+ 	mutex_unlock(&uld_mutex);
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+index baba96883f48b5..ecd9a0bd5e1822 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+@@ -75,9 +75,12 @@ static int ch_ipsec_uld_state_change(void *handle, enum cxgb4_state new_state);
+ static int ch_ipsec_xmit(struct sk_buff *skb, struct net_device *dev);
+ static void *ch_ipsec_uld_add(const struct cxgb4_lld_info *infop);
+ static void ch_ipsec_advance_esn_state(struct xfrm_state *x);
+-static void ch_ipsec_xfrm_free_state(struct xfrm_state *x);
+-static void ch_ipsec_xfrm_del_state(struct xfrm_state *x);
+-static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
++static void ch_ipsec_xfrm_free_state(struct net_device *dev,
++				     struct xfrm_state *x);
++static void ch_ipsec_xfrm_del_state(struct net_device *dev,
++				    struct xfrm_state *x);
++static int ch_ipsec_xfrm_add_state(struct net_device *dev,
++				   struct xfrm_state *x,
+ 				   struct netlink_ext_ack *extack);
+ 
+ static const struct xfrmdev_ops ch_ipsec_xfrmdev_ops = {
+@@ -223,7 +226,8 @@ static int ch_ipsec_setkey(struct xfrm_state *x,
+  * returns 0 on success, negative error if failed to send message to FPGA
+  * positive error if FPGA returned a bad response
+  */
+-static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
++static int ch_ipsec_xfrm_add_state(struct net_device *dev,
++				   struct xfrm_state *x,
+ 				   struct netlink_ext_ack *extack)
+ {
+ 	struct ipsec_sa_entry *sa_entry;
+@@ -302,14 +306,16 @@ static int ch_ipsec_xfrm_add_state(struct xfrm_state *x,
+ 	return res;
+ }
+ 
+-static void ch_ipsec_xfrm_del_state(struct xfrm_state *x)
++static void ch_ipsec_xfrm_del_state(struct net_device *dev,
++				    struct xfrm_state *x)
+ {
+ 	/* do nothing */
+ 	if (!x->xso.offload_handle)
+ 		return;
+ }
+ 
+-static void ch_ipsec_xfrm_free_state(struct xfrm_state *x)
++static void ch_ipsec_xfrm_free_state(struct net_device *dev,
++				     struct xfrm_state *x)
+ {
+ 	struct ipsec_sa_entry *sa_entry;
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index c3791cf23c876c..d561d45021a581 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -2153,7 +2153,7 @@ void gve_handle_report_stats(struct gve_priv *priv)
+ 			};
+ 			stats[stats_idx++] = (struct stats) {
+ 				.stat_name = cpu_to_be32(RX_BUFFERS_POSTED),
+-				.value = cpu_to_be64(priv->rx[0].fill_cnt),
++				.value = cpu_to_be64(priv->rx[idx].fill_cnt),
+ 				.queue_id = cpu_to_be32(idx),
+ 			};
+ 		}
+diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+index 2eba868d80370a..f7da7de23d6726 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+@@ -763,6 +763,9 @@ static int gve_tx_add_skb_dqo(struct gve_tx_ring *tx,
+ 	s16 completion_tag;
+ 
+ 	pkt = gve_alloc_pending_packet(tx);
++	if (!pkt)
++		return -ENOMEM;
++
+ 	pkt->skb = skb;
+ 	completion_tag = pkt - tx->dqo.pending_packets;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
+index 3f089c3d47b23b..d8595e84326dbc 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
+@@ -477,10 +477,6 @@ static void e1000_down_and_stop(struct e1000_adapter *adapter)
+ 
+ 	cancel_delayed_work_sync(&adapter->phy_info_task);
+ 	cancel_delayed_work_sync(&adapter->fifo_stall_task);
+-
+-	/* Only kill reset task if adapter is not resetting */
+-	if (!test_bit(__E1000_RESETTING, &adapter->flags))
+-		cancel_work_sync(&adapter->reset_task);
+ }
+ 
+ void e1000_down(struct e1000_adapter *adapter)
+@@ -1266,6 +1262,10 @@ static void e1000_remove(struct pci_dev *pdev)
+ 
+ 	unregister_netdev(netdev);
+ 
++	/* Only kill reset task if adapter is not resetting */
++	if (!test_bit(__E1000_RESETTING, &adapter->flags))
++		cancel_work_sync(&adapter->reset_task);
++
+ 	e1000_phy_hw_reset(hw);
+ 
+ 	kfree(adapter->tx_ring);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 1120f8e4bb6703..88e6bef69342c2 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1546,8 +1546,8 @@ static void i40e_cleanup_reset_vf(struct i40e_vf *vf)
+  * @vf: pointer to the VF structure
+  * @flr: VFLR was issued or not
+  *
+- * Returns true if the VF is in reset, resets successfully, or resets
+- * are disabled and false otherwise.
++ * Return: True if reset was performed successfully or if resets are disabled.
++ * False if reset is already in progress.
+  **/
+ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ {
+@@ -1566,7 +1566,7 @@ bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
+ 
+ 	/* If VF is being reset already we don't need to continue. */
+ 	if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
+-		return true;
++		return false;
+ 
+ 	i40e_trigger_vf_reset(vf, flr);
+ 
+@@ -4328,7 +4328,10 @@ int i40e_vc_process_vflr_event(struct i40e_pf *pf)
+ 		reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx));
+ 		if (reg & BIT(bit_idx))
+ 			/* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */
+-			i40e_reset_vf(vf, true);
++			if (!i40e_reset_vf(vf, true)) {
++				/* At least one VF did not finish resetting, retry next time */
++				set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);
++			}
+ 	}
+ 
+ 	return 0;
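
The i40e change flips the meaning of the already-resetting case: instead of
claiming success, i40e_reset_vf() now returns false so the VFLR handler can
leave __I40E_VFLR_EVENT_PENDING set and retry later. A standalone sketch of
that contract, with a plain flag standing in for the atomic bit:

#include <stdbool.h>
#include <stdio.h>

static bool resetting;

static bool reset_vf(void)
{
	if (resetting)
		return false;	/* already in progress: report, don't block */
	resetting = true;
	/* ... perform the reset ... */
	resetting = false;
	return true;
}

int main(void)
{
	bool pending = false;

	resetting = true;	/* simulate a reset already running */
	if (!reset_vf())
		pending = true;	/* re-arm the VFLR event for next time */
	printf("pending=%d\n", pending);	/* 1 */
	return 0;
}
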
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 9de3e0ba37316c..f7a98ff43a57fb 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -268,7 +268,6 @@ struct iavf_adapter {
+ 	struct list_head vlan_filter_list;
+ 	int num_vlan_filters;
+ 	struct list_head mac_filter_list;
+-	struct mutex crit_lock;
+ 	/* Lock to protect accesses to MAC and VLAN lists */
+ 	spinlock_t mac_vlan_list_lock;
+ 	char misc_vector_name[IFNAMSIZ + 9];
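
Dropping crit_lock from struct iavf_adapter is the theme of the following
hunks: everything that used to take the private mutex now runs under the
netdev instance lock, and callees assert it instead of spinning on
mutex_trylock(). A kernel-style sketch of the new contract, where
iavf_config_step() is an illustrative stand-in for the reworked callees:

static void iavf_do_config(struct iavf_adapter *adapter)
{
	netdev_lock(adapter->netdev);
	iavf_config_step(adapter);	/* calls netdev_assert_locked() */
	netdev_unlock(adapter->netdev);
}
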
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index 288bb5b2e72ef7..2b2b315205b5e0 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -4,6 +4,8 @@
+ #include <linux/bitfield.h>
+ #include <linux/uaccess.h>
+ 
++#include <net/netdev_lock.h>
++
+ /* ethtool support for iavf */
+ #include "iavf.h"
+ 
+@@ -1256,9 +1258,10 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ {
+ 	struct ethtool_rx_flow_spec *fsp = &cmd->fs;
+ 	struct iavf_fdir_fltr *fltr;
+-	int count = 50;
+ 	int err;
+ 
++	netdev_assert_locked(adapter->netdev);
++
+ 	if (!(adapter->flags & IAVF_FLAG_FDIR_ENABLED))
+ 		return -EOPNOTSUPP;
+ 
+@@ -1277,14 +1280,6 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ 	if (!fltr)
+ 		return -ENOMEM;
+ 
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		if (--count == 0) {
+-			kfree(fltr);
+-			return -EINVAL;
+-		}
+-		udelay(1);
+-	}
+-
+ 	err = iavf_add_fdir_fltr_info(adapter, fsp, fltr);
+ 	if (!err)
+ 		err = iavf_fdir_add_fltr(adapter, fltr);
+@@ -1292,7 +1287,6 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ 	if (err)
+ 		kfree(fltr);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	return err;
+ }
+ 
+@@ -1435,11 +1429,13 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ {
+ 	struct iavf_adv_rss *rss_old, *rss_new;
+ 	bool rss_new_add = false;
+-	int count = 50, err = 0;
+ 	bool symm = false;
+ 	u64 hash_flds;
++	int err = 0;
+ 	u32 hdrs;
+ 
++	netdev_assert_locked(adapter->netdev);
++
+ 	if (!ADV_RSS_SUPPORT(adapter))
+ 		return -EOPNOTSUPP;
+ 
+@@ -1463,15 +1459,6 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 		return -EINVAL;
+ 	}
+ 
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		if (--count == 0) {
+-			kfree(rss_new);
+-			return -EINVAL;
+-		}
+-
+-		udelay(1);
+-	}
+-
+ 	spin_lock_bh(&adapter->adv_rss_lock);
+ 	rss_old = iavf_find_adv_rss_cfg_by_hdrs(adapter, hdrs);
+ 	if (rss_old) {
+@@ -1500,8 +1487,6 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 	if (!err)
+ 		iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_ADV_RSS_CFG);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+-
+ 	if (!rss_new_add)
+ 		kfree(rss_new);
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 6d7ba4d67a1933..81d7249d1149c8 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1287,11 +1287,11 @@ static void iavf_configure(struct iavf_adapter *adapter)
+ /**
+  * iavf_up_complete - Finish the last steps of bringing up a connection
+  * @adapter: board private structure
+- *
+- * Expects to be called while holding crit_lock.
+- **/
++ */
+ static void iavf_up_complete(struct iavf_adapter *adapter)
+ {
++	netdev_assert_locked(adapter->netdev);
++
+ 	iavf_change_state(adapter, __IAVF_RUNNING);
+ 	clear_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 
+@@ -1410,13 +1410,13 @@ static void iavf_clear_adv_rss_conf(struct iavf_adapter *adapter)
+ /**
+  * iavf_down - Shutdown the connection processing
+  * @adapter: board private structure
+- *
+- * Expects to be called while holding crit_lock.
+- **/
++ */
+ void iavf_down(struct iavf_adapter *adapter)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 
++	netdev_assert_locked(netdev);
++
+ 	if (adapter->state <= __IAVF_DOWN_PENDING)
+ 		return;
+ 
+@@ -2025,22 +2025,21 @@ static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter, bool runni
+  * iavf_finish_config - do all netdev work that needs RTNL
+  * @work: our work_struct
+  *
+- * Do work that needs both RTNL and crit_lock.
+- **/
++ * Do work that needs RTNL.
++ */
+ static void iavf_finish_config(struct work_struct *work)
+ {
+ 	struct iavf_adapter *adapter;
+-	bool locks_released = false;
++	bool netdev_released = false;
+ 	int pairs, err;
+ 
+ 	adapter = container_of(work, struct iavf_adapter, finish_config);
+ 
+ 	/* Always take RTNL first to prevent circular lock dependency;
+-	 * The dev->lock is needed to update the queue number
++	 * the dev->lock (== netdev lock) is needed to update the queue number.
+ 	 */
+ 	rtnl_lock();
+ 	netdev_lock(adapter->netdev);
+-	mutex_lock(&adapter->crit_lock);
+ 
+ 	if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) &&
+ 	    adapter->netdev->reg_state == NETREG_REGISTERED &&
+@@ -2059,22 +2058,21 @@ static void iavf_finish_config(struct work_struct *work)
+ 		netif_set_real_num_tx_queues(adapter->netdev, pairs);
+ 
+ 		if (adapter->netdev->reg_state != NETREG_REGISTERED) {
+-			mutex_unlock(&adapter->crit_lock);
+ 			netdev_unlock(adapter->netdev);
+-			locks_released = true;
++			netdev_released = true;
+ 			err = register_netdevice(adapter->netdev);
+ 			if (err) {
+ 				dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n",
+ 					err);
+ 
+ 				/* go back and try again.*/
+-				mutex_lock(&adapter->crit_lock);
++				netdev_lock(adapter->netdev);
+ 				iavf_free_rss(adapter);
+ 				iavf_free_misc_irq(adapter);
+ 				iavf_reset_interrupt_capability(adapter);
+ 				iavf_change_state(adapter,
+ 						  __IAVF_INIT_CONFIG_ADAPTER);
+-				mutex_unlock(&adapter->crit_lock);
++				netdev_unlock(adapter->netdev);
+ 				goto out;
+ 			}
+ 		}
+@@ -2090,10 +2088,8 @@ static void iavf_finish_config(struct work_struct *work)
+ 	}
+ 
+ out:
+-	if (!locks_released) {
+-		mutex_unlock(&adapter->crit_lock);
++	if (!netdev_released)
+ 		netdev_unlock(adapter->netdev);
+-	}
+ 	rtnl_unlock();
+ }
+ 
+@@ -2911,28 +2907,15 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
+ 	iavf_change_state(adapter, __IAVF_INIT_FAILED);
+ }
+ 
+-/**
+- * iavf_watchdog_task - Periodic call-back task
+- * @work: pointer to work_struct
+- **/
+-static void iavf_watchdog_task(struct work_struct *work)
++static const int IAVF_NO_RESCHED = -1;
++
++/* return: msec delay for requeueing itself */
++static int iavf_watchdog_step(struct iavf_adapter *adapter)
+ {
+-	struct iavf_adapter *adapter = container_of(work,
+-						    struct iavf_adapter,
+-						    watchdog_task.work);
+-	struct net_device *netdev = adapter->netdev;
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	u32 reg_val;
+ 
+-	netdev_lock(netdev);
+-	if (!mutex_trylock(&adapter->crit_lock)) {
+-		if (adapter->state == __IAVF_REMOVE) {
+-			netdev_unlock(netdev);
+-			return;
+-		}
+-
+-		goto restart_watchdog;
+-	}
++	netdev_assert_locked(adapter->netdev);
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		iavf_change_state(adapter, __IAVF_COMM_FAILED);
+@@ -2940,39 +2923,19 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	switch (adapter->state) {
+ 	case __IAVF_STARTUP:
+ 		iavf_startup(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(30));
+-		return;
++		return 30;
+ 	case __IAVF_INIT_VERSION_CHECK:
+ 		iavf_init_version_check(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(30));
+-		return;
++		return 30;
+ 	case __IAVF_INIT_GET_RESOURCES:
+ 		iavf_init_get_resources(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(1));
+-		return;
++		return 1;
+ 	case __IAVF_INIT_EXTENDED_CAPS:
+ 		iavf_init_process_extended_caps(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(1));
+-		return;
++		return 1;
+ 	case __IAVF_INIT_CONFIG_ADAPTER:
+ 		iavf_init_config_adapter(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(1));
+-		return;
++		return 1;
+ 	case __IAVF_INIT_FAILED:
+ 		if (test_bit(__IAVF_IN_REMOVE_TASK,
+ 			     &adapter->crit_section)) {
+@@ -2980,27 +2943,18 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			 * watchdog task, iavf_remove should handle this state
+ 			 * as it can loop forever
+ 			 */
+-			mutex_unlock(&adapter->crit_lock);
+-			netdev_unlock(netdev);
+-			return;
++			return IAVF_NO_RESCHED;
+ 		}
+ 		if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
+ 			dev_err(&adapter->pdev->dev,
+ 				"Failed to communicate with PF; waiting before retry\n");
+ 			adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;
+ 			iavf_shutdown_adminq(hw);
+-			mutex_unlock(&adapter->crit_lock);
+-			netdev_unlock(netdev);
+-			queue_delayed_work(adapter->wq,
+-					   &adapter->watchdog_task, (5 * HZ));
+-			return;
++			return 5000;
+ 		}
+ 		/* Try again from failed step*/
+ 		iavf_change_state(adapter, adapter->last_state);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task, HZ);
+-		return;
++		return 1000;
+ 	case __IAVF_COMM_FAILED:
+ 		if (test_bit(__IAVF_IN_REMOVE_TASK,
+ 			     &adapter->crit_section)) {
+@@ -3010,9 +2964,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			 */
+ 			iavf_change_state(adapter, __IAVF_INIT_FAILED);
+ 			adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+-			mutex_unlock(&adapter->crit_lock);
+-			netdev_unlock(netdev);
+-			return;
++			return IAVF_NO_RESCHED;
+ 		}
+ 		reg_val = rd32(hw, IAVF_VFGEN_RSTAT) &
+ 			  IAVF_VFGEN_RSTAT_VFR_STATE_MASK;
+@@ -3030,18 +2982,9 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq,
+-				   &adapter->watchdog_task,
+-				   msecs_to_jiffies(10));
+-		return;
++		return 10;
+ 	case __IAVF_RESETTING:
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   HZ * 2);
+-		return;
++		return 2000;
+ 	case __IAVF_DOWN:
+ 	case __IAVF_DOWN_PENDING:
+ 	case __IAVF_TESTING:
+@@ -3068,9 +3011,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		break;
+ 	case __IAVF_REMOVE:
+ 	default:
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		return;
++		return IAVF_NO_RESCHED;
+ 	}
+ 
+ 	/* check for hw reset */
+@@ -3080,24 +3021,29 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ 		dev_err(&adapter->pdev->dev, "Hardware reset detected\n");
+ 		iavf_schedule_reset(adapter, IAVF_FLAG_RESET_PENDING);
+-		mutex_unlock(&adapter->crit_lock);
+-		netdev_unlock(netdev);
+-		queue_delayed_work(adapter->wq,
+-				   &adapter->watchdog_task, HZ * 2);
+-		return;
+ 	}
+ 
+-	mutex_unlock(&adapter->crit_lock);
+-restart_watchdog:
+-	netdev_unlock(netdev);
++	return adapter->aq_required ? 20 : 2000;
++}
++
++static void iavf_watchdog_task(struct work_struct *work)
++{
++	struct iavf_adapter *adapter = container_of(work,
++						    struct iavf_adapter,
++						    watchdog_task.work);
++	struct net_device *netdev = adapter->netdev;
++	int msec_delay;
++
++	netdev_lock(netdev);
++	msec_delay = iavf_watchdog_step(adapter);
++	/* note that we schedule a different task */
+ 	if (adapter->state >= __IAVF_DOWN)
+ 		queue_work(adapter->wq, &adapter->adminq_task);
+-	if (adapter->aq_required)
+-		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   msecs_to_jiffies(20));
+-	else
++
++	if (msec_delay != IAVF_NO_RESCHED)
+ 		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+-				   HZ * 2);
++				   msecs_to_jiffies(msec_delay));
++	netdev_unlock(netdev);
+ }
+ 
+ /**
+@@ -3105,14 +3051,15 @@ static void iavf_watchdog_task(struct work_struct *work)
+  * @adapter: board private structure
+  *
+  * Set communication failed flag and free all resources.
+- * NOTE: This function is expected to be called with crit_lock being held.
+- **/
++ */
+ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ {
+ 	struct iavf_mac_filter *f, *ftmp;
+ 	struct iavf_vlan_filter *fv, *fvtmp;
+ 	struct iavf_cloud_filter *cf, *cftmp;
+ 
++	netdev_assert_locked(adapter->netdev);
++
+ 	adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED;
+ 
+ 	/* We don't use netif_running() because it may be true prior to
+@@ -3212,17 +3159,7 @@ static void iavf_reset_task(struct work_struct *work)
+ 	int i = 0, err;
+ 	bool running;
+ 
+-	/* When device is being removed it doesn't make sense to run the reset
+-	 * task, just return in such a case.
+-	 */
+ 	netdev_lock(netdev);
+-	if (!mutex_trylock(&adapter->crit_lock)) {
+-		if (adapter->state != __IAVF_REMOVE)
+-			queue_work(adapter->wq, &adapter->reset_task);
+-
+-		netdev_unlock(netdev);
+-		return;
+-	}
+ 
+ 	iavf_misc_irq_disable(adapter);
+ 	if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {
+@@ -3267,12 +3204,22 @@ static void iavf_reset_task(struct work_struct *work)
+ 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+ 			reg_val);
+ 		iavf_disable_vf(adapter);
+-		mutex_unlock(&adapter->crit_lock);
+ 		netdev_unlock(netdev);
+ 		return; /* Do not attempt to reinit. It's dead, Jim. */
+ 	}
+ 
+ continue_reset:
++	/* If we are still early in the state machine, just restart. */
++	if (adapter->state <= __IAVF_INIT_FAILED) {
++		iavf_shutdown_adminq(hw);
++		iavf_change_state(adapter, __IAVF_STARTUP);
++		iavf_startup(adapter);
++		queue_delayed_work(adapter->wq, &adapter->watchdog_task,
++				   msecs_to_jiffies(30));
++		netdev_unlock(netdev);
++		return;
++	}
++
+ 	/* We don't use netif_running() because it may be true prior to
+ 	 * ndo_open() returning, so we can't assume it means all our open
+ 	 * tasks have finished, since we're not holding the rtnl_lock here.
+@@ -3411,7 +3358,6 @@ static void iavf_reset_task(struct work_struct *work)
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 
+ 	wake_up(&adapter->reset_waitqueue);
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 
+ 	return;
+@@ -3422,7 +3368,6 @@ static void iavf_reset_task(struct work_struct *work)
+ 	}
+ 	iavf_disable_vf(adapter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 	dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+ }
+@@ -3435,6 +3380,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ {
+ 	struct iavf_adapter *adapter =
+ 		container_of(work, struct iavf_adapter, adminq_task);
++	struct net_device *netdev = adapter->netdev;
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	struct iavf_arq_event_info event;
+ 	enum virtchnl_ops v_op;
+@@ -3442,13 +3388,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	u32 val, oldval;
+ 	u16 pending;
+ 
+-	if (!mutex_trylock(&adapter->crit_lock)) {
+-		if (adapter->state == __IAVF_REMOVE)
+-			return;
+-
+-		queue_work(adapter->wq, &adapter->adminq_task);
+-		goto out;
+-	}
++	netdev_lock(netdev);
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ 		goto unlock;
+@@ -3515,8 +3455,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ freedom:
+ 	kfree(event.msg_buf);
+ unlock:
+-	mutex_unlock(&adapter->crit_lock);
+-out:
++	netdev_unlock(netdev);
+ 	/* re-enable Admin queue interrupt cause */
+ 	iavf_misc_irq_enable(adapter);
+ }
+@@ -4209,8 +4148,8 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 				    struct flow_cls_offload *cls_flower)
+ {
+ 	int tc = tc_classid_to_hwtc(adapter->netdev, cls_flower->classid);
+-	struct iavf_cloud_filter *filter = NULL;
+-	int err = -EINVAL, count = 50;
++	struct iavf_cloud_filter *filter;
++	int err;
+ 
+ 	if (tc < 0) {
+ 		dev_err(&adapter->pdev->dev, "Invalid traffic class\n");
+@@ -4220,17 +4159,10 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	filter = kzalloc(sizeof(*filter), GFP_KERNEL);
+ 	if (!filter)
+ 		return -ENOMEM;
+-
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		if (--count == 0) {
+-			kfree(filter);
+-			return err;
+-		}
+-		udelay(1);
+-	}
+-
+ 	filter->cookie = cls_flower->cookie;
+ 
++	netdev_lock(adapter->netdev);
++
+ 	/* bail out here if filter already exists */
+ 	spin_lock_bh(&adapter->cloud_filter_list_lock);
+ 	if (iavf_find_cf(adapter, &cls_flower->cookie)) {
+@@ -4264,7 +4196,7 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	if (err)
+ 		kfree(filter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
++	netdev_unlock(adapter->netdev);
+ 	return err;
+ }
+ 
+@@ -4568,28 +4500,13 @@ static int iavf_open(struct net_device *netdev)
+ 		return -EIO;
+ 	}
+ 
+-	while (!mutex_trylock(&adapter->crit_lock)) {
+-		/* If we are in __IAVF_INIT_CONFIG_ADAPTER state the crit_lock
+-		 * is already taken and iavf_open is called from an upper
+-		 * device's notifier reacting on NETDEV_REGISTER event.
+-		 * We have to leave here to avoid dead lock.
+-		 */
+-		if (adapter->state == __IAVF_INIT_CONFIG_ADAPTER)
+-			return -EBUSY;
+-
+-		usleep_range(500, 1000);
+-	}
+-
+-	if (adapter->state != __IAVF_DOWN) {
+-		err = -EBUSY;
+-		goto err_unlock;
+-	}
++	if (adapter->state != __IAVF_DOWN)
++		return -EBUSY;
+ 
+ 	if (adapter->state == __IAVF_RUNNING &&
+ 	    !test_bit(__IAVF_VSI_DOWN, adapter->vsi.state)) {
+ 		dev_dbg(&adapter->pdev->dev, "VF is already open.\n");
+-		err = 0;
+-		goto err_unlock;
++		return 0;
+ 	}
+ 
+ 	/* allocate transmit descriptors */
+@@ -4608,9 +4525,7 @@ static int iavf_open(struct net_device *netdev)
+ 		goto err_req_irq;
+ 
+ 	spin_lock_bh(&adapter->mac_vlan_list_lock);
+-
+ 	iavf_add_filter(adapter, adapter->hw.mac.addr);
+-
+ 	spin_unlock_bh(&adapter->mac_vlan_list_lock);
+ 
+ 	/* Restore filters that were removed with IFF_DOWN */
+@@ -4623,8 +4538,6 @@ static int iavf_open(struct net_device *netdev)
+ 
+ 	iavf_irq_enable(adapter, true);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+-
+ 	return 0;
+ 
+ err_req_irq:
+@@ -4634,8 +4547,6 @@ static int iavf_open(struct net_device *netdev)
+ 	iavf_free_all_rx_resources(adapter);
+ err_setup_tx:
+ 	iavf_free_all_tx_resources(adapter);
+-err_unlock:
+-	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return err;
+ }
+@@ -4659,12 +4570,8 @@ static int iavf_close(struct net_device *netdev)
+ 
+ 	netdev_assert_locked(netdev);
+ 
+-	mutex_lock(&adapter->crit_lock);
+-
+-	if (adapter->state <= __IAVF_DOWN_PENDING) {
+-		mutex_unlock(&adapter->crit_lock);
++	if (adapter->state <= __IAVF_DOWN_PENDING)
+ 		return 0;
+-	}
+ 
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 	/* We cannot send IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS before
+@@ -4695,7 +4602,6 @@ static int iavf_close(struct net_device *netdev)
+ 	iavf_change_state(adapter, __IAVF_DOWN_PENDING);
+ 	iavf_free_traffic_irqs(adapter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 
+ 	/* We explicitly don't free resources here because the hardware is
+@@ -4714,11 +4620,10 @@ static int iavf_close(struct net_device *netdev)
+ 				    msecs_to_jiffies(500));
+ 	if (!status)
+ 		netdev_warn(netdev, "Device resources not yet released\n");
+-
+ 	netdev_lock(netdev);
+-	mutex_lock(&adapter->crit_lock);
++
+ 	adapter->aq_required |= aq_to_restore;
+-	mutex_unlock(&adapter->crit_lock);
++
+ 	return 0;
+ }
+ 
+@@ -5227,15 +5132,16 @@ iavf_shaper_set(struct net_shaper_binding *binding,
+ 	struct iavf_adapter *adapter = netdev_priv(binding->netdev);
+ 	const struct net_shaper_handle *handle = &shaper->handle;
+ 	struct iavf_ring *tx_ring;
+-	int ret = 0;
++	int ret;
++
++	netdev_assert_locked(adapter->netdev);
+ 
+-	mutex_lock(&adapter->crit_lock);
+ 	if (handle->id >= adapter->num_active_queues)
+-		goto unlock;
++		return 0;
+ 
+ 	ret = iavf_verify_shaper(binding, shaper, extack);
+ 	if (ret)
+-		goto unlock;
++		return ret;
+ 
+ 	tx_ring = &adapter->tx_rings[handle->id];
+ 
+@@ -5245,9 +5151,7 @@ iavf_shaper_set(struct net_shaper_binding *binding,
+ 
+ 	adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+ 
+-unlock:
+-	mutex_unlock(&adapter->crit_lock);
+-	return ret;
++	return 0;
+ }
+ 
+ static int iavf_shaper_del(struct net_shaper_binding *binding,
+@@ -5257,9 +5161,10 @@ static int iavf_shaper_del(struct net_shaper_binding *binding,
+ 	struct iavf_adapter *adapter = netdev_priv(binding->netdev);
+ 	struct iavf_ring *tx_ring;
+ 
+-	mutex_lock(&adapter->crit_lock);
++	netdev_assert_locked(adapter->netdev);
++
+ 	if (handle->id >= adapter->num_active_queues)
+-		goto unlock;
++		return 0;
+ 
+ 	tx_ring = &adapter->tx_rings[handle->id];
+ 	tx_ring->q_shaper.bw_min = 0;
+@@ -5268,8 +5173,6 @@ static int iavf_shaper_del(struct net_shaper_binding *binding,
+ 
+ 	adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_QUEUES_BW;
+ 
+-unlock:
+-	mutex_unlock(&adapter->crit_lock);
+ 	return 0;
+ }
+ 
+@@ -5530,10 +5433,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto err_alloc_qos_cap;
+ 	}
+ 
+-	/* set up the locks for the AQ, do this only once in probe
+-	 * and destroy them only once in remove
+-	 */
+-	mutex_init(&adapter->crit_lock);
+ 	mutex_init(&hw->aq.asq_mutex);
+ 	mutex_init(&hw->aq.arq_mutex);
+ 
+@@ -5596,22 +5495,24 @@ static int iavf_suspend(struct device *dev_d)
+ {
+ 	struct net_device *netdev = dev_get_drvdata(dev_d);
+ 	struct iavf_adapter *adapter = netdev_priv(netdev);
++	bool running;
+ 
+ 	netif_device_detach(netdev);
+ 
++	running = netif_running(netdev);
++	if (running)
++		rtnl_lock();
+ 	netdev_lock(netdev);
+-	mutex_lock(&adapter->crit_lock);
+ 
+-	if (netif_running(netdev)) {
+-		rtnl_lock();
++	if (running)
+ 		iavf_down(adapter);
+-		rtnl_unlock();
+-	}
++
+ 	iavf_free_misc_irq(adapter);
+ 	iavf_reset_interrupt_capability(adapter);
+ 
+-	mutex_unlock(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
++	if (running)
++		rtnl_unlock();
+ 
+ 	return 0;
+ }
+@@ -5688,20 +5589,20 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	 * There are flows where register/unregister netdev may race.
+ 	 */
+ 	while (1) {
+-		mutex_lock(&adapter->crit_lock);
++		netdev_lock(netdev);
+ 		if (adapter->state == __IAVF_RUNNING ||
+ 		    adapter->state == __IAVF_DOWN ||
+ 		    adapter->state == __IAVF_INIT_FAILED) {
+-			mutex_unlock(&adapter->crit_lock);
++			netdev_unlock(netdev);
+ 			break;
+ 		}
+ 		/* Simply return if we already went through iavf_shutdown */
+ 		if (adapter->state == __IAVF_REMOVE) {
+-			mutex_unlock(&adapter->crit_lock);
++			netdev_unlock(netdev);
+ 			return;
+ 		}
+ 
+-		mutex_unlock(&adapter->crit_lock);
++		netdev_unlock(netdev);
+ 		usleep_range(500, 1000);
+ 	}
+ 	cancel_delayed_work_sync(&adapter->watchdog_task);
+@@ -5711,7 +5612,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		unregister_netdev(netdev);
+ 
+ 	netdev_lock(netdev);
+-	mutex_lock(&adapter->crit_lock);
+ 	dev_info(&adapter->pdev->dev, "Removing device\n");
+ 	iavf_change_state(adapter, __IAVF_REMOVE);
+ 
+@@ -5727,9 +5627,11 @@ static void iavf_remove(struct pci_dev *pdev)
+ 
+ 	iavf_misc_irq_disable(adapter);
+ 	/* Shut down all the garbage mashers on the detention level */
++	netdev_unlock(netdev);
+ 	cancel_work_sync(&adapter->reset_task);
+ 	cancel_delayed_work_sync(&adapter->watchdog_task);
+ 	cancel_work_sync(&adapter->adminq_task);
++	netdev_lock(netdev);
+ 
+ 	adapter->aq_required = 0;
+ 	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+@@ -5747,8 +5649,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	/* destroy the locks only once, here */
+ 	mutex_destroy(&hw->aq.arq_mutex);
+ 	mutex_destroy(&hw->aq.asq_mutex);
+-	mutex_unlock(&adapter->crit_lock);
+-	mutex_destroy(&adapter->crit_lock);
+ 	netdev_unlock(netdev);
+ 
+ 	iounmap(hw->hw_addr);
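The iavf hunks above drop the driver-private crit_lock in favour of the netdev instance lock and split the watchdog into iavf_watchdog_step(), which returns the reschedule delay in milliseconds, with IAVF_NO_RESCHED meaning "do not requeue". A minimal userspace sketch of that return-a-delay shape follows; every type and value in it is invented for illustration, it is not the driver code.

#include <stdio.h>
#include <stdbool.h>

#define NO_RESCHED (-1)  /* sentinel: the caller must not requeue */

struct toy_adapter {
	int  state;        /* 0=init, 1=comms failed, 2=remove, 3=run */
	bool aq_required;  /* pending admin-queue requests            */
};

/* One pass of the state machine; returns msec until the next pass. */
static int watchdog_step(struct toy_adapter *ad)
{
	switch (ad->state) {
	case 0:  return 1;          /* init: retry almost at once */
	case 1:  return 5000;       /* comms down: back off       */
	case 2:  return NO_RESCHED; /* being removed: stop        */
	default: return ad->aq_required ? 20 : 2000;
	}
}

int main(void)
{
	struct toy_adapter ad = { .state = 3, .aq_required = true };
	int delay = watchdog_step(&ad);  /* real caller holds the lock */

	if (delay != NO_RESCHED)
		printf("requeue watchdog in %d ms\n", delay);
	return 0;
}

The point of the refactor is that locking and requeueing now live in exactly one place, the thin iavf_watchdog_task() wrapper, instead of being repeated before every early return.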
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index a6f0e5990be250..07f0d0a0f1e28a 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -79,6 +79,23 @@ iavf_poll_virtchnl_msg(struct iavf_hw *hw, struct iavf_arq_event_info *event,
+ 			return iavf_status_to_errno(status);
+ 		received_op =
+ 		    (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);
++
++		if (received_op == VIRTCHNL_OP_EVENT) {
++			struct iavf_adapter *adapter = hw->back;
++			struct virtchnl_pf_event *vpe =
++				(struct virtchnl_pf_event *)event->msg_buf;
++
++			if (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)
++				continue;
++
++			dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n");
++			if (!(adapter->flags & IAVF_FLAG_RESET_PENDING))
++				iavf_schedule_reset(adapter,
++						    IAVF_FLAG_RESET_PENDING);
++
++			return -EIO;
++		}
++
+ 		if (op_to_poll == received_op)
+ 			break;
+ 	}
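The iavf_virtchnl.c hunk makes the synchronous poll loop recognize a VIRTCHNL_OP_EVENT carrying VIRTCHNL_EVENT_RESET_IMPENDING, schedule the reset task, and abort the wait with -EIO instead of spinning until timeout. A toy sketch of that intercept-and-abort pattern; the message source and all names are invented.

#include <stdio.h>
#include <errno.h>
#include <stdbool.h>

enum op { OP_REPLY = 1, OP_EVENT = 2 };

struct msg { enum op op; bool reset_impending; };

static bool reset_scheduled;

/* stand-in for reading the next admin-queue message */
static int next_msg(struct msg *m, int i)
{
	m->op = (i == 0) ? OP_EVENT : OP_REPLY;
	m->reset_impending = true;
	return 0;
}

static int poll_for(enum op wanted)
{
	struct msg m;

	for (int i = 0; i < 8; i++) {
		if (next_msg(&m, i))
			return -EIO;
		if (m.op == OP_EVENT) {
			if (!m.reset_impending)
				continue;        /* ignore other events */
			reset_scheduled = true;  /* kick the reset task */
			return -EIO;             /* abort this wait     */
		}
		if (m.op == wanted)
			return 0;
	}
	return -ETIMEDOUT;
}

int main(void)
{
	printf("poll: %d, reset scheduled: %d\n",
	       poll_for(OP_REPLY), reset_scheduled);
	return 0;
}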
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index d390157b59fe18..82d472f1d781a7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2740,6 +2740,27 @@ void ice_map_xdp_rings(struct ice_vsi *vsi)
+ 	}
+ }
+ 
++/**
++ * ice_unmap_xdp_rings - Unmap XDP rings from interrupt vectors
++ * @vsi: the VSI with XDP rings being unmapped
++ */
++static void ice_unmap_xdp_rings(struct ice_vsi *vsi)
++{
++	int v_idx;
++
++	ice_for_each_q_vector(vsi, v_idx) {
++		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
++		struct ice_tx_ring *ring;
++
++		ice_for_each_tx_ring(ring, q_vector->tx)
++			if (!ring->tx_buf || !ice_ring_is_xdp(ring))
++				break;
++
++		/* restore the value of last node prior to XDP setup */
++		q_vector->tx.tx_ring = ring;
++	}
++}
++
+ /**
+  * ice_prepare_xdp_rings - Allocate, configure and setup Tx rings for XDP
+  * @vsi: VSI to bring up Tx rings used by XDP
+@@ -2803,7 +2824,7 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 	if (status) {
+ 		dev_err(dev, "Failed VSI LAN queue config for XDP, error: %d\n",
+ 			status);
+-		goto clear_xdp_rings;
++		goto unmap_xdp_rings;
+ 	}
+ 
+ 	/* assign the prog only when it's not already present on VSI;
+@@ -2819,6 +2840,8 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 		ice_vsi_assign_bpf_prog(vsi, prog);
+ 
+ 	return 0;
++unmap_xdp_rings:
++	ice_unmap_xdp_rings(vsi);
+ clear_xdp_rings:
+ 	ice_for_each_xdp_txq(vsi, i)
+ 		if (vsi->xdp_rings[i]) {
+@@ -2835,6 +2858,8 @@ int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 	mutex_unlock(&pf->avail_q_mutex);
+ 
+ 	devm_kfree(dev, vsi->xdp_rings);
++	vsi->xdp_rings = NULL;
++
+ 	return -ENOMEM;
+ }
+ 
+@@ -2850,7 +2875,7 @@ int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type)
+ {
+ 	u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
+ 	struct ice_pf *pf = vsi->back;
+-	int i, v_idx;
++	int i;
+ 
+ 	/* q_vectors are freed in reset path so there's no point in detaching
+ 	 * rings
+@@ -2858,17 +2883,7 @@ int ice_destroy_xdp_rings(struct ice_vsi *vsi, enum ice_xdp_cfg cfg_type)
+ 	if (cfg_type == ICE_XDP_CFG_PART)
+ 		goto free_qmap;
+ 
+-	ice_for_each_q_vector(vsi, v_idx) {
+-		struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
+-		struct ice_tx_ring *ring;
+-
+-		ice_for_each_tx_ring(ring, q_vector->tx)
+-			if (!ring->tx_buf || !ice_ring_is_xdp(ring))
+-				break;
+-
+-		/* restore the value of last node prior to XDP setup */
+-		q_vector->tx.tx_ring = ring;
+-	}
++	ice_unmap_xdp_rings(vsi);
+ 
+ free_qmap:
+ 	mutex_lock(&pf->avail_q_mutex);
+@@ -3013,11 +3028,14 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 		xdp_ring_err = ice_vsi_determine_xdp_res(vsi);
+ 		if (xdp_ring_err) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Not enough Tx resources for XDP");
++			goto resume_if;
+ 		} else {
+ 			xdp_ring_err = ice_prepare_xdp_rings(vsi, prog,
+ 							     ICE_XDP_CFG_FULL);
+-			if (xdp_ring_err)
++			if (xdp_ring_err) {
+ 				NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
++				goto resume_if;
++			}
+ 		}
+ 		xdp_features_set_redirect_target(vsi->netdev, true);
+ 		/* reallocate Rx queues that are used for zero-copy */
+@@ -3035,6 +3053,7 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
+ 			NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Rx resources failed");
+ 	}
+ 
++resume_if:
+ 	if (if_running)
+ 		ret = ice_up(vsi);
+ 
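The ice_main.c hunks factor the vector-unmapping loop out into ice_unmap_xdp_rings(), give the setup error path an unmap_xdp_rings label so a failed LAN queue config unwinds the mapping it already did, NULL vsi->xdp_rings after freeing it, and add a resume_if label so a failed XDP setup still brings the interface back up. A compact sketch of that layered goto unwind, under invented names:

#include <stdio.h>
#include <stdlib.h>

struct toy_vsi { int *xdp_rings; int mapped; };

static int setup_xdp(struct toy_vsi *v, int fail_cfg)
{
	v->xdp_rings = calloc(4, sizeof(*v->xdp_rings));
	if (!v->xdp_rings)
		return -1;

	v->mapped = 1;          /* stage 2: map rings to vectors */

	if (fail_cfg)           /* stage 3: queue config failed  */
		goto unmap_rings;
	return 0;

unmap_rings:
	v->mapped = 0;          /* undo stage 2 first            */
	free(v->xdp_rings);     /* then stage 1                  */
	v->xdp_rings = NULL;    /* guard any later teardown      */
	return -1;
}

int main(void)
{
	struct toy_vsi v = { 0 };

	printf("setup: %d, rings: %p\n", setup_xdp(&v, 1),
	       (void *)v.xdp_rings);
	return 0;
}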
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 1fd1ae03eb9096..11ed48a62b5360 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -2307,6 +2307,7 @@ static int ice_capture_crosststamp(ktime_t *device,
+ 	ts = ((u64)ts_hi << 32) | ts_lo;
+ 	system->cycles = ts;
+ 	system->cs_id = CSID_X86_ART;
++	system->use_nsecs = true;
+ 
+ 	/* Read Device source clock time */
+ 	ts_lo = rd32(hw, cfg->dev_time_l[tmr_idx]);
+diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
+index 6ca13c5dcb14e7..d9d09296d1d481 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sched.c
++++ b/drivers/net/ethernet/intel/ice/ice_sched.c
+@@ -84,6 +84,27 @@ ice_sched_find_node_by_teid(struct ice_sched_node *start_node, u32 teid)
+ 	return NULL;
+ }
+ 
++/**
++ * ice_sched_find_next_vsi_node - find the next node for a given VSI
++ * @vsi_node: VSI support node to start search with
++ *
++ * Return: Next VSI support node, or NULL.
++ *
++ * The function returns a pointer to the next node from the VSI layer
++ * assigned to the given VSI, or NULL if there is no such node.
++ */
++static struct ice_sched_node *
++ice_sched_find_next_vsi_node(struct ice_sched_node *vsi_node)
++{
++	unsigned int vsi_handle = vsi_node->vsi_handle;
++
++	while ((vsi_node = vsi_node->sibling) != NULL)
++		if (vsi_node->vsi_handle == vsi_handle)
++			break;
++
++	return vsi_node;
++}
++
+ /**
+  * ice_aqc_send_sched_elem_cmd - send scheduling elements cmd
+  * @hw: pointer to the HW struct
+@@ -1084,8 +1105,10 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
+ 		if (parent->num_children < max_child_nodes) {
+ 			new_num_nodes = max_child_nodes - parent->num_children;
+ 		} else {
+-			/* This parent is full, try the next sibling */
+-			parent = parent->sibling;
++			/* This parent is full,
++			 * try the next available sibling.
++			 */
++			parent = ice_sched_find_next_vsi_node(parent);
+ 			/* Don't modify the first node TEID memory if the
+ 			 * first node was added already in the above call.
+ 			 * Instead send some temp memory for all other
+@@ -1528,12 +1551,23 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ 	/* get the first queue group node from VSI sub-tree */
+ 	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
+ 	while (qgrp_node) {
++		struct ice_sched_node *next_vsi_node;
++
+ 		/* make sure the qgroup node is part of the VSI subtree */
+ 		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
+ 			if (qgrp_node->num_children < max_children &&
+ 			    qgrp_node->owner == owner)
+ 				break;
+ 		qgrp_node = qgrp_node->sibling;
++		if (qgrp_node)
++			continue;
++
++		next_vsi_node = ice_sched_find_next_vsi_node(vsi_node);
++		if (!next_vsi_node)
++			break;
++
++		vsi_node = next_vsi_node;
++		qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
+ 	}
+ 
+ 	/* Select the best queue group */
+@@ -1604,16 +1638,16 @@ ice_sched_get_agg_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
+ /**
+  * ice_sched_calc_vsi_child_nodes - calculate number of VSI child nodes
+  * @hw: pointer to the HW struct
+- * @num_qs: number of queues
++ * @num_new_qs: number of new queues that will be added to the tree
+  * @num_nodes: num nodes array
+  *
+  * This function calculates the number of VSI child nodes based on the
+  * number of queues.
+  */
+ static void
+-ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
++ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_new_qs, u16 *num_nodes)
+ {
+-	u16 num = num_qs;
++	u16 num = num_new_qs;
+ 	u8 i, qgl, vsil;
+ 
+ 	qgl = ice_sched_get_qgrp_layer(hw);
+@@ -1779,7 +1813,11 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_handle,
+ 		if (!parent)
+ 			return -EIO;
+ 
+-		if (i == vsil)
++		/* Do not modify the VSI handle for already existing VSI nodes,
++		 * (if no new VSI node was added to the tree).
++		 * Assign the VSI handle only to newly added VSI nodes.
++		 */
++		if (i == vsil && num_added)
+ 			parent->vsi_handle = vsi_handle;
+ 	}
+ 
+@@ -1812,6 +1850,41 @@ ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
+ 					       num_nodes);
+ }
+ 
++/**
++ * ice_sched_recalc_vsi_support_nodes - recalculate VSI support nodes count
++ * @hw: pointer to the HW struct
++ * @vsi_node: pointer to the leftmost VSI node that needs to be extended
++ * @new_numqs: new number of queues that has to be handled by the VSI
++ * @new_num_nodes: pointer to nodes count table to modify the VSI layer entry
++ *
++ * This function recalculates the number of support nodes that need to
++ * be added after adding more Tx queues for a given VSI.
++ * The number of new VSI support nodes that shall be added will be saved
++ * to the @new_num_nodes table for the VSI layer.
++ */
++static void
++ice_sched_recalc_vsi_support_nodes(struct ice_hw *hw,
++				   struct ice_sched_node *vsi_node,
++				   unsigned int new_numqs, u16 *new_num_nodes)
++{
++	u32 vsi_nodes_cnt = 1;
++	u32 max_queue_cnt = 1;
++	u32 qgl, vsil;
++
++	qgl = ice_sched_get_qgrp_layer(hw);
++	vsil = ice_sched_get_vsi_layer(hw);
++
++	for (u32 i = vsil; i <= qgl; i++)
++		max_queue_cnt *= hw->max_children[i];
++
++	while ((vsi_node = ice_sched_find_next_vsi_node(vsi_node)) != NULL)
++		vsi_nodes_cnt++;
++
++	if (new_numqs > (max_queue_cnt * vsi_nodes_cnt))
++		new_num_nodes[vsil] = DIV_ROUND_UP(new_numqs, max_queue_cnt) -
++				      vsi_nodes_cnt;
++}
++
+ /**
+  * ice_sched_update_vsi_child_nodes - update VSI child nodes
+  * @pi: port information structure
+@@ -1863,15 +1936,25 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
+ 			return status;
+ 	}
+ 
+-	if (new_numqs)
+-		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
+-	/* Keep the max number of queue configuration all the time. Update the
+-	 * tree only if number of queues > previous number of queues. This may
++	ice_sched_recalc_vsi_support_nodes(hw, vsi_node,
++					   new_numqs, new_num_nodes);
++	ice_sched_calc_vsi_child_nodes(hw, new_numqs - prev_numqs,
++				       new_num_nodes);
++
++	/* Never decrease the number of queues in the tree. Update the tree
++	 * only if number of queues > previous number of queues. This may
+ 	 * leave some extra nodes in the tree if number of queues < previous
+ 	 * number but that wouldn't harm anything. Removing those extra nodes
+ 	 * may complicate the code if those nodes are part of SRL or
+ 	 * individually rate limited.
++	 * Also, add the required VSI support nodes if the existing ones cannot
++	 * handle the requested new number of queues.
+ 	 */
++	status = ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
++						 new_num_nodes);
++	if (status)
++		return status;
++
+ 	status = ice_sched_add_vsi_child_nodes(pi, vsi_handle, tc_node,
+ 					       new_num_nodes, owner);
+ 	if (status)
+@@ -2012,6 +2095,58 @@ static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node)
+ 	return (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF);
+ }
+ 
++/**
++ * ice_sched_rm_vsi_subtree - remove all nodes assigned to a given VSI
++ * @pi: port information structure
++ * @vsi_node: pointer to the leftmost node of the VSI to be removed
++ * @owner: LAN or RDMA
++ * @tc: TC number
++ *
++ * Return: Zero in case of success, or -EBUSY if the VSI has leaf nodes in TC.
++ *
++ * This function removes all the VSI support nodes associated with a given VSI
++ * and its LAN or RDMA children nodes from the scheduler tree.
++ */
++static int
++ice_sched_rm_vsi_subtree(struct ice_port_info *pi,
++			 struct ice_sched_node *vsi_node, u8 owner, u8 tc)
++{
++	u16 vsi_handle = vsi_node->vsi_handle;
++	bool all_vsi_nodes_removed = true;
++	int j = 0;
++
++	while (vsi_node) {
++		struct ice_sched_node *next_vsi_node;
++
++		if (ice_sched_is_leaf_node_present(vsi_node)) {
++			ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", tc);
++			return -EBUSY;
++		}
++		while (j < vsi_node->num_children) {
++			if (vsi_node->children[j]->owner == owner)
++				ice_free_sched_node(pi, vsi_node->children[j]);
++			else
++				j++;
++		}
++
++		next_vsi_node = ice_sched_find_next_vsi_node(vsi_node);
++
++		/* remove the VSI if it has no children */
++		if (!vsi_node->num_children)
++			ice_free_sched_node(pi, vsi_node);
++		else
++			all_vsi_nodes_removed = false;
++
++		vsi_node = next_vsi_node;
++	}
++
++	/* clean up aggregator related VSI info if any */
++	if (all_vsi_nodes_removed)
++		ice_sched_rm_agg_vsi_info(pi, vsi_handle);
++
++	return 0;
++}
++
+ /**
+  * ice_sched_rm_vsi_cfg - remove the VSI and its children nodes
+  * @pi: port information structure
+@@ -2038,7 +2173,6 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+ 
+ 	ice_for_each_traffic_class(i) {
+ 		struct ice_sched_node *vsi_node, *tc_node;
+-		u8 j = 0;
+ 
+ 		tc_node = ice_sched_get_tc_node(pi, i);
+ 		if (!tc_node)
+@@ -2048,31 +2182,12 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
+ 		if (!vsi_node)
+ 			continue;
+ 
+-		if (ice_sched_is_leaf_node_present(vsi_node)) {
+-			ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", i);
+-			status = -EBUSY;
++		status = ice_sched_rm_vsi_subtree(pi, vsi_node, owner, i);
++		if (status)
+ 			goto exit_sched_rm_vsi_cfg;
+-		}
+-		while (j < vsi_node->num_children) {
+-			if (vsi_node->children[j]->owner == owner) {
+-				ice_free_sched_node(pi, vsi_node->children[j]);
+ 
+-				/* reset the counter again since the num
+-				 * children will be updated after node removal
+-				 */
+-				j = 0;
+-			} else {
+-				j++;
+-			}
+-		}
+-		/* remove the VSI if it has no children */
+-		if (!vsi_node->num_children) {
+-			ice_free_sched_node(pi, vsi_node);
+-			vsi_ctx->sched.vsi_node[i] = NULL;
++		vsi_ctx->sched.vsi_node[i] = NULL;
+ 
+-			/* clean up aggregator related VSI info if any */
+-			ice_sched_rm_agg_vsi_info(pi, vsi_handle);
+-		}
+ 		if (owner == ICE_SCHED_NODE_OWNER_LAN)
+ 			vsi_ctx->sched.max_lanq[i] = 0;
+ 		else
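All of the ice_sched.c hunks hang off ice_sched_find_next_vsi_node(): a VSI may now own more than one support node on the VSI layer, and those nodes are linked through the generic sibling list together with nodes of other VSIs, so every walk has to skip siblings whose handle does not match. A minimal sketch of that filtered sibling walk, with the scheduler tree reduced to a toy struct:

#include <stdio.h>

struct toy_node {
	unsigned int vsi_handle;
	struct toy_node *sibling;
};

/* next sibling that belongs to the same VSI, or NULL */
static struct toy_node *find_next_vsi_node(struct toy_node *n)
{
	unsigned int handle = n->vsi_handle;

	while ((n = n->sibling) != NULL)
		if (n->vsi_handle == handle)
			break;
	return n;
}

int main(void)
{
	struct toy_node c = { 7, NULL };
	struct toy_node b = { 3, &c };   /* different VSI, skipped */
	struct toy_node a = { 7, &b };
	struct toy_node *next = find_next_vsi_node(&a);

	printf("next node for VSI 7: %u\n", next ? next->vsi_handle : 0);
	return 0;
}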
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index 3a033ce19cda23..2ed801398971cc 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1816,11 +1816,19 @@ void idpf_vc_event_task(struct work_struct *work)
+ 	if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+ 		return;
+ 
+-	if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
+-	    test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
+-		set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+-		idpf_init_hard_reset(adapter);
+-	}
++	if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags))
++		goto func_reset;
++
++	if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags))
++		goto drv_load;
++
++	return;
++
++func_reset:
++	idpf_vc_xn_shutdown(adapter->vcxn_mngr);
++drv_load:
++	set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
++	idpf_init_hard_reset(adapter);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+index eae1b6f474e624..6ade54e213259c 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+@@ -362,17 +362,18 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ {
+ 	struct idpf_tx_offload_params offload = { };
+ 	struct idpf_tx_buf *first;
++	int csum, tso, needed;
+ 	unsigned int count;
+ 	__be16 protocol;
+-	int csum, tso;
+ 
+ 	count = idpf_tx_desc_count_required(tx_q, skb);
+ 	if (unlikely(!count))
+ 		return idpf_tx_drop_skb(tx_q, skb);
+ 
+-	if (idpf_tx_maybe_stop_common(tx_q,
+-				      count + IDPF_TX_DESCS_PER_CACHE_LINE +
+-				      IDPF_TX_DESCS_FOR_CTX)) {
++	needed = count + IDPF_TX_DESCS_PER_CACHE_LINE + IDPF_TX_DESCS_FOR_CTX;
++	if (!netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
++				       IDPF_DESC_UNUSED(tx_q),
++				       needed, needed)) {
+ 		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+ 
+ 		u64_stats_update_begin(&tx_q->stats_sync);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 2d5f5c9f91ce1e..aa16e4c1edbb8b 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -2132,6 +2132,19 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ 	desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+ }
+ 
++/* Global conditions to tell whether the txq (and related resources)
++ * has room to allow the use of "size" descriptors.
++ */
++static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size)
++{
++	if (IDPF_DESC_UNUSED(tx_q) < size ||
++	    IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
++		IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) ||
++	    IDPF_TX_BUF_RSV_LOW(tx_q))
++		return 0;
++	return 1;
++}
++
+ /**
+  * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
+  * @tx_q: the queue to be checked
+@@ -2142,29 +2155,11 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
+ 				     unsigned int descs_needed)
+ {
+-	if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
+-		goto out;
+-
+-	/* If there are too many outstanding completions expected on the
+-	 * completion queue, stop the TX queue to give the device some time to
+-	 * catch up
+-	 */
+-	if (unlikely(IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
+-		     IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq)))
+-		goto splitq_stop;
+-
+-	/* Also check for available book keeping buffers; if we are low, stop
+-	 * the queue to wait for more completions
+-	 */
+-	if (unlikely(IDPF_TX_BUF_RSV_LOW(tx_q)))
+-		goto splitq_stop;
+-
+-	return 0;
+-
+-splitq_stop:
+-	netif_stop_subqueue(tx_q->netdev, tx_q->idx);
++	if (netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
++				      idpf_txq_has_room(tx_q, descs_needed),
++				      1, 1))
++		return 0;
+ 
+-out:
+ 	u64_stats_update_begin(&tx_q->stats_sync);
+ 	u64_stats_inc(&tx_q->q_stats.q_busy);
+ 	u64_stats_update_end(&tx_q->stats_sync);
+@@ -2190,12 +2185,6 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
+ 	nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
+ 	tx_q->next_to_use = val;
+ 
+-	if (idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED)) {
+-		u64_stats_update_begin(&tx_q->stats_sync);
+-		u64_stats_inc(&tx_q->q_stats.q_busy);
+-		u64_stats_update_end(&tx_q->stats_sync);
+-	}
+-
+ 	/* Force memory writes to complete before letting h/w
+ 	 * know there are new descriptors to fetch.  (Only
+ 	 * applicable for weak-ordered memory model archs,
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+index b029f566e57cd6..c192a6c547dd32 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+@@ -1037,12 +1037,4 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
+ 				      u16 cleaned_count);
+ int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
+ 
+-static inline bool idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q,
+-					     u32 needed)
+-{
+-	return !netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
+-					  IDPF_DESC_UNUSED(tx_q),
+-					  needed, needed);
+-}
+-
+ #endif /* !_IDPF_TXRX_H_ */
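Taken together, the three idpf hunks above retire the idpf_tx_maybe_stop_common() wrapper: singleq calls netif_subqueue_maybe_stop() directly, and splitq gathers all of its room conditions into one idpf_txq_has_room() predicate instead of a chain of goto splitq_stop checks. A sketch of that one-predicate shape, where maybe_stop() is only a stand-in for the real netdev helper:

#include <stdio.h>

struct toy_txq { int unused_descs; int complq_pending; int bufs_low; };

/* all "is there room" conditions in one place */
static int txq_has_room(const struct toy_txq *q, int size)
{
	if (q->unused_descs < size || q->complq_pending || q->bufs_low)
		return 0;
	return 1;
}

/* stand-in for netif_subqueue_maybe_stop(); nonzero = keep going */
static int maybe_stop(int has_room)
{
	if (!has_room) {
		/* the real helper stops the subqueue here */
		return 0;
	}
	return 1;
}

int main(void)
{
	struct toy_txq q = { .unused_descs = 8 };

	printf("keep running: %d\n", maybe_stop(txq_has_room(&q, 16)));
	return 0;
}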
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index 3d2413b8684fca..5d2ca007f6828e 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -376,7 +376,7 @@ static void idpf_vc_xn_init(struct idpf_vc_xn_manager *vcxn_mngr)
+  * All waiting threads will be woken-up and their transaction aborted. Further
+  * operations on that object will fail.
+  */
+-static void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr)
++void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr)
+ {
+ 	int i;
+ 
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+index 83da5d8da56bf2..23271cf0a21605 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h
+@@ -66,5 +66,6 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport);
+ int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
+ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get);
+ int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get);
++void idpf_vc_xn_shutdown(struct idpf_vc_xn_manager *vcxn_mngr);
+ 
+ #endif /* _IDPF_VIRTCHNL_H_ */
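The idpf hunks export idpf_vc_xn_shutdown() so that idpf_vc_event_task() can abort outstanding virtchnl transactions before a function reset, while a plain driver load skips that step; the goto fallthrough lets both paths share the hard-reset tail. A toy sketch of that label fallthrough, with stubs in place of the driver calls:

#include <stdio.h>
#include <stdbool.h>

static void xn_shutdown(void) { puts("abort pending transactions"); }
static void hard_reset(void)  { puts("hard reset"); }

static void event_task(bool func_reset, bool drv_load)
{
	if (func_reset)
		goto do_func_reset;
	if (drv_load)
		goto do_drv_load;
	return;

do_func_reset:
	xn_shutdown();   /* only the reset path needs this */
do_drv_load:             /* both paths fall through here   */
	hard_reset();
}

int main(void)
{
	event_task(true, false);
	return 0;
}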
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index 07ea1954a276ed..796e90d741f022 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -9,7 +9,7 @@
+ #define IXGBE_IPSEC_KEY_BITS  160
+ static const char aes_gcm_name[] = "rfc4106(gcm(aes))";
+ 
+-static void ixgbe_ipsec_del_sa(struct xfrm_state *xs);
++static void ixgbe_ipsec_del_sa(struct net_device *dev, struct xfrm_state *xs);
+ 
+ /**
+  * ixgbe_ipsec_set_tx_sa - set the Tx SA registers
+@@ -321,7 +321,7 @@ void ixgbe_ipsec_restore(struct ixgbe_adapter *adapter)
+ 
+ 		if (r->used) {
+ 			if (r->mode & IXGBE_RXTXMOD_VF)
+-				ixgbe_ipsec_del_sa(r->xs);
++				ixgbe_ipsec_del_sa(adapter->netdev, r->xs);
+ 			else
+ 				ixgbe_ipsec_set_rx_sa(hw, i, r->xs->id.spi,
+ 						      r->key, r->salt,
+@@ -330,7 +330,7 @@ void ixgbe_ipsec_restore(struct ixgbe_adapter *adapter)
+ 
+ 		if (t->used) {
+ 			if (t->mode & IXGBE_RXTXMOD_VF)
+-				ixgbe_ipsec_del_sa(t->xs);
++				ixgbe_ipsec_del_sa(adapter->netdev, t->xs);
+ 			else
+ 				ixgbe_ipsec_set_tx_sa(hw, i, t->key, t->salt);
+ 		}
+@@ -417,6 +417,7 @@ static struct xfrm_state *ixgbe_ipsec_find_rx_state(struct ixgbe_ipsec *ipsec,
+ 
+ /**
+  * ixgbe_ipsec_parse_proto_keys - find the key and salt based on the protocol
++ * @dev: pointer to net device
+  * @xs: pointer to xfrm_state struct
+  * @mykey: pointer to key array to populate
+  * @mysalt: pointer to salt value to populate
+@@ -424,10 +425,10 @@ static struct xfrm_state *ixgbe_ipsec_find_rx_state(struct ixgbe_ipsec *ipsec,
+  * This copies the protocol keys and salt to our own data tables.  The
+  * 82599 family only supports the one algorithm.
+  **/
+-static int ixgbe_ipsec_parse_proto_keys(struct xfrm_state *xs,
++static int ixgbe_ipsec_parse_proto_keys(struct net_device *dev,
++					struct xfrm_state *xs,
+ 					u32 *mykey, u32 *mysalt)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -473,11 +474,12 @@ static int ixgbe_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbe_ipsec_check_mgmt_ip - make sure there is no clash with mgmt IP filters
++ * @dev: pointer to net device
+  * @xs: pointer to transformer state struct
+  **/
+-static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs)
++static int ixgbe_ipsec_check_mgmt_ip(struct net_device *dev,
++				     struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_hw *hw = &adapter->hw;
+ 	u32 mfval, manc, reg;
+@@ -556,13 +558,14 @@ static int ixgbe_ipsec_check_mgmt_ip(struct xfrm_state *xs)
+ 
+ /**
+  * ixgbe_ipsec_add_sa - program device with a security association
++ * @dev: pointer to device to program
+  * @xs: pointer to transformer state struct
+  * @extack: extack pointer to fill failure reason
+  **/
+-static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
++static int ixgbe_ipsec_add_sa(struct net_device *dev,
++			      struct xfrm_state *xs,
+ 			      struct netlink_ext_ack *extack)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_ipsec *ipsec = adapter->ipsec;
+ 	struct ixgbe_hw *hw = &adapter->hw;
+@@ -581,7 +584,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
++	if (ixgbe_ipsec_check_mgmt_ip(dev, xs)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "IPsec IP addr clash with mgmt filters");
+ 		return -EINVAL;
+ 	}
+@@ -615,7 +618,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 			rsa.decrypt = xs->ealg || xs->aead;
+ 
+ 		/* get the key and salt */
+-		ret = ixgbe_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt);
++		ret = ixgbe_ipsec_parse_proto_keys(dev, xs, rsa.key, &rsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table");
+ 			return ret;
+@@ -724,7 +727,7 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 		if (xs->id.proto & IPPROTO_ESP)
+ 			tsa.encrypt = xs->ealg || xs->aead;
+ 
+-		ret = ixgbe_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt);
++		ret = ixgbe_ipsec_parse_proto_keys(dev, xs, tsa.key, &tsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table");
+ 			memset(&tsa, 0, sizeof(tsa));
+@@ -752,11 +755,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbe_ipsec_del_sa - clear out this specific SA
++ * @dev: pointer to device to program
+  * @xs: pointer to transformer state struct
+  **/
+-static void ixgbe_ipsec_del_sa(struct xfrm_state *xs)
++static void ixgbe_ipsec_del_sa(struct net_device *dev, struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbe_adapter *adapter = netdev_priv(dev);
+ 	struct ixgbe_ipsec *ipsec = adapter->ipsec;
+ 	struct ixgbe_hw *hw = &adapter->hw;
+@@ -841,7 +844,8 @@ void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
+ 			continue;
+ 		if (ipsec->rx_tbl[i].mode & IXGBE_RXTXMOD_VF &&
+ 		    ipsec->rx_tbl[i].vf == vf)
+-			ixgbe_ipsec_del_sa(ipsec->rx_tbl[i].xs);
++			ixgbe_ipsec_del_sa(adapter->netdev,
++					   ipsec->rx_tbl[i].xs);
+ 	}
+ 
+ 	/* search tx sa table */
+@@ -850,7 +854,8 @@ void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
+ 			continue;
+ 		if (ipsec->tx_tbl[i].mode & IXGBE_RXTXMOD_VF &&
+ 		    ipsec->tx_tbl[i].vf == vf)
+-			ixgbe_ipsec_del_sa(ipsec->tx_tbl[i].xs);
++			ixgbe_ipsec_del_sa(adapter->netdev,
++					   ipsec->tx_tbl[i].xs);
+ 	}
+ }
+ 
+@@ -930,7 +935,7 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 	memcpy(xs->aead->alg_name, aes_gcm_name, sizeof(aes_gcm_name));
+ 
+ 	/* set up the HW offload */
+-	err = ixgbe_ipsec_add_sa(xs, NULL);
++	err = ixgbe_ipsec_add_sa(adapter->netdev, xs, NULL);
+ 	if (err)
+ 		goto err_aead;
+ 
+@@ -1034,7 +1039,7 @@ int ixgbe_ipsec_vf_del_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 		xs = ipsec->tx_tbl[sa_idx].xs;
+ 	}
+ 
+-	ixgbe_ipsec_del_sa(xs);
++	ixgbe_ipsec_del_sa(adapter->netdev, xs);
+ 
+ 	/* remove the xs that was made-up in the add request */
+ 	kfree_sensitive(xs);
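The ixgbe hunks above, like the ixgbevf and cn10k hunks below, switch the IPsec offload callbacks to an explicit net_device argument instead of reading xs->xso.real_dev inside the callback, which also lets the PF program VF-owned states against its own netdev. A toy illustration of the signature change only; the structs are stand-ins, not the xfrm API:

#include <stdio.h>

struct toy_dev   { const char *name; };
struct toy_state { int spi; };

/* old shape: the device was dug out of the state */
/* static int add_sa(struct toy_state *xs); */

/* new shape: the device is an explicit argument */
static int add_sa(struct toy_dev *dev, struct toy_state *xs)
{
	printf("program SPI %#x on %s\n", xs->spi, dev->name);
	return 0;
}

int main(void)
{
	struct toy_dev d = { "eth0" };
	struct toy_state s = { 0x100 };

	return add_sa(&d, &s);
}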
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index 8ba037e3d9c270..65580b9cb06f21 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -201,6 +201,7 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
+ 
+ /**
+  * ixgbevf_ipsec_parse_proto_keys - find the key and salt based on the protocol
++ * @dev: pointer to net device to program
+  * @xs: pointer to xfrm_state struct
+  * @mykey: pointer to key array to populate
+  * @mysalt: pointer to salt value to populate
+@@ -208,10 +209,10 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
+  * This copies the protocol keys and salt to our own data tables.  The
+  * 82599 family only supports the one algorithm.
+  **/
+-static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
++static int ixgbevf_ipsec_parse_proto_keys(struct net_device *dev,
++					  struct xfrm_state *xs,
+ 					  u32 *mykey, u32 *mysalt)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -256,13 +257,14 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbevf_ipsec_add_sa - program device with a security association
++ * @dev: pointer to net device to program
+  * @xs: pointer to transformer state struct
+  * @extack: extack pointer to fill failure reason
+  **/
+-static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
++static int ixgbevf_ipsec_add_sa(struct net_device *dev,
++				struct xfrm_state *xs,
+ 				struct netlink_ext_ack *extack)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbevf_adapter *adapter;
+ 	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+@@ -310,7 +312,8 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ 			rsa.decrypt = xs->ealg || xs->aead;
+ 
+ 		/* get the key and salt */
+-		ret = ixgbevf_ipsec_parse_proto_keys(xs, rsa.key, &rsa.salt);
++		ret = ixgbevf_ipsec_parse_proto_keys(dev, xs, rsa.key,
++						     &rsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Rx SA table");
+ 			return ret;
+@@ -363,7 +366,8 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ 		if (xs->id.proto & IPPROTO_ESP)
+ 			tsa.encrypt = xs->ealg || xs->aead;
+ 
+-		ret = ixgbevf_ipsec_parse_proto_keys(xs, tsa.key, &tsa.salt);
++		ret = ixgbevf_ipsec_parse_proto_keys(dev, xs, tsa.key,
++						     &tsa.salt);
+ 		if (ret) {
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for Tx SA table");
+ 			memset(&tsa, 0, sizeof(tsa));
+@@ -388,11 +392,12 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs,
+ 
+ /**
+  * ixgbevf_ipsec_del_sa - clear out this specific SA
++ * @dev: pointer to net device to program
+  * @xs: pointer to transformer state struct
+  **/
+-static void ixgbevf_ipsec_del_sa(struct xfrm_state *xs)
++static void ixgbevf_ipsec_del_sa(struct net_device *dev,
++				 struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.real_dev;
+ 	struct ixgbevf_adapter *adapter;
+ 	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index 655dd4726d36ef..0277d226293e9c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -143,6 +143,8 @@ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ 
+ 	otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+ 
++	otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
++
+ 	mutex_unlock(&rvu->mbox_lock);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 992fa0b82e8d2d..ebb56eb0d18cfd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -272,6 +272,8 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
+ 
+ 		otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
+ 
++		otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
++
+ 		mutex_unlock(&rvu->mbox_lock);
+ 	} while (pfmap);
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+index 052ae5923e3a85..32953cca108c80 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+@@ -60,6 +60,8 @@ static int rvu_rep_up_notify(struct rvu *rvu, struct rep_event *event)
+ 
+ 	otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+ 
++	otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
++
+ 	mutex_unlock(&rvu->mbox_lock);
+ 	return 0;
+ }
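The three octeontx2 AF hunks pair each otx2_mbox_msg_send_up() with otx2_mbox_wait_for_rsp() while still holding the mbox lock, turning the PF up-notification into a synchronous exchange so the next message cannot race the acknowledgement. A stub sketch of that send-then-wait pairing; the mbox type and helpers are invented:

#include <stdio.h>

struct toy_mbox { int pending; };

static void msg_send_up(struct toy_mbox *m)  { m->pending = 1; }
static int  wait_for_rsp(struct toy_mbox *m) { m->pending = 0; return 0; }

static int notify_pf(struct toy_mbox *m)
{
	/* mutex_lock(&mbox_lock) in the real driver */
	msg_send_up(m);
	wait_for_rsp(m);   /* new: block until the PF answers */
	/* mutex_unlock(&mbox_lock) */
	return 0;
}

int main(void)
{
	struct toy_mbox m = { 0 };

	printf("notify: %d, pending: %d\n", notify_pf(&m), m.pending);
	return 0;
}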
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+index fc59e50bafce66..a6500e3673f248 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+@@ -663,10 +663,10 @@ static int cn10k_ipsec_inb_add_state(struct xfrm_state *x,
+ 	return -EOPNOTSUPP;
+ }
+ 
+-static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
++static int cn10k_ipsec_outb_add_state(struct net_device *dev,
++				      struct xfrm_state *x,
+ 				      struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xso.dev;
+ 	struct cn10k_tx_sa_s *sa_entry;
+ 	struct qmem *sa_info;
+ 	struct otx2_nic *pf;
+@@ -676,7 +676,7 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
+ 	if (err)
+ 		return err;
+ 
+-	pf = netdev_priv(netdev);
++	pf = netdev_priv(dev);
+ 
+ 	err = qmem_alloc(pf->dev, &sa_info, pf->ipsec.sa_size, OTX2_ALIGN);
+ 	if (err)
+@@ -700,18 +700,18 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
+ 	return 0;
+ }
+ 
+-static int cn10k_ipsec_add_state(struct xfrm_state *x,
++static int cn10k_ipsec_add_state(struct net_device *dev,
++				 struct xfrm_state *x,
+ 				 struct netlink_ext_ack *extack)
+ {
+ 	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ 		return cn10k_ipsec_inb_add_state(x, extack);
+ 	else
+-		return cn10k_ipsec_outb_add_state(x, extack);
++		return cn10k_ipsec_outb_add_state(dev, x, extack);
+ }
+ 
+-static void cn10k_ipsec_del_state(struct xfrm_state *x)
++static void cn10k_ipsec_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+-	struct net_device *netdev = x->xso.dev;
+ 	struct cn10k_tx_sa_s *sa_entry;
+ 	struct qmem *sa_info;
+ 	struct otx2_nic *pf;
+@@ -720,7 +720,7 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x)
+ 	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ 		return;
+ 
+-	pf = netdev_priv(netdev);
++	pf = netdev_priv(dev);
+ 
+ 	sa_info = (struct qmem *)x->xso.offload_handle;
+ 	sa_entry = (struct cn10k_tx_sa_s *)sa_info->base;
+@@ -732,7 +732,7 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x)
+ 
+ 	err = cn10k_outb_write_sa(pf, sa_info);
+ 	if (err)
+-		netdev_err(netdev, "Error (%d) deleting SA\n", err);
++		netdev_err(dev, "Error (%d) deleting SA\n", err);
+ 
+ 	x->xso.offload_handle = 0;
+ 	qmem_free(pf->dev, sa_info);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+index 35acc07bd96489..5765bac119f0e7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+@@ -1638,6 +1638,7 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
+ 	if (!node->is_static)
+ 		dwrr_del_node = true;
+ 
++	WRITE_ONCE(node->qid, OTX2_QOS_QID_INNER);
+ 	/* destroy the leaf node */
+ 	otx2_qos_disable_sq(pfvf, qid);
+ 	otx2_qos_destroy_node(pfvf, node);
+@@ -1682,9 +1683,6 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
+ 	}
+ 	kfree(new_cfg);
+ 
+-	/* update tx_real_queues */
+-	otx2_qos_update_tx_netdev_queues(pfvf);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+index c5dbae0e513b64..58d572ce08eff5 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos_sq.c
+@@ -256,6 +256,26 @@ int otx2_qos_enable_sq(struct otx2_nic *pfvf, int qidx)
+ 	return err;
+ }
+ 
++static int otx2_qos_nix_npa_ndc_sync(struct otx2_nic *pfvf)
++{
++	struct ndc_sync_op *req;
++	int rc;
++
++	mutex_lock(&pfvf->mbox.lock);
++
++	req = otx2_mbox_alloc_msg_ndc_sync_op(&pfvf->mbox);
++	if (!req) {
++		mutex_unlock(&pfvf->mbox.lock);
++		return -ENOMEM;
++	}
++
++	req->nix_lf_tx_sync = true;
++	req->npa_lf_sync = true;
++	rc = otx2_sync_mbox_msg(&pfvf->mbox);
++	mutex_unlock(&pfvf->mbox.lock);
++	return rc;
++}
++
+ void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
+ {
+ 	struct otx2_qset *qset = &pfvf->qset;
+@@ -285,6 +305,8 @@ void otx2_qos_disable_sq(struct otx2_nic *pfvf, int qidx)
+ 
+ 	otx2_qos_sqb_flush(pfvf, sq_idx);
+ 	otx2_smq_flush(pfvf, otx2_get_smq_idx(pfvf, sq_idx));
++	/* NIX/NPA NDC sync */
++	otx2_qos_nix_npa_ndc_sync(pfvf);
+ 	otx2_cleanup_tx_cqes(pfvf, cq);
+ 
+ 	mutex_lock(&pfvf->mbox.lock);
+diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+index b175119a6a7da5..b83886a4112105 100644
+--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
++++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
+@@ -1463,6 +1463,8 @@ static __maybe_unused int mtk_star_suspend(struct device *dev)
+ 	if (netif_running(ndev))
+ 		mtk_star_disable(ndev);
+ 
++	netif_device_detach(ndev);
++
+ 	clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
+ 
+ 	return 0;
+@@ -1487,6 +1489,8 @@ static __maybe_unused int mtk_star_resume(struct device *dev)
+ 			clk_bulk_disable_unprepare(MTK_STAR_NCLKS, priv->clks);
+ 	}
+ 
++	netif_device_attach(ndev);
++
+ 	return ret;
+ }
+ 
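The mtk_star hunk brackets suspend/resume with netif_device_detach()/netif_device_attach(), so the stack stops queueing packets while the clocks are off and only resumes once the hardware is back. A toy model of that bracketing, with stand-in stubs:

#include <stdio.h>
#include <stdbool.h>

struct toy_ndev { bool present; };

static void detach(struct toy_ndev *n) { n->present = false; }
static void attach(struct toy_ndev *n) { n->present = true;  }

static int toy_suspend(struct toy_ndev *n)
{
	detach(n);         /* before the clocks go away       */
	/* clk_bulk_disable_unprepare(...) would follow here  */
	return 0;
}

static int toy_resume(struct toy_ndev *n)
{
	/* clk_bulk_prepare_enable(...) would come first here */
	attach(n);         /* only after the hardware is back */
	return 0;
}

int main(void)
{
	struct toy_ndev n = { true };

	toy_suspend(&n);
	toy_resume(&n);
	printf("present after resume: %d\n", n.present);
	return 0;
}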
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_clock.c b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
+index cd754cd76bde1b..d73a2044dc2662 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
+@@ -249,7 +249,7 @@ static const struct ptp_clock_info mlx4_en_ptp_clock_info = {
+ static u32 freq_to_shift(u16 freq)
+ {
+ 	u32 freq_khz = freq * 1000;
+-	u64 max_val_cycles = freq_khz * 1000 * MLX4_EN_WRAP_AROUND_SEC;
++	u64 max_val_cycles = freq_khz * 1000ULL * MLX4_EN_WRAP_AROUND_SEC;
+ 	u64 max_val_cycles_rounded = 1ULL << fls64(max_val_cycles - 1);
+ 	/* calculate max possible multiplier in order to fit in 64bit */
+ 	u64 max_mul = div64_u64(ULLONG_MAX, max_val_cycles_rounded);
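The mlx4 hunk fixes a classic width bug: freq_khz is a u32, so freq_khz * 1000 * MLX4_EN_WRAP_AROUND_SEC was evaluated entirely in 32-bit arithmetic before the assignment to the u64, and the 1000ULL literal promotes the whole product to 64 bits. A worked example, taking the wraparound constant as 10 purely for illustration:

#include <stdio.h>
#include <stdint.h>

#define WRAP_AROUND_SEC 10   /* illustrative value only */

int main(void)
{
	uint32_t freq_khz = 1000 * 1000;   /* a 1 GHz clock, in kHz */

	/* evaluated in 32 bits, wraps: 10^10 mod 2^32 = 1410065408 */
	uint64_t wrong = freq_khz * 1000 * WRAP_AROUND_SEC;
	/* 1000ULL widens the arithmetic before anything can wrap   */
	uint64_t right = freq_khz * 1000ULL * WRAP_AROUND_SEC;

	printf("32-bit product: %llu\n", (unsigned long long)wrong);
	printf("64-bit product: %llu\n", (unsigned long long)right);
	return 0;
}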
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+index f803e1c9359006..5ce1b463b7a8dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+@@ -707,8 +707,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
+ 				xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
+ 				page = xdpi.page.page;
+ 
+-				/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
+-				 * as we know this is a page_pool page.
++				/* No need to check page_pool_page_is_pp() as we
++				 * know this is a page_pool page.
+ 				 */
+ 				page_pool_recycle_direct(page->pp, page);
+ 			} while (++n < num);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index 2dd842aac6fc47..77f61cd28a7993 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -259,8 +259,7 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
+ 				  struct mlx5_accel_esp_xfrm_attrs *attrs)
+ {
+ 	struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
+-	struct xfrm_state *x = sa_entry->x;
+-	struct net_device *netdev;
++	struct net_device *netdev = sa_entry->dev;
+ 	struct neighbour *n;
+ 	u8 addr[ETH_ALEN];
+ 	const void *pkey;
+@@ -270,8 +269,6 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
+ 	    attrs->type != XFRM_DEV_OFFLOAD_PACKET)
+ 		return;
+ 
+-	netdev = x->xso.real_dev;
+-
+ 	mlx5_query_mac_address(mdev, addr);
+ 	switch (attrs->dir) {
+ 	case XFRM_DEV_OFFLOAD_IN:
+@@ -692,17 +689,17 @@ static int mlx5e_ipsec_create_dwork(struct mlx5e_ipsec_sa_entry *sa_entry)
+ 	return 0;
+ }
+ 
+-static int mlx5e_xfrm_add_state(struct xfrm_state *x,
++static int mlx5e_xfrm_add_state(struct net_device *dev,
++				struct xfrm_state *x,
+ 				struct netlink_ext_ack *extack)
+ {
+ 	struct mlx5e_ipsec_sa_entry *sa_entry = NULL;
+-	struct net_device *netdev = x->xso.real_dev;
+ 	struct mlx5e_ipsec *ipsec;
+ 	struct mlx5e_priv *priv;
+ 	gfp_t gfp;
+ 	int err;
+ 
+-	priv = netdev_priv(netdev);
++	priv = netdev_priv(dev);
+ 	if (!priv->ipsec)
+ 		return -EOPNOTSUPP;
+ 
+@@ -713,6 +710,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ 		return -ENOMEM;
+ 
+ 	sa_entry->x = x;
++	sa_entry->dev = dev;
+ 	sa_entry->ipsec = ipsec;
+ 	/* Check if this SA is originated from acquire flow temporary SA */
+ 	if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+@@ -809,7 +807,7 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ 	return err;
+ }
+ 
+-static void mlx5e_xfrm_del_state(struct xfrm_state *x)
++static void mlx5e_xfrm_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+ 	struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+ 	struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+@@ -822,7 +820,7 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
+ 	WARN_ON(old != sa_entry);
+ }
+ 
+-static void mlx5e_xfrm_free_state(struct xfrm_state *x)
++static void mlx5e_xfrm_free_state(struct net_device *dev, struct xfrm_state *x)
+ {
+ 	struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+ 	struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+@@ -855,8 +853,6 @@ static int mlx5e_ipsec_netevent_event(struct notifier_block *nb,
+ 	struct mlx5e_ipsec_sa_entry *sa_entry;
+ 	struct mlx5e_ipsec *ipsec;
+ 	struct neighbour *n = ptr;
+-	struct net_device *netdev;
+-	struct xfrm_state *x;
+ 	unsigned long idx;
+ 
+ 	if (event != NETEVENT_NEIGH_UPDATE || !(n->nud_state & NUD_VALID))
+@@ -876,11 +872,9 @@ static int mlx5e_ipsec_netevent_event(struct notifier_block *nb,
+ 				continue;
+ 		}
+ 
+-		x = sa_entry->x;
+-		netdev = x->xso.real_dev;
+ 		data = sa_entry->work->data;
+ 
+-		neigh_ha_snapshot(data->addr, n, netdev);
++		neigh_ha_snapshot(data->addr, n, sa_entry->dev);
+ 		queue_work(ipsec->wq, &sa_entry->work->work);
+ 	}
+ 
+@@ -996,8 +990,8 @@ static void mlx5e_xfrm_update_stats(struct xfrm_state *x)
+ 	size_t headers;
+ 
+ 	lockdep_assert(lockdep_is_held(&x->lock) ||
+-		       lockdep_is_held(&dev_net(x->xso.real_dev)->xfrm.xfrm_cfg_mutex) ||
+-		       lockdep_is_held(&dev_net(x->xso.real_dev)->xfrm.xfrm_state_lock));
++		       lockdep_is_held(&net->xfrm.xfrm_cfg_mutex) ||
++		       lockdep_is_held(&net->xfrm.xfrm_state_lock));
+ 
+ 	if (x->xso.flags & XFRM_DEV_OFFLOAD_FLAG_ACQ)
+ 		return;
+@@ -1170,7 +1164,7 @@ mlx5e_ipsec_build_accel_pol_attrs(struct mlx5e_ipsec_pol_entry *pol_entry,
+ static int mlx5e_xfrm_add_policy(struct xfrm_policy *x,
+ 				 struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xdo.real_dev;
++	struct net_device *netdev = x->xdo.dev;
+ 	struct mlx5e_ipsec_pol_entry *pol_entry;
+ 	struct mlx5e_priv *priv;
+ 	int err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+index a63c2289f8af92..ffcd0cdeb77544 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+@@ -274,6 +274,7 @@ struct mlx5e_ipsec_limits {
+ struct mlx5e_ipsec_sa_entry {
+ 	struct mlx5e_ipsec_esn_state esn_state;
+ 	struct xfrm_state *x;
++	struct net_device *dev;
+ 	struct mlx5e_ipsec *ipsec;
+ 	struct mlx5_accel_esp_xfrm_attrs attrs;
+ 	void (*set_iv_op)(struct sk_buff *skb, struct xfrm_state *x,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index fdf9e9bb99ace6..6253ea4e99a44f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -43,7 +43,6 @@
+ #include "en/fs_ethtool.h"
+ 
+ #define LANES_UNKNOWN		 0
+-#define MAX_LANES		 8
+ 
+ void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
+ 			       struct ethtool_drvinfo *drvinfo)
+@@ -1098,10 +1097,8 @@ static void get_link_properties(struct net_device *netdev,
+ 		speed = info->speed;
+ 		lanes = info->lanes;
+ 		duplex = DUPLEX_FULL;
+-	} else if (data_rate_oper) {
++	} else if (data_rate_oper)
+ 		speed = 100 * data_rate_oper;
+-		lanes = MAX_LANES;
+-	}
+ 
+ out:
+ 	link_ksettings->base.duplex = duplex;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index f1d908f611349f..fef418e1ed1a08 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2028,9 +2028,8 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ 	return err;
+ }
+ 
+-static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow)
++static bool mlx5_flow_has_geneve_opt(struct mlx5_flow_spec *spec)
+ {
+-	struct mlx5_flow_spec *spec = &flow->attr->parse_attr->spec;
+ 	void *headers_v = MLX5_ADDR_OF(fte_match_param,
+ 				       spec->match_value,
+ 				       misc_parameters_3);
+@@ -2069,7 +2068,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ 	}
+ 	complete_all(&flow->del_hw_done);
+ 
+-	if (mlx5_flow_has_geneve_opt(flow))
++	if (mlx5_flow_has_geneve_opt(&attr->parse_attr->spec))
+ 		mlx5_geneve_tlv_option_del(priv->mdev->geneve);
+ 
+ 	if (flow->decap_route)
+@@ -2574,12 +2573,13 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
+ 
+ 		err = mlx5e_tc_tun_parse(filter_dev, priv, tmp_spec, f, match_level);
+ 		if (err) {
+-			kvfree(tmp_spec);
+ 			NL_SET_ERR_MSG_MOD(extack, "Failed to parse tunnel attributes");
+ 			netdev_warn(priv->netdev, "Failed to parse tunnel attributes");
+-			return err;
++		} else {
++			err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
+ 		}
+-		err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
++		if (mlx5_flow_has_geneve_opt(tmp_spec))
++			mlx5_geneve_tlv_option_del(priv->mdev->geneve);
+ 		kvfree(tmp_spec);
+ 		if (err)
+ 			return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 7fb8a3381f849e..4917d185d0c352 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1295,12 +1295,15 @@ mlx5_eswitch_enable_pf_vf_vports(struct mlx5_eswitch *esw,
+ 		ret = mlx5_eswitch_load_pf_vf_vport(esw, MLX5_VPORT_ECPF, enabled_events);
+ 		if (ret)
+ 			goto ecpf_err;
+-		if (mlx5_core_ec_sriov_enabled(esw->dev)) {
+-			ret = mlx5_eswitch_load_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs,
+-							     enabled_events);
+-			if (ret)
+-				goto ec_vf_err;
+-		}
++	}
++
++	/* Enable ECVF vports */
++	if (mlx5_core_ec_sriov_enabled(esw->dev)) {
++		ret = mlx5_eswitch_load_ec_vf_vports(esw,
++						     esw->esw_funcs.num_ec_vfs,
++						     enabled_events);
++		if (ret)
++			goto ec_vf_err;
+ 	}
+ 
+ 	/* Enable VF vports */
+@@ -1331,9 +1334,11 @@ void mlx5_eswitch_disable_pf_vf_vports(struct mlx5_eswitch *esw)
+ {
+ 	mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);
+ 
++	if (mlx5_core_ec_sriov_enabled(esw->dev))
++		mlx5_eswitch_unload_ec_vf_vports(esw,
++						 esw->esw_funcs.num_ec_vfs);
++
+ 	if (mlx5_ecpf_vport_exists(esw->dev)) {
+-		if (mlx5_core_ec_sriov_enabled(esw->dev))
+-			mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_vfs);
+ 		mlx5_eswitch_unload_pf_vf_vport(esw, MLX5_VPORT_ECPF);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6163bc98d94a94..445301ea70426d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2207,6 +2207,7 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 	struct mlx5_flow_handle *rule;
+ 	struct match_list *iter;
+ 	bool take_write = false;
++	bool try_again = false;
+ 	struct fs_fte *fte;
+ 	u64  version = 0;
+ 	int err;
+@@ -2271,6 +2272,7 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
+ 
+ 		if (!g->node.active) {
++			try_again = true;
+ 			up_write_ref_node(&g->node, false);
+ 			continue;
+ 		}
+@@ -2292,7 +2294,8 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 			tree_put_node(&fte->node, false);
+ 		return rule;
+ 	}
+-	rule = ERR_PTR(-ENOENT);
++	err = try_again ? -EAGAIN : -ENOENT;
++	rule = ERR_PTR(err);
+ out:
+ 	kmem_cache_free(steering->ftes_cache, fte);
+ 	return rule;
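The try_add_to_existing_fg() change above separates a transient miss (a matching flow group was found but raced to inactive, so the lookup is worth retrying with -EAGAIN) from a true miss (-ENOENT). A minimal userspace sketch of the same error-classification pattern, with invented types:

	#include <errno.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct group {
		bool matches;	/* the rule's match value fits this group */
		bool active;	/* group still live (may race to false) */
	};

	/* -EAGAIN: a candidate group raced to inactive, retry the lookup;
	 * -ENOENT: nothing matched at all.
	 */
	static int try_groups(const struct group *g, size_t n, size_t *out_idx)
	{
		bool try_again = false;
		size_t i;

		for (i = 0; i < n; i++) {
			if (!g[i].matches)
				continue;
			if (!g[i].active) {
				try_again = true;
				continue;
			}
			*out_idx = i;
			return 0;
		}
		return try_again ? -EAGAIN : -ENOENT;
	}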
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index 972e8e9df585ba..9bc9bd83c2324c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -291,7 +291,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
+ static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
+ {
+ 	struct device *device = mlx5_core_dma_dev(dev);
+-	int nid = dev_to_node(device);
++	int nid = dev->priv.numa_node;
+ 	struct page *page;
+ 	u64 zero_addr = 1;
+ 	u64 addr;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+index b5332c54d4fb0f..17b8a3beb11732 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c
+@@ -1361,8 +1361,8 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 	struct mlx5hws_cmd_set_fte_attr fte_attr = {0};
+ 	struct mlx5hws_cmd_forward_tbl *fw_island;
+ 	struct mlx5hws_action *action;
+-	u32 i /*, packet_reformat_id*/;
+-	int ret;
++	int ret, last_dest_idx = -1;
++	u32 i;
+ 
+ 	if (num_dest <= 1) {
+ 		mlx5hws_err(ctx, "Action must have multiple dests\n");
+@@ -1392,11 +1392,8 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 			dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
+ 			fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ 			fte_attr.ignore_flow_level = ignore_flow_level;
+-			/* ToDo: In SW steering we have a handling of 'go to WIRE'
+-			 * destination here by upper layer setting 'is_wire_ft' flag
+-			 * if the destination is wire.
+-			 * This is because uplink should be last dest in the list.
+-			 */
++			if (dests[i].is_wire_ft)
++				last_dest_idx = i;
+ 			break;
+ 		case MLX5HWS_ACTION_TYP_VPORT:
+ 			dest_list[i].destination_type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+@@ -1420,6 +1417,9 @@ mlx5hws_action_create_dest_array(struct mlx5hws_context *ctx,
+ 		}
+ 	}
+ 
++	if (last_dest_idx != -1)
++		swap(dest_list[last_dest_idx], dest_list[num_dest - 1]);
++
+ 	fte_attr.dests_num = num_dest;
+ 	fte_attr.dests = dest_list;
+ 
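The hunk above replaces the old ToDo with real handling: it remembers where a wire ("go to uplink") flow-table destination landed and swaps it to the tail, since the uplink must be the last destination in the list. A standalone sketch of that post-pass, with invented types:

	#include <stddef.h>

	struct dest_attr {
		int dest_obj_id;
		int is_wire_ft;	/* destination is the uplink ("wire") table */
	};

	/* Move the wire destination to the end of the list, mirroring the
	 * swap() done after the destination loop above.
	 */
	static void move_wire_dest_last(struct dest_attr *d, size_t num_dest)
	{
		size_t i;
		int last_dest_idx = -1;

		for (i = 0; i < num_dest; i++)
			if (d[i].is_wire_ft)
				last_dest_idx = (int)i;

		if (last_dest_idx != -1 && (size_t)last_dest_idx != num_dest - 1) {
			struct dest_attr tmp = d[last_dest_idx];

			d[last_dest_idx] = d[num_dest - 1];
			d[num_dest - 1] = tmp;
		}
	}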
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+index 19dce1ba512d42..32de8bfc7644f5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.c
+@@ -90,13 +90,19 @@ int mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher,
+ 	bwc_matcher->priority = priority;
+ 	bwc_matcher->size_log = MLX5HWS_BWC_MATCHER_INIT_SIZE_LOG;
+ 
++	bwc_matcher->size_of_at_array = MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM;
++	bwc_matcher->at = kcalloc(bwc_matcher->size_of_at_array,
++				  sizeof(*bwc_matcher->at), GFP_KERNEL);
++	if (!bwc_matcher->at)
++		goto free_bwc_matcher_rules;
++
+ 	/* create dummy action template */
+ 	bwc_matcher->at[0] =
+ 		mlx5hws_action_template_create(action_types ?
+ 					       action_types : init_action_types);
+ 	if (!bwc_matcher->at[0]) {
+ 		mlx5hws_err(table->ctx, "BWC matcher: failed creating action template\n");
+-		goto free_bwc_matcher_rules;
++		goto free_bwc_matcher_at_array;
+ 	}
+ 
+ 	bwc_matcher->num_of_at = 1;
+@@ -126,6 +132,8 @@ int mlx5hws_bwc_matcher_create_simple(struct mlx5hws_bwc_matcher *bwc_matcher,
+ 	mlx5hws_match_template_destroy(bwc_matcher->mt);
+ free_at:
+ 	mlx5hws_action_template_destroy(bwc_matcher->at[0]);
++free_bwc_matcher_at_array:
++	kfree(bwc_matcher->at);
+ free_bwc_matcher_rules:
+ 	kfree(bwc_matcher->rules);
+ err:
+@@ -192,6 +200,7 @@ int mlx5hws_bwc_matcher_destroy_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
+ 
+ 	for (i = 0; i < bwc_matcher->num_of_at; i++)
+ 		mlx5hws_action_template_destroy(bwc_matcher->at[i]);
++	kfree(bwc_matcher->at);
+ 
+ 	mlx5hws_match_template_destroy(bwc_matcher->mt);
+ 	kfree(bwc_matcher->rules);
+@@ -520,6 +529,23 @@ hws_bwc_matcher_extend_at(struct mlx5hws_bwc_matcher *bwc_matcher,
+ 			  struct mlx5hws_rule_action rule_actions[])
+ {
+ 	enum mlx5hws_action_type action_types[MLX5HWS_BWC_MAX_ACTS];
++	void *p;
++
++	if (unlikely(bwc_matcher->num_of_at >= bwc_matcher->size_of_at_array)) {
++		if (bwc_matcher->size_of_at_array >= MLX5HWS_MATCHER_MAX_AT)
++			return -ENOMEM;
++		bwc_matcher->size_of_at_array *= 2;
++		p = krealloc(bwc_matcher->at,
++			     bwc_matcher->size_of_at_array *
++				     sizeof(*bwc_matcher->at),
++			     __GFP_ZERO | GFP_KERNEL);
++		if (!p) {
++			bwc_matcher->size_of_at_array /= 2;
++			return -ENOMEM;
++		}
++
++		bwc_matcher->at = p;
++	}
+ 
+ 	hws_bwc_rule_actions_to_action_types(rule_actions, action_types);
+ 
+@@ -777,6 +803,7 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 	struct mlx5hws_rule_attr rule_attr;
+ 	struct mutex *queue_lock; /* Protect the queue */
+ 	u32 num_of_rules;
++	bool need_rehash;
+ 	int ret = 0;
+ 	int at_idx;
+ 
+@@ -803,10 +830,14 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
+ 		at_idx = bwc_matcher->num_of_at - 1;
+ 
+ 		ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher,
+-						bwc_matcher->at[at_idx]);
++						bwc_matcher->at[at_idx],
++						&need_rehash);
+ 		if (unlikely(ret)) {
+-			/* Action template attach failed, possibly due to
+-			 * requiring more action STEs.
++			hws_bwc_unlock_all_queues(ctx);
++			return ret;
++		}
++		if (unlikely(need_rehash)) {
++			/* The new action template requires more action STEs.
+ 			 * Need to attempt creating new matcher with all
+ 			 * the action templates, including the new one.
+ 			 */
+@@ -942,6 +973,7 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule,
+ 	struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx;
+ 	struct mlx5hws_rule_attr rule_attr;
+ 	struct mutex *queue_lock; /* Protect the queue */
++	bool need_rehash;
+ 	int at_idx, ret;
+ 	u16 idx;
+ 
+@@ -973,12 +1005,17 @@ hws_bwc_rule_action_update(struct mlx5hws_bwc_rule *bwc_rule,
+ 			at_idx = bwc_matcher->num_of_at - 1;
+ 
+ 			ret = mlx5hws_matcher_attach_at(bwc_matcher->matcher,
+-							bwc_matcher->at[at_idx]);
++							bwc_matcher->at[at_idx],
++							&need_rehash);
+ 			if (unlikely(ret)) {
+-				/* Action template attach failed, possibly due to
+-				 * requiring more action STEs.
+-				 * Need to attempt creating new matcher with all
+-				 * the action templates, including the new one.
++				hws_bwc_unlock_all_queues(ctx);
++				return ret;
++			}
++			if (unlikely(need_rehash)) {
++				/* The new action template requires more action
++				 * STEs. Need to attempt creating new matcher
++				 * with all the action templates, including the
++				 * new one.
+ 				 */
+ 				ret = hws_bwc_matcher_rehash_at(bwc_matcher);
+ 				if (unlikely(ret)) {
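Both this file and matcher.c below move from a fixed-size action-template array to one that doubles with krealloc(__GFP_ZERO) up to MLX5HWS_MATCHER_MAX_AT, rolling the recorded size back when allocation fails. A userspace approximation of the growth step (realloc() does not zero like __GFP_ZERO, so the new tail is cleared by hand):

	#include <errno.h>
	#include <stdlib.h>
	#include <string.h>

	#define MAX_AT 128	/* mirrors MLX5HWS_MATCHER_MAX_AT */

	struct at;		/* opaque action template */

	struct matcher {
		struct at **at;
		unsigned int num_of_at;
		unsigned int size_of_at_array;
	};

	/* Double the array up to the cap; leave the matcher untouched on
	 * allocation failure so its state stays consistent.
	 */
	static int grow_at_array(struct matcher *m)
	{
		unsigned int old = m->size_of_at_array;
		void *p;

		if (old >= MAX_AT)
			return -ENOMEM;

		p = realloc(m->at, 2u * old * sizeof(*m->at));
		if (!p)
			return -ENOMEM;

		m->at = p;
		m->size_of_at_array = 2u * old;
		memset(m->at + old, 0, old * sizeof(*m->at));
		return 0;
	}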
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
+index 47f7ed1415535f..bb0cf4b922ceba 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
+@@ -10,9 +10,7 @@
+ #define MLX5HWS_BWC_MATCHER_REHASH_BURST_TH 32
+ 
+ /* Max number of AT attach operations for the same matcher.
+- * When the limit is reached, next attempt to attach new AT
+- * will result in creation of a new matcher and moving all
+- * the rules to this matcher.
++ * When the limit is reached, a larger buffer is allocated for the ATs.
+  */
+ #define MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM 8
+ 
+@@ -23,10 +21,11 @@
+ struct mlx5hws_bwc_matcher {
+ 	struct mlx5hws_matcher *matcher;
+ 	struct mlx5hws_match_template *mt;
+-	struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM];
+-	u32 priority;
++	struct mlx5hws_action_template **at;
+ 	u8 num_of_at;
++	u8 size_of_at_array;
+ 	u8 size_log;
++	u32 priority;
+ 	atomic_t num_of_rules;
+ 	struct list_head *rules;
+ };
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
+index c8cc0c8115f537..293459458cc5f9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c
+@@ -559,6 +559,9 @@ hws_definer_conv_outer(struct mlx5hws_definer_conv_data *cd,
+ 	HWS_SET_HDR(fc, match_param, IP_PROTOCOL_O,
+ 		    outer_headers.ip_protocol,
+ 		    eth_l3_outer.protocol_next_header);
++	HWS_SET_HDR(fc, match_param, IP_VERSION_O,
++		    outer_headers.ip_version,
++		    eth_l3_outer.ip_version);
+ 	HWS_SET_HDR(fc, match_param, IP_TTL_O,
+ 		    outer_headers.ttl_hoplimit,
+ 		    eth_l3_outer.time_to_live_hop_limit);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+index 1b787cd66e6fd3..29c5e00af1aa06 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c
+@@ -966,6 +966,9 @@ static int mlx5_fs_fte_get_hws_actions(struct mlx5_flow_root_namespace *ns,
+ 			switch (attr->type) {
+ 			case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE:
+ 				dest_action = mlx5_fs_get_dest_action_ft(fs_ctx, dst);
++				if (dst->dest_attr.ft->flags &
++				    MLX5_FLOW_TABLE_UPLINK_VPORT)
++					dest_actions[num_dest_actions].is_wire_ft = true;
+ 				break;
+ 			case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM:
+ 				dest_action = mlx5_fs_get_dest_action_table_num(fs_ctx,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
+index b61864b320536d..37a4497048a6fa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c
+@@ -905,18 +905,48 @@ static int hws_matcher_uninit(struct mlx5hws_matcher *matcher)
+ 	return 0;
+ }
+ 
++static int hws_matcher_grow_at_array(struct mlx5hws_matcher *matcher)
++{
++	void *p;
++
++	if (matcher->size_of_at_array >= MLX5HWS_MATCHER_MAX_AT)
++		return -ENOMEM;
++
++	matcher->size_of_at_array *= 2;
++	p = krealloc(matcher->at,
++		     matcher->size_of_at_array * sizeof(*matcher->at),
++		     __GFP_ZERO | GFP_KERNEL);
++	if (!p) {
++		matcher->size_of_at_array /= 2;
++		return -ENOMEM;
++	}
++
++	matcher->at = p;
++
++	return 0;
++}
++
+ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher,
+-			      struct mlx5hws_action_template *at)
++			      struct mlx5hws_action_template *at,
++			      bool *need_rehash)
+ {
+ 	bool is_jumbo = mlx5hws_matcher_mt_is_jumbo(matcher->mt);
+ 	struct mlx5hws_context *ctx = matcher->tbl->ctx;
+ 	u32 required_stes;
+ 	int ret;
+ 
+-	if (!matcher->attr.max_num_of_at_attach) {
+-		mlx5hws_dbg(ctx, "Num of current at (%d) exceed allowed value\n",
+-			    matcher->num_of_at);
+-		return -EOPNOTSUPP;
++	*need_rehash = false;
++
++	if (unlikely(matcher->num_of_at >= matcher->size_of_at_array)) {
++		ret = hws_matcher_grow_at_array(matcher);
++		if (ret)
++			return ret;
++
++		if (matcher->col_matcher) {
++			ret = hws_matcher_grow_at_array(matcher->col_matcher);
++			if (ret)
++				return ret;
++		}
+ 	}
+ 
+ 	ret = hws_matcher_check_and_process_at(matcher, at);
+@@ -927,12 +957,11 @@ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher,
+ 	if (matcher->action_ste.max_stes < required_stes) {
+ 		mlx5hws_dbg(ctx, "Required STEs [%d] exceeds initial action template STE [%d]\n",
+ 			    required_stes, matcher->action_ste.max_stes);
+-		return -ENOMEM;
++		*need_rehash = true;
+ 	}
+ 
+ 	matcher->at[matcher->num_of_at] = *at;
+ 	matcher->num_of_at += 1;
+-	matcher->attr.max_num_of_at_attach -= 1;
+ 
+ 	if (matcher->col_matcher)
+ 		matcher->col_matcher->num_of_at = matcher->num_of_at;
+@@ -960,8 +989,9 @@ hws_matcher_set_templates(struct mlx5hws_matcher *matcher,
+ 	if (!matcher->mt)
+ 		return -ENOMEM;
+ 
+-	matcher->at = kvcalloc(num_of_at + matcher->attr.max_num_of_at_attach,
+-			       sizeof(*matcher->at),
++	matcher->size_of_at_array =
++		num_of_at + matcher->attr.max_num_of_at_attach;
++	matcher->at = kvcalloc(matcher->size_of_at_array, sizeof(*matcher->at),
+ 			       GFP_KERNEL);
+ 	if (!matcher->at) {
+ 		mlx5hws_err(ctx, "Failed to allocate action template array\n");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h
+index 020de70270c501..20b32012c418be 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.h
+@@ -23,6 +23,9 @@
+  */
+ #define MLX5HWS_MATCHER_ACTION_RTC_UPDATE_MULT 1
+ 
++/* Maximum number of action templates that can be attached to a matcher. */
++#define MLX5HWS_MATCHER_MAX_AT 128
++
+ enum mlx5hws_matcher_offset {
+ 	MLX5HWS_MATCHER_OFFSET_TAG_DW1 = 12,
+ 	MLX5HWS_MATCHER_OFFSET_TAG_DW0 = 13,
+@@ -72,6 +75,7 @@ struct mlx5hws_matcher {
+ 	struct mlx5hws_match_template *mt;
+ 	struct mlx5hws_action_template *at;
+ 	u8 num_of_at;
++	u8 size_of_at_array;
+ 	u8 num_of_mt;
+ 	/* enum mlx5hws_matcher_flags */
+ 	u8 flags;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+index 5121951f2778a8..173f7ed1c17c3d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h
+@@ -211,6 +211,7 @@ struct mlx5hws_action_dest_attr {
+ 	struct mlx5hws_action *dest;
+ 	/* Optional reformat action */
+ 	struct mlx5hws_action *reformat;
++	bool is_wire_ft;
+ };
+ 
+ /**
+@@ -399,11 +400,14 @@ int mlx5hws_matcher_destroy(struct mlx5hws_matcher *matcher);
+  *
+  * @matcher: Matcher to attach the action template to.
+  * @at: Action template to be attached to the matcher.
++ * @need_rehash: Output parameter that tells callers if the matcher needs to be
++ * rehashed.
+  *
+  * Return: Zero on success, non-zero otherwise.
+  */
+ int mlx5hws_matcher_attach_at(struct mlx5hws_matcher *matcher,
+-			      struct mlx5hws_action_template *at);
++			      struct mlx5hws_action_template *at,
++			      bool *need_rehash);
+ 
+ /**
+  * mlx5hws_matcher_resize_set_target - Link two matchers and enable moving rules.
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index a70b88037a208b..7f36443832ada3 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -1330,7 +1330,7 @@ static int lan743x_mac_set_mtu(struct lan743x_adapter *adapter, int new_mtu)
+ }
+ 
+ /* PHY */
+-static int lan743x_phy_reset(struct lan743x_adapter *adapter)
++static int lan743x_hw_reset_phy(struct lan743x_adapter *adapter)
+ {
+ 	u32 data;
+ 
+@@ -1346,11 +1346,6 @@ static int lan743x_phy_reset(struct lan743x_adapter *adapter)
+ 				  50000, 1000000);
+ }
+ 
+-static int lan743x_phy_init(struct lan743x_adapter *adapter)
+-{
+-	return lan743x_phy_reset(adapter);
+-}
+-
+ static void lan743x_phy_interface_select(struct lan743x_adapter *adapter)
+ {
+ 	u32 id_rev;
+@@ -3534,10 +3529,6 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = lan743x_phy_init(adapter);
+-	if (ret)
+-		return ret;
+-
+ 	ret = lan743x_ptp_init(adapter);
+ 	if (ret)
+ 		return ret;
+@@ -3674,6 +3665,10 @@ static int lan743x_pcidev_probe(struct pci_dev *pdev,
+ 	if (ret)
+ 		goto cleanup_pci;
+ 
++	ret = lan743x_hw_reset_phy(adapter);
++	if (ret)
++		goto cleanup_pci;
++
+ 	ret = lan743x_hardware_init(adapter, pdev);
+ 	if (ret)
+ 		goto cleanup_pci;
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+index 0af143ec0f8694..7001584f1b7a62 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+@@ -353,6 +353,11 @@ static void lan966x_ifh_set_rew_op(void *ifh, u64 rew_op)
+ 	lan966x_ifh_set(ifh, rew_op, IFH_POS_REW_CMD, IFH_WID_REW_CMD);
+ }
+ 
++static void lan966x_ifh_set_oam_type(void *ifh, u64 oam_type)
++{
++	lan966x_ifh_set(ifh, oam_type, IFH_POS_PDU_TYPE, IFH_WID_PDU_TYPE);
++}
++
+ static void lan966x_ifh_set_timestamp(void *ifh, u64 timestamp)
+ {
+ 	lan966x_ifh_set(ifh, timestamp, IFH_POS_TIMESTAMP, IFH_WID_TIMESTAMP);
+@@ -380,6 +385,7 @@ static netdev_tx_t lan966x_port_xmit(struct sk_buff *skb,
+ 			return err;
+ 
+ 		lan966x_ifh_set_rew_op(ifh, LAN966X_SKB_CB(skb)->rew_op);
++		lan966x_ifh_set_oam_type(ifh, LAN966X_SKB_CB(skb)->pdu_type);
+ 		lan966x_ifh_set_timestamp(ifh, LAN966X_SKB_CB(skb)->ts_id);
+ 	}
+ 
+@@ -873,6 +879,7 @@ static int lan966x_probe_port(struct lan966x *lan966x, u32 p,
+ 	lan966x_vlan_port_set_vlan_aware(port, 0);
+ 	lan966x_vlan_port_set_vid(port, HOST_PVID, false, false);
+ 	lan966x_vlan_port_apply(port);
++	lan966x_vlan_port_rew_host(port);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
+index 1efa584e710777..4f75f068836933 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.h
+@@ -75,6 +75,10 @@
+ #define IFH_REW_OP_ONE_STEP_PTP		0x3
+ #define IFH_REW_OP_TWO_STEP_PTP		0x4
+ 
++#define IFH_PDU_TYPE_NONE		0
++#define IFH_PDU_TYPE_IPV4		7
++#define IFH_PDU_TYPE_IPV6		8
++
+ #define FDMA_RX_DCB_MAX_DBS		1
+ #define FDMA_TX_DCB_MAX_DBS		1
+ 
+@@ -254,6 +258,7 @@ struct lan966x_phc {
+ 
+ struct lan966x_skb_cb {
+ 	u8 rew_op;
++	u8 pdu_type;
+ 	u16 ts_id;
+ 	unsigned long jiffies;
+ };
+@@ -492,6 +497,7 @@ void lan966x_vlan_port_apply(struct lan966x_port *port);
+ bool lan966x_vlan_cpu_member_cpu_vlan_mask(struct lan966x *lan966x, u16 vid);
+ void lan966x_vlan_port_set_vlan_aware(struct lan966x_port *port,
+ 				      bool vlan_aware);
++void lan966x_vlan_port_rew_host(struct lan966x_port *port);
+ int lan966x_vlan_port_set_vid(struct lan966x_port *port,
+ 			      u16 vid,
+ 			      bool pvid,
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+index 63905bb5a63a83..87e5e81d40dc68 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+@@ -322,34 +322,55 @@ void lan966x_ptp_hwtstamp_get(struct lan966x_port *port,
+ 	*cfg = phc->hwtstamp_config;
+ }
+ 
+-static int lan966x_ptp_classify(struct lan966x_port *port, struct sk_buff *skb)
++static void lan966x_ptp_classify(struct lan966x_port *port, struct sk_buff *skb,
++				 u8 *rew_op, u8 *pdu_type)
+ {
+ 	struct ptp_header *header;
+ 	u8 msgtype;
+ 	int type;
+ 
+-	if (port->ptp_tx_cmd == IFH_REW_OP_NOOP)
+-		return IFH_REW_OP_NOOP;
++	if (port->ptp_tx_cmd == IFH_REW_OP_NOOP) {
++		*rew_op = IFH_REW_OP_NOOP;
++		*pdu_type = IFH_PDU_TYPE_NONE;
++		return;
++	}
+ 
+ 	type = ptp_classify_raw(skb);
+-	if (type == PTP_CLASS_NONE)
+-		return IFH_REW_OP_NOOP;
++	if (type == PTP_CLASS_NONE) {
++		*rew_op = IFH_REW_OP_NOOP;
++		*pdu_type = IFH_PDU_TYPE_NONE;
++		return;
++	}
+ 
+ 	header = ptp_parse_header(skb, type);
+-	if (!header)
+-		return IFH_REW_OP_NOOP;
++	if (!header) {
++		*rew_op = IFH_REW_OP_NOOP;
++		*pdu_type = IFH_PDU_TYPE_NONE;
++		return;
++	}
+ 
+-	if (port->ptp_tx_cmd == IFH_REW_OP_TWO_STEP_PTP)
+-		return IFH_REW_OP_TWO_STEP_PTP;
++	if (type & PTP_CLASS_L2)
++		*pdu_type = IFH_PDU_TYPE_NONE;
++	if (type & PTP_CLASS_IPV4)
++		*pdu_type = IFH_PDU_TYPE_IPV4;
++	if (type & PTP_CLASS_IPV6)
++		*pdu_type = IFH_PDU_TYPE_IPV6;
++
++	if (port->ptp_tx_cmd == IFH_REW_OP_TWO_STEP_PTP) {
++		*rew_op = IFH_REW_OP_TWO_STEP_PTP;
++		return;
++	}
+ 
+ 	/* If it is sync and run 1 step then set the correct operation,
+ 	 * otherwise run as 2 step
+ 	 */
+ 	msgtype = ptp_get_msgtype(header, type);
+-	if ((msgtype & 0xf) == 0)
+-		return IFH_REW_OP_ONE_STEP_PTP;
++	if ((msgtype & 0xf) == 0) {
++		*rew_op = IFH_REW_OP_ONE_STEP_PTP;
++		return;
++	}
+ 
+-	return IFH_REW_OP_TWO_STEP_PTP;
++	*rew_op = IFH_REW_OP_TWO_STEP_PTP;
+ }
+ 
+ static void lan966x_ptp_txtstamp_old_release(struct lan966x_port *port)
+@@ -374,10 +395,12 @@ int lan966x_ptp_txtstamp_request(struct lan966x_port *port,
+ {
+ 	struct lan966x *lan966x = port->lan966x;
+ 	unsigned long flags;
++	u8 pdu_type;
+ 	u8 rew_op;
+ 
+-	rew_op = lan966x_ptp_classify(port, skb);
++	lan966x_ptp_classify(port, skb, &rew_op, &pdu_type);
+ 	LAN966X_SKB_CB(skb)->rew_op = rew_op;
++	LAN966X_SKB_CB(skb)->pdu_type = pdu_type;
+ 
+ 	if (rew_op != IFH_REW_OP_TWO_STEP_PTP)
+ 		return 0;
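lan966x_ptp_classify() now reports both the rewriter op and a PDU type through output parameters, so the IFH can tell the rewriter where the UDP checksum lives for one-step timestamping. A reduced sketch of the classification-to-PDU-type mapping (the flag values mirror include/linux/ptp_classify.h, the PDU values the IFH defines above):

	#include <stdint.h>

	#define PTP_CLASS_IPV4	0x10
	#define PTP_CLASS_IPV6	0x20

	enum {
		PDU_TYPE_NONE = 0,	/* IFH_PDU_TYPE_NONE */
		PDU_TYPE_IPV4 = 7,	/* IFH_PDU_TYPE_IPV4 */
		PDU_TYPE_IPV6 = 8,	/* IFH_PDU_TYPE_IPV6 */
	};

	/* Tell the rewriter which header layout carries the UDP checksum so
	 * a one-step timestamp update can also patch the checksum.
	 */
	static uint8_t pdu_type_from_ptp_class(unsigned int type)
	{
		if (type & PTP_CLASS_IPV4)
			return PDU_TYPE_IPV4;
		if (type & PTP_CLASS_IPV6)
			return PDU_TYPE_IPV6;
		return PDU_TYPE_NONE;	/* L2 or unclassified */
	}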
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c b/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
+index 1c88120eb291a2..bcb4db76b75cd5 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
+@@ -297,6 +297,7 @@ static void lan966x_port_bridge_leave(struct lan966x_port *port,
+ 	lan966x_vlan_port_set_vlan_aware(port, false);
+ 	lan966x_vlan_port_set_vid(port, HOST_PVID, false, false);
+ 	lan966x_vlan_port_apply(port);
++	lan966x_vlan_port_rew_host(port);
+ }
+ 
+ int lan966x_port_changeupper(struct net_device *dev,
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c b/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
+index fa34a739c748e1..7da22520724ce2 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_vlan.c
+@@ -149,6 +149,27 @@ void lan966x_vlan_port_set_vlan_aware(struct lan966x_port *port,
+ 	port->vlan_aware = vlan_aware;
+ }
+ 
++/* When the interface is in host mode, it should not be VLAN aware, but it
++ * should still insert all the tags it gets from the network stack. The tags
++ * are not in the frame data but in the skb, and the IFH is already configured
++ * to pick up this tag. So update the rewriter to insert the VLAN tag for all
++ * frames that carry a VLAN tag different from 0.
++ */
++void lan966x_vlan_port_rew_host(struct lan966x_port *port)
++{
++	struct lan966x *lan966x = port->lan966x;
++	u32 val;
++
++	/* Tag all frames except when VID=0 */
++	val = REW_TAG_CFG_TAG_CFG_SET(2);
++
++	/* Update only some bits in the register */
++	lan_rmw(val,
++		REW_TAG_CFG_TAG_CFG,
++		lan966x, REW_TAG_CFG(port->chip_port));
++}
++
+ void lan966x_vlan_port_apply(struct lan966x_port *port)
+ {
+ 	struct lan966x *lan966x = port->lan966x;
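lan966x_vlan_port_rew_host() updates only the TAG_CFG field of REW_TAG_CFG through lan_rmw(). The read-modify-write it relies on boils down to:

	#include <stdint.h>

	/* Read-modify-write a memory-mapped register, replacing only the
	 * bits in 'mask' — the contract lan_rmw() provides above.
	 */
	static inline void reg_rmw(volatile uint32_t *reg, uint32_t val,
				   uint32_t mask)
	{
		uint32_t v = *reg;

		v &= ~mask;
		v |= val & mask;
		*reg = v;
	}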
+diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+index 671af5d4c5d25c..9e7c285eaa6bca 100644
+--- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
++++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+@@ -266,17 +266,17 @@ static void set_sha2_512hmac(struct nfp_ipsec_cfg_add_sa *cfg, int *trunc_len)
+ 	}
+ }
+ 
+-static int nfp_net_xfrm_add_state(struct xfrm_state *x,
++static int nfp_net_xfrm_add_state(struct net_device *dev,
++				  struct xfrm_state *x,
+ 				  struct netlink_ext_ack *extack)
+ {
+-	struct net_device *netdev = x->xso.real_dev;
+ 	struct nfp_ipsec_cfg_mssg msg = {};
+ 	int i, key_len, trunc_len, err = 0;
+ 	struct nfp_ipsec_cfg_add_sa *cfg;
+ 	struct nfp_net *nn;
+ 	unsigned int saidx;
+ 
+-	nn = netdev_priv(netdev);
++	nn = netdev_priv(dev);
+ 	cfg = &msg.cfg_add_sa;
+ 
+ 	/* General */
+@@ -546,17 +546,16 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x,
+ 	return 0;
+ }
+ 
+-static void nfp_net_xfrm_del_state(struct xfrm_state *x)
++static void nfp_net_xfrm_del_state(struct net_device *dev, struct xfrm_state *x)
+ {
+ 	struct nfp_ipsec_cfg_mssg msg = {
+ 		.cmd = NFP_IPSEC_CFG_MSSG_INV_SA,
+ 		.sa_idx = x->xso.offload_handle - 1,
+ 	};
+-	struct net_device *netdev = x->xso.real_dev;
+ 	struct nfp_net *nn;
+ 	int err;
+ 
+-	nn = netdev_priv(netdev);
++	nn = netdev_priv(dev);
+ 	err = nfp_net_sched_mbox_amsg_work(nn, NFP_NET_CFG_MBOX_CMD_IPSEC, &msg,
+ 					   sizeof(msg), nfp_net_ipsec_cfg);
+ 	if (err)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c
+index c9693f77e1f61f..ac6f2e3a3fcd2f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_est.c
+@@ -32,6 +32,11 @@ static int est_configure(struct stmmac_priv *priv, struct stmmac_est *cfg,
+ 	int i, ret = 0;
+ 	u32 ctrl;
+ 
++	if (!ptp_rate) {
++		netdev_warn(priv->dev, "Invalid PTP rate");
++		return -EINVAL;
++	}
++
+ 	ret |= est_write(est_addr, EST_BTR_LOW, cfg->btr[0], false);
+ 	ret |= est_write(est_addr, EST_BTR_HIGH, cfg->btr[1], false);
+ 	ret |= est_write(est_addr, EST_TER, cfg->ter, false);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 59d07d0d3369db..3a049a158ea111 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -803,6 +803,11 @@ int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags)
+ 	if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp))
+ 		return -EOPNOTSUPP;
+ 
++	if (!priv->plat->clk_ptp_rate) {
++		netdev_err(priv->dev, "Invalid PTP clock rate");
++		return -EINVAL;
++	}
++
+ 	stmmac_config_hw_tstamping(priv, priv->ptpaddr, systime_flags);
+ 	priv->systime_flags = systime_flags;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index c73eff6a56b87a..15205a47cafc27 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -430,6 +430,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct plat_stmmacenet_data *plat;
+ 	struct stmmac_dma_cfg *dma_cfg;
++	static int bus_id = -ENODEV;
+ 	int phy_mode;
+ 	void *ret;
+ 	int rc;
+@@ -465,8 +466,14 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 	of_property_read_u32(np, "max-speed", &plat->max_speed);
+ 
+ 	plat->bus_id = of_alias_get_id(np, "ethernet");
+-	if (plat->bus_id < 0)
+-		plat->bus_id = 0;
++	if (plat->bus_id < 0) {
++		if (bus_id < 0)
++			bus_id = of_alias_get_highest_id("ethernet");
++		/* No ethernet alias found, init at -1 so first bus_id is 0 */
++		if (bus_id < 0)
++			bus_id = -1;
++		plat->bus_id = ++bus_id;
++	}
+ 
+ 	/* Default to phy auto-detection */
+ 	plat->phy_addr = -1;
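With no "ethernet" alias in the device tree, the probe above now hands out increasing bus ids seeded from one past the highest existing alias id (or from 0 when there is no alias at all). A sketch of that allocation, with the static seed standing in for the function-local 'static int bus_id = -ENODEV':

	/* alias_id < 0: this device has no "ethernet" alias.
	 * highest  < 0: no "ethernet" alias exists, numbering starts at 0.
	 */
	static int next_bus_id(int alias_id, int highest)
	{
		static int bus_id = -2;	/* "unset", like -ENODEV above */

		if (alias_id >= 0)
			return alias_id;
		if (bus_id < -1)
			bus_id = highest < 0 ? -1 : highest;
		return ++bus_id;
	}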
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+index 429b2d357813c8..3767ba495e78d2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -317,7 +317,7 @@ void stmmac_ptp_register(struct stmmac_priv *priv)
+ 
+ 	/* Calculate the clock domain crossing (CDC) error if necessary */
+ 	priv->plat->cdc_error_adj = 0;
+-	if (priv->plat->has_gmac4 && priv->plat->clk_ptp_rate)
++	if (priv->plat->has_gmac4)
+ 		priv->plat->cdc_error_adj = (2 * NSEC_PER_SEC) / priv->plat->clk_ptp_rate;
+ 
+ 	/* Update the ptp clock parameters based on feature discovery, when
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_stats.c b/drivers/net/ethernet/ti/icssg/icssg_stats.c
+index 6f0edae38ea242..172ae38381b453 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_stats.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_stats.c
+@@ -29,6 +29,14 @@ void emac_update_hardware_stats(struct prueth_emac *emac)
+ 	spin_lock(&prueth->stats_lock);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(icssg_all_miig_stats); i++) {
++		/* In MII mode TX lines are swapped inside ICSSG, so read Tx stats
++		 * from slice1 for port0 and slice0 for port1 to get accurate Tx
++		 * stats for a given port
++		 */
++		if (emac->phy_if == PHY_INTERFACE_MODE_MII &&
++		    icssg_all_miig_stats[i].offset >= ICSSG_TX_PACKET_OFFSET &&
++		    icssg_all_miig_stats[i].offset <= ICSSG_TX_BYTE_OFFSET)
++			base = stats_base[slice ^ 1];
+ 		regmap_read(prueth->miig_rt,
+ 			    base + icssg_all_miig_stats[i].offset,
+ 			    &val);
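Because the TX lines are crossed inside the ICSSG in MII mode, a port's TX counters accumulate in the other slice's statistics block; the hunk above redirects the register base for offsets in the TX range. The selection reduces to the following (the offset bounds here are invented; the driver compares against ICSSG_TX_PACKET_OFFSET..ICSSG_TX_BYTE_OFFSET):

	#define TX_FIRST_OFFSET	0x100	/* illustrative */
	#define TX_LAST_OFFSET	0x1fc	/* illustrative */

	static unsigned int stats_base_for(const unsigned int base[2],
					   unsigned int slice, int phy_is_mii,
					   unsigned int offset)
	{
		if (phy_is_mii &&
		    offset >= TX_FIRST_OFFSET && offset <= TX_LAST_OFFSET)
			return base[slice ^ 1];	/* TX lines are crossed */
		return base[slice];
	}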
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 054abf283ab33e..5f912b27bfd7fc 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -880,7 +880,7 @@ static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
+ 	dev_consume_skb_any(skbuf_dma->skb);
+ 	netif_txq_completed_wake(txq, 1, len,
+ 				 CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX),
+-				 2 * MAX_SKB_FRAGS);
++				 2);
+ }
+ 
+ /**
+@@ -914,7 +914,7 @@ axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+ 
+ 	dma_dev = lp->tx_chan->device;
+ 	sg_len = skb_shinfo(skb)->nr_frags + 1;
+-	if (CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX) <= sg_len) {
++	if (CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX) <= 1) {
+ 		netif_stop_queue(ndev);
+ 		if (net_ratelimit())
+ 			netdev_warn(ndev, "TX ring unexpectedly full\n");
+@@ -964,7 +964,7 @@ axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+ 	txq = skb_get_tx_queue(lp->ndev, skb);
+ 	netdev_tx_sent_queue(txq, skb->len);
+ 	netif_txq_maybe_stop(txq, CIRC_SPACE(lp->tx_ring_head, lp->tx_ring_tail, TX_BD_NUM_MAX),
+-			     MAX_SKB_FRAGS + 1, 2 * MAX_SKB_FRAGS);
++			     1, 2);
+ 
+ 	dmaengine_submit(dma_tx_desc);
+ 	dma_async_issue_pending(lp->tx_chan);
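The axienet change keys the dmaengine TX ring accounting to whole ring slots (one per skb) instead of MAX_SKB_FRAGS-based estimates: stop the queue when no slot is free, wake it when two are. For reference, the circular-buffer space macro it relies on comes from include/linux/circ_buf.h:

	/* Free slots in a power-of-2 sized ring */
	#define CIRC_CNT(head, tail, size)	(((head) - (tail)) & ((size) - 1))
	#define CIRC_SPACE(head, tail, size)	CIRC_CNT((tail), ((head) + 1), (size))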
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 3d315e30ee4725..7edbe76b5455a8 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -247,15 +247,39 @@ static sci_t make_sci(const u8 *addr, __be16 port)
+ 	return sci;
+ }
+ 
+-static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present)
++static sci_t macsec_active_sci(struct macsec_secy *secy)
+ {
+-	sci_t sci;
++	struct macsec_rx_sc *rx_sc = rcu_dereference_bh(secy->rx_sc);
++
++	/* Case single RX SC */
++	if (rx_sc && !rcu_dereference_bh(rx_sc->next))
++		return (rx_sc->active) ? rx_sc->sci : 0;
++	/* Case no RX SC or multiple */
++	else
++		return 0;
++}
++
++static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present,
++			      struct macsec_rxh_data *rxd)
++{
++	struct macsec_dev *macsec;
++	sci_t sci = 0;
+ 
+-	if (sci_present)
++	/* SC = 1 */
++	if (sci_present) {
+ 		memcpy(&sci, hdr->secure_channel_id,
+ 		       sizeof(hdr->secure_channel_id));
+-	else
++	/* SC = 0; ES = 0 */
++	} else if ((!(hdr->tci_an & (MACSEC_TCI_ES | MACSEC_TCI_SC))) &&
++		   (list_is_singular(&rxd->secys))) {
++		/* Only one SECY should exist on this scenario */
++		macsec = list_first_or_null_rcu(&rxd->secys, struct macsec_dev,
++						secys);
++		if (macsec)
++			return macsec_active_sci(&macsec->secy);
++	} else {
+ 		sci = make_sci(hdr->eth.h_source, MACSEC_PORT_ES);
++	}
+ 
+ 	return sci;
+ }
+@@ -1109,7 +1133,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 	struct macsec_rxh_data *rxd;
+ 	struct macsec_dev *macsec;
+ 	unsigned int len;
+-	sci_t sci;
++	sci_t sci = 0;
+ 	u32 hdr_pn;
+ 	bool cbit;
+ 	struct pcpu_rx_sc_stats *rxsc_stats;
+@@ -1156,11 +1180,14 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 
+ 	macsec_skb_cb(skb)->has_sci = !!(hdr->tci_an & MACSEC_TCI_SC);
+ 	macsec_skb_cb(skb)->assoc_num = hdr->tci_an & MACSEC_AN_MASK;
+-	sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci);
+ 
+ 	rcu_read_lock();
+ 	rxd = macsec_data_rcu(skb->dev);
+ 
++	sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci, rxd);
++	if (!sci)
++		goto drop_nosc;
++
+ 	list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
+ 		struct macsec_rx_sc *sc = find_rx_sc(&macsec->secy, sci);
+ 
+@@ -1283,6 +1310,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
+ 	macsec_rxsa_put(rx_sa);
+ drop_nosa:
+ 	macsec_rxsc_put(rx_sc);
++drop_nosc:
+ 	rcu_read_unlock();
+ drop_direct:
+ 	kfree_skb(skb);
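macsec_frame_sci() now covers three cases: an explicit SCI when the SC bit is set; a fallback to the single active RX SC when SC=0, ES=0 and exactly one SecY is registered; and otherwise the classic end-station SCI built from the source MAC plus the ES port. The last construction, with simplified types:

	#include <stdint.h>
	#include <string.h>

	typedef uint64_t sci_t;	/* 8 bytes: 48-bit MAC + 16-bit port */

	/* Build the end-station SCI from the source MAC and a big-endian
	 * port, used when no explicit SCI and no single-SecY fallback apply.
	 */
	static sci_t make_es_sci(const uint8_t addr[6], uint16_t port_be)
	{
		sci_t sci;

		memcpy(&sci, addr, 6);
		memcpy((uint8_t *)&sci + 6, &port_be, 2);
		return sci;
	}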
+diff --git a/drivers/net/mctp/mctp-usb.c b/drivers/net/mctp/mctp-usb.c
+index e8d4b01c3f3458..775a386d0aca12 100644
+--- a/drivers/net/mctp/mctp-usb.c
++++ b/drivers/net/mctp/mctp-usb.c
+@@ -257,6 +257,8 @@ static int mctp_usb_open(struct net_device *dev)
+ 
+ 	WRITE_ONCE(mctp_usb->stopped, false);
+ 
++	netif_start_queue(dev);
++
+ 	return mctp_usb_rx_queue(mctp_usb, GFP_KERNEL);
+ }
+ 
+diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
+index 4289ccd3e41bff..176935a8645ff1 100644
+--- a/drivers/net/netconsole.c
++++ b/drivers/net/netconsole.c
+@@ -1252,7 +1252,6 @@ static int sysdata_append_release(struct netconsole_target *nt, int offset)
+  */
+ static int prepare_extradata(struct netconsole_target *nt)
+ {
+-	u32 fields = SYSDATA_CPU_NR | SYSDATA_TASKNAME;
+ 	int extradata_len;
+ 
+ 	/* userdata was appended when configfs write helper was called
+@@ -1260,7 +1259,7 @@ static int prepare_extradata(struct netconsole_target *nt)
+ 	 */
+ 	extradata_len = nt->userdata_length;
+ 
+-	if (!(nt->sysdata_fields & fields))
++	if (!nt->sysdata_fields)
+ 		goto out;
+ 
+ 	if (nt->sysdata_fields & SYSDATA_CPU_NR)
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index d88bdb9a17176a..47cdee5577d461 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -85,11 +85,11 @@ static int nsim_ipsec_find_empty_idx(struct nsim_ipsec *ipsec)
+ 	return -ENOSPC;
+ }
+ 
+-static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
++static int nsim_ipsec_parse_proto_keys(struct net_device *dev,
++				       struct xfrm_state *xs,
+ 				       u32 *mykey, u32 *mysalt)
+ {
+ 	const char aes_gcm_name[] = "rfc4106(gcm(aes))";
+-	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -129,17 +129,16 @@ static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 	return 0;
+ }
+ 
+-static int nsim_ipsec_add_sa(struct xfrm_state *xs,
++static int nsim_ipsec_add_sa(struct net_device *dev,
++			     struct xfrm_state *xs,
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	struct nsim_ipsec *ipsec;
+-	struct net_device *dev;
+ 	struct netdevsim *ns;
+ 	struct nsim_sa sa;
+ 	u16 sa_idx;
+ 	int ret;
+ 
+-	dev = xs->xso.real_dev;
+ 	ns = netdev_priv(dev);
+ 	ipsec = &ns->ipsec;
+ 
+@@ -174,7 +173,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ 		sa.crypt = xs->ealg || xs->aead;
+ 
+ 	/* get the key and salt */
+-	ret = nsim_ipsec_parse_proto_keys(xs, sa.key, &sa.salt);
++	ret = nsim_ipsec_parse_proto_keys(dev, xs, sa.key, &sa.salt);
+ 	if (ret) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Failed to get key data for SA table");
+ 		return ret;
+@@ -200,9 +199,9 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ 	return 0;
+ }
+ 
+-static void nsim_ipsec_del_sa(struct xfrm_state *xs)
++static void nsim_ipsec_del_sa(struct net_device *dev, struct xfrm_state *xs)
+ {
+-	struct netdevsim *ns = netdev_priv(xs->xso.real_dev);
++	struct netdevsim *ns = netdev_priv(dev);
+ 	struct nsim_ipsec *ipsec = &ns->ipsec;
+ 	u16 sa_idx;
+ 
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 0e0321a7ddd710..31a06e71be25bb 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -369,7 +369,8 @@ static int nsim_poll(struct napi_struct *napi, int budget)
+ 	int done;
+ 
+ 	done = nsim_rcv(rq, budget);
+-	napi_complete(napi);
++	if (done < budget)
++		napi_complete_done(napi, done);
+ 
+ 	return done;
+ }
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index ede596c1a69d1b..909b4d53fdacdc 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -903,6 +903,9 @@ int __mdiobus_read(struct mii_bus *bus, int addr, u32 regnum)
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->read)
+ 		retval = bus->read(bus, addr, regnum);
+ 	else
+@@ -932,6 +935,9 @@ int __mdiobus_write(struct mii_bus *bus, int addr, u32 regnum, u16 val)
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->write)
+ 		err = bus->write(bus, addr, regnum, val);
+ 	else
+@@ -993,6 +999,9 @@ int __mdiobus_c45_read(struct mii_bus *bus, int addr, int devad, u32 regnum)
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->read_c45)
+ 		retval = bus->read_c45(bus, addr, devad, regnum);
+ 	else
+@@ -1024,6 +1033,9 @@ int __mdiobus_c45_write(struct mii_bus *bus, int addr, int devad, u32 regnum,
+ 
+ 	lockdep_assert_held_once(&bus->mdio_lock);
+ 
++	if (addr >= PHY_MAX_ADDR)
++		return -ENXIO;
++
+ 	if (bus->write_c45)
+ 		err = bus->write_c45(bus, addr, devad, regnum, val);
+ 	else
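All four MDIO accessors (C22 and C45, read and write) now reject out-of-range addresses with -ENXIO before calling into the bus driver. The guard is just:

	#include <errno.h>

	#define PHY_MAX_ADDR 32	/* as in include/linux/phy.h */

	/* Validate a PHY address before dispatching to bus->read/write */
	static int mdio_addr_check(int addr)
	{
		if (addr >= PHY_MAX_ADDR)
			return -ENXIO;
		return 0;
	}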
+diff --git a/drivers/net/phy/mediatek/Kconfig b/drivers/net/phy/mediatek/Kconfig
+index 2a8ac5aed0f893..6a4c2b328c4183 100644
+--- a/drivers/net/phy/mediatek/Kconfig
++++ b/drivers/net/phy/mediatek/Kconfig
+@@ -15,8 +15,7 @@ config MEDIATEK_GE_PHY
+ 
+ config MEDIATEK_GE_SOC_PHY
+ 	tristate "MediaTek SoC Ethernet PHYs"
+-	depends on (ARM64 && ARCH_MEDIATEK) || COMPILE_TEST
+-	depends on NVMEM_MTK_EFUSE
++	depends on (ARM64 && ARCH_MEDIATEK && NVMEM_MTK_EFUSE) || COMPILE_TEST
+ 	select MTK_NET_PHYLIB
+ 	help
+ 	  Supports MediaTek SoC built-in Gigabit Ethernet PHYs.
+diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c
+index ed8fb14a7f215e..6b800081eed52f 100644
+--- a/drivers/net/phy/mscc/mscc_ptp.c
++++ b/drivers/net/phy/mscc/mscc_ptp.c
+@@ -946,7 +946,9 @@ static int vsc85xx_ip1_conf(struct phy_device *phydev, enum ts_blk blk,
+ 	/* UDP checksum offset in IPv4 packet
+ 	 * according to: https://tools.ietf.org/html/rfc768
+ 	 */
+-	val |= IP1_NXT_PROT_UDP_CHKSUM_OFF(26) | IP1_NXT_PROT_UDP_CHKSUM_CLEAR;
++	val |= IP1_NXT_PROT_UDP_CHKSUM_OFF(26);
++	if (enable)
++		val |= IP1_NXT_PROT_UDP_CHKSUM_CLEAR;
+ 	vsc85xx_ts_write_csr(phydev, blk, MSCC_ANA_IP1_NXT_PROT_UDP_CHKSUM,
+ 			     val);
+ 
+@@ -1166,18 +1168,24 @@ static void vsc85xx_txtstamp(struct mii_timestamper *mii_ts,
+ 		container_of(mii_ts, struct vsc8531_private, mii_ts);
+ 
+ 	if (!vsc8531->ptp->configured)
+-		return;
++		goto out;
+ 
+-	if (vsc8531->ptp->tx_type == HWTSTAMP_TX_OFF) {
+-		kfree_skb(skb);
+-		return;
+-	}
++	if (vsc8531->ptp->tx_type == HWTSTAMP_TX_OFF)
++		goto out;
++
++	if (vsc8531->ptp->tx_type == HWTSTAMP_TX_ONESTEP_SYNC)
++		if (ptp_msg_is_sync(skb, type))
++			goto out;
+ 
+ 	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 
+ 	mutex_lock(&vsc8531->ts_lock);
+ 	__skb_queue_tail(&vsc8531->ptp->tx_queue, skb);
+ 	mutex_unlock(&vsc8531->ts_lock);
++	return;
++
++out:
++	kfree_skb(skb);
+ }
+ 
+ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts,
+diff --git a/drivers/net/phy/phy_caps.c b/drivers/net/phy/phy_caps.c
+index 7033216897264e..38417e2886118c 100644
+--- a/drivers/net/phy/phy_caps.c
++++ b/drivers/net/phy/phy_caps.c
+@@ -188,6 +188,9 @@ phy_caps_lookup_by_linkmode_rev(const unsigned long *linkmodes, bool fdx_only)
+  * When @exact is not set, we return either an exact match, or matching capabilities
+  * at lower speed, or the lowest matching speed, or NULL.
+  *
++ * Non-exact matches will try to return an exact speed and duplex match, but may
++ * return matching capabilities with the same speed but a different duplex.
++ *
+  * Returns: a matched link_capabilities according to the above process, NULL
+  *	    otherwise.
+  */
+@@ -195,7 +198,7 @@ const struct link_capabilities *
+ phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
+ 		bool exact)
+ {
+-	const struct link_capabilities *lcap, *last = NULL;
++	const struct link_capabilities *lcap, *match = NULL, *last = NULL;
+ 
+ 	for_each_link_caps_desc_speed(lcap) {
+ 		if (linkmode_intersects(lcap->linkmodes, supported)) {
+@@ -204,16 +207,19 @@ phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
+ 			if (lcap->speed == speed && lcap->duplex == duplex) {
+ 				return lcap;
+ 			} else if (!exact) {
+-				if (lcap->speed <= speed)
+-					return lcap;
++				if (!match && lcap->speed <= speed)
++					match = lcap;
++
++				if (lcap->speed < speed)
++					break;
+ 			}
+ 		}
+ 	}
+ 
+-	if (!exact)
+-		return last;
++	if (!match && !exact)
++		match = last;
+ 
+-	return NULL;
++	return match;
+ }
+ EXPORT_SYMBOL_GPL(phy_caps_lookup);
+ 
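The reworked phy_caps_lookup() walks the table from fastest to slowest and, for non-exact lookups, keeps the first supported entry at or below the requested speed instead of returning it immediately — so a same-speed entry with a different duplex can win over a slower one. A condensed standalone version of that walk (table assumed sorted fastest first, as the kernel's is):

	#include <stddef.h>

	struct cap {
		int speed;
		int duplex;
		int supported;	/* linkmode_intersects() stand-in */
	};

	static const struct cap *caps_lookup(const struct cap *t, size_t n,
					     int speed, int duplex, int exact)
	{
		const struct cap *match = NULL, *last = NULL;
		size_t i;

		for (i = 0; i < n; i++) {
			if (!t[i].supported)
				continue;
			last = &t[i];
			if (t[i].speed == speed && t[i].duplex == duplex)
				return &t[i];	/* exact match wins */
			if (!exact) {
				/* first capability at or below the speed */
				if (!match && t[i].speed <= speed)
					match = &t[i];
				/* below the requested speed, stop looking */
				if (t[i].speed < speed)
					break;
			}
		}
		if (!match && !exact)
			match = last;	/* fall back to slowest supported */
		return match;
	}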
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index cc1bfd22fb8120..7d5e76a3db0e94 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1727,8 +1727,10 @@ void phy_detach(struct phy_device *phydev)
+ 	struct module *ndev_owner = NULL;
+ 	struct mii_bus *bus;
+ 
+-	if (phydev->devlink)
++	if (phydev->devlink) {
+ 		device_link_del(phydev->devlink);
++		phydev->devlink = NULL;
++	}
+ 
+ 	if (phydev->sysfs_links) {
+ 		if (dev)
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index ff5be2cbf17b90..9201ee10a13f78 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -30,11 +30,14 @@ static int aqc111_read_cmd_nopm(struct usbnet *dev, u8 cmd, u16 value,
+ 	ret = usbnet_read_cmd_nopm(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR |
+ 				   USB_RECIP_DEVICE, value, index, data, size);
+ 
+-	if (unlikely(ret < 0))
++	if (unlikely(ret < size)) {
+ 		netdev_warn(dev->net,
+ 			    "Failed to read(0x%x) reg index 0x%04x: %d\n",
+ 			    cmd, index, ret);
+ 
++		ret = ret < 0 ? ret : -ENODATA;
++	}
++
+ 	return ret;
+ }
+ 
+@@ -46,11 +49,14 @@ static int aqc111_read_cmd(struct usbnet *dev, u8 cmd, u16 value,
+ 	ret = usbnet_read_cmd(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR |
+ 			      USB_RECIP_DEVICE, value, index, data, size);
+ 
+-	if (unlikely(ret < 0))
++	if (unlikely(ret < size)) {
+ 		netdev_warn(dev->net,
+ 			    "Failed to read(0x%x) reg index 0x%04x: %d\n",
+ 			    cmd, index, ret);
+ 
++		ret = ret < 0 ? ret : -ENODATA;
++	}
++
+ 	return ret;
+ }
+ 
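Both aqc111 read helpers now treat a short read as a failure: a negative usbnet status passes through, while a positive-but-short byte count becomes -ENODATA so callers cannot consume partial register data. Distilled:

	#include <errno.h>

	/* Normalize a short control-transfer read */
	static int check_read_len(int ret, int expected)
	{
		if (ret < expected)
			return ret < 0 ? ret : -ENODATA;
		return ret;
	}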
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index c676979c7ab940..287b7c20c0d6c6 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -1568,6 +1568,30 @@ vmxnet3_get_hdr_len(struct vmxnet3_adapter *adapter, struct sk_buff *skb,
+ 	return (hlen + (hdr.tcp->doff << 2));
+ }
+ 
++static void
++vmxnet3_lro_tunnel(struct sk_buff *skb, __be16 ip_proto)
++{
++	struct udphdr *uh = NULL;
++
++	if (ip_proto == htons(ETH_P_IP)) {
++		struct iphdr *iph = (struct iphdr *)skb->data;
++
++		if (iph->protocol == IPPROTO_UDP)
++			uh = (struct udphdr *)(iph + 1);
++	} else {
++		struct ipv6hdr *iph = (struct ipv6hdr *)skb->data;
++
++		if (iph->nexthdr == IPPROTO_UDP)
++			uh = (struct udphdr *)(iph + 1);
++	}
++	if (uh) {
++		if (uh->check)
++			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
++		else
++			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
++	}
++}
++
+ static int
+ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 		       struct vmxnet3_adapter *adapter, int quota)
+@@ -1881,6 +1905,8 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
+ 			if (segCnt != 0 && mss != 0) {
+ 				skb_shinfo(skb)->gso_type = rcd->v4 ?
+ 					SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
++				if (encap_lro)
++					vmxnet3_lro_tunnel(skb, skb->protocol);
+ 				skb_shinfo(skb)->gso_size = mss;
+ 				skb_shinfo(skb)->gso_segs = segCnt;
+ 			} else if ((segCnt != 0 || skb->len > mtu) && !encap_lro) {
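For LRO'd encapsulated frames, the new vmxnet3_lro_tunnel() helper inspects the outer UDP header and picks the matching tunnel GSO flag — the _CSUM variant only when the outer checksum is non-zero, since RFC 768 defines a zero checksum as "no checksum". The decision itself is one branch (the flag values below are placeholders for SKB_GSO_UDP_TUNNEL and SKB_GSO_UDP_TUNNEL_CSUM):

	#define GSO_UDP_TUNNEL		0x1u	/* illustrative */
	#define GSO_UDP_TUNNEL_CSUM	0x2u	/* illustrative */

	static unsigned int tunnel_gso_flag(unsigned short outer_udp_check)
	{
		return outer_udp_check ? GSO_UDP_TUNNEL_CSUM : GSO_UDP_TUNNEL;
	}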
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index 3ffeeba5dccf40..4a529f1f9beab6 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -366,6 +366,7 @@ static int wg_newlink(struct net_device *dev,
+ 	if (ret < 0)
+ 		goto err_free_handshake_queue;
+ 
++	dev_set_threaded(dev, true);
+ 	ret = register_netdevice(dev);
+ 	if (ret < 0)
+ 		goto err_uninit_ratelimiter;
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index 866bad2db33487..65673b1aba55d2 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -937,7 +937,9 @@ static int ath10k_snoc_hif_start(struct ath10k *ar)
+ 
+ 	dev_set_threaded(ar->napi_dev, true);
+ 	ath10k_core_napi_enable(ar);
+-	ath10k_snoc_irq_enable(ar);
++	/* IRQs are left enabled when we restart due to a firmware crash */
++	if (!test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags))
++		ath10k_snoc_irq_enable(ar);
+ 	ath10k_snoc_rx_post(ar);
+ 
+ 	clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags);
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 3d39ff85ba94ad..22eb1b0377ffed 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -951,6 +951,7 @@ void ath11k_fw_stats_init(struct ath11k *ar)
+ 	INIT_LIST_HEAD(&ar->fw_stats.bcn);
+ 
+ 	init_completion(&ar->fw_stats_complete);
++	init_completion(&ar->fw_stats_done);
+ }
+ 
+ void ath11k_fw_stats_free(struct ath11k_fw_stats *stats)
+@@ -1946,6 +1947,20 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
+ {
+ 	int ret;
+ 
++	switch (ath11k_crypto_mode) {
++	case ATH11K_CRYPT_MODE_SW:
++		set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
++		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
++		break;
++	case ATH11K_CRYPT_MODE_HW:
++		clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
++		clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
++		break;
++	default:
++		ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
++		return -EINVAL;
++	}
++
+ 	ret = ath11k_core_start_firmware(ab, ab->fw_mode);
+ 	if (ret) {
+ 		ath11k_err(ab, "failed to start firmware: %d\n", ret);
+@@ -1964,20 +1979,6 @@ int ath11k_core_qmi_firmware_ready(struct ath11k_base *ab)
+ 		goto err_firmware_stop;
+ 	}
+ 
+-	switch (ath11k_crypto_mode) {
+-	case ATH11K_CRYPT_MODE_SW:
+-		set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+-		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+-		break;
+-	case ATH11K_CRYPT_MODE_HW:
+-		clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+-		clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+-		break;
+-	default:
+-		ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
+-		return -EINVAL;
+-	}
+-
+ 	if (ath11k_frame_mode == ATH11K_HW_TXRX_RAW)
+ 		set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+ 
+@@ -2050,6 +2051,7 @@ static int ath11k_core_reconfigure_on_crash(struct ath11k_base *ab)
+ void ath11k_core_halt(struct ath11k *ar)
+ {
+ 	struct ath11k_base *ab = ar->ab;
++	struct list_head *pos, *n;
+ 
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
+@@ -2065,7 +2067,12 @@ void ath11k_core_halt(struct ath11k *ar)
+ 
+ 	rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
+ 	synchronize_rcu();
+-	INIT_LIST_HEAD(&ar->arvifs);
++
++	spin_lock_bh(&ar->data_lock);
++	list_for_each_safe(pos, n, &ar->arvifs)
++		list_del_init(pos);
++	spin_unlock_bh(&ar->data_lock);
++
+ 	idr_init(&ar->txmgmt_idr);
+ }
+ 
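The ath11k hunk moves crypto-mode validation ahead of ath11k_core_start_firmware(), so an invalid module parameter can no longer leave the firmware running. The validate-before-side-effects shape, with illustrative flag bits standing in for ATH11K_FLAG_HW_CRYPTO_DISABLED and ATH11K_FLAG_RAW_MODE:

	#include <errno.h>

	enum crypt_mode { CRYPT_MODE_SW, CRYPT_MODE_HW };

	#define FLAG_HW_CRYPTO_DISABLED	0x1ul	/* illustrative */
	#define FLAG_RAW_MODE		0x2ul	/* illustrative */

	/* Validate the mode before anything with side effects runs */
	static int apply_crypto_mode(int mode, unsigned long *dev_flags)
	{
		switch (mode) {
		case CRYPT_MODE_SW:
			*dev_flags |= FLAG_HW_CRYPTO_DISABLED | FLAG_RAW_MODE;
			return 0;
		case CRYPT_MODE_HW:
			*dev_flags &= ~(FLAG_HW_CRYPTO_DISABLED | FLAG_RAW_MODE);
			return 0;
		default:
			return -EINVAL;
		}
	}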
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 1a3d0de4afde83..529aca4f40621e 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -599,6 +599,8 @@ struct ath11k_fw_stats {
+ 	struct list_head pdevs;
+ 	struct list_head vdevs;
+ 	struct list_head bcn;
++	u32 num_vdev_recvd;
++	u32 num_bcn_recvd;
+ };
+ 
+ struct ath11k_dbg_htt_stats {
+@@ -783,7 +785,7 @@ struct ath11k {
+ 	u8 alpha2[REG_ALPHA2_LEN + 1];
+ 	struct ath11k_fw_stats fw_stats;
+ 	struct completion fw_stats_complete;
+-	bool fw_stats_done;
++	struct completion fw_stats_done;
+ 
+ 	/* protected by conf_mutex */
+ 	bool ps_state_enable;
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
+index bf192529e3fe26..5d46f8e4c231fb 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.c
++++ b/drivers/net/wireless/ath/ath11k/debugfs.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/vmalloc.h>
+@@ -93,57 +93,14 @@ void ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
+ 	spin_unlock_bh(&dbr_data->lock);
+ }
+ 
+-static void ath11k_debugfs_fw_stats_reset(struct ath11k *ar)
+-{
+-	spin_lock_bh(&ar->data_lock);
+-	ar->fw_stats_done = false;
+-	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
+-	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
+-	spin_unlock_bh(&ar->data_lock);
+-}
+-
+ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats)
+ {
+ 	struct ath11k_base *ab = ar->ab;
+-	struct ath11k_pdev *pdev;
+-	bool is_end;
+-	static unsigned int num_vdev, num_bcn;
+-	size_t total_vdevs_started = 0;
+-	int i;
+-
+-	/* WMI_REQUEST_PDEV_STAT request has been already processed */
+-
+-	if (stats->stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
+-		ar->fw_stats_done = true;
+-		return;
+-	}
+-
+-	if (stats->stats_id == WMI_REQUEST_VDEV_STAT) {
+-		if (list_empty(&stats->vdevs)) {
+-			ath11k_warn(ab, "empty vdev stats");
+-			return;
+-		}
+-		/* FW sends all the active VDEV stats irrespective of PDEV,
+-		 * hence limit until the count of all VDEVs started
+-		 */
+-		for (i = 0; i < ab->num_radios; i++) {
+-			pdev = rcu_dereference(ab->pdevs_active[i]);
+-			if (pdev && pdev->ar)
+-				total_vdevs_started += ar->num_started_vdevs;
+-		}
+-
+-		is_end = ((++num_vdev) == total_vdevs_started);
+-
+-		list_splice_tail_init(&stats->vdevs,
+-				      &ar->fw_stats.vdevs);
+-
+-		if (is_end) {
+-			ar->fw_stats_done = true;
+-			num_vdev = 0;
+-		}
+-		return;
+-	}
++	bool is_end = true;
+ 
++	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_RSSI_PER_CHAIN_STAT and
++	 * WMI_REQUEST_VDEV_STAT requests have already been processed.
++	 */
+ 	if (stats->stats_id == WMI_REQUEST_BCN_STAT) {
+ 		if (list_empty(&stats->bcn)) {
+ 			ath11k_warn(ab, "empty bcn stats");
+@@ -152,97 +109,18 @@ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *
+ 		/* Mark end until we reached the count of all started VDEVs
+ 		 * within the PDEV
+ 		 */
+-		is_end = ((++num_bcn) == ar->num_started_vdevs);
++		if (ar->num_started_vdevs)
++			is_end = ((++ar->fw_stats.num_bcn_recvd) ==
++				  ar->num_started_vdevs);
+ 
+ 		list_splice_tail_init(&stats->bcn,
+ 				      &ar->fw_stats.bcn);
+ 
+-		if (is_end) {
+-			ar->fw_stats_done = true;
+-			num_bcn = 0;
+-		}
++		if (is_end)
++			complete(&ar->fw_stats_done);
+ 	}
+ }
+ 
+-static int ath11k_debugfs_fw_stats_request(struct ath11k *ar,
+-					   struct stats_request_params *req_param)
+-{
+-	struct ath11k_base *ab = ar->ab;
+-	unsigned long timeout, time_left;
+-	int ret;
+-
+-	lockdep_assert_held(&ar->conf_mutex);
+-
+-	/* FW stats can get split when exceeding the stats data buffer limit.
+-	 * In that case, since there is no end marking for the back-to-back
+-	 * received 'update stats' event, we keep a 3 seconds timeout in case,
+-	 * fw_stats_done is not marked yet
+-	 */
+-	timeout = jiffies + secs_to_jiffies(3);
+-
+-	ath11k_debugfs_fw_stats_reset(ar);
+-
+-	reinit_completion(&ar->fw_stats_complete);
+-
+-	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
+-
+-	if (ret) {
+-		ath11k_warn(ab, "could not request fw stats (%d)\n",
+-			    ret);
+-		return ret;
+-	}
+-
+-	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
+-
+-	if (!time_left)
+-		return -ETIMEDOUT;
+-
+-	for (;;) {
+-		if (time_after(jiffies, timeout))
+-			break;
+-
+-		spin_lock_bh(&ar->data_lock);
+-		if (ar->fw_stats_done) {
+-			spin_unlock_bh(&ar->data_lock);
+-			break;
+-		}
+-		spin_unlock_bh(&ar->data_lock);
+-	}
+-	return 0;
+-}
+-
+-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
+-				u32 vdev_id, u32 stats_id)
+-{
+-	struct ath11k_base *ab = ar->ab;
+-	struct stats_request_params req_param;
+-	int ret;
+-
+-	mutex_lock(&ar->conf_mutex);
+-
+-	if (ar->state != ATH11K_STATE_ON) {
+-		ret = -ENETDOWN;
+-		goto err_unlock;
+-	}
+-
+-	req_param.pdev_id = pdev_id;
+-	req_param.vdev_id = vdev_id;
+-	req_param.stats_id = stats_id;
+-
+-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+-	if (ret)
+-		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
+-
+-	ath11k_dbg(ab, ATH11K_DBG_WMI,
+-		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
+-		   pdev_id, vdev_id, stats_id);
+-
+-err_unlock:
+-	mutex_unlock(&ar->conf_mutex);
+-
+-	return ret;
+-}
+-
+ static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
+ {
+ 	struct ath11k *ar = inode->i_private;
+@@ -268,7 +146,7 @@ static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
+ 	req_param.vdev_id = 0;
+ 	req_param.stats_id = WMI_REQUEST_PDEV_STAT;
+ 
+-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
++	ret = ath11k_mac_fw_stats_request(ar, &req_param);
+ 	if (ret) {
+ 		ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);
+ 		goto err_free;
+@@ -339,7 +217,7 @@ static int ath11k_open_vdev_stats(struct inode *inode, struct file *file)
+ 	req_param.vdev_id = 0;
+ 	req_param.stats_id = WMI_REQUEST_VDEV_STAT;
+ 
+-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
++	ret = ath11k_mac_fw_stats_request(ar, &req_param);
+ 	if (ret) {
+ 		ath11k_warn(ar->ab, "failed to request fw vdev stats: %d\n", ret);
+ 		goto err_free;
+@@ -415,7 +293,7 @@ static int ath11k_open_bcn_stats(struct inode *inode, struct file *file)
+ 			continue;
+ 
+ 		req_param.vdev_id = arvif->vdev_id;
+-		ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
++		ret = ath11k_mac_fw_stats_request(ar, &req_param);
+ 		if (ret) {
+ 			ath11k_warn(ar->ab, "failed to request fw bcn stats: %d\n", ret);
+ 			goto err_free;
+diff --git a/drivers/net/wireless/ath/ath11k/debugfs.h b/drivers/net/wireless/ath/ath11k/debugfs.h
+index a39e458637b013..ed7fec177588f6 100644
+--- a/drivers/net/wireless/ath/ath11k/debugfs.h
++++ b/drivers/net/wireless/ath/ath11k/debugfs.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef _ATH11K_DEBUGFS_H_
+@@ -273,8 +273,6 @@ void ath11k_debugfs_unregister(struct ath11k *ar);
+ void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats);
+ 
+ void ath11k_debugfs_fw_stats_init(struct ath11k *ar);
+-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
+-				u32 vdev_id, u32 stats_id);
+ 
+ static inline bool ath11k_debugfs_is_pktlog_lite_mode_enabled(struct ath11k *ar)
+ {
+@@ -381,12 +379,6 @@ static inline int ath11k_debugfs_rx_filter(struct ath11k *ar)
+ 	return 0;
+ }
+ 
+-static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar,
+-					      u32 pdev_id, u32 vdev_id, u32 stats_id)
+-{
+-	return 0;
+-}
+-
+ static inline void
+ ath11k_debugfs_add_dbring_entry(struct ath11k *ar,
+ 				enum wmi_direct_buffer_module id,
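
The dropped ath11k_debugfs_get_fw_stats() prototype and its empty static-inline twin follow the usual kernel pattern for compiling debugfs hooks out. A minimal sketch of that pattern, with hypothetical "foo" names:

struct foo;

#ifdef CONFIG_FOO_DEBUGFS
int foo_debugfs_get_stats(struct foo *f);
#else
/* Debugfs disabled: callers see a successful no-op. */
static inline int foo_debugfs_get_stats(struct foo *f)
{
	return 0;
}
#endif

With the helper moved into mac.c and called unconditionally, neither variant is needed here any longer.
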
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 97816916abac96..4763b271309aa2 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -8991,6 +8991,86 @@ static void ath11k_mac_put_chain_rssi(struct station_info *sinfo,
+ 	}
+ }
+ 
++static void ath11k_mac_fw_stats_reset(struct ath11k *ar)
++{
++	spin_lock_bh(&ar->data_lock);
++	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
++	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
++	ar->fw_stats.num_vdev_recvd = 0;
++	ar->fw_stats.num_bcn_recvd = 0;
++	spin_unlock_bh(&ar->data_lock);
++}
++
++int ath11k_mac_fw_stats_request(struct ath11k *ar,
++				struct stats_request_params *req_param)
++{
++	struct ath11k_base *ab = ar->ab;
++	unsigned long time_left;
++	int ret;
++
++	lockdep_assert_held(&ar->conf_mutex);
++
++	ath11k_mac_fw_stats_reset(ar);
++
++	reinit_completion(&ar->fw_stats_complete);
++	reinit_completion(&ar->fw_stats_done);
++
++	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
++
++	if (ret) {
++		ath11k_warn(ab, "could not request fw stats (%d)\n",
++			    ret);
++		return ret;
++	}
++
++	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
++	if (!time_left)
++		return -ETIMEDOUT;
++
++	/* FW stats can get split when exceeding the stats data buffer limit.
++	 * In that case, since there is no end marking for the back-to-back
++	 * received 'update stats' events, we keep a 3 second timeout in case
++	 * fw_stats_done is not marked yet.
++	 */
++	time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ);
++	if (!time_left)
++		return -ETIMEDOUT;
++
++	return 0;
++}
++
++static int ath11k_mac_get_fw_stats(struct ath11k *ar, u32 pdev_id,
++				   u32 vdev_id, u32 stats_id)
++{
++	struct ath11k_base *ab = ar->ab;
++	struct stats_request_params req_param;
++	int ret;
++
++	mutex_lock(&ar->conf_mutex);
++
++	if (ar->state != ATH11K_STATE_ON) {
++		ret = -ENETDOWN;
++		goto err_unlock;
++	}
++
++	req_param.pdev_id = pdev_id;
++	req_param.vdev_id = vdev_id;
++	req_param.stats_id = stats_id;
++
++	ret = ath11k_mac_fw_stats_request(ar, &req_param);
++	if (ret)
++		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
++
++	ath11k_dbg(ab, ATH11K_DBG_WMI,
++		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
++		   pdev_id, vdev_id, stats_id);
++
++err_unlock:
++	mutex_unlock(&ar->conf_mutex);
++
++	return ret;
++}
++
+ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ 					 struct ieee80211_vif *vif,
+ 					 struct ieee80211_sta *sta,
+@@ -9028,8 +9108,8 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ 	if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL)) &&
+ 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
+ 	    ar->ab->hw_params.supports_rssi_stats &&
+-	    !ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+-					 WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
++	    !ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
++				     WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
+ 		ath11k_mac_put_chain_rssi(sinfo, arsta, "fw stats", true);
+ 	}
+ 
+@@ -9037,8 +9117,8 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
+ 	if (!signal &&
+ 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
+ 	    ar->ab->hw_params.supports_rssi_stats &&
+-	    !(ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+-					WMI_REQUEST_VDEV_STAT)))
++	    !(ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
++				      WMI_REQUEST_VDEV_STAT)))
+ 		signal = arsta->rssi_beacon;
+ 
+ 	ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
+@@ -9384,11 +9464,13 @@ static int ath11k_fw_stats_request(struct ath11k *ar,
+ 	lockdep_assert_held(&ar->conf_mutex);
+ 
+ 	spin_lock_bh(&ar->data_lock);
+-	ar->fw_stats_done = false;
+ 	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
++	ar->fw_stats.num_vdev_recvd = 0;
++	ar->fw_stats.num_bcn_recvd = 0;
+ 	spin_unlock_bh(&ar->data_lock);
+ 
+ 	reinit_completion(&ar->fw_stats_complete);
++	reinit_completion(&ar->fw_stats_done);
+ 
+ 	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
+ 	if (ret) {
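
A minimal sketch of the two-stage wait that ath11k_mac_fw_stats_request() now implements in place of the old jiffies-polling loop: one bounded wait for the first 'update stats' event, then a second for the done marker, since split responses carry no end marking. Only the completion API is real here; the surrounding names are hypothetical:

#include <linux/completion.h>

struct fw_stats_waiter {
	struct completion first_event;	/* first 'update stats' event seen */
	struct completion done;		/* final chunk accounted for */
};

static int fw_stats_wait(struct fw_stats_waiter *w)
{
	reinit_completion(&w->first_event);
	reinit_completion(&w->done);

	/* ...send the stats request command to firmware here... */

	if (!wait_for_completion_timeout(&w->first_event, 1 * HZ))
		return -ETIMEDOUT;

	/* Split responses have no end marker, so bound the whole burst. */
	if (!wait_for_completion_timeout(&w->done, 3 * HZ))
		return -ETIMEDOUT;

	return 0;
}

Sleeping on a completion also frees the CPU that the old busy-wait loop kept spinning.
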
+diff --git a/drivers/net/wireless/ath/ath11k/mac.h b/drivers/net/wireless/ath/ath11k/mac.h
+index f5800fbecff89e..5e61eea1bb0378 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.h
++++ b/drivers/net/wireless/ath/ath11k/mac.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH11K_MAC_H
+@@ -179,4 +179,6 @@ int ath11k_mac_vif_set_keepalive(struct ath11k_vif *arvif,
+ void ath11k_mac_fill_reg_tpc_info(struct ath11k *ar,
+ 				  struct ieee80211_vif *vif,
+ 				  struct ieee80211_chanctx_conf *ctx);
++int ath11k_mac_fw_stats_request(struct ath11k *ar,
++				struct stats_request_params *req_param);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index d7f852bebf4aa2..98811726d33bf1 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -8158,6 +8158,11 @@ static void ath11k_peer_assoc_conf_event(struct ath11k_base *ab, struct sk_buff
+ static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *skb)
+ {
+ 	struct ath11k_fw_stats stats = {};
++	size_t total_vdevs_started = 0;
++	struct ath11k_pdev *pdev;
++	bool is_end = true;
++	int i;
++
+ 	struct ath11k *ar;
+ 	int ret;
+ 
+@@ -8184,18 +8189,50 @@ static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *sk
+ 
+ 	spin_lock_bh(&ar->data_lock);
+ 
+-	/* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via
++	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_VDEV_STAT and
++	 * WMI_REQUEST_RSSI_PER_CHAIN_STAT can be requested via mac ops or via
+ 	 * debugfs fw stats. Therefore, processing it separately.
+ 	 */
+ 	if (stats.stats_id == WMI_REQUEST_PDEV_STAT) {
+ 		list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs);
+-		ar->fw_stats_done = true;
++		complete(&ar->fw_stats_done);
++		goto complete;
++	}
++
++	if (stats.stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
++		complete(&ar->fw_stats_done);
++		goto complete;
++	}
++
++	if (stats.stats_id == WMI_REQUEST_VDEV_STAT) {
++		if (list_empty(&stats.vdevs)) {
++			ath11k_warn(ab, "empty vdev stats");
++			goto complete;
++		}
++		/* FW sends all the active VDEV stats irrespective of PDEV,
++		 * hence limit processing to the count of all started VDEVs
++		 */
++		for (i = 0; i < ab->num_radios; i++) {
++			pdev = rcu_dereference(ab->pdevs_active[i]);
++			if (pdev && pdev->ar)
++				total_vdevs_started += ar->num_started_vdevs;
++		}
++
++		if (total_vdevs_started)
++			is_end = ((++ar->fw_stats.num_vdev_recvd) ==
++				  total_vdevs_started);
++
++		list_splice_tail_init(&stats.vdevs,
++				      &ar->fw_stats.vdevs);
++
++		if (is_end)
++			complete(&ar->fw_stats_done);
++
+ 		goto complete;
+ 	}
+ 
+-	/* WMI_REQUEST_VDEV_STAT, WMI_REQUEST_BCN_STAT and WMI_REQUEST_RSSI_PER_CHAIN_STAT
+-	 * are currently requested only via debugfs fw stats. Hence, processing these
+-	 * in debugfs context
++	/* WMI_REQUEST_BCN_STAT is currently requested only via debugfs fw stats.
++	 * Hence, processing it in debugfs context
+ 	 */
+ 	ath11k_debugfs_fw_stats_process(ar, &stats);
+ 
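
The event side of the same handshake, sketched: each per-vdev stats chunk bumps a counter under the data lock, and the completion fires only once every started vdev has reported. Names are hypothetical:

#include <linux/completion.h>
#include <linux/spinlock.h>

struct stats_state {
	spinlock_t lock;
	unsigned int num_recvd;
	struct completion done;
};

static void stats_chunk_received(struct stats_state *st, unsigned int expected)
{
	bool is_end = true;

	spin_lock_bh(&st->lock);
	/* When nothing was started, is_end stays true and we complete
	 * immediately instead of leaving the waiter hanging on 0 == 0.
	 */
	if (expected)
		is_end = (++st->num_recvd == expected);
	spin_unlock_bh(&st->lock);

	if (is_end)
		complete(&st->done);
}
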
+diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
+index 0b2dec081c6ee8..261f52b327e89c 100644
+--- a/drivers/net/wireless/ath/ath12k/core.c
++++ b/drivers/net/wireless/ath/ath12k/core.c
+@@ -891,6 +891,9 @@ static void ath12k_core_hw_group_stop(struct ath12k_hw_group *ag)
+ 		ab = ag->ab[i];
+ 		if (!ab)
+ 			continue;
++
++		clear_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
++
+ 		ath12k_core_device_cleanup(ab);
+ 	}
+ 
+@@ -1026,6 +1029,8 @@ static int ath12k_core_hw_group_start(struct ath12k_hw_group *ag)
+ 
+ 		mutex_lock(&ab->core_lock);
+ 
++		set_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
++
+ 		ret = ath12k_core_pdev_create(ab);
+ 		if (ret) {
+ 			ath12k_err(ab, "failed to create pdev core %d\n", ret);
+@@ -1246,6 +1251,7 @@ static void ath12k_rfkill_work(struct work_struct *work)
+ 
+ void ath12k_core_halt(struct ath12k *ar)
+ {
++	struct list_head *pos, *n;
+ 	struct ath12k_base *ab = ar->ab;
+ 
+ 	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+@@ -1261,7 +1267,12 @@ void ath12k_core_halt(struct ath12k *ar)
+ 
+ 	rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
+ 	synchronize_rcu();
+-	INIT_LIST_HEAD(&ar->arvifs);
++
++	spin_lock_bh(&ar->data_lock);
++	list_for_each_safe(pos, n, &ar->arvifs)
++		list_del_init(pos);
++	spin_unlock_bh(&ar->data_lock);
++
+ 	idr_init(&ar->txmgmt_idr);
+ }
+ 
+@@ -1774,7 +1785,7 @@ static void ath12k_core_hw_group_destroy(struct ath12k_hw_group *ag)
+ 	}
+ }
+ 
+-static void ath12k_core_hw_group_cleanup(struct ath12k_hw_group *ag)
++void ath12k_core_hw_group_cleanup(struct ath12k_hw_group *ag)
+ {
+ 	struct ath12k_base *ab;
+ 	int i;
+@@ -1891,7 +1902,8 @@ int ath12k_core_init(struct ath12k_base *ab)
+ 	if (!ag) {
+ 		mutex_unlock(&ath12k_hw_group_mutex);
+ 		ath12k_warn(ab, "unable to get hw group\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_unregister_notifier;
+ 	}
+ 
+ 	mutex_unlock(&ath12k_hw_group_mutex);
+@@ -1906,7 +1918,7 @@ int ath12k_core_init(struct ath12k_base *ab)
+ 		if (ret) {
+ 			mutex_unlock(&ag->mutex);
+ 			ath12k_warn(ab, "unable to create hw group\n");
+-			goto err;
++			goto err_destroy_hw_group;
+ 		}
+ 	}
+ 
+@@ -1914,18 +1926,20 @@ int ath12k_core_init(struct ath12k_base *ab)
+ 
+ 	return 0;
+ 
+-err:
++err_destroy_hw_group:
+ 	ath12k_core_hw_group_destroy(ab->ag);
+ 	ath12k_core_hw_group_unassign(ab);
++err_unregister_notifier:
++	ath12k_core_panic_notifier_unregister(ab);
++
+ 	return ret;
+ }
+ 
+ void ath12k_core_deinit(struct ath12k_base *ab)
+ {
+-	ath12k_core_panic_notifier_unregister(ab);
+-	ath12k_core_hw_group_cleanup(ab->ag);
+ 	ath12k_core_hw_group_destroy(ab->ag);
+ 	ath12k_core_hw_group_unassign(ab);
++	ath12k_core_panic_notifier_unregister(ab);
+ }
+ 
+ void ath12k_core_free(struct ath12k_base *ab)
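
A minimal sketch of why the halt path above now walks ar->arvifs instead of re-initializing it: list_del_init() leaves every removed entry as a valid, empty list of its own, whereas INIT_LIST_HEAD() on the head alone strands live entries still pointing at one another:

#include <linux/list.h>
#include <linux/spinlock.h>

static void detach_all(struct list_head *head, spinlock_t *lock)
{
	struct list_head *pos, *n;

	spin_lock_bh(lock);
	list_for_each_safe(pos, n, head)	/* safe against node removal */
		list_del_init(pos);		/* entry becomes an empty list */
	spin_unlock_bh(lock);
}
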
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 3fac4f00d3832c..f5f1ec796f7c55 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -533,11 +533,21 @@ struct ath12k_sta {
+ 	enum ieee80211_sta_state state;
+ };
+ 
+-#define ATH12K_MIN_5G_FREQ 4150
+-#define ATH12K_MIN_6G_FREQ 5925
+-#define ATH12K_MAX_6G_FREQ 7115
++#define ATH12K_HALF_20MHZ_BW	10
++#define ATH12K_2GHZ_MIN_CENTER	2412
++#define ATH12K_2GHZ_MAX_CENTER	2484
++#define ATH12K_5GHZ_MIN_CENTER	4900
++#define ATH12K_5GHZ_MAX_CENTER	5920
++#define ATH12K_6GHZ_MIN_CENTER	5935
++#define ATH12K_6GHZ_MAX_CENTER	7115
++#define ATH12K_MIN_2GHZ_FREQ	(ATH12K_2GHZ_MIN_CENTER - ATH12K_HALF_20MHZ_BW - 1)
++#define ATH12K_MAX_2GHZ_FREQ	(ATH12K_2GHZ_MAX_CENTER + ATH12K_HALF_20MHZ_BW + 1)
++#define ATH12K_MIN_5GHZ_FREQ	(ATH12K_5GHZ_MIN_CENTER - ATH12K_HALF_20MHZ_BW)
++#define ATH12K_MAX_5GHZ_FREQ	(ATH12K_5GHZ_MAX_CENTER + ATH12K_HALF_20MHZ_BW)
++#define ATH12K_MIN_6GHZ_FREQ	(ATH12K_6GHZ_MIN_CENTER - ATH12K_HALF_20MHZ_BW)
++#define ATH12K_MAX_6GHZ_FREQ	(ATH12K_6GHZ_MAX_CENTER + ATH12K_HALF_20MHZ_BW)
+ #define ATH12K_NUM_CHANS 101
+-#define ATH12K_MAX_5G_CHAN 173
++#define ATH12K_MAX_5GHZ_CHAN 173
+ 
+ enum ath12k_hw_state {
+ 	ATH12K_HW_STATE_OFF,
+@@ -1185,6 +1195,7 @@ struct ath12k_fw_stats_pdev {
+ };
+ 
+ int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab);
++void ath12k_core_hw_group_cleanup(struct ath12k_hw_group *ag);
+ int ath12k_core_pre_init(struct ath12k_base *ab);
+ int ath12k_core_init(struct ath12k_base *ath12k);
+ void ath12k_core_deinit(struct ath12k_base *ath12k);
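
The renamed ATH12K_{MIN,MAX}_*GHZ_FREQ macros derive each limit from the band's first and last channel-center frequencies padded by half a 20 MHz channel. A sketch of how a center frequency then maps to a band, mirroring the dp_mon.c change later in this patch (the 6 GHz check runs first, since the padded 5 GHz and 6 GHz ranges overlap by a few MHz):

#include <net/cfg80211.h>

static enum nl80211_band ath12k_freq_to_band(u32 center_freq)
{
	if (center_freq >= ATH12K_MIN_6GHZ_FREQ &&
	    center_freq <= ATH12K_MAX_6GHZ_FREQ)
		return NL80211_BAND_6GHZ;

	if (center_freq >= ATH12K_MIN_2GHZ_FREQ &&
	    center_freq <= ATH12K_MAX_2GHZ_FREQ)
		return NL80211_BAND_2GHZ;

	if (center_freq >= ATH12K_MIN_5GHZ_FREQ &&
	    center_freq <= ATH12K_MAX_5GHZ_FREQ)
		return NL80211_BAND_5GHZ;

	return NUM_NL80211_BANDS;	/* out of range; callers must check */
}
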
+diff --git a/drivers/net/wireless/ath/ath12k/debugfs.c b/drivers/net/wireless/ath/ath12k/debugfs.c
+index 57002215ddf168..5efe30cf77470a 100644
+--- a/drivers/net/wireless/ath/ath12k/debugfs.c
++++ b/drivers/net/wireless/ath/ath12k/debugfs.c
+@@ -88,8 +88,8 @@ static int ath12k_get_tpc_ctl_mode_idx(struct wmi_tpc_stats_arg *tpc_stats,
+ 	u32 chan_freq = le32_to_cpu(tpc_stats->tpc_config.chan_freq);
+ 	u8 band;
+ 
+-	band = ((chan_freq > ATH12K_MIN_6G_FREQ) ? NL80211_BAND_6GHZ :
+-		((chan_freq > ATH12K_MIN_5G_FREQ) ? NL80211_BAND_5GHZ :
++	band = ((chan_freq > ATH12K_MIN_6GHZ_FREQ) ? NL80211_BAND_6GHZ :
++		((chan_freq > ATH12K_MIN_5GHZ_FREQ) ? NL80211_BAND_5GHZ :
+ 		NL80211_BAND_2GHZ));
+ 
+ 	if (band == NL80211_BAND_5GHZ || band == NL80211_BAND_6GHZ) {
+diff --git a/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c b/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c
+index 1c0d5fa39a8dcb..aeaf970339d4dc 100644
+--- a/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c
++++ b/drivers/net/wireless/ath/ath12k/debugfs_htt_stats.c
+@@ -5377,6 +5377,9 @@ static ssize_t ath12k_write_htt_stats_type(struct file *file,
+ 	const int size = 32;
+ 	int num_args;
+ 
++	if (count > size)
++		return -EINVAL;
++
+ 	char *buf __free(kfree) = kzalloc(size, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
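
The added bounds check above rejects writes larger than the 32-byte parse buffer before anything is copied or allocated. A generic sketch of that debugfs write guard, with hypothetical names:

#include <linux/fs.h>
#include <linux/uaccess.h>

static ssize_t demo_write(struct file *file, const char __user *user_buf,
			  size_t count, loff_t *ppos)
{
	char buf[32];

	if (count >= sizeof(buf))	/* reject oversized input up front */
		return -EINVAL;

	if (copy_from_user(buf, user_buf, count))
		return -EFAULT;
	buf[count] = '\0';		/* guaranteed room for the terminator */

	/* ...parse buf here... */

	return count;
}
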
+diff --git a/drivers/net/wireless/ath/ath12k/dp.h b/drivers/net/wireless/ath/ath12k/dp.h
+index 75435a931548c9..427a87b63dec3b 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.h
++++ b/drivers/net/wireless/ath/ath12k/dp.h
+@@ -106,6 +106,8 @@ struct dp_mon_mpdu {
+ 	struct list_head list;
+ 	struct sk_buff *head;
+ 	struct sk_buff *tail;
++	u32 err_bitmap;
++	u8 decap_format;
+ };
+ 
+ #define DP_MON_MAX_STATUS_BUF 32
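
The two new dp_mon_mpdu fields let the status-TLV parser record per-MPDU error and decap state while it chains MSDUs together. A sketch of the producer side of such a node, with hypothetical names (the real accounting is in the dp_mon.c changes below):

#include <linux/list.h>
#include <linux/skbuff.h>

struct mpdu_node {
	struct list_head list;	/* links the per-PPDU MPDU queue */
	struct sk_buff *head;	/* first MSDU of the chain */
	struct sk_buff *tail;	/* last MSDU; ->next continues the chain */
	u32 err_bitmap;		/* filled from the MSDU_END TLV */
	u8 decap_format;
};

static void mpdu_add_msdu(struct mpdu_node *m, struct sk_buff *skb)
{
	if (!m->head)
		m->head = skb;
	else
		m->tail->next = skb;
	m->tail = skb;
}
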
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index d22800e894850d..600d97169f241a 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -1647,7 +1647,7 @@ ath12k_dp_mon_rx_parse_status_tlv(struct ath12k *ar,
+ 				u32_get_bits(info[0], HAL_RX_MPDU_START_INFO0_PPDU_ID);
+ 		}
+ 
+-		break;
++		return HAL_RX_MON_STATUS_MPDU_START;
+ 	}
+ 	case HAL_RX_MSDU_START:
+ 		/* TODO: add msdu start parsing logic */
+@@ -1700,33 +1700,159 @@ static void ath12k_dp_mon_rx_msdus_set_payload(struct ath12k *ar,
+ 	skb_pull(head_msdu, rx_pkt_offset + l2_hdr_offset);
+ }
+ 
++static void
++ath12k_dp_mon_fill_rx_stats_info(struct ath12k *ar,
++				 struct hal_rx_mon_ppdu_info *ppdu_info,
++				 struct ieee80211_rx_status *rx_status)
++{
++	u32 center_freq = ppdu_info->freq;
++
++	rx_status->freq = center_freq;
++	rx_status->bw = ath12k_mac_bw_to_mac80211_bw(ppdu_info->bw);
++	rx_status->nss = ppdu_info->nss;
++	rx_status->rate_idx = 0;
++	rx_status->encoding = RX_ENC_LEGACY;
++	rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
++
++	if (center_freq >= ATH12K_MIN_6GHZ_FREQ &&
++	    center_freq <= ATH12K_MAX_6GHZ_FREQ) {
++		rx_status->band = NL80211_BAND_6GHZ;
++	} else if (center_freq >= ATH12K_MIN_2GHZ_FREQ &&
++		   center_freq <= ATH12K_MAX_2GHZ_FREQ) {
++		rx_status->band = NL80211_BAND_2GHZ;
++	} else if (center_freq >= ATH12K_MIN_5GHZ_FREQ &&
++		   center_freq <= ATH12K_MAX_5GHZ_FREQ) {
++		rx_status->band = NL80211_BAND_5GHZ;
++	} else {
++		rx_status->band = NUM_NL80211_BANDS;
++	}
++}
++
++static void
++ath12k_dp_mon_fill_rx_rate(struct ath12k *ar,
++			   struct hal_rx_mon_ppdu_info *ppdu_info,
++			   struct ieee80211_rx_status *rx_status)
++{
++	struct ieee80211_supported_band *sband;
++	enum rx_msdu_start_pkt_type pkt_type;
++	u8 rate_mcs, nss, sgi;
++	bool is_cck;
++
++	pkt_type = ppdu_info->preamble_type;
++	rate_mcs = ppdu_info->rate;
++	nss = ppdu_info->nss;
++	sgi = ppdu_info->gi;
++
++	switch (pkt_type) {
++	case RX_MSDU_START_PKT_TYPE_11A:
++	case RX_MSDU_START_PKT_TYPE_11B:
++		is_cck = (pkt_type == RX_MSDU_START_PKT_TYPE_11B);
++		if (rx_status->band < NUM_NL80211_BANDS) {
++			sband = &ar->mac.sbands[rx_status->band];
++			rx_status->rate_idx = ath12k_mac_hw_rate_to_idx(sband, rate_mcs,
++									is_cck);
++		}
++		break;
++	case RX_MSDU_START_PKT_TYPE_11N:
++		rx_status->encoding = RX_ENC_HT;
++		if (rate_mcs > ATH12K_HT_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in HT mode %d\n",
++				     rate_mcs);
++			break;
++		}
++		rx_status->rate_idx = rate_mcs + (8 * (nss - 1));
++		if (sgi)
++			rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
++		break;
++	case RX_MSDU_START_PKT_TYPE_11AC:
++		rx_status->encoding = RX_ENC_VHT;
++		rx_status->rate_idx = rate_mcs;
++		if (rate_mcs > ATH12K_VHT_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in VHT mode %d\n",
++				     rate_mcs);
++			break;
++		}
++		if (sgi)
++			rx_status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
++		break;
++	case RX_MSDU_START_PKT_TYPE_11AX:
++		rx_status->rate_idx = rate_mcs;
++		if (rate_mcs > ATH12K_HE_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in HE mode %d\n",
++				    rate_mcs);
++			break;
++		}
++		rx_status->encoding = RX_ENC_HE;
++		rx_status->he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
++		break;
++	case RX_MSDU_START_PKT_TYPE_11BE:
++		rx_status->rate_idx = rate_mcs;
++		if (rate_mcs > ATH12K_EHT_MCS_MAX) {
++			ath12k_warn(ar->ab,
++				    "Received with invalid mcs in EHT mode %d\n",
++				    rate_mcs);
++			break;
++		}
++		rx_status->encoding = RX_ENC_EHT;
++		rx_status->he_gi = ath12k_he_gi_to_nl80211_he_gi(sgi);
++		break;
++	default:
++		ath12k_dbg(ar->ab, ATH12K_DBG_DATA,
++			   "monitor receives invalid preamble type %d",
++			    pkt_type);
++		break;
++	}
++}
++
+ static struct sk_buff *
+ ath12k_dp_mon_rx_merg_msdus(struct ath12k *ar,
+-			    struct sk_buff *head_msdu, struct sk_buff *tail_msdu,
+-			    struct ieee80211_rx_status *rxs, bool *fcs_err)
++			    struct dp_mon_mpdu *mon_mpdu,
++			    struct hal_rx_mon_ppdu_info *ppdu_info,
++			    struct ieee80211_rx_status *rxs)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct sk_buff *msdu, *mpdu_buf, *prev_buf, *head_frag_list;
+-	struct hal_rx_desc *rx_desc, *tail_rx_desc;
+-	u8 *hdr_desc, *dest, decap_format;
++	struct sk_buff *head_msdu, *tail_msdu;
++	struct hal_rx_desc *rx_desc;
++	u8 *hdr_desc, *dest, decap_format = mon_mpdu->decap_format;
+ 	struct ieee80211_hdr_3addr *wh;
+-	u32 err_bitmap, frag_list_sum_len = 0;
++	struct ieee80211_channel *channel;
++	u32 frag_list_sum_len = 0;
++	u8 channel_num = ppdu_info->chan_num;
+ 
+ 	mpdu_buf = NULL;
++	head_msdu = mon_mpdu->head;
++	tail_msdu = mon_mpdu->tail;
+ 
+ 	if (!head_msdu)
+ 		goto err_merge_fail;
+ 
+-	rx_desc = (struct hal_rx_desc *)head_msdu->data;
+-	tail_rx_desc = (struct hal_rx_desc *)tail_msdu->data;
++	ath12k_dp_mon_fill_rx_stats_info(ar, ppdu_info, rxs);
+ 
+-	err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, tail_rx_desc);
+-	if (err_bitmap & HAL_RX_MPDU_ERR_FCS)
+-		*fcs_err = true;
++	if (unlikely(rxs->band == NUM_NL80211_BANDS ||
++		     !ath12k_ar_to_hw(ar)->wiphy->bands[rxs->band])) {
++		ath12k_dbg(ar->ab, ATH12K_DBG_DATA,
++			   "sband is NULL for status band %d channel_num %d center_freq %d pdev_id %d\n",
++			   rxs->band, channel_num, ppdu_info->freq, ar->pdev_idx);
+ 
+-	decap_format = ath12k_dp_rx_h_decap_type(ab, tail_rx_desc);
++		spin_lock_bh(&ar->data_lock);
++		channel = ar->rx_channel;
++		if (channel) {
++			rxs->band = channel->band;
++			channel_num =
++				ieee80211_frequency_to_channel(channel->center_freq);
++		}
++		spin_unlock_bh(&ar->data_lock);
++	}
++
++	if (rxs->band < NUM_NL80211_BANDS)
++		rxs->freq = ieee80211_channel_to_frequency(channel_num,
++							   rxs->band);
+ 
+-	ath12k_dp_rx_h_ppdu(ar, tail_rx_desc, rxs);
++	ath12k_dp_mon_fill_rx_rate(ar, ppdu_info, rxs);
+ 
+ 	if (decap_format == DP_RX_DECAP_TYPE_RAW) {
+ 		ath12k_dp_mon_rx_msdus_set_payload(ar, head_msdu, tail_msdu);
+@@ -1954,7 +2080,8 @@ static void ath12k_dp_mon_update_radiotap(struct ath12k *ar,
+ 
+ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *napi,
+ 					  struct sk_buff *msdu,
+-					  struct ieee80211_rx_status *status)
++					  struct ieee80211_rx_status *status,
++					  u8 decap)
+ {
+ 	static const struct ieee80211_radiotap_he known = {
+ 		.data1 = cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA1_DATA_MCS_KNOWN |
+@@ -1966,7 +2093,7 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ 	struct ieee80211_sta *pubsta = NULL;
+ 	struct ath12k_peer *peer;
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+-	u8 decap = DP_RX_DECAP_TYPE_RAW;
++	struct ath12k_dp_rx_info rx_info;
+ 	bool is_mcbc = rxcb->is_mcbc;
+ 	bool is_eapol_tkip = rxcb->is_eapol;
+ 
+@@ -1977,10 +2104,9 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ 		status->flag |= RX_FLAG_RADIOTAP_HE;
+ 	}
+ 
+-	if (!(status->flag & RX_FLAG_ONLY_MONITOR))
+-		decap = ath12k_dp_rx_h_decap_type(ar->ab, rxcb->rx_desc);
+ 	spin_lock_bh(&ar->ab->base_lock);
+-	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu);
++	rx_info.addr2_present = false;
++	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu, &rx_info);
+ 	if (peer && peer->sta) {
+ 		pubsta = peer->sta;
+ 		if (pubsta->valid_links) {
+@@ -2035,25 +2161,23 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
+ }
+ 
+ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+-				    struct sk_buff *head_msdu, struct sk_buff *tail_msdu,
++				    struct dp_mon_mpdu *mon_mpdu,
+ 				    struct hal_rx_mon_ppdu_info *ppduinfo,
+ 				    struct napi_struct *napi)
+ {
+ 	struct ath12k_pdev_dp *dp = &ar->dp;
+ 	struct sk_buff *mon_skb, *skb_next, *header;
+ 	struct ieee80211_rx_status *rxs = &dp->rx_status;
+-	bool fcs_err = false;
++	u8 decap = DP_RX_DECAP_TYPE_RAW;
+ 
+-	mon_skb = ath12k_dp_mon_rx_merg_msdus(ar,
+-					      head_msdu, tail_msdu,
+-					      rxs, &fcs_err);
++	mon_skb = ath12k_dp_mon_rx_merg_msdus(ar, mon_mpdu, ppduinfo, rxs);
+ 	if (!mon_skb)
+ 		goto mon_deliver_fail;
+ 
+ 	header = mon_skb;
+ 	rxs->flag = 0;
+ 
+-	if (fcs_err)
++	if (mon_mpdu->err_bitmap & HAL_RX_MPDU_ERR_FCS)
+ 		rxs->flag = RX_FLAG_FAILED_FCS_CRC;
+ 
+ 	do {
+@@ -2070,8 +2194,12 @@ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+ 			rxs->flag |= RX_FLAG_ALLOW_SAME_PN;
+ 		}
+ 		rxs->flag |= RX_FLAG_ONLY_MONITOR;
++
++		if (!(rxs->flag & RX_FLAG_ONLY_MONITOR))
++			decap = mon_mpdu->decap_format;
++
+ 		ath12k_dp_mon_update_radiotap(ar, ppduinfo, mon_skb, rxs);
+-		ath12k_dp_mon_rx_deliver_msdu(ar, napi, mon_skb, rxs);
++		ath12k_dp_mon_rx_deliver_msdu(ar, napi, mon_skb, rxs, decap);
+ 		mon_skb = skb_next;
+ 	} while (mon_skb);
+ 	rxs->flag = 0;
+@@ -2079,7 +2207,7 @@ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+ 	return 0;
+ 
+ mon_deliver_fail:
+-	mon_skb = head_msdu;
++	mon_skb = mon_mpdu->head;
+ 	while (mon_skb) {
+ 		skb_next = mon_skb->next;
+ 		dev_kfree_skb_any(mon_skb);
+@@ -2088,6 +2216,144 @@ static int ath12k_dp_mon_rx_deliver(struct ath12k *ar,
+ 	return -EINVAL;
+ }
+ 
++static int ath12k_dp_pkt_set_pktlen(struct sk_buff *skb, u32 len)
++{
++	if (skb->len > len) {
++		skb_trim(skb, len);
++	} else {
++		if (skb_tailroom(skb) < len - skb->len) {
++			if ((pskb_expand_head(skb, 0,
++					      len - skb->len - skb_tailroom(skb),
++					      GFP_ATOMIC))) {
++				return -ENOMEM;
++			}
++		}
++		skb_put(skb, (len - skb->len));
++	}
++
++	return 0;
++}
++
++static void ath12k_dp_mon_parse_rx_msdu_end_err(u32 info, u32 *errmap)
++{
++	if (info & RX_MSDU_END_INFO13_FCS_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_FCS;
++
++	if (info & RX_MSDU_END_INFO13_DECRYPT_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_DECRYPT;
++
++	if (info & RX_MSDU_END_INFO13_TKIP_MIC_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_TKIP_MIC;
++
++	if (info & RX_MSDU_END_INFO13_A_MSDU_ERROR)
++		*errmap |= HAL_RX_MPDU_ERR_AMSDU_ERR;
++
++	if (info & RX_MSDU_END_INFO13_OVERFLOW_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_OVERFLOW;
++
++	if (info & RX_MSDU_END_INFO13_MSDU_LEN_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_MSDU_LEN;
++
++	if (info & RX_MSDU_END_INFO13_MPDU_LEN_ERR)
++		*errmap |= HAL_RX_MPDU_ERR_MPDU_LEN;
++}
++
++static int
++ath12k_dp_mon_parse_status_msdu_end(struct ath12k_mon_data *pmon,
++				    const struct hal_rx_msdu_end *msdu_end)
++{
++	struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
++
++	ath12k_dp_mon_parse_rx_msdu_end_err(__le32_to_cpu(msdu_end->info2),
++					    &mon_mpdu->err_bitmap);
++
++	mon_mpdu->decap_format = le32_get_bits(msdu_end->info1,
++					       RX_MSDU_END_INFO11_DECAP_FORMAT);
++
++	return 0;
++}
++
++static int
++ath12k_dp_mon_parse_status_buf(struct ath12k *ar,
++			       struct ath12k_mon_data *pmon,
++			       const struct dp_mon_packet_info *packet_info)
++{
++	struct ath12k_base *ab = ar->ab;
++	struct dp_rxdma_mon_ring *buf_ring = &ab->dp.rxdma_mon_buf_ring;
++	struct sk_buff *msdu;
++	int buf_id;
++	u32 offset;
++
++	buf_id = u32_get_bits(packet_info->cookie, DP_RXDMA_BUF_COOKIE_BUF_ID);
++
++	spin_lock_bh(&buf_ring->idr_lock);
++	msdu = idr_remove(&buf_ring->bufs_idr, buf_id);
++	spin_unlock_bh(&buf_ring->idr_lock);
++
++	if (unlikely(!msdu)) {
++		ath12k_warn(ab, "mon dest desc with inval buf_id %d\n", buf_id);
++		return 0;
++	}
++
++	dma_unmap_single(ab->dev, ATH12K_SKB_RXCB(msdu)->paddr,
++			 msdu->len + skb_tailroom(msdu),
++			 DMA_FROM_DEVICE);
++
++	offset = packet_info->dma_length + ATH12K_MON_RX_DOT11_OFFSET;
++	if (ath12k_dp_pkt_set_pktlen(msdu, offset)) {
++		dev_kfree_skb_any(msdu);
++		goto dest_replenish;
++	}
++
++	if (!pmon->mon_mpdu->head)
++		pmon->mon_mpdu->head = msdu;
++	else
++		pmon->mon_mpdu->tail->next = msdu;
++
++	pmon->mon_mpdu->tail = msdu;
++
++dest_replenish:
++	ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
++
++	return 0;
++}
++
++static int
++ath12k_dp_mon_parse_rx_dest_tlv(struct ath12k *ar,
++				struct ath12k_mon_data *pmon,
++				enum hal_rx_mon_status hal_status,
++				const void *tlv_data)
++{
++	switch (hal_status) {
++	case HAL_RX_MON_STATUS_MPDU_START:
++		if (WARN_ON_ONCE(pmon->mon_mpdu))
++			break;
++
++		pmon->mon_mpdu = kzalloc(sizeof(*pmon->mon_mpdu), GFP_ATOMIC);
++		if (!pmon->mon_mpdu)
++			return -ENOMEM;
++		break;
++	case HAL_RX_MON_STATUS_BUF_ADDR:
++		return ath12k_dp_mon_parse_status_buf(ar, pmon, tlv_data);
++	case HAL_RX_MON_STATUS_MPDU_END:
++		/* If no MSDU then free empty MPDU */
++		if (pmon->mon_mpdu->tail) {
++			pmon->mon_mpdu->tail->next = NULL;
++			list_add_tail(&pmon->mon_mpdu->list, &pmon->dp_rx_mon_mpdu_list);
++		} else {
++			kfree(pmon->mon_mpdu);
++		}
++		pmon->mon_mpdu = NULL;
++		break;
++	case HAL_RX_MON_STATUS_MSDU_END:
++		return ath12k_dp_mon_parse_status_msdu_end(pmon, tlv_data);
++	default:
++		break;
++	}
++
++	return 0;
++}
++
+ static enum hal_rx_mon_status
+ ath12k_dp_mon_parse_rx_dest(struct ath12k *ar, struct ath12k_mon_data *pmon,
+ 			    struct sk_buff *skb)
+@@ -2114,14 +2380,20 @@ ath12k_dp_mon_parse_rx_dest(struct ath12k *ar, struct ath12k_mon_data *pmon,
+ 			tlv_len = le64_get_bits(tlv->tl, HAL_TLV_64_HDR_LEN);
+ 
+ 		hal_status = ath12k_dp_mon_rx_parse_status_tlv(ar, pmon, tlv);
++
++		if (ar->monitor_started &&
++		    ath12k_dp_mon_parse_rx_dest_tlv(ar, pmon, hal_status, tlv->value))
++			return HAL_RX_MON_STATUS_PPDU_DONE;
++
+ 		ptr += sizeof(*tlv) + tlv_len;
+ 		ptr = PTR_ALIGN(ptr, HAL_TLV_64_ALIGN);
+ 
+-		if ((ptr - skb->data) >= DP_RX_BUFFER_SIZE)
++		if ((ptr - skb->data) > skb->len)
+ 			break;
+ 
+ 	} while ((hal_status == HAL_RX_MON_STATUS_PPDU_NOT_DONE) ||
+ 		 (hal_status == HAL_RX_MON_STATUS_BUF_ADDR) ||
++		 (hal_status == HAL_RX_MON_STATUS_MPDU_START) ||
+ 		 (hal_status == HAL_RX_MON_STATUS_MPDU_END) ||
+ 		 (hal_status == HAL_RX_MON_STATUS_MSDU_END));
+ 
+@@ -2141,23 +2413,21 @@ ath12k_dp_mon_rx_parse_mon_status(struct ath12k *ar,
+ 	struct hal_rx_mon_ppdu_info *ppdu_info = &pmon->mon_ppdu_info;
+ 	struct dp_mon_mpdu *tmp;
+ 	struct dp_mon_mpdu *mon_mpdu = pmon->mon_mpdu;
+-	struct sk_buff *head_msdu, *tail_msdu;
+-	enum hal_rx_mon_status hal_status = HAL_RX_MON_STATUS_BUF_DONE;
++	enum hal_rx_mon_status hal_status;
+ 
+-	ath12k_dp_mon_parse_rx_dest(ar, pmon, skb);
++	hal_status = ath12k_dp_mon_parse_rx_dest(ar, pmon, skb);
++	if (hal_status != HAL_RX_MON_STATUS_PPDU_DONE)
++		return hal_status;
+ 
+ 	list_for_each_entry_safe(mon_mpdu, tmp, &pmon->dp_rx_mon_mpdu_list, list) {
+ 		list_del(&mon_mpdu->list);
+-		head_msdu = mon_mpdu->head;
+-		tail_msdu = mon_mpdu->tail;
+ 
+-		if (head_msdu && tail_msdu) {
+-			ath12k_dp_mon_rx_deliver(ar, head_msdu,
+-						 tail_msdu, ppdu_info, napi);
+-		}
++		if (mon_mpdu->head && mon_mpdu->tail)
++			ath12k_dp_mon_rx_deliver(ar, mon_mpdu, ppdu_info, napi);
+ 
+ 		kfree(mon_mpdu);
+ 	}
++
+ 	return hal_status;
+ }
+ 
+@@ -2838,16 +3108,13 @@ ath12k_dp_mon_tx_process_ppdu_info(struct ath12k *ar,
+ 				   struct dp_mon_tx_ppdu_info *tx_ppdu_info)
+ {
+ 	struct dp_mon_mpdu *tmp, *mon_mpdu;
+-	struct sk_buff *head_msdu, *tail_msdu;
+ 
+ 	list_for_each_entry_safe(mon_mpdu, tmp,
+ 				 &tx_ppdu_info->dp_tx_mon_mpdu_list, list) {
+ 		list_del(&mon_mpdu->list);
+-		head_msdu = mon_mpdu->head;
+-		tail_msdu = mon_mpdu->tail;
+ 
+-		if (head_msdu)
+-			ath12k_dp_mon_rx_deliver(ar, head_msdu, tail_msdu,
++		if (mon_mpdu->head)
++			ath12k_dp_mon_rx_deliver(ar, mon_mpdu,
+ 						 &tx_ppdu_info->rx_status, napi);
+ 
+ 		kfree(mon_mpdu);
+@@ -3346,7 +3613,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ 		ath12k_dp_mon_rx_memset_ppdu_info(ppdu_info);
+ 
+ 	while ((skb = __skb_dequeue(&skb_list))) {
+-		hal_status = ath12k_dp_mon_parse_rx_dest(ar, pmon, skb);
++		hal_status = ath12k_dp_mon_rx_parse_mon_status(ar, pmon, skb, napi);
+ 		if (hal_status != HAL_RX_MON_STATUS_PPDU_DONE) {
+ 			ppdu_info->ppdu_continuation = true;
+ 			dev_kfree_skb_any(skb);
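
The consumer side of that per-PPDU queue, sketched with the same hypothetical mpdu_node from the dp.h note above: once PPDU_DONE is seen, every node is unlinked, delivered only if it actually carries MSDUs, and freed:

#include <linux/slab.h>

static void deliver(struct mpdu_node *m);	/* hypothetical delivery hook */

static void drain_mpdu_list(struct list_head *mpdu_list)
{
	struct mpdu_node *m, *tmp;

	list_for_each_entry_safe(m, tmp, mpdu_list, list) {
		list_del(&m->list);
		if (m->head && m->tail)	/* skip empty MPDUs */
			deliver(m);
		kfree(m);
	}
}
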
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.h b/drivers/net/wireless/ath/ath12k/dp_mon.h
+index e4368eb42aca83..b039f6b9277c69 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.h
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_DP_MON_H
+@@ -9,6 +9,8 @@
+ 
+ #include "core.h"
+ 
++#define ATH12K_MON_RX_DOT11_OFFSET	5
++
+ enum dp_monitor_mode {
+ 	ATH12K_DP_TX_MONITOR_MODE,
+ 	ATH12K_DP_RX_MONITOR_MODE
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 75bf4211ad4227..7fadd366ec13de 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -228,12 +228,6 @@ static void ath12k_dp_rx_desc_get_crypto_header(struct ath12k_base *ab,
+ 	ab->hal_rx_ops->rx_desc_get_crypto_header(desc, crypto_hdr, enctype);
+ }
+ 
+-static u16 ath12k_dp_rxdesc_get_mpdu_frame_ctrl(struct ath12k_base *ab,
+-						struct hal_rx_desc *desc)
+-{
+-	return ab->hal_rx_ops->rx_desc_get_mpdu_frame_ctl(desc);
+-}
+-
+ static inline u8 ath12k_dp_rx_get_msdu_src_link(struct ath12k_base *ab,
+ 						struct hal_rx_desc *desc)
+ {
+@@ -1823,6 +1817,7 @@ static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ 	struct hal_rx_desc *ldesc;
+ 	int space_extra, rem_len, buf_len;
+ 	u32 hal_rx_desc_sz = ar->ab->hal.hal_desc_sz;
++	bool is_continuation;
+ 
+ 	/* As the msdu is spread across multiple rx buffers,
+ 	 * find the offset to the start of msdu for computing
+@@ -1871,7 +1866,8 @@ static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ 	rem_len = msdu_len - buf_first_len;
+ 	while ((skb = __skb_dequeue(msdu_list)) != NULL && rem_len > 0) {
+ 		rxcb = ATH12K_SKB_RXCB(skb);
+-		if (rxcb->is_continuation)
++		is_continuation = rxcb->is_continuation;
++		if (is_continuation)
+ 			buf_len = DP_RX_BUFFER_SIZE - hal_rx_desc_sz;
+ 		else
+ 			buf_len = rem_len;
+@@ -1889,7 +1885,7 @@ static int ath12k_dp_rx_msdu_coalesce(struct ath12k *ar,
+ 		dev_kfree_skb_any(skb);
+ 
+ 		rem_len -= buf_len;
+-		if (!rxcb->is_continuation)
++		if (!is_continuation)
+ 			break;
+ 	}
+ 
+@@ -1914,21 +1910,14 @@ static struct sk_buff *ath12k_dp_rx_get_msdu_last_buf(struct sk_buff_head *msdu_
+ 	return NULL;
+ }
+ 
+-static void ath12k_dp_rx_h_csum_offload(struct ath12k *ar, struct sk_buff *msdu)
++static void ath12k_dp_rx_h_csum_offload(struct sk_buff *msdu,
++					struct ath12k_dp_rx_info *rx_info)
+ {
+-	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+-	struct ath12k_base *ab = ar->ab;
+-	bool ip_csum_fail, l4_csum_fail;
+-
+-	ip_csum_fail = ath12k_dp_rx_h_ip_cksum_fail(ab, rxcb->rx_desc);
+-	l4_csum_fail = ath12k_dp_rx_h_l4_cksum_fail(ab, rxcb->rx_desc);
+-
+-	msdu->ip_summed = (ip_csum_fail || l4_csum_fail) ?
+-			  CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
++	msdu->ip_summed = (rx_info->ip_csum_fail || rx_info->l4_csum_fail) ?
++			   CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
+ }
+ 
+-static int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar,
+-				       enum hal_encrypt_type enctype)
++int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, enum hal_encrypt_type enctype)
+ {
+ 	switch (enctype) {
+ 	case HAL_ENCRYPT_TYPE_OPEN:
+@@ -2122,10 +2111,13 @@ static void ath12k_get_dot11_hdr_from_rx_desc(struct ath12k *ar,
+ 	struct hal_rx_desc *rx_desc = rxcb->rx_desc;
+ 	struct ath12k_base *ab = ar->ab;
+ 	size_t hdr_len, crypto_len;
+-	struct ieee80211_hdr *hdr;
+-	u16 qos_ctl;
+-	__le16 fc;
+-	u8 *crypto_hdr;
++	struct ieee80211_hdr hdr;
++	__le16 qos_ctl;
++	u8 *crypto_hdr, mesh_ctrl;
++
++	ath12k_dp_rx_desc_get_dot11_hdr(ab, rx_desc, &hdr);
++	hdr_len = ieee80211_hdrlen(hdr.frame_control);
++	mesh_ctrl = ath12k_dp_rx_h_mesh_ctl_present(ab, rx_desc);
+ 
+ 	if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
+ 		crypto_len = ath12k_dp_rx_crypto_param_len(ar, enctype);
+@@ -2133,27 +2125,21 @@ static void ath12k_get_dot11_hdr_from_rx_desc(struct ath12k *ar,
+ 		ath12k_dp_rx_desc_get_crypto_header(ab, rx_desc, crypto_hdr, enctype);
+ 	}
+ 
+-	fc = cpu_to_le16(ath12k_dp_rxdesc_get_mpdu_frame_ctrl(ab, rx_desc));
+-	hdr_len = ieee80211_hdrlen(fc);
+ 	skb_push(msdu, hdr_len);
+-	hdr = (struct ieee80211_hdr *)msdu->data;
+-	hdr->frame_control = fc;
+-
+-	/* Get wifi header from rx_desc */
+-	ath12k_dp_rx_desc_get_dot11_hdr(ab, rx_desc, hdr);
++	memcpy(msdu->data, &hdr, min(hdr_len, sizeof(hdr)));
+ 
+ 	if (rxcb->is_mcbc)
+ 		status->flag &= ~RX_FLAG_PN_VALIDATED;
+ 
+ 	/* Add QOS header */
+-	if (ieee80211_is_data_qos(hdr->frame_control)) {
+-		qos_ctl = rxcb->tid;
+-		if (ath12k_dp_rx_h_mesh_ctl_present(ab, rx_desc))
+-			qos_ctl |= IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT;
++	if (ieee80211_is_data_qos(hdr.frame_control)) {
++		struct ieee80211_hdr *qos_ptr = (struct ieee80211_hdr *)msdu->data;
+ 
+-		/* TODO: Add other QoS ctl fields when required */
+-		memcpy(msdu->data + (hdr_len - IEEE80211_QOS_CTL_LEN),
+-		       &qos_ctl, IEEE80211_QOS_CTL_LEN);
++		qos_ctl = cpu_to_le16(rxcb->tid & IEEE80211_QOS_CTL_TID_MASK);
++		if (mesh_ctrl)
++			qos_ctl |= cpu_to_le16(IEEE80211_QOS_CTL_MESH_CONTROL_PRESENT);
++
++		memcpy(ieee80211_get_qos_ctl(qos_ptr), &qos_ctl, IEEE80211_QOS_CTL_LEN);
+ 	}
+ }
+ 
+@@ -2229,10 +2215,10 @@ static void ath12k_dp_rx_h_undecap(struct ath12k *ar, struct sk_buff *msdu,
+ }
+ 
+ struct ath12k_peer *
+-ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu)
++ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu,
++			 struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+-	struct hal_rx_desc *rx_desc = rxcb->rx_desc;
+ 	struct ath12k_peer *peer = NULL;
+ 
+ 	lockdep_assert_held(&ab->base_lock);
+@@ -2243,39 +2229,35 @@ ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu)
+ 	if (peer)
+ 		return peer;
+ 
+-	if (!rx_desc || !(ath12k_dp_rxdesc_mac_addr2_valid(ab, rx_desc)))
+-		return NULL;
++	if (rx_info->addr2_present)
++		peer = ath12k_peer_find_by_addr(ab, rx_info->addr2);
+ 
+-	peer = ath12k_peer_find_by_addr(ab,
+-					ath12k_dp_rxdesc_get_mpdu_start_addr2(ab,
+-									      rx_desc));
+ 	return peer;
+ }
+ 
+ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 				struct sk_buff *msdu,
+ 				struct hal_rx_desc *rx_desc,
+-				struct ieee80211_rx_status *rx_status)
++				struct ath12k_dp_rx_info *rx_info)
+ {
+-	bool  fill_crypto_hdr;
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ath12k_skb_rxcb *rxcb;
+ 	enum hal_encrypt_type enctype;
+ 	bool is_decrypted = false;
+ 	struct ieee80211_hdr *hdr;
+ 	struct ath12k_peer *peer;
++	struct ieee80211_rx_status *rx_status = rx_info->rx_status;
+ 	u32 err_bitmap;
+ 
+ 	/* PN for multicast packets will be checked in mac80211 */
+ 	rxcb = ATH12K_SKB_RXCB(msdu);
+-	fill_crypto_hdr = ath12k_dp_rx_h_is_da_mcbc(ar->ab, rx_desc);
+-	rxcb->is_mcbc = fill_crypto_hdr;
++	rxcb->is_mcbc = rx_info->is_mcbc;
+ 
+ 	if (rxcb->is_mcbc)
+-		rxcb->peer_id = ath12k_dp_rx_h_peer_id(ar->ab, rx_desc);
++		rxcb->peer_id = rx_info->peer_id;
+ 
+ 	spin_lock_bh(&ar->ab->base_lock);
+-	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu);
++	peer = ath12k_dp_rx_h_find_peer(ar->ab, msdu, rx_info);
+ 	if (peer) {
+ 		if (rxcb->is_mcbc)
+ 			enctype = peer->sec_type_grp;
+@@ -2305,7 +2287,7 @@ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 	if (is_decrypted) {
+ 		rx_status->flag |= RX_FLAG_DECRYPTED | RX_FLAG_MMIC_STRIPPED;
+ 
+-		if (fill_crypto_hdr)
++		if (rx_info->is_mcbc)
+ 			rx_status->flag |= RX_FLAG_MIC_STRIPPED |
+ 					RX_FLAG_ICV_STRIPPED;
+ 		else
+@@ -2313,37 +2295,28 @@ static void ath12k_dp_rx_h_mpdu(struct ath12k *ar,
+ 					   RX_FLAG_PN_VALIDATED;
+ 	}
+ 
+-	ath12k_dp_rx_h_csum_offload(ar, msdu);
++	ath12k_dp_rx_h_csum_offload(msdu, rx_info);
+ 	ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ 			       enctype, rx_status, is_decrypted);
+ 
+-	if (!is_decrypted || fill_crypto_hdr)
++	if (!is_decrypted || rx_info->is_mcbc)
+ 		return;
+ 
+-	if (ath12k_dp_rx_h_decap_type(ar->ab, rx_desc) !=
+-	    DP_RX_DECAP_TYPE_ETHERNET2_DIX) {
++	if (rx_info->decap_type != DP_RX_DECAP_TYPE_ETHERNET2_DIX) {
+ 		hdr = (void *)msdu->data;
+ 		hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
+ 	}
+ }
+ 
+-static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+-				struct ieee80211_rx_status *rx_status)
++static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct ath12k_dp_rx_info *rx_info)
+ {
+-	struct ath12k_base *ab = ar->ab;
+ 	struct ieee80211_supported_band *sband;
+-	enum rx_msdu_start_pkt_type pkt_type;
+-	u8 bw;
+-	u8 rate_mcs, nss;
+-	u8 sgi;
++	struct ieee80211_rx_status *rx_status = rx_info->rx_status;
++	enum rx_msdu_start_pkt_type pkt_type = rx_info->pkt_type;
++	u8 bw = rx_info->bw, sgi = rx_info->sgi;
++	u8 rate_mcs = rx_info->rate_mcs, nss = rx_info->nss;
+ 	bool is_cck;
+ 
+-	pkt_type = ath12k_dp_rx_h_pkt_type(ab, rx_desc);
+-	bw = ath12k_dp_rx_h_rx_bw(ab, rx_desc);
+-	rate_mcs = ath12k_dp_rx_h_rate_mcs(ab, rx_desc);
+-	nss = ath12k_dp_rx_h_nss(ab, rx_desc);
+-	sgi = ath12k_dp_rx_h_sgi(ab, rx_desc);
+-
+ 	switch (pkt_type) {
+ 	case RX_MSDU_START_PKT_TYPE_11A:
+ 	case RX_MSDU_START_PKT_TYPE_11B:
+@@ -2412,10 +2385,35 @@ static void ath12k_dp_rx_h_rate(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 	}
+ }
+ 
+-void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+-			 struct ieee80211_rx_status *rx_status)
++void ath12k_dp_rx_h_fetch_info(struct ath12k_base *ab, struct hal_rx_desc *rx_desc,
++			       struct ath12k_dp_rx_info *rx_info)
+ {
+-	struct ath12k_base *ab = ar->ab;
++	rx_info->ip_csum_fail = ath12k_dp_rx_h_ip_cksum_fail(ab, rx_desc);
++	rx_info->l4_csum_fail = ath12k_dp_rx_h_l4_cksum_fail(ab, rx_desc);
++	rx_info->is_mcbc = ath12k_dp_rx_h_is_da_mcbc(ab, rx_desc);
++	rx_info->decap_type = ath12k_dp_rx_h_decap_type(ab, rx_desc);
++	rx_info->pkt_type = ath12k_dp_rx_h_pkt_type(ab, rx_desc);
++	rx_info->sgi = ath12k_dp_rx_h_sgi(ab, rx_desc);
++	rx_info->rate_mcs = ath12k_dp_rx_h_rate_mcs(ab, rx_desc);
++	rx_info->bw = ath12k_dp_rx_h_rx_bw(ab, rx_desc);
++	rx_info->nss = ath12k_dp_rx_h_nss(ab, rx_desc);
++	rx_info->tid = ath12k_dp_rx_h_tid(ab, rx_desc);
++	rx_info->peer_id = ath12k_dp_rx_h_peer_id(ab, rx_desc);
++	rx_info->phy_meta_data = ath12k_dp_rx_h_freq(ab, rx_desc);
++
++	if (ath12k_dp_rxdesc_mac_addr2_valid(ab, rx_desc)) {
++		ether_addr_copy(rx_info->addr2,
++				ath12k_dp_rxdesc_get_mpdu_start_addr2(ab, rx_desc));
++		rx_info->addr2_present = true;
++	}
++
++	ath12k_dbg_dump(ab, ATH12K_DBG_DATA, NULL, "rx_desc: ",
++			rx_desc, sizeof(*rx_desc));
++}
++
++void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct ath12k_dp_rx_info *rx_info)
++{
++	struct ieee80211_rx_status *rx_status = rx_info->rx_status;
+ 	u8 channel_num;
+ 	u32 center_freq, meta_data;
+ 	struct ieee80211_channel *channel;
+@@ -2429,12 +2427,12 @@ void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 
+ 	rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
+ 
+-	meta_data = ath12k_dp_rx_h_freq(ab, rx_desc);
++	meta_data = rx_info->phy_meta_data;
+ 	channel_num = meta_data;
+ 	center_freq = meta_data >> 16;
+ 
+-	if (center_freq >= ATH12K_MIN_6G_FREQ &&
+-	    center_freq <= ATH12K_MAX_6G_FREQ) {
++	if (center_freq >= ATH12K_MIN_6GHZ_FREQ &&
++	    center_freq <= ATH12K_MAX_6GHZ_FREQ) {
+ 		rx_status->band = NL80211_BAND_6GHZ;
+ 		rx_status->freq = center_freq;
+ 	} else if (channel_num >= 1 && channel_num <= 14) {
+@@ -2450,20 +2448,18 @@ void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+ 				ieee80211_frequency_to_channel(channel->center_freq);
+ 		}
+ 		spin_unlock_bh(&ar->data_lock);
+-		ath12k_dbg_dump(ar->ab, ATH12K_DBG_DATA, NULL, "rx_desc: ",
+-				rx_desc, sizeof(*rx_desc));
+ 	}
+ 
+ 	if (rx_status->band != NL80211_BAND_6GHZ)
+ 		rx_status->freq = ieee80211_channel_to_frequency(channel_num,
+ 								 rx_status->band);
+ 
+-	ath12k_dp_rx_h_rate(ar, rx_desc, rx_status);
++	ath12k_dp_rx_h_rate(ar, rx_info);
+ }
+ 
+ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *napi,
+ 				      struct sk_buff *msdu,
+-				      struct ieee80211_rx_status *status)
++				      struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	static const struct ieee80211_radiotap_he known = {
+@@ -2476,6 +2472,7 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ 	struct ieee80211_sta *pubsta;
+ 	struct ath12k_peer *peer;
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
++	struct ieee80211_rx_status *status = rx_info->rx_status;
+ 	u8 decap = DP_RX_DECAP_TYPE_RAW;
+ 	bool is_mcbc = rxcb->is_mcbc;
+ 	bool is_eapol = rxcb->is_eapol;
+@@ -2488,10 +2485,10 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ 	}
+ 
+ 	if (!(status->flag & RX_FLAG_ONLY_MONITOR))
+-		decap = ath12k_dp_rx_h_decap_type(ab, rxcb->rx_desc);
++		decap = rx_info->decap_type;
+ 
+ 	spin_lock_bh(&ab->base_lock);
+-	peer = ath12k_dp_rx_h_find_peer(ab, msdu);
++	peer = ath12k_dp_rx_h_find_peer(ab, msdu, rx_info);
+ 
+ 	pubsta = peer ? peer->sta : NULL;
+ 
+@@ -2574,7 +2571,7 @@ static bool ath12k_dp_rx_check_nwifi_hdr_len_valid(struct ath12k_base *ab,
+ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ 				     struct sk_buff *msdu,
+ 				     struct sk_buff_head *msdu_list,
+-				     struct ieee80211_rx_status *rx_status)
++				     struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct hal_rx_desc *rx_desc, *lrx_desc;
+@@ -2634,10 +2631,11 @@ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ 		goto free_out;
+ 	}
+ 
+-	ath12k_dp_rx_h_ppdu(ar, rx_desc, rx_status);
+-	ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_status);
++	ath12k_dp_rx_h_fetch_info(ab, rx_desc, rx_info);
++	ath12k_dp_rx_h_ppdu(ar, rx_info);
++	ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_info);
+ 
+-	rx_status->flag |= RX_FLAG_SKIP_MONITOR | RX_FLAG_DUP_VALIDATED;
++	rx_info->rx_status->flag |= RX_FLAG_SKIP_MONITOR | RX_FLAG_DUP_VALIDATED;
+ 
+ 	return 0;
+ 
+@@ -2657,12 +2655,16 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 	struct ath12k *ar;
+ 	struct ath12k_hw_link *hw_links = ag->hw_links;
+ 	struct ath12k_base *partner_ab;
++	struct ath12k_dp_rx_info rx_info;
+ 	u8 hw_link_id, pdev_id;
+ 	int ret;
+ 
+ 	if (skb_queue_empty(msdu_list))
+ 		return;
+ 
++	rx_info.addr2_present = false;
++	rx_info.rx_status = &rx_status;
++
+ 	rcu_read_lock();
+ 
+ 	while ((msdu = __skb_dequeue(msdu_list))) {
+@@ -2683,7 +2685,7 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 			continue;
+ 		}
+ 
+-		ret = ath12k_dp_rx_process_msdu(ar, msdu, msdu_list, &rx_status);
++		ret = ath12k_dp_rx_process_msdu(ar, msdu, msdu_list, &rx_info);
+ 		if (ret) {
+ 			ath12k_dbg(ab, ATH12K_DBG_DATA,
+ 				   "Unable to process msdu %d", ret);
+@@ -2691,7 +2693,7 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
+ 			continue;
+ 		}
+ 
+-		ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_status);
++		ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_info);
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -2984,6 +2986,7 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	struct ieee80211_rx_status *rxs = IEEE80211_SKB_RXCB(msdu);
+ 	struct ieee80211_key_conf *key_conf;
+ 	struct ieee80211_hdr *hdr;
++	struct ath12k_dp_rx_info rx_info;
+ 	u8 mic[IEEE80211_CCMP_MIC_LEN];
+ 	int head_len, tail_len, ret;
+ 	size_t data_len;
+@@ -2994,6 +2997,9 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	if (ath12k_dp_rx_h_enctype(ab, rx_desc) != HAL_ENCRYPT_TYPE_TKIP_MIC)
+ 		return 0;
+ 
++	rx_info.addr2_present = false;
++	rx_info.rx_status = rxs;
++
+ 	hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_sz);
+ 	hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ 	head_len = hdr_len + hal_rx_desc_sz + IEEE80211_TKIP_IV_LEN;
+@@ -3020,6 +3026,8 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	(ATH12K_SKB_RXCB(msdu))->is_first_msdu = true;
+ 	(ATH12K_SKB_RXCB(msdu))->is_last_msdu = true;
+ 
++	ath12k_dp_rx_h_fetch_info(ab, rx_desc, &rx_info);
++
+ 	rxs->flag |= RX_FLAG_MMIC_ERROR | RX_FLAG_MMIC_STRIPPED |
+ 		    RX_FLAG_IV_STRIPPED | RX_FLAG_DECRYPTED;
+ 	skb_pull(msdu, hal_rx_desc_sz);
+@@ -3027,7 +3035,7 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ 	if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, rx_desc, msdu)))
+ 		return -EINVAL;
+ 
+-	ath12k_dp_rx_h_ppdu(ar, rx_desc, rxs);
++	ath12k_dp_rx_h_ppdu(ar, &rx_info);
+ 	ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ 			       HAL_ENCRYPT_TYPE_TKIP_MIC, rxs, true);
+ 	ieee80211_rx(ath12k_ar_to_hw(ar), msdu);
+@@ -3716,7 +3724,7 @@ static void ath12k_dp_rx_null_q_desc_sg_drop(struct ath12k *ar,
+ }
+ 
+ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+-				      struct ieee80211_rx_status *status,
++				      struct ath12k_dp_rx_info *rx_info,
+ 				      struct sk_buff_head *msdu_list)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+@@ -3772,11 +3780,11 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ 	if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
+ 		return -EINVAL;
+ 
+-	ath12k_dp_rx_h_ppdu(ar, desc, status);
+-
+-	ath12k_dp_rx_h_mpdu(ar, msdu, desc, status);
++	ath12k_dp_rx_h_fetch_info(ab, desc, rx_info);
++	ath12k_dp_rx_h_ppdu(ar, rx_info);
++	ath12k_dp_rx_h_mpdu(ar, msdu, desc, rx_info);
+ 
+-	rxcb->tid = ath12k_dp_rx_h_tid(ab, desc);
++	rxcb->tid = rx_info->tid;
+ 
+ 	/* Please note that caller will having the access to msdu and completing
+ 	 * rx with mac80211. Need not worry about cleaning up amsdu_list.
+@@ -3786,7 +3794,7 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ }
+ 
+ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+-				   struct ieee80211_rx_status *status,
++				   struct ath12k_dp_rx_info *rx_info,
+ 				   struct sk_buff_head *msdu_list)
+ {
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+@@ -3796,7 +3804,7 @@ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ 
+ 	switch (rxcb->err_code) {
+ 	case HAL_REO_DEST_RING_ERROR_CODE_DESC_ADDR_ZERO:
+-		if (ath12k_dp_rx_h_null_q_desc(ar, msdu, status, msdu_list))
++		if (ath12k_dp_rx_h_null_q_desc(ar, msdu, rx_info, msdu_list))
+ 			drop = true;
+ 		break;
+ 	case HAL_REO_DEST_RING_ERROR_CODE_PN_CHECK_FAILED:
+@@ -3817,7 +3825,7 @@ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ }
+ 
+ static bool ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+-					struct ieee80211_rx_status *status)
++					struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	u16 msdu_len;
+@@ -3831,24 +3839,33 @@ static bool ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ 
+ 	l3pad_bytes = ath12k_dp_rx_h_l3pad(ab, desc);
+ 	msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
++
++	if ((hal_rx_desc_sz + l3pad_bytes + msdu_len) > DP_RX_BUFFER_SIZE) {
++		ath12k_dbg(ab, ATH12K_DBG_DATA,
++			   "invalid msdu len in tkip mic err %u\n", msdu_len);
++		ath12k_dbg_dump(ab, ATH12K_DBG_DATA, NULL, "", desc,
++				sizeof(*desc));
++		return true;
++	}
++
+ 	skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ 	skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+ 
+ 	if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
+ 		return true;
+ 
+-	ath12k_dp_rx_h_ppdu(ar, desc, status);
++	ath12k_dp_rx_h_ppdu(ar, rx_info);
+ 
+-	status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
+-			 RX_FLAG_DECRYPTED);
++	rx_info->rx_status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
++				     RX_FLAG_DECRYPTED);
+ 
+ 	ath12k_dp_rx_h_undecap(ar, msdu, desc,
+-			       HAL_ENCRYPT_TYPE_TKIP_MIC, status, false);
++			       HAL_ENCRYPT_TYPE_TKIP_MIC, rx_info->rx_status, false);
+ 	return false;
+ }
+ 
+ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar,  struct sk_buff *msdu,
+-				     struct ieee80211_rx_status *status)
++				     struct ath12k_dp_rx_info *rx_info)
+ {
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+@@ -3863,7 +3880,8 @@ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar,  struct sk_buff *msdu,
+ 	case HAL_REO_ENTR_RING_RXDMA_ECODE_TKIP_MIC_ERR:
+ 		err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+ 		if (err_bitmap & HAL_RX_MPDU_ERR_TKIP_MIC) {
+-			drop = ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
++			ath12k_dp_rx_h_fetch_info(ab, rx_desc, rx_info);
++			drop = ath12k_dp_rx_h_tkip_mic_err(ar, msdu, rx_info);
+ 			break;
+ 		}
+ 		fallthrough;
+@@ -3885,14 +3903,18 @@ static void ath12k_dp_rx_wbm_err(struct ath12k *ar,
+ {
+ 	struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
+ 	struct ieee80211_rx_status rxs = {0};
++	struct ath12k_dp_rx_info rx_info;
+ 	bool drop = true;
+ 
++	rx_info.addr2_present = false;
++	rx_info.rx_status = &rxs;
++
+ 	switch (rxcb->err_rel_src) {
+ 	case HAL_WBM_REL_SRC_MODULE_REO:
+-		drop = ath12k_dp_rx_h_reo_err(ar, msdu, &rxs, msdu_list);
++		drop = ath12k_dp_rx_h_reo_err(ar, msdu, &rx_info, msdu_list);
+ 		break;
+ 	case HAL_WBM_REL_SRC_MODULE_RXDMA:
+-		drop = ath12k_dp_rx_h_rxdma_err(ar, msdu, &rxs);
++		drop = ath12k_dp_rx_h_rxdma_err(ar, msdu, &rx_info);
+ 		break;
+ 	default:
+ 		/* msdu will get freed */
+@@ -3904,7 +3926,7 @@ static void ath12k_dp_rx_wbm_err(struct ath12k *ar,
+ 		return;
+ 	}
+ 
+-	ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rxs);
++	ath12k_dp_rx_deliver_msdu(ar, napi, msdu, &rx_info);
+ }
+ 
+ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
+@@ -4480,6 +4502,8 @@ int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar)
+ 
+ 	pmon->mon_last_linkdesc_paddr = 0;
+ 	pmon->mon_last_buf_cookie = DP_RX_DESC_COOKIE_MAX + 1;
++	INIT_LIST_HEAD(&pmon->dp_rx_mon_mpdu_list);
++	pmon->mon_mpdu = NULL;
+ 	spin_lock_init(&pmon->mon_lock);
+ 
+ 	return 0;
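
A minimal sketch of the fetch-once pattern this rx rework adopts: every descriptor-derived field is read into a plain struct a single time (as ath12k_dp_rx_h_fetch_info() does above), and later helpers consume the snapshot instead of re-parsing the hardware descriptor. The descriptor layout below is invented purely for illustration:

#include <linux/bits.h>
#include <linux/skbuff.h>

struct hw_desc { u32 info0; };		/* hypothetical descriptor word */

struct rx_snapshot {
	bool ip_csum_fail;
	bool l4_csum_fail;
	u8 decap_type;
};

static void fetch_info(const struct hw_desc *d, struct rx_snapshot *s)
{
	s->ip_csum_fail = d->info0 & BIT(0);	/* invented bit positions */
	s->l4_csum_fail = d->info0 & BIT(1);
	s->decap_type = (d->info0 >> 2) & 0x3;
}

static void csum_offload(struct sk_buff *skb, const struct rx_snapshot *s)
{
	skb->ip_summed = (s->ip_csum_fail || s->l4_csum_fail) ?
			 CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
}
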
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.h b/drivers/net/wireless/ath/ath12k/dp_rx.h
+index 88e42365a9d8bc..a4e179c6f2664f 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.h
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.h
+@@ -65,6 +65,24 @@ struct ath12k_dp_rx_rfc1042_hdr {
+ 	__be16 snap_type;
+ } __packed;
+ 
++struct ath12k_dp_rx_info {
++	struct ieee80211_rx_status *rx_status;
++	u32 phy_meta_data;
++	u16 peer_id;
++	u8 decap_type;
++	u8 pkt_type;
++	u8 sgi;
++	u8 rate_mcs;
++	u8 bw;
++	u8 nss;
++	u8 addr2[ETH_ALEN];
++	u8 tid;
++	bool ip_csum_fail;
++	bool l4_csum_fail;
++	bool is_mcbc;
++	bool addr2_present;
++};
++
+ static inline u32 ath12k_he_gi_to_nl80211_he_gi(u8 sgi)
+ {
+ 	u32 ret = 0;
+@@ -131,13 +149,13 @@ int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev
+ u8 ath12k_dp_rx_h_l3pad(struct ath12k_base *ab,
+ 			struct hal_rx_desc *desc);
+ struct ath12k_peer *
+-ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu);
++ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu,
++			 struct ath12k_dp_rx_info *rx_info);
+ u8 ath12k_dp_rx_h_decap_type(struct ath12k_base *ab,
+ 			     struct hal_rx_desc *desc);
+ u32 ath12k_dp_rx_h_mpdu_err(struct ath12k_base *ab,
+ 			    struct hal_rx_desc *desc);
+-void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct hal_rx_desc *rx_desc,
+-			 struct ieee80211_rx_status *rx_status);
++void ath12k_dp_rx_h_ppdu(struct ath12k *ar, struct ath12k_dp_rx_info *rx_info);
+ int ath12k_dp_rxdma_ring_sel_config_qcn9274(struct ath12k_base *ab);
+ int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab);
+ 
+@@ -145,4 +163,9 @@ int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
+ 			   int (*iter)(struct ath12k_base *ar, u16 tag, u16 len,
+ 				       const void *ptr, void *data),
+ 			   void *data);
++void ath12k_dp_rx_h_fetch_info(struct ath12k_base *ab,  struct hal_rx_desc *rx_desc,
++			       struct ath12k_dp_rx_info *rx_info);
++
++int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, enum hal_encrypt_type enctype);
++
+ #endif /* ATH12K_DP_RX_H */
+diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
+index ced232bf4aed01..f82d2c58eff3f6 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
+@@ -229,7 +229,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 	struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb);
+ 	struct hal_tcl_data_cmd *hal_tcl_desc;
+ 	struct hal_tx_msdu_ext_desc *msg;
+-	struct sk_buff *skb_ext_desc;
++	struct sk_buff *skb_ext_desc = NULL;
+ 	struct hal_srng *tcl_ring;
+ 	struct ieee80211_hdr *hdr = (void *)skb->data;
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+@@ -415,18 +415,15 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 			if (ret < 0) {
+ 				ath12k_dbg(ab, ATH12K_DBG_DP_TX,
+ 					   "Failed to add HTT meta data, dropping packet\n");
+-				kfree_skb(skb_ext_desc);
+-				goto fail_unmap_dma;
++				goto fail_free_ext_skb;
+ 			}
+ 		}
+ 
+ 		ti.paddr = dma_map_single(ab->dev, skb_ext_desc->data,
+ 					  skb_ext_desc->len, DMA_TO_DEVICE);
+ 		ret = dma_mapping_error(ab->dev, ti.paddr);
+-		if (ret) {
+-			kfree_skb(skb_ext_desc);
+-			goto fail_unmap_dma;
+-		}
++		if (ret)
++			goto fail_free_ext_skb;
+ 
+ 		ti.data_len = skb_ext_desc->len;
+ 		ti.type = HAL_TCL_DESC_TYPE_EXT_DESC;
+@@ -462,7 +459,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 			ring_selector++;
+ 		}
+ 
+-		goto fail_unmap_dma;
++		goto fail_unmap_dma_ext;
+ 	}
+ 
+ 	ath12k_hal_tx_cmd_desc_setup(ab, hal_tcl_desc, &ti);
+@@ -478,13 +475,16 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_link_vif *arvif,
+ 
+ 	return 0;
+ 
+-fail_unmap_dma:
+-	dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE);
+-
++fail_unmap_dma_ext:
+ 	if (skb_cb->paddr_ext_desc)
+ 		dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ 				 sizeof(struct hal_tx_msdu_ext_desc),
+ 				 DMA_TO_DEVICE);
++fail_free_ext_skb:
++	kfree_skb(skb_ext_desc);
++
++fail_unmap_dma:
++	dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE);
+ 
+ fail_remove_tx_buf:
+ 	ath12k_dp_tx_release_txbuf(dp, tx_desc, pool_id);
+@@ -585,6 +585,7 @@ ath12k_dp_tx_process_htt_tx_complete(struct ath12k_base *ab,
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_TTL:
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ:
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT:
++	case HAL_WBM_REL_HTT_TX_COMP_STATUS_VDEVID_MISMATCH:
+ 		ath12k_dp_tx_free_txbuf(ab, msdu, mac_id, tx_ring);
+ 		break;
+ 	case HAL_WBM_REL_HTT_TX_COMP_STATUS_MEC_NOTIFY:
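
Note: the dp_tx.c change splits the old single fail_unmap_dma label so the extension
skb gets its own free/unmap stages and is no longer freed on paths that never
allocated it. A hedged sketch of the same staged-unwind idiom, with goto labels in
reverse acquisition order (resource names invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    static int setup(void)
    {
            char *buf, *ext;
            int fail_late = 1;              /* pretend a later step fails   */

            buf = malloc(64);               /* stage 1: primary buffer      */
            if (!buf)
                    goto fail;

            ext = malloc(16);               /* stage 2: optional extension  */
            if (!ext)
                    goto fail_free_buf;

            if (fail_late)
                    goto fail_free_ext;     /* unwind newest resource first */

            return 0;

    fail_free_ext:
            free(ext);                      /* undo stage 2 */
    fail_free_buf:
            free(buf);                      /* undo stage 1 */
    fail:
            return -1;
    }

    int main(void)
    {
            printf("setup() = %d\n", setup());
            return 0;
    }
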
+diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
+index cd59ff8e6c7b0c..d00869a33fea06 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.c
++++ b/drivers/net/wireless/ath/ath12k/hal.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include <linux/dma-mapping.h>
+ #include "hal_tx.h"
+@@ -511,11 +511,6 @@ static void ath12k_hw_qcn9274_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ 	crypto_hdr[7] = HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274.mpdu_start.pn[1]);
+ }
+ 
+-static u16 ath12k_hw_qcn9274_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+-{
+-	return __le16_to_cpu(desc->u.qcn9274.mpdu_start.frame_ctrl);
+-}
+-
+ static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+ {
+ 	struct ath12k_hal *hal = &ab->hal;
+@@ -552,9 +547,9 @@ static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_REO_REG + HAL_REO_STATUS_HP;
+ 
+ 	s = &hal->srng_config[HAL_TCL_DATA];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_HP;
+-	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB - HAL_TCL1_RING_BASE_LSB;
++	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_size[1] = HAL_TCL2_RING_HP - HAL_TCL1_RING_HP;
+ 
+ 	s = &hal->srng_config[HAL_TCL_CMD];
+@@ -566,29 +561,29 @@ static int ath12k_hal_srng_create_config_qcn9274(struct ath12k_base *ab)
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL_STATUS_RING_HP;
+ 
+ 	s = &hal->srng_config[HAL_CE_SRC];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST_STATUS];
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG +
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) +
+ 		HAL_CE_DST_STATUS_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_STATUS_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_WBM_IDLE_LINK];
+ 	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab);
+@@ -736,7 +731,6 @@ const struct hal_rx_ops hal_rx_qcn9274_ops = {
+ 	.rx_desc_is_da_mcbc = ath12k_hw_qcn9274_rx_desc_is_da_mcbc,
+ 	.rx_desc_get_dot11_hdr = ath12k_hw_qcn9274_rx_desc_get_dot11_hdr,
+ 	.rx_desc_get_crypto_header = ath12k_hw_qcn9274_rx_desc_get_crypto_hdr,
+-	.rx_desc_get_mpdu_frame_ctl = ath12k_hw_qcn9274_rx_desc_get_mpdu_frame_ctl,
+ 	.dp_rx_h_msdu_done = ath12k_hw_qcn9274_dp_rx_h_msdu_done,
+ 	.dp_rx_h_l4_cksum_fail = ath12k_hw_qcn9274_dp_rx_h_l4_cksum_fail,
+ 	.dp_rx_h_ip_cksum_fail = ath12k_hw_qcn9274_dp_rx_h_ip_cksum_fail,
+@@ -975,11 +969,6 @@ ath12k_hw_qcn9274_compact_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ 		HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.qcn9274_compact.mpdu_start.pn[1]);
+ }
+ 
+-static u16 ath12k_hw_qcn9274_compact_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+-{
+-	return __le16_to_cpu(desc->u.qcn9274_compact.mpdu_start.frame_ctrl);
+-}
+-
+ static bool ath12k_hw_qcn9274_compact_dp_rx_h_msdu_done(struct hal_rx_desc *desc)
+ {
+ 	return !!le32_get_bits(desc->u.qcn9274_compact.msdu_end.info14,
+@@ -1080,8 +1069,6 @@ const struct hal_rx_ops hal_rx_qcn9274_compact_ops = {
+ 	.rx_desc_is_da_mcbc = ath12k_hw_qcn9274_compact_rx_desc_is_da_mcbc,
+ 	.rx_desc_get_dot11_hdr = ath12k_hw_qcn9274_compact_rx_desc_get_dot11_hdr,
+ 	.rx_desc_get_crypto_header = ath12k_hw_qcn9274_compact_rx_desc_get_crypto_hdr,
+-	.rx_desc_get_mpdu_frame_ctl =
+-		ath12k_hw_qcn9274_compact_rx_desc_get_mpdu_frame_ctl,
+ 	.dp_rx_h_msdu_done = ath12k_hw_qcn9274_compact_dp_rx_h_msdu_done,
+ 	.dp_rx_h_l4_cksum_fail = ath12k_hw_qcn9274_compact_dp_rx_h_l4_cksum_fail,
+ 	.dp_rx_h_ip_cksum_fail = ath12k_hw_qcn9274_compact_dp_rx_h_ip_cksum_fail,
+@@ -1330,11 +1317,6 @@ static void ath12k_hw_wcn7850_rx_desc_get_crypto_hdr(struct hal_rx_desc *desc,
+ 	crypto_hdr[7] = HAL_RX_MPDU_INFO_PN_GET_BYTE2(desc->u.wcn7850.mpdu_start.pn[1]);
+ }
+ 
+-static u16 ath12k_hw_wcn7850_rx_desc_get_mpdu_frame_ctl(struct hal_rx_desc *desc)
+-{
+-	return __le16_to_cpu(desc->u.wcn7850.mpdu_start.frame_ctrl);
+-}
+-
+ static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+ {
+ 	struct ath12k_hal *hal = &ab->hal;
+@@ -1371,9 +1353,9 @@ static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+ 
+ 	s = &hal->srng_config[HAL_TCL_DATA];
+ 	s->max_rings = 5;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_TCL_REG + HAL_TCL1_RING_HP;
+-	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB - HAL_TCL1_RING_BASE_LSB;
++	s->reg_size[0] = HAL_TCL2_RING_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB(ab);
+ 	s->reg_size[1] = HAL_TCL2_RING_HP - HAL_TCL1_RING_HP;
+ 
+ 	s = &hal->srng_config[HAL_TCL_CMD];
+@@ -1386,31 +1368,31 @@ static int ath12k_hal_srng_create_config_wcn7850(struct ath12k_base *ab)
+ 
+ 	s = &hal->srng_config[HAL_CE_SRC];
+ 	s->max_rings = 12;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST];
+ 	s->max_rings = 12;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_BASE_LSB;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_CE_DST_STATUS];
+ 	s->max_rings = 12;
+-	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG +
++	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) +
+ 		HAL_CE_DST_STATUS_RING_BASE_LSB;
+-	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG + HAL_CE_DST_STATUS_RING_HP;
+-	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
+-	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG -
+-		HAL_SEQ_WCSS_UMAC_CE0_DST_REG;
++	s->reg_start[1] = HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) + HAL_CE_DST_STATUS_RING_HP;
++	s->reg_size[0] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
++	s->reg_size[1] = HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) -
++		HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab);
+ 
+ 	s = &hal->srng_config[HAL_WBM_IDLE_LINK];
+ 	s->reg_start[0] = HAL_SEQ_WCSS_UMAC_WBM_REG + HAL_WBM_IDLE_LINK_RING_BASE_LSB(ab);
+@@ -1555,7 +1537,6 @@ const struct hal_rx_ops hal_rx_wcn7850_ops = {
+ 	.rx_desc_is_da_mcbc = ath12k_hw_wcn7850_rx_desc_is_da_mcbc,
+ 	.rx_desc_get_dot11_hdr = ath12k_hw_wcn7850_rx_desc_get_dot11_hdr,
+ 	.rx_desc_get_crypto_header = ath12k_hw_wcn7850_rx_desc_get_crypto_hdr,
+-	.rx_desc_get_mpdu_frame_ctl = ath12k_hw_wcn7850_rx_desc_get_mpdu_frame_ctl,
+ 	.dp_rx_h_msdu_done = ath12k_hw_wcn7850_dp_rx_h_msdu_done,
+ 	.dp_rx_h_l4_cksum_fail = ath12k_hw_wcn7850_dp_rx_h_l4_cksum_fail,
+ 	.dp_rx_h_ip_cksum_fail = ath12k_hw_wcn7850_dp_rx_h_ip_cksum_fail,
+@@ -1756,7 +1737,7 @@ static void ath12k_hal_srng_src_hw_init(struct ath12k_base *ab,
+ 			      HAL_TCL1_RING_BASE_MSB_RING_BASE_ADDR_MSB) |
+ 	      u32_encode_bits((srng->entry_size * srng->num_entries),
+ 			      HAL_TCL1_RING_BASE_MSB_RING_SIZE);
+-	ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_BASE_MSB_OFFSET, val);
++	ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_BASE_MSB_OFFSET(ab), val);
+ 
+ 	val = u32_encode_bits(srng->entry_size, HAL_REO1_RING_ID_ENTRY_SIZE);
+ 	ath12k_hif_write32(ab, reg_base + HAL_TCL1_RING_ID_OFFSET(ab), val);
+diff --git a/drivers/net/wireless/ath/ath12k/hal.h b/drivers/net/wireless/ath/ath12k/hal.h
+index 94e2e873595831..c8205672cd3dd5 100644
+--- a/drivers/net/wireless/ath/ath12k/hal.h
++++ b/drivers/net/wireless/ath/ath12k/hal.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HAL_H
+@@ -44,10 +44,14 @@ struct ath12k_base;
+ #define HAL_SEQ_WCSS_UMAC_OFFSET		0x00a00000
+ #define HAL_SEQ_WCSS_UMAC_REO_REG		0x00a38000
+ #define HAL_SEQ_WCSS_UMAC_TCL_REG		0x00a44000
+-#define HAL_SEQ_WCSS_UMAC_CE0_SRC_REG		0x01b80000
+-#define HAL_SEQ_WCSS_UMAC_CE0_DST_REG		0x01b81000
+-#define HAL_SEQ_WCSS_UMAC_CE1_SRC_REG		0x01b82000
+-#define HAL_SEQ_WCSS_UMAC_CE1_DST_REG		0x01b83000
++#define HAL_SEQ_WCSS_UMAC_CE0_SRC_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce0_src_reg_base)
++#define HAL_SEQ_WCSS_UMAC_CE0_DST_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce0_dest_reg_base)
++#define HAL_SEQ_WCSS_UMAC_CE1_SRC_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce1_src_reg_base)
++#define HAL_SEQ_WCSS_UMAC_CE1_DST_REG(ab) \
++	((ab)->hw_params->regs->hal_umac_ce1_dest_reg_base)
+ #define HAL_SEQ_WCSS_UMAC_WBM_REG		0x00a34000
+ 
+ #define HAL_CE_WFSS_CE_REG_BASE			0x01b80000
+@@ -57,8 +61,10 @@ struct ath12k_base;
+ /* SW2TCL(x) R0 ring configuration address */
+ #define HAL_TCL1_RING_CMN_CTRL_REG		0x00000020
+ #define HAL_TCL1_RING_DSCP_TID_MAP		0x00000240
+-#define HAL_TCL1_RING_BASE_LSB			0x00000900
+-#define HAL_TCL1_RING_BASE_MSB			0x00000904
++#define HAL_TCL1_RING_BASE_LSB(ab) \
++	((ab)->hw_params->regs->hal_tcl1_ring_base_lsb)
++#define HAL_TCL1_RING_BASE_MSB(ab) \
++	((ab)->hw_params->regs->hal_tcl1_ring_base_msb)
+ #define HAL_TCL1_RING_ID(ab)			((ab)->hw_params->regs->hal_tcl1_ring_id)
+ #define HAL_TCL1_RING_MISC(ab) \
+ 	((ab)->hw_params->regs->hal_tcl1_ring_misc)
+@@ -76,30 +82,31 @@ struct ath12k_base;
+ 	((ab)->hw_params->regs->hal_tcl1_ring_msi1_base_msb)
+ #define HAL_TCL1_RING_MSI1_DATA(ab) \
+ 	((ab)->hw_params->regs->hal_tcl1_ring_msi1_data)
+-#define HAL_TCL2_RING_BASE_LSB			0x00000978
++#define HAL_TCL2_RING_BASE_LSB(ab) \
++	((ab)->hw_params->regs->hal_tcl2_ring_base_lsb)
+ #define HAL_TCL_RING_BASE_LSB(ab) \
+ 	((ab)->hw_params->regs->hal_tcl_ring_base_lsb)
+ 
+-#define HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab)				\
+-	(HAL_TCL1_RING_MSI1_BASE_LSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_MSI1_BASE_MSB_OFFSET(ab)				\
+-	(HAL_TCL1_RING_MSI1_BASE_MSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_MSI1_DATA_OFFSET(ab)				\
+-	(HAL_TCL1_RING_MSI1_DATA(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_BASE_MSB_OFFSET				\
+-	(HAL_TCL1_RING_BASE_MSB - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_ID_OFFSET(ab)				\
+-	(HAL_TCL1_RING_ID(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_OFFSET(ab)			\
+-	(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX0(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_CONSR_INT_SETUP_IX1_OFFSET(ab) \
+-		(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX1(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_TP_ADDR_LSB_OFFSET(ab) \
+-		(HAL_TCL1_RING_TP_ADDR_LSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_TP_ADDR_MSB_OFFSET(ab) \
+-		(HAL_TCL1_RING_TP_ADDR_MSB(ab) - HAL_TCL1_RING_BASE_LSB)
+-#define HAL_TCL1_RING_MISC_OFFSET(ab) \
+-		(HAL_TCL1_RING_MISC(ab) - HAL_TCL1_RING_BASE_LSB)
++#define HAL_TCL1_RING_MSI1_BASE_LSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MSI1_BASE_LSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_MSI1_BASE_MSB_OFFSET(ab)	({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MSI1_BASE_MSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_MSI1_DATA_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MSI1_DATA(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_BASE_MSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_BASE_MSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_ID_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_ID(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_CONSR_INT_SETUP_IX0_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX0(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_CONSR_INT_SETUP_IX1_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_CONSUMER_INT_SETUP_IX1(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_TP_ADDR_LSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_TP_ADDR_LSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_TP_ADDR_MSB_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_TP_ADDR_MSB(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
++#define HAL_TCL1_RING_MISC_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
++	(HAL_TCL1_RING_MISC(_ab) - HAL_TCL1_RING_BASE_LSB(_ab)); })
+ 
+ /* SW2TCL(x) R2 ring pointers (head/tail) address */
+ #define HAL_TCL1_RING_HP			0x00002000
+@@ -1068,7 +1075,6 @@ struct hal_rx_ops {
+ 	bool (*rx_desc_is_da_mcbc)(struct hal_rx_desc *desc);
+ 	void (*rx_desc_get_dot11_hdr)(struct hal_rx_desc *desc,
+ 				      struct ieee80211_hdr *hdr);
+-	u16 (*rx_desc_get_mpdu_frame_ctl)(struct hal_rx_desc *desc);
+ 	void (*rx_desc_get_crypto_header)(struct hal_rx_desc *desc,
+ 					  u8 *crypto_hdr,
+ 					  enum hal_encrypt_type enctype);
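
Note: the new offset macros in hal.h use a GNU C statement expression with typeof so
the ab argument is evaluated exactly once even though the subtraction names it twice.
A small sketch of why that matters (struct layout invented; requires the GCC/Clang
statement-expression and typeof extensions):

    #include <stdio.h>

    struct regs { unsigned int base_lsb; unsigned int misc; };
    struct base { const struct regs *regs; };

    #define RING_BASE_LSB(ab)       ((ab)->regs->base_lsb)
    #define RING_MISC(ab)           ((ab)->regs->misc)

    /* Statement expression: bind the argument to _ab once, then use it twice. */
    #define RING_MISC_OFFSET(ab) ({ typeof(ab) _ab = (ab); \
            RING_MISC(_ab) - RING_BASE_LSB(_ab); })

    static struct base *next_base(struct base *b)
    {
            printf("side effect runs once\n");
            return b;
    }

    int main(void)
    {
            static const struct regs r = { .base_lsb = 0x900, .misc = 0x930 };
            struct base b = { .regs = &r };

            /* next_base() is called once, not twice. */
            printf("offset = 0x%x\n", RING_MISC_OFFSET(next_base(&b)));
            return 0;
    }
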
+diff --git a/drivers/net/wireless/ath/ath12k/hal_desc.h b/drivers/net/wireless/ath/ath12k/hal_desc.h
+index 3e8983b85de863..63d279fab32249 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_desc.h
++++ b/drivers/net/wireless/ath/ath12k/hal_desc.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2022, 2024-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ #include "core.h"
+ 
+@@ -1298,6 +1298,7 @@ enum hal_wbm_htt_tx_comp_status {
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ,
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT,
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_MEC_NOTIFY,
++	HAL_WBM_REL_HTT_TX_COMP_STATUS_VDEVID_MISMATCH,
+ 	HAL_WBM_REL_HTT_TX_COMP_STATUS_MAX,
+ };
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.h b/drivers/net/wireless/ath/ath12k/hal_rx.h
+index 6bdcd0867d86e3..c753eb2a03ad24 100644
+--- a/drivers/net/wireless/ath/ath12k/hal_rx.h
++++ b/drivers/net/wireless/ath/ath12k/hal_rx.h
+@@ -108,11 +108,12 @@ enum hal_rx_mon_status {
+ 	HAL_RX_MON_STATUS_PPDU_DONE,
+ 	HAL_RX_MON_STATUS_BUF_DONE,
+ 	HAL_RX_MON_STATUS_BUF_ADDR,
++	HAL_RX_MON_STATUS_MPDU_START,
+ 	HAL_RX_MON_STATUS_MPDU_END,
+ 	HAL_RX_MON_STATUS_MSDU_END,
+ };
+ 
+-#define HAL_RX_MAX_MPDU		256
++#define HAL_RX_MAX_MPDU				1024
+ #define HAL_RX_NUM_WORDS_PER_PPDU_BITMAP	(HAL_RX_MAX_MPDU >> 5)
+ 
+ struct hal_rx_user_status {
+@@ -506,6 +507,18 @@ struct hal_rx_mpdu_start {
+ 	__le32 rsvd2[16];
+ } __packed;
+ 
++struct hal_rx_msdu_end {
++	__le32 info0;
++	__le32 rsvd0[9];
++	__le16 info00;
++	__le16 info01;
++	__le32 rsvd00[8];
++	__le32 info1;
++	__le32 rsvd1[10];
++	__le32 info2;
++	__le32 rsvd2;
++} __packed;
++
+ #define HAL_RX_PPDU_END_DURATION	GENMASK(23, 0)
+ struct hal_rx_ppdu_end_duration {
+ 	__le32 rsvd0[9];
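
Note: HAL_RX_MAX_MPDU grows from 256 to 1024 above, and the companion macro derives
the per-PPDU bitmap size with a right shift by 5, i.e. division by the 32 bits held in
each u32 word. A standalone sketch of the sizing arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_MPDU                1024
    #define WORDS_PER_BITMAP        (MAX_MPDU >> 5)  /* >> 5 == / 32 bits per u32 */

    static uint32_t bitmap[WORDS_PER_BITMAP];

    static void set_mpdu(unsigned int id)
    {
            bitmap[id >> 5] |= 1u << (id & 31);       /* word index, bit index */
    }

    static int test_mpdu(unsigned int id)
    {
            return (bitmap[id >> 5] >> (id & 31)) & 1;
    }

    int main(void)
    {
            set_mpdu(1023);
            printf("words=%d, mpdu 1023 set=%d\n", WORDS_PER_BITMAP, test_mpdu(1023));
            return 0;
    }
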
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index a106ebed7870de..1bfb11bae7add3 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/types.h>
+@@ -619,6 +619,9 @@ static const struct ath12k_hw_regs qcn9274_v1_regs = {
+ 	.hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ 	.hal_tcl1_ring_msi1_data = 0x00000950,
+ 	.hal_tcl_ring_base_lsb = 0x00000b58,
++	.hal_tcl1_ring_base_lsb = 0x00000900,
++	.hal_tcl1_ring_base_msb = 0x00000904,
++	.hal_tcl2_ring_base_lsb = 0x00000978,
+ 
+ 	/* TCL STATUS ring address */
+ 	.hal_tcl_status_ring_base_lsb = 0x00000d38,
+@@ -681,6 +684,14 @@ static const struct ath12k_hw_regs qcn9274_v1_regs = {
+ 
+ 	/* REO status ring address */
+ 	.hal_reo_status_ring_base = 0x00000a84,
++
++	/* CE base address */
++	.hal_umac_ce0_src_reg_base = 0x01b80000,
++	.hal_umac_ce0_dest_reg_base = 0x01b81000,
++	.hal_umac_ce1_src_reg_base = 0x01b82000,
++	.hal_umac_ce1_dest_reg_base = 0x01b83000,
++
++	.gcc_gcc_pcie_hot_rst = 0x1e38338,
+ };
+ 
+ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+@@ -695,6 +706,9 @@ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+ 	.hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ 	.hal_tcl1_ring_msi1_data = 0x00000950,
+ 	.hal_tcl_ring_base_lsb = 0x00000b58,
++	.hal_tcl1_ring_base_lsb = 0x00000900,
++	.hal_tcl1_ring_base_msb = 0x00000904,
++	.hal_tcl2_ring_base_lsb = 0x00000978,
+ 
+ 	/* TCL STATUS ring address */
+ 	.hal_tcl_status_ring_base_lsb = 0x00000d38,
+@@ -761,6 +775,14 @@ static const struct ath12k_hw_regs qcn9274_v2_regs = {
+ 
+ 	/* REO status ring address */
+ 	.hal_reo_status_ring_base = 0x00000aa0,
++
++	/* CE base address */
++	.hal_umac_ce0_src_reg_base = 0x01b80000,
++	.hal_umac_ce0_dest_reg_base = 0x01b81000,
++	.hal_umac_ce1_src_reg_base = 0x01b82000,
++	.hal_umac_ce1_dest_reg_base = 0x01b83000,
++
++	.gcc_gcc_pcie_hot_rst = 0x1e38338,
+ };
+ 
+ static const struct ath12k_hw_regs wcn7850_regs = {
+@@ -775,6 +797,9 @@ static const struct ath12k_hw_regs wcn7850_regs = {
+ 	.hal_tcl1_ring_msi1_base_msb = 0x0000094c,
+ 	.hal_tcl1_ring_msi1_data = 0x00000950,
+ 	.hal_tcl_ring_base_lsb = 0x00000b58,
++	.hal_tcl1_ring_base_lsb = 0x00000900,
++	.hal_tcl1_ring_base_msb = 0x00000904,
++	.hal_tcl2_ring_base_lsb = 0x00000978,
+ 
+ 	/* TCL STATUS ring address */
+ 	.hal_tcl_status_ring_base_lsb = 0x00000d38,
+@@ -837,6 +862,14 @@ static const struct ath12k_hw_regs wcn7850_regs = {
+ 
+ 	/* REO status ring address */
+ 	.hal_reo_status_ring_base = 0x00000a84,
++
++	/* CE base address */
++	.hal_umac_ce0_src_reg_base = 0x01b80000,
++	.hal_umac_ce0_dest_reg_base = 0x01b81000,
++	.hal_umac_ce1_src_reg_base = 0x01b82000,
++	.hal_umac_ce1_dest_reg_base = 0x01b83000,
++
++	.gcc_gcc_pcie_hot_rst = 0x1e40304,
+ };
+ 
+ static const struct ath12k_hw_hal_params ath12k_hw_hal_params_qcn9274 = {
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 8d52182e28aef4..862b11325a9021 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: BSD-3-Clause-Clear */
+ /*
+  * Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #ifndef ATH12K_HW_H
+@@ -293,6 +293,9 @@ struct ath12k_hw_regs {
+ 	u32 hal_tcl1_ring_msi1_base_msb;
+ 	u32 hal_tcl1_ring_msi1_data;
+ 	u32 hal_tcl_ring_base_lsb;
++	u32 hal_tcl1_ring_base_lsb;
++	u32 hal_tcl1_ring_base_msb;
++	u32 hal_tcl2_ring_base_lsb;
+ 
+ 	u32 hal_tcl_status_ring_base_lsb;
+ 
+@@ -316,6 +319,11 @@ struct ath12k_hw_regs {
+ 	u32 pcie_qserdes_sysclk_en_sel;
+ 	u32 pcie_pcs_osc_dtct_config_base;
+ 
++	u32 hal_umac_ce0_src_reg_base;
++	u32 hal_umac_ce0_dest_reg_base;
++	u32 hal_umac_ce1_src_reg_base;
++	u32 hal_umac_ce1_dest_reg_base;
++
+ 	u32 hal_ppe_rel_ring_base;
+ 
+ 	u32 hal_reo2_ring_base;
+@@ -347,6 +355,8 @@ struct ath12k_hw_regs {
+ 	u32 hal_reo_cmd_ring_base;
+ 
+ 	u32 hal_reo_status_ring_base;
++
++	u32 gcc_gcc_pcie_hot_rst;
+ };
+ 
+ static inline const char *ath12k_bd_ie_type_str(enum ath12k_bd_ie_type type)
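
Note: the hw.c/hw.h hunks move the hardcoded CE and PCIe hot-reset addresses into the
per-chip ath12k_hw_regs table, so each SoC supplies its own offsets and the HAL macros
just dereference the table. A sketch of the table-driven lookup (struct names
simplified; the two hot-reset offsets are the ones visible in the hunks above):

    #include <stdio.h>

    struct hw_regs { unsigned int ce0_src; unsigned int pcie_hot_rst; };
    struct hw_params { const struct hw_regs *regs; };
    struct base { const struct hw_params *hw_params; };

    /* Accessors mirror the macro style: offsets come from the chip table. */
    #define CE0_SRC_REG(ab)         ((ab)->hw_params->regs->ce0_src)
    #define PCIE_HOT_RST(ab)        ((ab)->hw_params->regs->pcie_hot_rst)

    static const struct hw_regs qcn_like_regs = { 0x01b80000, 0x1e38338 };
    static const struct hw_regs wcn_like_regs = { 0x01b80000, 0x1e40304 };

    static const struct hw_params qcn_like = { .regs = &qcn_like_regs };
    static const struct hw_params wcn_like = { .regs = &wcn_like_regs };

    int main(void)
    {
            struct base a = { .hw_params = &qcn_like };
            struct base b = { .hw_params = &wcn_like };

            printf("hot_rst: 0x%x vs 0x%x\n", PCIE_HOT_RST(&a), PCIE_HOT_RST(&b));
            return 0;
    }
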
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index dfa05f0ee6c9f7..331bcf5e6c4cce 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -229,7 +229,8 @@ ath12k_phymodes[NUM_NL80211_BANDS][ATH12K_CHAN_WIDTH_NUM] = {
+ const struct htt_rx_ring_tlv_filter ath12k_mac_mon_status_filter_default = {
+ 	.rx_filter = HTT_RX_FILTER_TLV_FLAGS_MPDU_START |
+ 		     HTT_RX_FILTER_TLV_FLAGS_PPDU_END |
+-		     HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE,
++		     HTT_RX_FILTER_TLV_FLAGS_PPDU_END_STATUS_DONE |
++		     HTT_RX_FILTER_TLV_FLAGS_PPDU_START_USER_INFO,
+ 	.pkt_filter_flags0 = HTT_RX_FP_MGMT_FILTER_FLAGS0,
+ 	.pkt_filter_flags1 = HTT_RX_FP_MGMT_FILTER_FLAGS1,
+ 	.pkt_filter_flags2 = HTT_RX_FP_CTRL_FILTER_FLASG2,
+@@ -874,12 +875,12 @@ static bool ath12k_mac_band_match(enum nl80211_band band1, enum WMI_HOST_WLAN_BA
+ {
+ 	switch (band1) {
+ 	case NL80211_BAND_2GHZ:
+-		if (band2 & WMI_HOST_WLAN_2G_CAP)
++		if (band2 & WMI_HOST_WLAN_2GHZ_CAP)
+ 			return true;
+ 		break;
+ 	case NL80211_BAND_5GHZ:
+ 	case NL80211_BAND_6GHZ:
+-		if (band2 & WMI_HOST_WLAN_5G_CAP)
++		if (band2 & WMI_HOST_WLAN_5GHZ_CAP)
+ 			return true;
+ 		break;
+ 	default:
+@@ -980,7 +981,7 @@ static int ath12k_mac_txpower_recalc(struct ath12k *ar)
+ 	ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "txpower to set in hw %d\n",
+ 		   txpower / 2);
+ 
+-	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) &&
++	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP) &&
+ 	    ar->txpower_limit_2g != txpower) {
+ 		param = WMI_PDEV_PARAM_TXPOWER_LIMIT2G;
+ 		ret = ath12k_wmi_pdev_set_param(ar, param,
+@@ -990,7 +991,7 @@ static int ath12k_mac_txpower_recalc(struct ath12k *ar)
+ 		ar->txpower_limit_2g = txpower;
+ 	}
+ 
+-	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) &&
++	if ((pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP) &&
+ 	    ar->txpower_limit_5g != txpower) {
+ 		param = WMI_PDEV_PARAM_TXPOWER_LIMIT5G;
+ 		ret = ath12k_wmi_pdev_set_param(ar, param,
+@@ -1272,12 +1273,12 @@ static int ath12k_mac_monitor_vdev_create(struct ath12k *ar)
+ 	arg.pdev_id = pdev->pdev_id;
+ 	arg.if_stats_id = ATH12K_INVAL_VDEV_STATS_ID;
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		arg.chains[NL80211_BAND_2GHZ].tx = ar->num_tx_chains;
+ 		arg.chains[NL80211_BAND_2GHZ].rx = ar->num_rx_chains;
+ 	}
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		arg.chains[NL80211_BAND_5GHZ].tx = ar->num_tx_chains;
+ 		arg.chains[NL80211_BAND_5GHZ].rx = ar->num_rx_chains;
+ 	}
+@@ -3988,7 +3989,7 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ 		else
+ 			rateidx = ffs(info->basic_rates) - 1;
+ 
+-		if (ar->pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP)
++		if (ar->pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP)
+ 			rateidx += ATH12K_MAC_FIRST_OFDM_RATE_IDX;
+ 
+ 		bitrate = ath12k_legacy_rates[rateidx].bitrate;
+@@ -4162,9 +4163,9 @@ ath12k_mac_select_scan_device(struct ieee80211_hw *hw,
+ 	 * split the hw request and perform multiple scans
+ 	 */
+ 
+-	if (center_freq < ATH12K_MIN_5G_FREQ)
++	if (center_freq < ATH12K_MIN_5GHZ_FREQ)
+ 		band = NL80211_BAND_2GHZ;
+-	else if (center_freq < ATH12K_MIN_6G_FREQ)
++	else if (center_freq < ATH12K_MIN_6GHZ_FREQ)
+ 		band = NL80211_BAND_5GHZ;
+ 	else
+ 		band = NL80211_BAND_6GHZ;
+@@ -4605,7 +4606,6 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		.macaddr = macaddr,
+ 	};
+ 	struct ath12k_vif *ahvif = arvif->ahvif;
+-	struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+ 
+ 	lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+ 
+@@ -4624,8 +4624,8 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 
+ 	switch (key->cipher) {
+ 	case WLAN_CIPHER_SUITE_CCMP:
++	case WLAN_CIPHER_SUITE_CCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_CCM;
+-		/* TODO: Re-check if flag is valid */
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ 		break;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+@@ -4633,12 +4633,10 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 		arg.key_txmic_len = 8;
+ 		arg.key_rxmic_len = 8;
+ 		break;
+-	case WLAN_CIPHER_SUITE_CCMP_256:
+-		arg.key_cipher = WMI_CIPHER_AES_CCM;
+-		break;
+ 	case WLAN_CIPHER_SUITE_GCMP:
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 		arg.key_cipher = WMI_CIPHER_AES_GCM;
++		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
+ 		break;
+ 	default:
+ 		ath12k_warn(ar->ab, "cipher %d is not supported\n", key->cipher);
+@@ -4658,7 +4656,7 @@ static int ath12k_install_key(struct ath12k_link_vif *arvif,
+ 	if (!wait_for_completion_timeout(&ar->install_key_done, 1 * HZ))
+ 		return -ETIMEDOUT;
+ 
+-	if (ether_addr_equal(macaddr, vif->addr))
++	if (ether_addr_equal(macaddr, arvif->bssid))
+ 		ahvif->key_cipher = key->cipher;
+ 
+ 	return ar->install_key_status ? -EINVAL : 0;
+@@ -6475,7 +6473,7 @@ static void ath12k_mac_setup_ht_vht_cap(struct ath12k *ar,
+ 	rate_cap_tx_chainmask = ar->cfg_tx_chainmask >> cap->tx_chain_mask_shift;
+ 	rate_cap_rx_chainmask = ar->cfg_rx_chainmask >> cap->rx_chain_mask_shift;
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		band = &ar->mac.sbands[NL80211_BAND_2GHZ];
+ 		ht_cap = cap->band[NL80211_BAND_2GHZ].ht_cap_info;
+ 		if (ht_cap_info)
+@@ -6484,7 +6482,7 @@ static void ath12k_mac_setup_ht_vht_cap(struct ath12k *ar,
+ 						    rate_cap_rx_chainmask);
+ 	}
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
++	if (cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP &&
+ 	    (ar->ab->hw_params->single_pdev_only ||
+ 	     !ar->supports_6ghz)) {
+ 		band = &ar->mac.sbands[NL80211_BAND_5GHZ];
+@@ -6893,7 +6891,7 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
+ 	enum nl80211_band band;
+ 	int count;
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		band = NL80211_BAND_2GHZ;
+ 		count = ath12k_mac_copy_sband_iftype_data(ar, cap,
+ 							  ar->mac.iftype[band],
+@@ -6903,7 +6901,7 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
+ 						 count);
+ 	}
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP) {
++	if (cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		band = NL80211_BAND_5GHZ;
+ 		count = ath12k_mac_copy_sband_iftype_data(ar, cap,
+ 							  ar->mac.iftype[band],
+@@ -6913,7 +6911,7 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
+ 						 count);
+ 	}
+ 
+-	if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
++	if (cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP &&
+ 	    ar->supports_6ghz) {
+ 		band = NL80211_BAND_6GHZ;
+ 		count = ath12k_mac_copy_sband_iftype_data(ar, cap,
+@@ -7042,6 +7040,8 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+ 	struct ath12k_base *ab = ar->ab;
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct ieee80211_tx_info *info;
++	enum hal_encrypt_type enctype;
++	unsigned int mic_len;
+ 	dma_addr_t paddr;
+ 	int buf_id;
+ 	int ret;
+@@ -7057,12 +7057,16 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+ 		return -ENOSPC;
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+-	if (!(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)) {
++	if ((ATH12K_SKB_CB(skb)->flags & ATH12K_SKB_CIPHER_SET) &&
++	    !(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)) {
+ 		if ((ieee80211_is_action(hdr->frame_control) ||
+ 		     ieee80211_is_deauth(hdr->frame_control) ||
+ 		     ieee80211_is_disassoc(hdr->frame_control)) &&
+ 		     ieee80211_has_protected(hdr->frame_control)) {
+-			skb_put(skb, IEEE80211_CCMP_MIC_LEN);
++			enctype =
++			    ath12k_dp_tx_get_encrypt_type(ATH12K_SKB_CB(skb)->cipher);
++			mic_len = ath12k_dp_rx_crypto_mic_len(ar, enctype);
++			skb_put(skb, mic_len);
+ 		}
+ 	}
+ 
+@@ -7429,7 +7433,6 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ 								info_flags);
+ 
+ 			skb_cb = ATH12K_SKB_CB(msdu_copied);
+-			info = IEEE80211_SKB_CB(msdu_copied);
+ 			skb_cb->link_id = link_id;
+ 
+ 			/* For open mode, skip peer find logic */
+@@ -7452,7 +7455,6 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+ 			if (key) {
+ 				skb_cb->cipher = key->cipher;
+ 				skb_cb->flags |= ATH12K_SKB_CIPHER_SET;
+-				info->control.hw_key = key;
+ 
+ 				hdr = (struct ieee80211_hdr *)msdu_copied->data;
+ 				if (!ieee80211_has_protected(hdr->frame_control))
+@@ -7903,15 +7905,15 @@ static int ath12k_mac_setup_vdev_create_arg(struct ath12k_link_vif *arvif,
+ 			return ret;
+ 	}
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		arg->chains[NL80211_BAND_2GHZ].tx = ar->num_tx_chains;
+ 		arg->chains[NL80211_BAND_2GHZ].rx = ar->num_rx_chains;
+ 	}
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP) {
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		arg->chains[NL80211_BAND_5GHZ].tx = ar->num_tx_chains;
+ 		arg->chains[NL80211_BAND_5GHZ].rx = ar->num_rx_chains;
+ 	}
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5G_CAP &&
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_5GHZ_CAP &&
+ 	    ar->supports_6ghz) {
+ 		arg->chains[NL80211_BAND_6GHZ].tx = ar->num_tx_chains;
+ 		arg->chains[NL80211_BAND_6GHZ].rx = ar->num_rx_chains;
+@@ -7940,7 +7942,7 @@ ath12k_mac_prepare_he_mode(struct ath12k_pdev *pdev, u32 viftype)
+ 	u32 *hecap_phy_ptr = NULL;
+ 	u32 hemode;
+ 
+-	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2G_CAP)
++	if (pdev->cap.supported_bands & WMI_HOST_WLAN_2GHZ_CAP)
+ 		cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
+ 	else
+ 		cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
+@@ -9462,8 +9464,8 @@ ath12k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
+ 
+ 	ar = ath12k_mac_assign_vif_to_vdev(hw, arvif, ctx);
+ 	if (!ar) {
+-		ath12k_warn(arvif->ar->ab, "failed to assign chanctx for vif %pM link id %u link vif is already started",
+-			    vif->addr, link_id);
++		ath12k_hw_warn(ah, "failed to assign chanctx for vif %pM link id %u link vif is already started",
++			       vif->addr, link_id);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -10640,10 +10642,10 @@ static u32 ath12k_get_phy_id(struct ath12k *ar, u32 band)
+ 	struct ath12k_pdev *pdev = ar->pdev;
+ 	struct ath12k_pdev_cap *pdev_cap = &pdev->cap;
+ 
+-	if (band == WMI_HOST_WLAN_2G_CAP)
++	if (band == WMI_HOST_WLAN_2GHZ_CAP)
+ 		return pdev_cap->band[NL80211_BAND_2GHZ].phy_id;
+ 
+-	if (band == WMI_HOST_WLAN_5G_CAP)
++	if (band == WMI_HOST_WLAN_5GHZ_CAP)
+ 		return pdev_cap->band[NL80211_BAND_5GHZ].phy_id;
+ 
+ 	ath12k_warn(ar->ab, "unsupported phy cap:%d\n", band);
+@@ -10668,7 +10670,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 
+ 	reg_cap = &ar->ab->hal_reg_cap[ar->pdev_idx];
+ 
+-	if (supported_bands & WMI_HOST_WLAN_2G_CAP) {
++	if (supported_bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		channels = kmemdup(ath12k_2ghz_channels,
+ 				   sizeof(ath12k_2ghz_channels),
+ 				   GFP_KERNEL);
+@@ -10684,7 +10686,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 		bands[NL80211_BAND_2GHZ] = band;
+ 
+ 		if (ar->ab->hw_params->single_pdev_only) {
+-			phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_2G_CAP);
++			phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_2GHZ_CAP);
+ 			reg_cap = &ar->ab->hal_reg_cap[phy_id];
+ 		}
+ 		ath12k_mac_update_ch_list(ar, band,
+@@ -10692,8 +10694,8 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 					  reg_cap->high_2ghz_chan);
+ 	}
+ 
+-	if (supported_bands & WMI_HOST_WLAN_5G_CAP) {
+-		if (reg_cap->high_5ghz_chan >= ATH12K_MIN_6G_FREQ) {
++	if (supported_bands & WMI_HOST_WLAN_5GHZ_CAP) {
++		if (reg_cap->high_5ghz_chan >= ATH12K_MIN_6GHZ_FREQ) {
+ 			channels = kmemdup(ath12k_6ghz_channels,
+ 					   sizeof(ath12k_6ghz_channels), GFP_KERNEL);
+ 			if (!channels) {
+@@ -10715,7 +10717,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 			ah->use_6ghz_regd = true;
+ 		}
+ 
+-		if (reg_cap->low_5ghz_chan < ATH12K_MIN_6G_FREQ) {
++		if (reg_cap->low_5ghz_chan < ATH12K_MIN_6GHZ_FREQ) {
+ 			channels = kmemdup(ath12k_5ghz_channels,
+ 					   sizeof(ath12k_5ghz_channels),
+ 					   GFP_KERNEL);
+@@ -10734,7 +10736,7 @@ static int ath12k_mac_setup_channels_rates(struct ath12k *ar,
+ 			bands[NL80211_BAND_5GHZ] = band;
+ 
+ 			if (ar->ab->hw_params->single_pdev_only) {
+-				phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_5G_CAP);
++				phy_id = ath12k_get_phy_id(ar, WMI_HOST_WLAN_5GHZ_CAP);
+ 				reg_cap = &ar->ab->hal_reg_cap[phy_id];
+ 			}
+ 
+@@ -11572,7 +11574,6 @@ void ath12k_mac_mlo_teardown(struct ath12k_hw_group *ag)
+ 
+ int ath12k_mac_register(struct ath12k_hw_group *ag)
+ {
+-	struct ath12k_base *ab = ag->ab[0];
+ 	struct ath12k_hw *ah;
+ 	int i;
+ 	int ret;
+@@ -11585,8 +11586,6 @@ int ath12k_mac_register(struct ath12k_hw_group *ag)
+ 			goto err;
+ 	}
+ 
+-	set_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
+-
+ 	return 0;
+ 
+ err:
+@@ -11603,12 +11602,9 @@ int ath12k_mac_register(struct ath12k_hw_group *ag)
+ 
+ void ath12k_mac_unregister(struct ath12k_hw_group *ag)
+ {
+-	struct ath12k_base *ab = ag->ab[0];
+ 	struct ath12k_hw *ah;
+ 	int i;
+ 
+-	clear_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags);
+-
+ 	for (i = ag->num_hw - 1; i >= 0; i--) {
+ 		ah = ath12k_ag_to_ah(ag, i);
+ 		if (!ah)
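
Note: among the mac.c changes above, the management-TX path stops reserving a fixed
IEEE80211_CCMP_MIC_LEN of tail room and instead asks a helper for the MIC length of
the negotiated cipher. A sketch of that mapping using the standard 802.11 MIC sizes
(the enum names are invented; only the lengths are from the spec):

    #include <stdio.h>

    enum enc { ENC_CCMP_128, ENC_CCMP_256, ENC_GCMP_128, ENC_GCMP_256, ENC_TKIP };

    /* A fixed 8-byte CCMP assumption under-reserves tail room for the
     * 16-byte MIC ciphers, so the length must follow the cipher.
     */
    static unsigned int mic_len(enum enc e)
    {
            switch (e) {
            case ENC_CCMP_128:
                    return 8;
            case ENC_CCMP_256:
            case ENC_GCMP_128:
            case ENC_GCMP_256:
                    return 16;
            case ENC_TKIP:
                    return 8;
            }
            return 0;
    }

    int main(void)
    {
            printf("CCMP-128 MIC %u, GCMP-256 MIC %u\n",
                   mic_len(ENC_CCMP_128), mic_len(ENC_GCMP_256));
            return 0;
    }
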
+diff --git a/drivers/net/wireless/ath/ath12k/mhi.c b/drivers/net/wireless/ath/ath12k/mhi.c
+index 2f6d14382ed70c..4d40c4ec4b8110 100644
+--- a/drivers/net/wireless/ath/ath12k/mhi.c
++++ b/drivers/net/wireless/ath/ath12k/mhi.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+  * Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/msi.h>
+@@ -285,8 +285,11 @@ static void ath12k_mhi_op_status_cb(struct mhi_controller *mhi_cntrl,
+ 			break;
+ 		}
+ 
+-		if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags)))
++		if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))) {
++			set_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags);
++			set_bit(ATH12K_FLAG_RECOVERY, &ab->dev_flags);
+ 			queue_work(ab->workqueue_aux, &ab->reset_work);
++		}
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index b474696ac6d8c9..2e7d302ace679d 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -292,10 +292,10 @@ static void ath12k_pci_enable_ltssm(struct ath12k_base *ab)
+ 
+ 	ath12k_dbg(ab, ATH12K_DBG_PCI, "pci ltssm 0x%x\n", val);
+ 
+-	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
++	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
+ 	val |= GCC_GCC_PCIE_HOT_RST_VAL;
+-	ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST, val);
+-	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
++	ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST(ab), val);
++	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
+ 
+ 	ath12k_dbg(ab, ATH12K_DBG_PCI, "pci pcie_hot_rst 0x%x\n", val);
+ 
+@@ -1710,12 +1710,12 @@ static int ath12k_pci_probe(struct pci_dev *pdev,
+ err_mhi_unregister:
+ 	ath12k_mhi_unregister(ab_pci);
+ 
+-err_pci_msi_free:
+-	ath12k_pci_msi_free(ab_pci);
+-
+ err_irq_affinity_cleanup:
+ 	ath12k_pci_set_irq_affinity_hint(ab_pci, NULL);
+ 
++err_pci_msi_free:
++	ath12k_pci_msi_free(ab_pci);
++
+ err_pci_free_region:
+ 	ath12k_pci_free_region(ab_pci);
+ 
+@@ -1734,8 +1734,6 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
+ 
+ 	if (test_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags)) {
+ 		ath12k_pci_power_down(ab, false);
+-		ath12k_qmi_deinit_service(ab);
+-		ath12k_core_hw_group_unassign(ab);
+ 		goto qmi_fail;
+ 	}
+ 
+@@ -1743,9 +1741,10 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
+ 
+ 	cancel_work_sync(&ab->reset_work);
+ 	cancel_work_sync(&ab->dump_work);
+-	ath12k_core_deinit(ab);
++	ath12k_core_hw_group_cleanup(ab->ag);
+ 
+ qmi_fail:
++	ath12k_core_deinit(ab);
+ 	ath12k_fw_unmap(ab);
+ 	ath12k_mhi_unregister(ab_pci);
+ 
+diff --git a/drivers/net/wireless/ath/ath12k/pci.h b/drivers/net/wireless/ath/ath12k/pci.h
+index 31584a7ad80eb9..9321674eef8b8f 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.h
++++ b/drivers/net/wireless/ath/ath12k/pci.h
+@@ -28,7 +28,9 @@
+ #define PCIE_PCIE_PARF_LTSSM			0x1e081b0
+ #define PARM_LTSSM_VALUE			0x111
+ 
+-#define GCC_GCC_PCIE_HOT_RST			0x1e38338
++#define GCC_GCC_PCIE_HOT_RST(ab) \
++	((ab)->hw_params->regs->gcc_gcc_pcie_hot_rst)
++
+ #define GCC_GCC_PCIE_HOT_RST_VAL		0x10
+ 
+ #define PCIE_PCIE_INT_ALL_CLEAR			0x1e08228
+diff --git a/drivers/net/wireless/ath/ath12k/reg.c b/drivers/net/wireless/ath/ath12k/reg.c
+index 439d61f284d892..7fa7cd301b7579 100644
+--- a/drivers/net/wireless/ath/ath12k/reg.c
++++ b/drivers/net/wireless/ath/ath12k/reg.c
+@@ -777,8 +777,12 @@ void ath12k_reg_free(struct ath12k_base *ab)
+ {
+ 	int i;
+ 
++	mutex_lock(&ab->core_lock);
+ 	for (i = 0; i < ab->hw_params->max_radios; i++) {
+ 		kfree(ab->default_regd[i]);
+ 		kfree(ab->new_regd[i]);
++		ab->default_regd[i] = NULL;
++		ab->new_regd[i] = NULL;
+ 	}
++	mutex_unlock(&ab->core_lock);
+ }
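
Note: reg.c now frees the regulatory data under core_lock and NULLs the pointers
afterwards, so a repeated call degrades to kfree(NULL) instead of a double free. A
userspace sketch with a pthread mutex standing in for the kernel mutex:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static char *regd[2];

    static void regd_free(void)
    {
            pthread_mutex_lock(&lock);
            for (int i = 0; i < 2; i++) {
                    free(regd[i]);
                    regd[i] = NULL; /* a later call is now a harmless free(NULL) */
            }
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            regd[0] = malloc(8);
            regd_free();
            regd_free();            /* idempotent: no double free */
            printf("done\n");
            return 0;
    }
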
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 6d1ea5f3a791b0..fe50c3d3cb8201 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -520,10 +520,10 @@ ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ 	 * band to band for a single radio, need to see how this should be
+ 	 * handled.
+ 	 */
+-	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2G_CAP) {
++	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		pdev_cap->tx_chain_mask = le32_to_cpu(mac_caps->tx_chain_mask_2g);
+ 		pdev_cap->rx_chain_mask = le32_to_cpu(mac_caps->rx_chain_mask_2g);
+-	} else if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5G_CAP) {
++	} else if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		pdev_cap->vht_cap = le32_to_cpu(mac_caps->vht_cap_info_5g);
+ 		pdev_cap->vht_mcs = le32_to_cpu(mac_caps->vht_supp_mcs_5g);
+ 		pdev_cap->he_mcs = le32_to_cpu(mac_caps->he_supp_mcs_5g);
+@@ -546,7 +546,7 @@ ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ 	pdev_cap->rx_chain_mask_shift =
+ 			find_first_bit((unsigned long *)&pdev_cap->rx_chain_mask, 32);
+ 
+-	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2G_CAP) {
++	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		cap_band = &pdev_cap->band[NL80211_BAND_2GHZ];
+ 		cap_band->phy_id = le32_to_cpu(mac_caps->phy_id);
+ 		cap_band->max_bw_supported = le32_to_cpu(mac_caps->max_bw_supported_2g);
+@@ -566,7 +566,7 @@ ath12k_pull_mac_phy_cap_svc_ready_ext(struct ath12k_wmi_pdev *wmi_handle,
+ 				le32_to_cpu(mac_caps->he_ppet2g.ppet16_ppet8_ru3_ru0[i]);
+ 	}
+ 
+-	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5G_CAP) {
++	if (le32_to_cpu(mac_caps->supported_bands) & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		cap_band = &pdev_cap->band[NL80211_BAND_5GHZ];
+ 		cap_band->phy_id = le32_to_cpu(mac_caps->phy_id);
+ 		cap_band->max_bw_supported =
+@@ -2351,7 +2351,7 @@ int ath12k_wmi_send_peer_assoc_cmd(struct ath12k *ar,
+ 
+ 	for (i = 0; i < arg->peer_eht_mcs_count; i++) {
+ 		eht_mcs = ptr;
+-		eht_mcs->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_HE_RATE_SET,
++		eht_mcs->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_EHT_RATE_SET,
+ 							     sizeof(*eht_mcs));
+ 
+ 		eht_mcs->rx_mcs_set = cpu_to_le32(arg->peer_eht_rx_mcs_set[i]);
+@@ -3646,15 +3646,15 @@ ath12k_fill_band_to_mac_param(struct ath12k_base  *soc,
+ 		arg[i].pdev_id = pdev->pdev_id;
+ 
+ 		switch (pdev->cap.supported_bands) {
+-		case WMI_HOST_WLAN_2G_5G_CAP:
++		case WMI_HOST_WLAN_2GHZ_5GHZ_CAP:
+ 			arg[i].start_freq = hal_reg_cap->low_2ghz_chan;
+ 			arg[i].end_freq = hal_reg_cap->high_5ghz_chan;
+ 			break;
+-		case WMI_HOST_WLAN_2G_CAP:
++		case WMI_HOST_WLAN_2GHZ_CAP:
+ 			arg[i].start_freq = hal_reg_cap->low_2ghz_chan;
+ 			arg[i].end_freq = hal_reg_cap->high_2ghz_chan;
+ 			break;
+-		case WMI_HOST_WLAN_5G_CAP:
++		case WMI_HOST_WLAN_5GHZ_CAP:
+ 			arg[i].start_freq = hal_reg_cap->low_5ghz_chan;
+ 			arg[i].end_freq = hal_reg_cap->high_5ghz_chan;
+ 			break;
+@@ -4601,6 +4601,7 @@ static int ath12k_service_ready_ext_event(struct ath12k_base *ab,
+ 	return 0;
+ 
+ err:
++	kfree(svc_rdy_ext.mac_phy_caps);
+ 	ath12k_wmi_free_dbring_caps(ab);
+ 	return ret;
+ }
+@@ -4699,7 +4700,7 @@ ath12k_wmi_tlv_mac_phy_caps_ext_parse(struct ath12k_base *ab,
+ 		bands = pdev->cap.supported_bands;
+ 	}
+ 
+-	if (bands & WMI_HOST_WLAN_2G_CAP) {
++	if (bands & WMI_HOST_WLAN_2GHZ_CAP) {
+ 		ath12k_wmi_eht_caps_parse(pdev, NL80211_BAND_2GHZ,
+ 					  caps->eht_cap_mac_info_2ghz,
+ 					  caps->eht_cap_phy_info_2ghz,
+@@ -4708,7 +4709,7 @@ ath12k_wmi_tlv_mac_phy_caps_ext_parse(struct ath12k_base *ab,
+ 					  caps->eht_cap_info_internal);
+ 	}
+ 
+-	if (bands & WMI_HOST_WLAN_5G_CAP) {
++	if (bands & WMI_HOST_WLAN_5GHZ_CAP) {
+ 		ath12k_wmi_eht_caps_parse(pdev, NL80211_BAND_5GHZ,
+ 					  caps->eht_cap_mac_info_5ghz,
+ 					  caps->eht_cap_phy_info_5ghz,
+@@ -4922,7 +4923,7 @@ static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_param
+ 	for (count = 0; count < num_reg_rules; count++) {
+ 		start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ);
+ 
+-		if (start_freq >= ATH12K_MIN_6G_FREQ)
++		if (start_freq >= ATH12K_MIN_6GHZ_FREQ)
+ 			num_invalid_5ghz_rules++;
+ 	}
+ 
+@@ -4992,9 +4993,9 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ 	for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ 		num_6g_reg_rules_ap[i] = reg_info->num_6g_reg_rules_ap[i];
+ 
+-		if (num_6g_reg_rules_ap[i] > MAX_6G_REG_RULES) {
++		if (num_6g_reg_rules_ap[i] > MAX_6GHZ_REG_RULES) {
+ 			ath12k_warn(ab, "Num 6G reg rules for AP mode(%d) exceeds max limit (num_6g_reg_rules_ap: %d, max_rules: %d)\n",
+-				    i, num_6g_reg_rules_ap[i], MAX_6G_REG_RULES);
++				    i, num_6g_reg_rules_ap[i], MAX_6GHZ_REG_RULES);
+ 			kfree(tb);
+ 			return -EINVAL;
+ 		}
+@@ -5015,9 +5016,9 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ 				reg_info->num_6g_reg_rules_cl[WMI_REG_VLP_AP][i];
+ 		total_reg_rules += num_6g_reg_rules_cl[WMI_REG_VLP_AP][i];
+ 
+-		if (num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] > MAX_6G_REG_RULES ||
+-		    num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] > MAX_6G_REG_RULES ||
+-		    num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] >  MAX_6G_REG_RULES) {
++		if (num_6g_reg_rules_cl[WMI_REG_INDOOR_AP][i] > MAX_6GHZ_REG_RULES ||
++		    num_6g_reg_rules_cl[WMI_REG_STD_POWER_AP][i] > MAX_6GHZ_REG_RULES ||
++		    num_6g_reg_rules_cl[WMI_REG_VLP_AP][i] >  MAX_6GHZ_REG_RULES) {
+ 			ath12k_warn(ab, "Num 6g client reg rules exceeds max limit, for client(type: %d)\n",
+ 				    i);
+ 			kfree(tb);
+@@ -6317,13 +6318,13 @@ static void ath12k_mgmt_rx_event(struct ath12k_base *ab, struct sk_buff *skb)
+ 	if (rx_ev.status & WMI_RX_STATUS_ERR_MIC)
+ 		status->flag |= RX_FLAG_MMIC_ERROR;
+ 
+-	if (rx_ev.chan_freq >= ATH12K_MIN_6G_FREQ &&
+-	    rx_ev.chan_freq <= ATH12K_MAX_6G_FREQ) {
++	if (rx_ev.chan_freq >= ATH12K_MIN_6GHZ_FREQ &&
++	    rx_ev.chan_freq <= ATH12K_MAX_6GHZ_FREQ) {
+ 		status->band = NL80211_BAND_6GHZ;
+ 		status->freq = rx_ev.chan_freq;
+ 	} else if (rx_ev.channel >= 1 && rx_ev.channel <= 14) {
+ 		status->band = NL80211_BAND_2GHZ;
+-	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH12K_MAX_5G_CHAN) {
++	} else if (rx_ev.channel >= 36 && rx_ev.channel <= ATH12K_MAX_5GHZ_CHAN) {
+ 		status->band = NL80211_BAND_5GHZ;
+ 	} else {
+ 		/* Shouldn't happen unless list of advertised channels to
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 1ba33e30ddd279..be4ac91dd34f50 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -216,9 +216,9 @@ enum wmi_host_hw_mode_priority {
+ };
+ 
+ enum WMI_HOST_WLAN_BAND {
+-	WMI_HOST_WLAN_2G_CAP	= 1,
+-	WMI_HOST_WLAN_5G_CAP	= 2,
+-	WMI_HOST_WLAN_2G_5G_CAP	= 3,
++	WMI_HOST_WLAN_2GHZ_CAP		= 1,
++	WMI_HOST_WLAN_5GHZ_CAP		= 2,
++	WMI_HOST_WLAN_2GHZ_5GHZ_CAP	= 3,
+ };
+ 
+ enum wmi_cmd_group {
+@@ -2690,8 +2690,8 @@ enum wmi_channel_width {
+  * 2 - index for 160 MHz, first 3 bytes valid
+  * 3 - index for 320 MHz, first 3 bytes valid
+  */
+-#define WMI_MAX_EHT_SUPP_MCS_2G_SIZE  2
+-#define WMI_MAX_EHT_SUPP_MCS_5G_SIZE  4
++#define WMI_MAX_EHT_SUPP_MCS_2GHZ_SIZE  2
++#define WMI_MAX_EHT_SUPP_MCS_5GHZ_SIZE  4
+ 
+ #define WMI_EHTCAP_TXRX_MCS_NSS_IDX_80    0
+ #define WMI_EHTCAP_TXRX_MCS_NSS_IDX_160   1
+@@ -2730,8 +2730,8 @@ struct ath12k_wmi_caps_ext_params {
+ 	struct ath12k_wmi_ppe_threshold_params eht_ppet_2ghz;
+ 	struct ath12k_wmi_ppe_threshold_params eht_ppet_5ghz;
+ 	__le32 eht_cap_info_internal;
+-	__le32 eht_supp_mcs_ext_2ghz[WMI_MAX_EHT_SUPP_MCS_2G_SIZE];
+-	__le32 eht_supp_mcs_ext_5ghz[WMI_MAX_EHT_SUPP_MCS_5G_SIZE];
++	__le32 eht_supp_mcs_ext_2ghz[WMI_MAX_EHT_SUPP_MCS_2GHZ_SIZE];
++	__le32 eht_supp_mcs_ext_5ghz[WMI_MAX_EHT_SUPP_MCS_5GHZ_SIZE];
+ 	__le32 eml_capability;
+ 	__le32 mld_capability;
+ } __packed;
+@@ -4108,7 +4108,7 @@ struct ath12k_wmi_eht_rate_set_params {
+ 
+ #define MAX_REG_RULES 10
+ #define REG_ALPHA2_LEN 2
+-#define MAX_6G_REG_RULES 5
++#define MAX_6GHZ_REG_RULES 5
+ 
+ enum wmi_start_event_param {
+ 	WMI_VDEV_START_RESP_EVENT = 0,
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
+index 547634f82183d6..81fa7cbad89213 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_beacon.c
+@@ -290,6 +290,9 @@ void ath9k_htc_swba(struct ath9k_htc_priv *priv,
+ 	struct ath_common *common = ath9k_hw_common(priv->ah);
+ 	int slot;
+ 
++	if (!priv->cur_beacon_conf.enable_beacon)
++		return;
++
+ 	if (swba->beacon_pending != 0) {
+ 		priv->beacon.bmisscnt++;
+ 		if (priv->beacon.bmisscnt > BSTUCK_THRESHOLD) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+index ce4954b0d52462..44a249a753ecf6 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+@@ -328,6 +328,7 @@ iwl_trans_get_rb_size_order(enum iwl_amsdu_size rb_size)
+ 	case IWL_AMSDU_4K:
+ 		return get_order(4 * 1024);
+ 	case IWL_AMSDU_8K:
++		return get_order(8 * 1024);
+ 	case IWL_AMSDU_12K:
+ 		return get_order(16 * 1024);
+ 	default:
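
Note: before this one-line iwlwifi fix, the IWL_AMSDU_8K case fell through to the 12K
case and returned the 16 KiB order, over-allocating every receive buffer. A sketch of
the corrected mapping, with get_order() modeled as round-up-to-pages plus log2:

    #include <stdio.h>

    #define PAGE_SIZE 4096

    /* Minimal stand-in for the kernel's get_order(). */
    static int get_order(unsigned long size)
    {
            unsigned long pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
            int order = 0;

            while ((1ul << order) < pages)
                    order++;
            return order;
    }

    enum amsdu { AMSDU_4K, AMSDU_8K, AMSDU_12K };

    static int rb_size_order(enum amsdu sz)
    {
            switch (sz) {
            case AMSDU_4K:
                    return get_order(4 * 1024);
            case AMSDU_8K:
                    return get_order(8 * 1024); /* the missing return: used to
                                                 * fall through to the 16K case */
            case AMSDU_12K:
                    return get_order(16 * 1024);
            }
            return -1;
    }

    int main(void)
    {
            printf("4K->%d 8K->%d 12K->%d\n",
                   rb_size_order(AMSDU_4K), rb_size_order(AMSDU_8K),
                   rb_size_order(AMSDU_12K));
            return 0;
    }
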
+diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mld.c b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+index 73d2166a4c2570..7a098942dc8021 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mld/mld.c
++++ b/drivers/net/wireless/intel/iwlwifi/mld/mld.c
+@@ -638,7 +638,8 @@ iwl_mld_nic_error(struct iwl_op_mode *op_mode,
+ 	 * It might not actually be true that we'll restart, but the
+ 	 * setting doesn't matter if we're going to be unbound either.
+ 	 */
+-	if (type != IWL_ERR_TYPE_RESET_HS_TIMEOUT)
++	if (type != IWL_ERR_TYPE_RESET_HS_TIMEOUT &&
++	    mld->fw_status.running)
+ 		mld->fw_status.in_hw_restart = true;
+ }
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 068c58e9c1eb4e..c2729dab8e79e5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -2,6 +2,7 @@
+ /******************************************************************************
+  *
+  * Copyright(c) 2005 - 2014, 2018 - 2023 Intel Corporation. All rights reserved.
++ * Copyright(c) 2025 Intel Corporation
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+  *****************************************************************************/
+@@ -2709,6 +2710,7 @@ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
+ 							  optimal_rate);
+ 		iwl_mvm_hwrate_to_tx_rate_v1(last_ucode_rate, info->band,
+ 					     &txrc->reported_rate);
++		txrc->reported_rate.count = 1;
+ 	}
+ 	spin_unlock_bh(&lq_sta->pers.lock);
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
+index 738bafc3749b0a..66f0f5377ac181 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n.c
+@@ -403,14 +403,12 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv,
+ 
+ 		if (sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 &&
+ 		    bss_desc->bcn_ht_oper->ht_param &
+-		    IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) {
+-			chan_list->chan_scan_param[0].radio_type |=
+-				CHAN_BW_40MHZ << 2;
++		    IEEE80211_HT_PARAM_CHAN_WIDTH_ANY)
+ 			SET_SECONDARYCHAN(chan_list->chan_scan_param[0].
+ 					  radio_type,
+ 					  (bss_desc->bcn_ht_oper->ht_param &
+ 					  IEEE80211_HT_PARAM_CHA_SEC_OFFSET));
+-		}
++
+ 		*buffer += struct_size(chan_list, chan_scan_param, 1);
+ 		ret_len += struct_size(chan_list, chan_scan_param, 1);
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/channel.c b/drivers/net/wireless/mediatek/mt76/channel.c
+index e7b839e7429034..cc2d888e3f17a5 100644
+--- a/drivers/net/wireless/mediatek/mt76/channel.c
++++ b/drivers/net/wireless/mediatek/mt76/channel.c
+@@ -302,11 +302,13 @@ void mt76_put_vif_phy_link(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 			   struct mt76_vif_link *mlink)
+ {
+ 	struct mt76_dev *dev = phy->dev;
+-	struct mt76_vif_data *mvif = mlink->mvif;
++	struct mt76_vif_data *mvif;
+ 
+ 	if (IS_ERR_OR_NULL(mlink) || !mlink->offchannel)
+ 		return;
+ 
++	mvif = mlink->mvif;
++
+ 	rcu_assign_pointer(mvif->offchannel_link, NULL);
+ 	dev->drv->vif_link_remove(phy, vif, &vif->bss_conf, mlink);
+ 	kfree(mlink);
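
Note: the mt76 channel.c fix moves the mlink->mvif load below the IS_ERR_OR_NULL()
check; the old code dereferenced the pointer before validating it. A minimal sketch of
the check-before-dereference ordering (struct names trimmed to the essentials):

    #include <stdio.h>
    #include <stdlib.h>

    struct mvif { int id; };
    struct mlink { struct mvif *mvif; };

    static void put_link(struct mlink *ml)
    {
            struct mvif *mv;

            if (!ml)                /* validate before touching any member */
                    return;

            mv = ml->mvif;          /* safe: only reached for a valid pointer */
            printf("releasing mvif %d\n", mv->id);
            free(ml);
    }

    int main(void)
    {
            struct mvif v = { .id = 7 };
            struct mlink *ml = malloc(sizeof(*ml));

            ml->mvif = &v;
            put_link(ml);
            put_link(NULL);         /* no crash: the check runs before the load */
            return 0;
    }
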
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index b88d7e10742ee6..e9605dc222910f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -449,8 +449,10 @@ mt76_phy_init(struct mt76_phy *phy, struct ieee80211_hw *hw)
+ 	wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_AIRTIME_FAIRNESS);
+ 	wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_AQL);
+ 
+-	wiphy->available_antennas_tx = phy->antenna_mask;
+-	wiphy->available_antennas_rx = phy->antenna_mask;
++	if (!wiphy->available_antennas_tx)
++		wiphy->available_antennas_tx = phy->antenna_mask;
++	if (!wiphy->available_antennas_rx)
++		wiphy->available_antennas_rx = phy->antenna_mask;
+ 
+ 	wiphy->sar_capa = &mt76_sar_capa;
+ 	phy->frp = devm_kcalloc(dev->dev, wiphy->sar_capa->num_freq_ranges,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index 876f0692850a2e..9c4d5cea0c42e9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -651,6 +651,9 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
+ 		wed->wlan.base = devm_ioremap(dev->mt76.dev,
+ 					      pci_resource_start(pci_dev, 0),
+ 					      pci_resource_len(pci_dev, 0));
++		if (!wed->wlan.base)
++			return -ENOMEM;
++
+ 		wed->wlan.phy_base = pci_resource_start(pci_dev, 0);
+ 		wed->wlan.wpdma_int = pci_resource_start(pci_dev, 0) +
+ 				      MT_INT_WED_SOURCE_CSR;
+@@ -678,6 +681,9 @@ int mt7915_mmio_wed_init(struct mt7915_dev *dev, void *pdev_ptr,
+ 		wed->wlan.bus_type = MTK_WED_BUS_AXI;
+ 		wed->wlan.base = devm_ioremap(dev->mt76.dev, res->start,
+ 					      resource_size(res));
++		if (!wed->wlan.base)
++			return -ENOMEM;
++
+ 		wed->wlan.phy_base = res->start;
+ 		wed->wlan.wpdma_int = res->start + MT_INT_SOURCE_CSR;
+ 		wed->wlan.wpdma_mask = res->start + MT_INT_MASK_CSR;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+index 63cb08f4d87cc4..79639be0d29aca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
+@@ -89,7 +89,7 @@ void mt7925_regd_be_ctrl(struct mt792x_dev *dev, u8 *alpha2)
+ 		}
+ 
+ 		/* Check the last one */
+-		if (rule->flag && BIT(0))
++		if (rule->flag & BIT(0))
+ 			break;
+ 
+ 		pos += sizeof(*rule);
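
The one-character change above swaps a logical AND for the intended bitwise
AND: rule->flag && BIT(0) is true for any non-zero flag, so the "last rule"
bit was never actually isolated. A minimal sketch of the difference, assuming
BIT() as defined in linux/bits.h:

    unsigned int flag = BIT(3);        /* bit 0 is clear */
    bool logical = flag && BIT(0);     /* true: both operands are non-zero */
    bool bitwise = flag & BIT(0);      /* false: tests bit 0 only */
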
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 14b1f603fb6224..dea5b9bcb3fdfb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -783,7 +783,7 @@ int mt7925_mcu_fw_log_2_host(struct mt792x_dev *dev, u8 ctrl)
+ 	int ret;
+ 
+ 	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(WSYS_CONFIG),
+-					&req, sizeof(req), false, NULL);
++					&req, sizeof(req), true, NULL);
+ 	return ret;
+ }
+ 
+@@ -1424,7 +1424,7 @@ int mt7925_mcu_set_eeprom(struct mt792x_dev *dev)
+ 	};
+ 
+ 	return mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(EFUSE_CTRL),
+-					 &req, sizeof(req), false, NULL);
++					 &req, sizeof(req), true, NULL);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_set_eeprom);
+ 
+@@ -2046,8 +2046,6 @@ int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ 		},
+ 	};
+ 
+-	mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(SNIFFER), &req, sizeof(req), true);
+-
+ 	return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(SNIFFER), &req, sizeof(req),
+ 				 true);
+ }
+@@ -2764,7 +2762,7 @@ int mt7925_mcu_set_dbdc(struct mt76_phy *phy, bool enable)
+ 	conf->band = 0; /* unused */
+ 
+ 	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SET_DBDC_PARMS),
+-				    false);
++				    true);
+ 
+ 	return err;
+ }
+@@ -2790,6 +2788,9 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	struct tlv *tlv;
+ 	int max_len;
+ 
++	if (test_bit(MT76_HW_SCANNING, &phy->state))
++		return -EBUSY;
++
+ 	max_len = sizeof(*hdr) + sizeof(*req) + sizeof(*ssid) +
+ 				sizeof(*bssid) + sizeof(*chan_info) +
+ 				sizeof(*misc) + sizeof(*ie);
+@@ -2869,7 +2870,7 @@ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	}
+ 
+ 	err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+-				    false);
++				    true);
+ 	if (err < 0)
+ 		clear_bit(MT76_HW_SCANNING, &phy->state);
+ 
+@@ -2975,7 +2976,7 @@ int mt7925_mcu_sched_scan_req(struct mt76_phy *phy,
+ 	}
+ 
+ 	return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+-				     false);
++				     true);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_sched_scan_req);
+ 
+@@ -3011,7 +3012,7 @@ mt7925_mcu_sched_scan_enable(struct mt76_phy *phy,
+ 		clear_bit(MT76_HW_SCHED_SCANNING, &phy->state);
+ 
+ 	return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+-				     false);
++				     true);
+ }
+ 
+ int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+@@ -3050,7 +3051,7 @@ int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+ 	}
+ 
+ 	return mt76_mcu_send_msg(phy->dev, MCU_UNI_CMD(SCAN_REQ),
+-				 &req, sizeof(req), false);
++				 &req, sizeof(req), true);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_cancel_hw_scan);
+ 
+@@ -3155,7 +3156,7 @@ int mt7925_mcu_set_channel_domain(struct mt76_phy *phy)
+ 	memcpy(__skb_push(skb, sizeof(req)), &req, sizeof(req));
+ 
+ 	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(SET_DOMAIN_INFO),
+-				     false);
++				     true);
+ }
+ EXPORT_SYMBOL_GPL(mt7925_mcu_set_channel_domain);
+ 
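
Each mcu.c hunk above flips the trailing boolean passed to the mt76 MCU send
helpers from false to true. That argument (wait_resp in mt76) selects whether
the call blocks until the firmware acknowledges the command, so the scan,
EEPROM, sniffer and domain commands no longer return before the MCU has
actually processed them. In sketch form:

    /* fire-and-forget: returns once the message is queued */
    mt76_mcu_send_msg(&dev->mt76, cmd, &req, sizeof(req), false);

    /* synchronous: also waits for the MCU's response */
    mt76_mcu_send_msg(&dev->mt76, cmd, &req, sizeof(req), true);
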
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/dma.c b/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
+index 69a7d9b2e38bd7..4b68d2fc5e0949 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
+@@ -493,7 +493,7 @@ int mt7996_dma_init(struct mt7996_dev *dev)
+ 	ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MCU],
+ 			       MT_RXQ_ID(MT_RXQ_MCU),
+ 			       MT7996_RX_MCU_RING_SIZE,
+-			       MT_RX_BUF_SIZE,
++			       MT7996_RX_MCU_BUF_SIZE,
+ 			       MT_RXQ_RING_BASE(MT_RXQ_MCU));
+ 	if (ret)
+ 		return ret;
+@@ -502,7 +502,7 @@ int mt7996_dma_init(struct mt7996_dev *dev)
+ 	ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MCU_WA],
+ 			       MT_RXQ_ID(MT_RXQ_MCU_WA),
+ 			       MT7996_RX_MCU_RING_SIZE_WA,
+-			       MT_RX_BUF_SIZE,
++			       MT7996_RX_MCU_BUF_SIZE,
+ 			       MT_RXQ_RING_BASE(MT_RXQ_MCU_WA));
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+index 53dfac02f8af0b..f0c76aac175dff 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/eeprom.c
+@@ -304,6 +304,7 @@ int mt7996_eeprom_parse_hw_cap(struct mt7996_dev *dev, struct mt7996_phy *phy)
+ 		phy->has_aux_rx = true;
+ 
+ 	mphy->antenna_mask = BIT(nss) - 1;
++	phy->orig_antenna_mask = mphy->antenna_mask;
+ 	mphy->chainmask = (BIT(path) - 1) << dev->chainshift[band_idx];
+ 	phy->orig_chainmask = mphy->chainmask;
+ 	dev->chainmask |= mphy->chainmask;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 6b660424aedc31..4906b0ecc73e02 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -217,6 +217,9 @@ static int mt7996_thermal_init(struct mt7996_phy *phy)
+ 
+ 	name = devm_kasprintf(&wiphy->dev, GFP_KERNEL, "mt7996_%s.%d",
+ 			      wiphy_name(wiphy), phy->mt76->band_idx);
++	if (!name)
++		return -ENOMEM;
++
+ 	snprintf(cname, sizeof(cname), "cooling_device%d", phy->mt76->band_idx);
+ 
+ 	cdev = thermal_cooling_device_register(name, phy, &mt7996_thermal_ops);
+@@ -1113,12 +1116,12 @@ mt7996_set_stream_he_txbf_caps(struct mt7996_phy *phy,
+ 
+ 	c = IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE;
+ 
+-	if (is_mt7996(phy->mt76->dev))
+-		c |= IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4 |
+-		     (IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_4 * non_2g);
+-	else
++	if (is_mt7992(phy->mt76->dev))
+ 		c |= IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_5 |
+ 		     (IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_5 * non_2g);
++	else
++		c |= IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4 |
++		     (IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_4 * non_2g);
+ 
+ 	elem->phy_cap_info[4] |= c;
+ 
+@@ -1318,6 +1321,9 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ 		u8_encode_bits(IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_11454,
+ 			       IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK);
+ 
++	eht_cap_elem->mac_cap_info[1] |=
++		IEEE80211_EHT_MAC_CAP1_MAX_AMPDU_LEN_MASK;
++
+ 	eht_cap_elem->phy_cap_info[0] =
+ 		IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+ 		IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 91c64e3a0860ff..a3295b22523a61 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -68,11 +68,13 @@ static int mt7996_start(struct ieee80211_hw *hw)
+ 
+ static void mt7996_stop_phy(struct mt7996_phy *phy)
+ {
+-	struct mt7996_dev *dev = phy->dev;
++	struct mt7996_dev *dev;
+ 
+ 	if (!phy || !test_bit(MT76_STATE_RUNNING, &phy->mt76->state))
+ 		return;
+ 
++	dev = phy->dev;
++
+ 	cancel_delayed_work_sync(&phy->mt76->mac_work);
+ 
+ 	mutex_lock(&dev->mt76.mutex);
+@@ -414,11 +416,13 @@ static void mt7996_phy_set_rxfilter(struct mt7996_phy *phy)
+ 
+ static void mt7996_set_monitor(struct mt7996_phy *phy, bool enabled)
+ {
+-	struct mt7996_dev *dev = phy->dev;
++	struct mt7996_dev *dev;
+ 
+ 	if (!phy)
+ 		return;
+ 
++	dev = phy->dev;
++
+ 	if (enabled == !(phy->rxfilter & MT_WF_RFCR_DROP_OTHER_UC))
+ 		return;
+ 
+@@ -998,16 +1002,22 @@ mt7996_mac_sta_add_links(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ 			continue;
+ 
+ 		link_conf = link_conf_dereference_protected(vif, link_id);
+-		if (!link_conf)
++		if (!link_conf) {
++			err = -EINVAL;
+ 			goto error_unlink;
++		}
+ 
+ 		link = mt7996_vif_link(dev, vif, link_id);
+-		if (!link)
++		if (!link) {
++			err = -EINVAL;
+ 			goto error_unlink;
++		}
+ 
+ 		link_sta = link_sta_dereference_protected(sta, link_id);
+-		if (!link_sta)
++		if (!link_sta) {
++			err = -EINVAL;
+ 			goto error_unlink;
++		}
+ 
+ 		err = mt7996_mac_sta_init_link(dev, link_conf, link_sta, link,
+ 					       link_id);
+@@ -1518,7 +1528,8 @@ mt7996_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+ 		u8 shift = dev->chainshift[band_idx];
+ 
+ 		phy->mt76->chainmask = tx_ant & phy->orig_chainmask;
+-		phy->mt76->antenna_mask = phy->mt76->chainmask >> shift;
++		phy->mt76->antenna_mask = (phy->mt76->chainmask >> shift) &
++					  phy->orig_antenna_mask;
+ 
+ 		mt76_set_stream_caps(phy->mt76, true);
+ 		mt7996_set_stream_vht_txbf_caps(phy);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+index 13b188e281bdb9..af9169030bad99 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+@@ -323,6 +323,9 @@ int mt7996_mmio_wed_init(struct mt7996_dev *dev, void *pdev_ptr,
+ 	wed->wlan.base = devm_ioremap(dev->mt76.dev,
+ 				      pci_resource_start(pci_dev, 0),
+ 				      pci_resource_len(pci_dev, 0));
++	if (!wed->wlan.base)
++		return -ENOMEM;
++
+ 	wed->wlan.phy_base = pci_resource_start(pci_dev, 0);
+ 
+ 	if (hif2) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index 43e646ed6094cb..77605403b39661 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -29,6 +29,9 @@
+ #define MT7996_RX_RING_SIZE		1536
+ #define MT7996_RX_MCU_RING_SIZE		512
+ #define MT7996_RX_MCU_RING_SIZE_WA	1024
++/* scatter-gather of MCU events is not supported in connac3 */
++#define MT7996_RX_MCU_BUF_SIZE		(2048 + \
++					 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+ 
+ #define MT7996_FIRMWARE_WA		"mediatek/mt7996/mt7996_wa.bin"
+ #define MT7996_FIRMWARE_WM		"mediatek/mt7996/mt7996_wm.bin"
+@@ -293,6 +296,7 @@ struct mt7996_phy {
+ 	struct mt76_channel_state state_ts;
+ 
+ 	u16 orig_chainmask;
++	u16 orig_antenna_mask;
+ 
+ 	bool has_aux_rx;
+ 	bool counter_reset;
+diff --git a/drivers/net/wireless/realtek/rtw88/coex.c b/drivers/net/wireless/realtek/rtw88/coex.c
+index c929db1e53ca63..64904278ddad7d 100644
+--- a/drivers/net/wireless/realtek/rtw88/coex.c
++++ b/drivers/net/wireless/realtek/rtw88/coex.c
+@@ -309,7 +309,7 @@ static void rtw_coex_tdma_timer_base(struct rtw_dev *rtwdev, u8 type)
+ {
+ 	struct rtw_coex *coex = &rtwdev->coex;
+ 	struct rtw_coex_stat *coex_stat = &coex->stat;
+-	u8 para[2] = {0};
++	u8 para[6] = {};
+ 	u8 times;
+ 	u16 tbtt_interval = coex_stat->wl_beacon_interval;
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 5e53e0db177efe..8937a7b656edb1 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -3951,7 +3951,8 @@ static void rtw8822c_dpk_cal_coef1(struct rtw_dev *rtwdev)
+ 	rtw_write32(rtwdev, REG_NCTL0, 0x00001148);
+ 	rtw_write32(rtwdev, REG_NCTL0, 0x00001149);
+ 
+-	check_hw_ready(rtwdev, 0x2d9c, MASKBYTE0, 0x55);
++	if (!check_hw_ready(rtwdev, 0x2d9c, MASKBYTE0, 0x55))
++		rtw_warn(rtwdev, "DPK stuck, performance may be suboptimal");
+ 
+ 	rtw_write8(rtwdev, 0x1b10, 0x0);
+ 	rtw_write32_mask(rtwdev, REG_NCTL0, BIT_SUBPAGE, 0x0000000c);
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 6209a49312f176..410f637b1add58 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -718,10 +718,7 @@ static u8 rtw_sdio_get_tx_qsel(struct rtw_dev *rtwdev, struct sk_buff *skb,
+ 	case RTW_TX_QUEUE_H2C:
+ 		return TX_DESC_QSEL_H2C;
+ 	case RTW_TX_QUEUE_MGMT:
+-		if (rtw_chip_wcpu_11n(rtwdev))
+-			return TX_DESC_QSEL_HIGH;
+-		else
+-			return TX_DESC_QSEL_MGMT;
++		return TX_DESC_QSEL_MGMT;
+ 	case RTW_TX_QUEUE_HI0:
+ 		return TX_DESC_QSEL_HIGH;
+ 	default:
+@@ -1227,10 +1224,7 @@ static void rtw_sdio_process_tx_queue(struct rtw_dev *rtwdev,
+ 		return;
+ 	}
+ 
+-	if (queue <= RTW_TX_QUEUE_VO)
+-		rtw_sdio_indicate_tx_status(rtwdev, skb);
+-	else
+-		dev_kfree_skb_any(skb);
++	rtw_sdio_indicate_tx_status(rtwdev, skb);
+ }
+ 
+ static void rtw_sdio_tx_handler(struct work_struct *work)
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 8643b17866f897..6c52b0425f2ea9 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -5477,7 +5477,7 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num,
+ 	return 0;
+ }
+ 
+-#define RTW89_SCAN_DELAY_TSF_UNIT 104800
++#define RTW89_SCAN_DELAY_TSF_UNIT 1000000
+ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ 				 struct rtw89_scan_option *option,
+ 				 struct rtw89_vif_link *rtwvif_link,
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index c2fe5a898dc717..064f6a94010731 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -228,7 +228,7 @@ int rtw89_pci_sync_skb_for_device_and_validate_rx_info(struct rtw89_dev *rtwdev,
+ 						       struct sk_buff *skb)
+ {
+ 	struct rtw89_pci_rx_info *rx_info = RTW89_PCI_RX_SKB_CB(skb);
+-	int rx_tag_retry = 100;
++	int rx_tag_retry = 1000;
+ 	int ret;
+ 
+ 	do {
+@@ -3105,17 +3105,26 @@ static bool rtw89_pci_is_dac_compatible_bridge(struct rtw89_dev *rtwdev)
+ 	return false;
+ }
+ 
+-static void rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev)
++static int rtw89_pci_cfg_dac(struct rtw89_dev *rtwdev, bool force)
+ {
+ 	struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
++	struct pci_dev *pdev = rtwpci->pdev;
++	int ret;
++	u8 val;
+ 
+-	if (!rtwpci->enable_dac)
+-		return;
++	if (!rtwpci->enable_dac && !force)
++		return 0;
+ 
+ 	if (!rtw89_pci_chip_is_manual_dac(rtwdev))
+-		return;
++		return 0;
+ 
+-	rtw89_pci_config_byte_set(rtwdev, RTW89_PCIE_L1_CTRL, RTW89_PCIE_BIT_EN_64BITS);
++	/* Configure DAC only via PCI config API, not DBI interfaces */
++	ret = pci_read_config_byte(pdev, RTW89_PCIE_L1_CTRL, &val);
++	if (ret)
++		return ret;
++
++	val |= RTW89_PCIE_BIT_EN_64BITS;
++	return pci_write_config_byte(pdev, RTW89_PCIE_L1_CTRL, val);
+ }
+ 
+ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+@@ -3133,13 +3142,16 @@ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+ 	}
+ 
+ 	if (!rtw89_pci_is_dac_compatible_bridge(rtwdev))
+-		goto no_dac;
++		goto try_dac_done;
+ 
+ 	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36));
+ 	if (!ret) {
+-		rtwpci->enable_dac = true;
+-		rtw89_pci_cfg_dac(rtwdev);
+-	} else {
++		ret = rtw89_pci_cfg_dac(rtwdev, true);
++		if (!ret) {
++			rtwpci->enable_dac = true;
++			goto try_dac_done;
++		}
++
+ 		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ 		if (ret) {
+ 			rtw89_err(rtwdev,
+@@ -3147,7 +3159,7 @@ static int rtw89_pci_setup_mapping(struct rtw89_dev *rtwdev,
+ 			goto err_release_regions;
+ 		}
+ 	}
+-no_dac:
++try_dac_done:
+ 
+ 	resource_len = pci_resource_len(pdev, bar_id);
+ 	rtwpci->mmap = pci_iomap(pdev, bar_id, resource_len);
+@@ -4302,7 +4314,7 @@ static void rtw89_pci_l2_hci_ldo(struct rtw89_dev *rtwdev)
+ void rtw89_pci_basic_cfg(struct rtw89_dev *rtwdev, bool resume)
+ {
+ 	if (resume)
+-		rtw89_pci_cfg_dac(rtwdev);
++		rtw89_pci_cfg_dac(rtwdev, false);
+ 
+ 	rtw89_pci_disable_eq(rtwdev);
+ 	rtw89_pci_filter_out(rtwdev);
+diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c
+index 8755c5e6a65b30..c814fbd756a1e7 100644
+--- a/drivers/net/wwan/mhi_wwan_mbim.c
++++ b/drivers/net/wwan/mhi_wwan_mbim.c
+@@ -550,8 +550,8 @@ static int mhi_mbim_newlink(void *ctxt, struct net_device *ndev, u32 if_id,
+ 	struct mhi_mbim_link *link = wwan_netdev_drvpriv(ndev);
+ 	struct mhi_mbim_context *mbim = ctxt;
+ 
+-	link->session = if_id;
+ 	link->mbim = mbim;
++	link->session = mhi_mbim_get_link_mux_id(link->mbim->mdev->mhi_cntrl) + if_id;
+ 	link->ndev = ndev;
+ 	u64_stats_init(&link->rx_syncp);
+ 	u64_stats_init(&link->tx_syncp);
+@@ -607,7 +607,7 @@ static int mhi_mbim_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
+ {
+ 	struct mhi_controller *cntrl = mhi_dev->mhi_cntrl;
+ 	struct mhi_mbim_context *mbim;
+-	int err, link_id;
++	int err;
+ 
+ 	mbim = devm_kzalloc(&mhi_dev->dev, sizeof(*mbim), GFP_KERNEL);
+ 	if (!mbim)
+@@ -628,11 +628,8 @@ static int mhi_mbim_probe(struct mhi_device *mhi_dev, const struct mhi_device_id
+ 	/* Number of transfer descriptors determines size of the queue */
+ 	mbim->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
+ 
+-	/* Get the corresponding mux_id from mhi */
+-	link_id = mhi_mbim_get_link_mux_id(cntrl);
+-
+ 	/* Register wwan link ops with MHI controller representing WWAN instance */
+-	return wwan_register_ops(&cntrl->mhi_dev->dev, &mhi_mbim_wwan_ops, mbim, link_id);
++	return wwan_register_ops(&cntrl->mhi_dev->dev, &mhi_mbim_wwan_ops, mbim, 0);
+ }
+ 
+ static void mhi_mbim_remove(struct mhi_device *mhi_dev)
+diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.c b/drivers/net/wwan/t7xx/t7xx_netdev.c
+index 91fa082e9cab80..fc0a7cb181df2c 100644
+--- a/drivers/net/wwan/t7xx/t7xx_netdev.c
++++ b/drivers/net/wwan/t7xx/t7xx_netdev.c
+@@ -302,7 +302,7 @@ static int t7xx_ccmni_wwan_newlink(void *ctxt, struct net_device *dev, u32 if_id
+ 	ccmni->ctlb = ctlb;
+ 	ccmni->dev = dev;
+ 	atomic_set(&ccmni->usage, 0);
+-	ctlb->ccmni_inst[if_id] = ccmni;
++	WRITE_ONCE(ctlb->ccmni_inst[if_id], ccmni);
+ 
+ 	ret = register_netdevice(dev);
+ 	if (ret)
+@@ -324,6 +324,7 @@ static void t7xx_ccmni_wwan_dellink(void *ctxt, struct net_device *dev, struct l
+ 	if (WARN_ON(ctlb->ccmni_inst[if_id] != ccmni))
+ 		return;
+ 
++	WRITE_ONCE(ctlb->ccmni_inst[if_id], NULL);
+ 	unregister_netdevice(dev);
+ }
+ 
+@@ -419,7 +420,7 @@ static void t7xx_ccmni_recv_skb(struct t7xx_ccmni_ctrl *ccmni_ctlb, struct sk_bu
+ 
+ 	skb_cb = T7XX_SKB_CB(skb);
+ 	netif_id = skb_cb->netif_idx;
+-	ccmni = ccmni_ctlb->ccmni_inst[netif_id];
++	ccmni = READ_ONCE(ccmni_ctlb->ccmni_inst[netif_id]);
+ 	if (!ccmni) {
+ 		dev_kfree_skb(skb);
+ 		return;
+@@ -441,7 +442,7 @@ static void t7xx_ccmni_recv_skb(struct t7xx_ccmni_ctrl *ccmni_ctlb, struct sk_bu
+ 
+ static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
+ {
+-	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
++	struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[0]);
+ 	struct netdev_queue *net_queue;
+ 
+ 	if (netif_running(ccmni->dev) && atomic_read(&ccmni->usage) > 0) {
+@@ -453,7 +454,7 @@ static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno
+ 
+ static void t7xx_ccmni_queue_tx_full_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
+ {
+-	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
++	struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[0]);
+ 	struct netdev_queue *net_queue;
+ 
+ 	if (atomic_read(&ccmni->usage) > 0) {
+@@ -471,7 +472,7 @@ static void t7xx_ccmni_queue_state_notify(struct t7xx_pci_dev *t7xx_dev,
+ 	if (ctlb->md_sta != MD_STATE_READY)
+ 		return;
+ 
+-	if (!ctlb->ccmni_inst[0]) {
++	if (!READ_ONCE(ctlb->ccmni_inst[0])) {
+ 		dev_warn(&t7xx_dev->pdev->dev, "No netdev registered yet\n");
+ 		return;
+ 	}
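
The t7xx hunks above wrap every access to the shared ccmni_inst[] slots in
WRITE_ONCE()/READ_ONCE(), so the netdev pointer is published and retracted
atomically with respect to the lockless receive and notify paths. The idiom,
using the field names from the hunks:

    /* writer: publish on newlink, retract before unregister on dellink */
    WRITE_ONCE(ctlb->ccmni_inst[if_id], ccmni);
    WRITE_ONCE(ctlb->ccmni_inst[if_id], NULL);

    /* reader: snapshot once, then only test the snapshot */
    struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[netif_id]);
    if (!ccmni)
            return;
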
+diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
+index 2b9e6cfaf2a80a..1a0058be582104 100644
+--- a/drivers/nvme/host/constants.c
++++ b/drivers/nvme/host/constants.c
+@@ -145,7 +145,7 @@ static const char * const nvme_statuses[] = {
+ 	[NVME_SC_BAD_ATTRIBUTES] = "Conflicting Attributes",
+ 	[NVME_SC_INVALID_PI] = "Invalid Protection Information",
+ 	[NVME_SC_READ_ONLY] = "Attempted Write to Read Only Range",
+-	[NVME_SC_ONCS_NOT_SUPPORTED] = "ONCS Not Supported",
++	[NVME_SC_CMD_SIZE_LIM_EXCEEDED] = "Command Size Limits Exceeded",
+ 	[NVME_SC_ZONE_BOUNDARY_ERROR] = "Zoned Boundary Error",
+ 	[NVME_SC_ZONE_FULL] = "Zone Is Full",
+ 	[NVME_SC_ZONE_READ_ONLY] = "Zone Is Read Only",
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6b04473c0ab73c..93a8119ad5ca66 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -286,7 +286,6 @@ static blk_status_t nvme_error_status(u16 status)
+ 	case NVME_SC_NS_NOT_READY:
+ 		return BLK_STS_TARGET;
+ 	case NVME_SC_BAD_ATTRIBUTES:
+-	case NVME_SC_ONCS_NOT_SUPPORTED:
+ 	case NVME_SC_INVALID_OPCODE:
+ 	case NVME_SC_INVALID_FIELD:
+ 	case NVME_SC_INVALID_NS:
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index ca86d3bf7ea49d..f29107d95ff26d 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -521,7 +521,7 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ 	if (d.data_len) {
+ 		ret = nvme_map_user_request(req, d.addr, d.data_len,
+ 			nvme_to_user_ptr(d.metadata), d.metadata_len,
+-			map_iter, vec);
++			map_iter, vec ? NVME_IOCTL_VEC : 0);
+ 		if (ret)
+ 			goto out_free_req;
+ 	}
+diff --git a/drivers/nvme/host/pr.c b/drivers/nvme/host/pr.c
+index cf2d2c5039ddbf..ca6a74607b1397 100644
+--- a/drivers/nvme/host/pr.c
++++ b/drivers/nvme/host/pr.c
+@@ -82,8 +82,6 @@ static int nvme_status_to_pr_err(int status)
+ 		return PR_STS_SUCCESS;
+ 	case NVME_SC_RESERVATION_CONFLICT:
+ 		return PR_STS_RESERVATION_CONFLICT;
+-	case NVME_SC_ONCS_NOT_SUPPORTED:
+-		return -EOPNOTSUPP;
+ 	case NVME_SC_BAD_ATTRIBUTES:
+ 	case NVME_SC_INVALID_OPCODE:
+ 	case NVME_SC_INVALID_FIELD:
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 245475c43127fb..69b1ddff6731fc 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -62,14 +62,7 @@ inline u16 errno_to_nvme_status(struct nvmet_req *req, int errno)
+ 		return  NVME_SC_LBA_RANGE | NVME_STATUS_DNR;
+ 	case -EOPNOTSUPP:
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		switch (req->cmd->common.opcode) {
+-		case nvme_cmd_dsm:
+-		case nvme_cmd_write_zeroes:
+-			return NVME_SC_ONCS_NOT_SUPPORTED | NVME_STATUS_DNR;
+-		default:
+-			return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+-		}
+-		break;
++		return NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 	case -ENODATA:
+ 		req->error_loc = offsetof(struct nvme_rw_command, nsid);
+ 		return NVME_SC_ACCESS_DENIED;
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 641201e62c1baf..20becea1ad9683 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -618,12 +618,13 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ {
+ 	struct fcloop_fcpreq *tfcp_req =
+ 		container_of(work, struct fcloop_fcpreq, fcp_rcv_work);
+-	struct nvmefc_fcp_req *fcpreq = tfcp_req->fcpreq;
++	struct nvmefc_fcp_req *fcpreq;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 	bool aborted = false;
+ 
+ 	spin_lock_irqsave(&tfcp_req->reqlock, flags);
++	fcpreq = tfcp_req->fcpreq;
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_START:
+ 		tfcp_req->inistate = INI_IO_ACTIVE;
+@@ -638,16 +639,19 @@ fcloop_fcp_recv_work(struct work_struct *work)
+ 	}
+ 	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+ 
+-	if (unlikely(aborted))
+-		ret = -ECANCELED;
+-	else {
+-		if (likely(!check_for_drop(tfcp_req)))
+-			ret = nvmet_fc_rcv_fcp_req(tfcp_req->tport->targetport,
+-				&tfcp_req->tgt_fcp_req,
+-				fcpreq->cmdaddr, fcpreq->cmdlen);
+-		else
+-			pr_info("%s: dropped command ********\n", __func__);
++	if (unlikely(aborted)) {
++		/* the abort handler will call fcloop_call_host_done */
++		return;
++	}
++
++	if (unlikely(check_for_drop(tfcp_req))) {
++		pr_info("%s: dropped command ********\n", __func__);
++		return;
+ 	}
++
++	ret = nvmet_fc_rcv_fcp_req(tfcp_req->tport->targetport,
++				   &tfcp_req->tgt_fcp_req,
++				   fcpreq->cmdaddr, fcpreq->cmdlen);
+ 	if (ret)
+ 		fcloop_call_host_done(fcpreq, tfcp_req, ret);
+ }
+@@ -662,9 +666,10 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+-	fcpreq = tfcp_req->fcpreq;
+ 	switch (tfcp_req->inistate) {
+ 	case INI_IO_ABORTED:
++		fcpreq = tfcp_req->fcpreq;
++		tfcp_req->fcpreq = NULL;
+ 		break;
+ 	case INI_IO_COMPLETED:
+ 		completed = true;
+@@ -686,10 +691,6 @@ fcloop_fcp_abort_recv_work(struct work_struct *work)
+ 		nvmet_fc_rcv_fcp_abort(tfcp_req->tport->targetport,
+ 					&tfcp_req->tgt_fcp_req);
+ 
+-	spin_lock_irqsave(&tfcp_req->reqlock, flags);
+-	tfcp_req->fcpreq = NULL;
+-	spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
+-
+ 	fcloop_call_host_done(fcpreq, tfcp_req, -ECANCELED);
+ 	/* call_host_done releases reference for abort downcall */
+ }
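
Both fcloop hunks move the read of tfcp_req->fcpreq under tfcp_req->reqlock,
because the abort path can clear the field concurrently; each work function
now snapshots the pointer while holding the lock and acts on the snapshot
afterwards:

    spin_lock_irqsave(&tfcp_req->reqlock, flags);
    fcpreq = tfcp_req->fcpreq;          /* snapshot under the lock */
    spin_unlock_irqrestore(&tfcp_req->reqlock, flags);
    /* fcpreq stays usable here even if the abort path NULLs the field */
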
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 83be0657e6df4e..1cfa13d029bfa2 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -145,15 +145,8 @@ u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts)
+ 		req->error_loc = offsetof(struct nvme_rw_command, slba);
+ 		break;
+ 	case BLK_STS_NOTSUPP:
++		status = NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+ 		req->error_loc = offsetof(struct nvme_common_command, opcode);
+-		switch (req->cmd->common.opcode) {
+-		case nvme_cmd_dsm:
+-		case nvme_cmd_write_zeroes:
+-			status = NVME_SC_ONCS_NOT_SUPPORTED | NVME_STATUS_DNR;
+-			break;
+-		default:
+-			status = NVME_SC_INVALID_OPCODE | NVME_STATUS_DNR;
+-		}
+ 		break;
+ 	case BLK_STS_MEDIUM:
+ 		status = NVME_SC_ACCESS_DENIED;
+diff --git a/drivers/nvmem/zynqmp_nvmem.c b/drivers/nvmem/zynqmp_nvmem.c
+index 8682adaacd692d..7da717d6c7faf3 100644
+--- a/drivers/nvmem/zynqmp_nvmem.c
++++ b/drivers/nvmem/zynqmp_nvmem.c
+@@ -213,6 +213,7 @@ static int zynqmp_nvmem_probe(struct platform_device *pdev)
+ 	econfig.word_size = 1;
+ 	econfig.size = ZYNQMP_NVMEM_SIZE;
+ 	econfig.dev = dev;
++	econfig.priv = dev;
+ 	econfig.add_legacy_fixed_of_cells = true;
+ 	econfig.reg_read = zynqmp_nvmem_read;
+ 	econfig.reg_write = zynqmp_nvmem_write;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 64d301893af7b8..eeb370e0f50777 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -2029,15 +2029,16 @@ static int __init unittest_data_add(void)
+ 	rc = of_resolve_phandles(unittest_data_node);
+ 	if (rc) {
+ 		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
+-		of_overlay_mutex_unlock();
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto unlock;
+ 	}
+ 
+ 	/* attach the sub-tree to live tree */
+ 	if (!of_root) {
+ 		pr_warn("%s: no live tree to attach sub-tree\n", __func__);
+ 		kfree(unittest_data);
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto unlock;
+ 	}
+ 
+ 	EXPECT_BEGIN(KERN_INFO,
+@@ -2056,9 +2057,10 @@ static int __init unittest_data_add(void)
+ 	EXPECT_END(KERN_INFO,
+ 		   "Duplicate name in testcase-data, renamed to \"duplicate-name#1\"");
+ 
++unlock:
+ 	of_overlay_mutex_unlock();
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ #ifdef CONFIG_OF_OVERLAY
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 8af95e9da7cec6..741e10a575ec75 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -570,14 +570,5 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
+ 	if (!bridge->ops)
+ 		bridge->ops = &cdns_pcie_host_ops;
+ 
+-	ret = pci_host_probe(bridge);
+-	if (ret < 0)
+-		goto err_init;
+-
+-	return 0;
+-
+- err_init:
+-	pm_runtime_put_sync(dev);
+-
+-	return ret;
++	return pci_host_probe(bridge);
+ }
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 5f267dd261b51e..ea5c06371171ff 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -129,6 +129,11 @@ struct imx_pcie_drvdata {
+ 	const struct dw_pcie_host_ops *ops;
+ };
+ 
++struct imx_lut_data {
++	u32 data1;
++	u32 data2;
++};
++
+ struct imx_pcie {
+ 	struct dw_pcie		*pci;
+ 	struct gpio_desc	*reset_gpiod;
+@@ -148,6 +153,8 @@ struct imx_pcie {
+ 	struct regulator	*vph;
+ 	void __iomem		*phy_base;
+ 
++	/* LUT data for pcie */
++	struct imx_lut_data	luts[IMX95_MAX_LUT];
+ 	/* power domain for pcie */
+ 	struct device		*pd_pcie;
+ 	/* power domain for pcie phy */
+@@ -1386,6 +1393,42 @@ static void imx_pcie_msi_save_restore(struct imx_pcie *imx_pcie, bool save)
+ 	}
+ }
+ 
++static void imx_pcie_lut_save(struct imx_pcie *imx_pcie)
++{
++	u32 data1, data2;
++	int i;
++
++	for (i = 0; i < IMX95_MAX_LUT; i++) {
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL,
++			     IMX95_PEO_LUT_RWA | i);
++		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, &data1);
++		regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2);
++		if (data1 & IMX95_PE0_LUT_VLD) {
++			imx_pcie->luts[i].data1 = data1;
++			imx_pcie->luts[i].data2 = data2;
++		} else {
++			imx_pcie->luts[i].data1 = 0;
++			imx_pcie->luts[i].data2 = 0;
++		}
++	}
++}
++
++static void imx_pcie_lut_restore(struct imx_pcie *imx_pcie)
++{
++	int i;
++
++	for (i = 0; i < IMX95_MAX_LUT; i++) {
++		if ((imx_pcie->luts[i].data1 & IMX95_PE0_LUT_VLD) == 0)
++			continue;
++
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1,
++			     imx_pcie->luts[i].data1);
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2,
++			     imx_pcie->luts[i].data2);
++		regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL, i);
++	}
++}
++
+ static int imx_pcie_suspend_noirq(struct device *dev)
+ {
+ 	struct imx_pcie *imx_pcie = dev_get_drvdata(dev);
+@@ -1394,6 +1437,8 @@ static int imx_pcie_suspend_noirq(struct device *dev)
+ 		return 0;
+ 
+ 	imx_pcie_msi_save_restore(imx_pcie, true);
++	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT))
++		imx_pcie_lut_save(imx_pcie);
+ 	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
+ 		/*
+ 		 * The minimum for a workaround would be to set PERST# and to
+@@ -1438,6 +1483,8 @@ static int imx_pcie_resume_noirq(struct device *dev)
+ 		if (ret)
+ 			return ret;
+ 	}
++	if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT))
++		imx_pcie_lut_restore(imx_pcie);
+ 	imx_pcie_msi_save_restore(imx_pcie, false);
+ 
+ 	return 0;
+diff --git a/drivers/pci/controller/dwc/pcie-rcar-gen4.c b/drivers/pci/controller/dwc/pcie-rcar-gen4.c
+index fc872dd35029c0..02638ec442e701 100644
+--- a/drivers/pci/controller/dwc/pcie-rcar-gen4.c
++++ b/drivers/pci/controller/dwc/pcie-rcar-gen4.c
+@@ -403,6 +403,7 @@ static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
+ 	.msix_capable = false,
+ 	.bar[BAR_1] = { .type = BAR_RESERVED, },
+ 	.bar[BAR_3] = { .type = BAR_RESERVED, },
++	.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 },
+ 	.bar[BAR_5] = { .type = BAR_RESERVED, },
+ 	.align = SZ_1M,
+ };
+diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
+index 18e11b9a7f4647..3d778d8b018756 100644
+--- a/drivers/pci/controller/pcie-apple.c
++++ b/drivers/pci/controller/pcie-apple.c
+@@ -540,7 +540,7 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
+ 	rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK);
+ 
+ 	/* Assert PERST# before setting up the clock */
+-	gpiod_set_value(reset, 1);
++	gpiod_set_value_cansleep(reset, 1);
+ 
+ 	ret = apple_pcie_setup_refclk(pcie, port);
+ 	if (ret < 0)
+@@ -551,7 +551,7 @@ static int apple_pcie_setup_port(struct apple_pcie *pcie,
+ 
+ 	/* Deassert PERST# */
+ 	rmw_set(PORT_PERST_OFF, port->base + PORT_PERST);
+-	gpiod_set_value(reset, 0);
++	gpiod_set_value_cansleep(reset, 0);
+ 
+ 	/* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
+ 	msleep(100);
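
The switch to gpiod_set_value_cansleep() matters when PERST# is wired through
a GPIO controller that can sleep, such as an I2C or SPI expander:
gpiod_set_value() is only valid against non-sleeping chips and warns
otherwise, while this code plainly runs in process context (note the
msleep() directly above). The rule in sketch form:

    gpiod_set_value(reset, 1);          /* atomic-safe; WARNs on sleeping chips */
    gpiod_set_value_cansleep(reset, 1); /* process context; works for any chip */
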
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 14954f43e5e9af..5864a20323f21a 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -319,11 +319,12 @@ static const char * const rockchip_pci_pm_rsts[] = {
+ 	"aclk",
+ };
+ 
++/* NOTE: Do not reorder the deassert sequence of the following reset pins */
+ static const char * const rockchip_pci_core_rsts[] = {
+-	"mgmt-sticky",
+-	"core",
+-	"mgmt",
+ 	"pipe",
++	"mgmt",
++	"core",
++	"mgmt-sticky",
+ };
+ 
+ struct rockchip_pcie {
+diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
+index 394395c7f8decf..577a9e490115c9 100644
+--- a/drivers/pci/endpoint/pci-epf-core.c
++++ b/drivers/pci/endpoint/pci-epf-core.c
+@@ -236,12 +236,13 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
+ 	}
+ 
+ 	dev = epc->dev.parent;
+-	dma_free_coherent(dev, epf_bar[bar].size, addr,
++	dma_free_coherent(dev, epf_bar[bar].aligned_size, addr,
+ 			  epf_bar[bar].phys_addr);
+ 
+ 	epf_bar[bar].phys_addr = 0;
+ 	epf_bar[bar].addr = NULL;
+ 	epf_bar[bar].size = 0;
++	epf_bar[bar].aligned_size = 0;
+ 	epf_bar[bar].barno = 0;
+ 	epf_bar[bar].flags = 0;
+ }
+@@ -264,7 +265,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 			  enum pci_epc_interface_type type)
+ {
+ 	u64 bar_fixed_size = epc_features->bar[bar].fixed_size;
+-	size_t align = epc_features->align;
++	size_t aligned_size, align = epc_features->align;
+ 	struct pci_epf_bar *epf_bar;
+ 	dma_addr_t phys_addr;
+ 	struct pci_epc *epc;
+@@ -285,12 +286,18 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 			return NULL;
+ 		}
+ 		size = bar_fixed_size;
++	} else {
++		/* BAR size must be power of two */
++		/* BAR size must be a power of two */
+ 	}
+ 
+-	if (align)
+-		size = ALIGN(size, align);
+-	else
+-		size = roundup_pow_of_two(size);
++	/*
++	 * Allocate enough memory to accommodate the iATU alignment
++	 * requirement.  In most cases, this will be the same as .size but
++	 * it might be different if, for example, the fixed size of a BAR
++	 * is smaller than align.
++	 */
++	aligned_size = align ? ALIGN(size, align) : size;
+ 
+ 	if (type == PRIMARY_INTERFACE) {
+ 		epc = epf->epc;
+@@ -301,7 +308,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 	}
+ 
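
The mt76 hunk above fixes a dereference that happened before the pointer was
validated: the old initializer read mlink->mvif even when mlink could still
be NULL or an ERR_PTR(). The safe shape of the pattern, with a hypothetical
get_link() helper standing in for the caller:

    struct mt76_vif_link *mlink = get_link();   /* may be NULL or ERR_PTR() */
    struct mt76_vif_data *mvif;

    if (IS_ERR_OR_NULL(mlink) || !mlink->offchannel)
            return;

    mvif = mlink->mvif;                 /* dereference only after the check */
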
+ 	dev = epc->dev.parent;
+-	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
++	space = dma_alloc_coherent(dev, aligned_size, &phys_addr, GFP_KERNEL);
+ 	if (!space) {
+ 		dev_err(dev, "failed to allocate mem space\n");
+ 		return NULL;
+@@ -310,6 +317,7 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+ 	epf_bar[bar].phys_addr = phys_addr;
+ 	epf_bar[bar].addr = space;
+ 	epf_bar[bar].size = size;
++	epf_bar[bar].aligned_size = aligned_size;
+ 	epf_bar[bar].barno = bar;
+ 	if (upper_32_bits(size) || epc_features->bar[bar].only_64bit)
+ 		epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index d30f1316c98e2c..d7fc3bc039643c 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -492,6 +492,75 @@ void pci_hp_destroy(struct hotplug_slot *slot)
+ }
+ EXPORT_SYMBOL_GPL(pci_hp_destroy);
+ 
++static DECLARE_WAIT_QUEUE_HEAD(pci_hp_link_change_wq);
++
++/**
++ * pci_hp_ignore_link_change - begin code section causing spurious link changes
++ * @pdev: PCI hotplug bridge
++ *
++ * Mark the beginning of a code section causing spurious link changes on the
++ * Secondary Bus of @pdev, e.g. as a side effect of a Secondary Bus Reset,
++ * D3cold transition, firmware update or FPGA reconfiguration.
++ *
++ * Hotplug drivers can thus check whether such a code section is executing
++ * concurrently, await it with pci_hp_spurious_link_change() and ignore the
++ * resulting link change events.
++ *
++ * Must be paired with pci_hp_unignore_link_change().  May be called both
++ * from the PCI core and from Endpoint drivers.  May be called for bridges
++ * which are not hotplug-capable, in which case it has no effect because
++ * no hotplug driver is bound to the bridge.
++ */
++void pci_hp_ignore_link_change(struct pci_dev *pdev)
++{
++	set_bit(PCI_LINK_CHANGING, &pdev->priv_flags);
++	smp_mb__after_atomic(); /* pairs with implied barrier of wait_event() */
++}
++
++/**
++ * pci_hp_unignore_link_change - end code section causing spurious link changes
++ * @pdev: PCI hotplug bridge
++ *
++ * Mark the end of a code section causing spurious link changes on the
++ * Secondary Bus of @pdev.  Must be paired with pci_hp_ignore_link_change().
++ */
++void pci_hp_unignore_link_change(struct pci_dev *pdev)
++{
++	set_bit(PCI_LINK_CHANGED, &pdev->priv_flags);
++	mb(); /* ensure pci_hp_spurious_link_change() sees either bit set */
++	clear_bit(PCI_LINK_CHANGING, &pdev->priv_flags);
++	wake_up_all(&pci_hp_link_change_wq);
++}
++
++/**
++ * pci_hp_spurious_link_change - check for spurious link changes
++ * @pdev: PCI hotplug bridge
++ *
++ * Check whether a code section is executing concurrently which is causing
++ * spurious link changes on the Secondary Bus of @pdev.  Await the end of the
++ * code section if so.
++ *
++ * May be called by hotplug drivers to check whether a link change is spurious
++ * and can be ignored.
++ *
++ * Because a genuine link change may have occurred in-between a spurious link
++ * change and the invocation of this function, hotplug drivers should perform
++ * sanity checks such as retrieving the current link state and bringing down
++ * the slot if the link is down.
++ *
++ * Return: %true if such a code section has been executing concurrently,
++ * otherwise %false.  Also return %true if no such code section is currently
++ * executing, but one has executed at least once since the last invocation
++ * of this function.
++ */
++bool pci_hp_spurious_link_change(struct pci_dev *pdev)
++{
++	wait_event(pci_hp_link_change_wq,
++		   !test_bit(PCI_LINK_CHANGING, &pdev->priv_flags));
++
++	return test_and_clear_bit(PCI_LINK_CHANGED, &pdev->priv_flags);
++}
++
+ static int __init pci_hotplug_init(void)
+ {
+ 	int result;
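
The calling convention for the new helpers is shown further down in this
patch by pciehp_reset_slot(): a code section that knowingly bounces the link
brackets itself with the ignore/unignore pair, and the interrupt handler
consumes the resulting events via pci_hp_spurious_link_change():

    pci_hp_ignore_link_change(pdev);    /* link events are now expected */
    rc = pci_bridge_secondary_bus_reset(pdev);
    pci_hp_unignore_link_change(pdev);  /* the ISR's next
                                         * pci_hp_spurious_link_change()
                                         * call reports true */
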
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 273dd8c66f4eff..debc79b0adfb2c 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -187,6 +187,7 @@ int pciehp_card_present(struct controller *ctrl);
+ int pciehp_card_present_or_link_active(struct controller *ctrl);
+ int pciehp_check_link_status(struct controller *ctrl);
+ int pciehp_check_link_active(struct controller *ctrl);
++bool pciehp_device_replaced(struct controller *ctrl);
+ void pciehp_release_ctrl(struct controller *ctrl);
+ 
+ int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 997841c6989359..f59baa91297099 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -284,35 +284,6 @@ static int pciehp_suspend(struct pcie_device *dev)
+ 	return 0;
+ }
+ 
+-static bool pciehp_device_replaced(struct controller *ctrl)
+-{
+-	struct pci_dev *pdev __free(pci_dev_put) = NULL;
+-	u32 reg;
+-
+-	if (pci_dev_is_disconnected(ctrl->pcie->port))
+-		return false;
+-
+-	pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
+-	if (!pdev)
+-		return true;
+-
+-	if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
+-	    reg != (pdev->vendor | (pdev->device << 16)) ||
+-	    pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
+-	    reg != (pdev->revision | (pdev->class << 8)))
+-		return true;
+-
+-	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
+-	    (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
+-	     reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
+-		return true;
+-
+-	if (pci_get_dsn(pdev) != ctrl->dsn)
+-		return true;
+-
+-	return false;
+-}
+-
+ static int pciehp_resume_noirq(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl = get_service_data(dev);
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 8a09fb6083e276..ebd342bda235d4 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -563,20 +563,50 @@ void pciehp_power_off_slot(struct controller *ctrl)
+ 		 PCI_EXP_SLTCTL_PWR_OFF);
+ }
+ 
+-static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+-					  struct pci_dev *pdev, int irq)
++bool pciehp_device_replaced(struct controller *ctrl)
++{
++	struct pci_dev *pdev __free(pci_dev_put) = NULL;
++	u32 reg;
++
++	if (pci_dev_is_disconnected(ctrl->pcie->port))
++		return false;
++
++	pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
++	if (!pdev)
++		return true;
++
++	if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
++	    reg != (pdev->vendor | (pdev->device << 16)) ||
++	    pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
++	    reg != (pdev->revision | (pdev->class << 8)))
++		return true;
++
++	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
++	    (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
++	     reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
++		return true;
++
++	if (pci_get_dsn(pdev) != ctrl->dsn)
++		return true;
++
++	return false;
++}
++
++static void pciehp_ignore_link_change(struct controller *ctrl,
++				      struct pci_dev *pdev, int irq,
++				      u16 ignored_events)
+ {
+ 	/*
+ 	 * Ignore link changes which occurred while waiting for DPC recovery.
+ 	 * Could be several if DPC triggered multiple times consecutively.
++	 * Also ignore link changes caused by Secondary Bus Reset, etc.
+ 	 */
+ 	synchronize_hardirq(irq);
+-	atomic_and(~PCI_EXP_SLTSTA_DLLSC, &ctrl->pending_events);
++	atomic_and(~ignored_events, &ctrl->pending_events);
+ 	if (pciehp_poll_mode)
+ 		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+-					   PCI_EXP_SLTSTA_DLLSC);
+-	ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored (recovered by DPC)\n",
+-		  slot_name(ctrl));
++					   ignored_events);
++	ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored\n", slot_name(ctrl));
+ 
+ 	/*
+ 	 * If the link is unexpectedly down after successful recovery,
+@@ -584,8 +614,8 @@ static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
+ 	 * Synthesize it to ensure that it is acted on.
+ 	 */
+ 	down_read_nested(&ctrl->reset_lock, ctrl->depth);
+-	if (!pciehp_check_link_active(ctrl))
+-		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
++	if (!pciehp_check_link_active(ctrl) || pciehp_device_replaced(ctrl))
++		pciehp_request(ctrl, ignored_events);
+ 	up_read(&ctrl->reset_lock);
+ }
+ 
+@@ -732,12 +762,19 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 
+ 	/*
+ 	 * Ignore Link Down/Up events caused by Downstream Port Containment
+-	 * if recovery from the error succeeded.
++	 * if recovery succeeded, or caused by Secondary Bus Reset,
++	 * suspend to D3cold, firmware update, FPGA reconfiguration, etc.
+ 	 */
+-	if ((events & PCI_EXP_SLTSTA_DLLSC) && pci_dpc_recovered(pdev) &&
++	if ((events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) &&
++	    (pci_dpc_recovered(pdev) || pci_hp_spurious_link_change(pdev)) &&
+ 	    ctrl->state == ON_STATE) {
+-		events &= ~PCI_EXP_SLTSTA_DLLSC;
+-		pciehp_ignore_dpc_link_change(ctrl, pdev, irq);
++		u16 ignored_events = PCI_EXP_SLTSTA_DLLSC;
++
++		if (!ctrl->inband_presence_disabled)
++			ignored_events |= events & PCI_EXP_SLTSTA_PDC;
++
++		events &= ~ignored_events;
++		pciehp_ignore_link_change(ctrl, pdev, irq, ignored_events);
+ 	}
+ 
+ 	/*
+@@ -902,7 +939,6 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
+ {
+ 	struct controller *ctrl = to_ctrl(hotplug_slot);
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	u16 stat_mask = 0, ctrl_mask = 0;
+ 	int rc;
+ 
+ 	if (probe)
+@@ -910,23 +946,11 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
+ 
+ 	down_write_nested(&ctrl->reset_lock, ctrl->depth);
+ 
+-	if (!ATTN_BUTTN(ctrl)) {
+-		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
+-		stat_mask |= PCI_EXP_SLTSTA_PDC;
+-	}
+-	ctrl_mask |= PCI_EXP_SLTCTL_DLLSCE;
+-	stat_mask |= PCI_EXP_SLTSTA_DLLSC;
+-
+-	pcie_write_cmd(ctrl, 0, ctrl_mask);
+-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
++	pci_hp_ignore_link_change(pdev);
+ 
+ 	rc = pci_bridge_secondary_bus_reset(ctrl->pcie->port);
+ 
+-	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
+-	pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
+-	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+-		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
++	pci_hp_unignore_link_change(pdev);
+ 
+ 	up_write(&ctrl->reset_lock);
+ 	return rc;
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index af370628e58393..b78e0e41732445 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -1676,24 +1676,19 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ 		return NULL;
+ 
+ 	root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
+-	if (!root_ops) {
+-		kfree(ri);
+-		return NULL;
+-	}
++	if (!root_ops)
++		goto free_ri;
+ 
+ 	ri->cfg = pci_acpi_setup_ecam_mapping(root);
+-	if (!ri->cfg) {
+-		kfree(ri);
+-		kfree(root_ops);
+-		return NULL;
+-	}
++	if (!ri->cfg)
++		goto free_root_ops;
+ 
+ 	root_ops->release_info = pci_acpi_generic_release_info;
+ 	root_ops->prepare_resources = pci_acpi_root_prepare_resources;
+ 	root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops;
+ 	bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
+ 	if (!bus)
+-		return NULL;
++		goto free_cfg;
+ 
+ 	/* If we must preserve the resource configuration, claim now */
+ 	host = pci_find_host_bridge(bus);
+@@ -1710,6 +1705,14 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
+ 		pcie_bus_configure_settings(child);
+ 
+ 	return bus;
++
++free_cfg:
++	pci_ecam_free(ri->cfg);
++free_root_ops:
++	kfree(root_ops);
++free_ri:
++	kfree(ri);
++	return NULL;
+ }
+ 
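
The pci_acpi_scan_root() rework above converts the per-branch kfree() calls
into the kernel's usual goto-unwind ladder and, in passing, fixes the
acpi_pci_root_create() failure path, which previously returned without
freeing ri, root_ops or the ECAM mapping. The general shape of the idiom,
with hypothetical alloc_a()/alloc_b() helpers:

    a = alloc_a();
    if (!a)
            goto fail;
    b = alloc_b();
    if (!b)
            goto free_a;
    return 0;

    free_a:
            kfree(a);
    fail:
            return -ENOMEM;
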
+ void pcibios_add_bus(struct pci_bus *bus)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index e77d5b53c0cec9..4d84ed41248442 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4954,7 +4954,7 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
+ 		delay);
+ 	if (!pcie_wait_for_link_delay(dev, true, delay)) {
+ 		/* Did not train, no need to wait any further */
+-		pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
++		pci_info(dev, "Data Link Layer Link Active not set in %d msec\n", delay);
+ 		return -ENOTTY;
+ 	}
+ 
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index b81e99cd4b62a3..7db798bdcaaae6 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -227,6 +227,7 @@ static inline int pci_proc_detach_bus(struct pci_bus *bus) { return 0; }
+ 
+ /* Functions for PCI Hotplug drivers to use */
+ int pci_hp_add_bridge(struct pci_dev *dev);
++bool pci_hp_spurious_link_change(struct pci_dev *pdev);
+ 
+ #if defined(CONFIG_SYSFS) && defined(HAVE_PCI_LEGACY)
+ void pci_create_legacy_files(struct pci_bus *bus);
+@@ -557,6 +558,8 @@ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
+ #define PCI_DPC_RECOVERED 1
+ #define PCI_DPC_RECOVERING 2
+ #define PCI_DEV_REMOVED 3
++#define PCI_LINK_CHANGED 4
++#define PCI_LINK_CHANGING 5
+ 
+ static inline void pci_dev_assign_added(struct pci_dev *dev)
+ {
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index df42f15c98295f..9d85f1b3b76112 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -258,40 +258,48 @@ static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
+ void dpc_process_error(struct pci_dev *pdev)
+ {
+ 	u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
+-	struct aer_err_info info;
++	struct aer_err_info info = {};
+ 
+ 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
+-	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
+-
+-	pci_info(pdev, "containment event, status:%#06x source:%#06x\n",
+-		 status, source);
+ 
+ 	reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN;
+-	ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT;
+-	pci_warn(pdev, "%s detected\n",
+-		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR) ?
+-		 "unmasked uncorrectable error" :
+-		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE) ?
+-		 "ERR_NONFATAL" :
+-		 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ?
+-		 "ERR_FATAL" :
+-		 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ?
+-		 "RP PIO error" :
+-		 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ?
+-		 "software trigger" :
+-		 "reserved error");
+-
+-	/* show RP PIO error detail information */
+-	if (pdev->dpc_rp_extensions &&
+-	    reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT &&
+-	    ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO)
+-		dpc_process_rp_pio_error(pdev);
+-	else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR &&
+-		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
+-		 aer_get_device_error_info(pdev, &info)) {
+-		aer_print_error(pdev, &info);
+-		pci_aer_clear_nonfatal_status(pdev);
+-		pci_aer_clear_fatal_status(pdev);
++
++	switch (reason) {
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR:
++		pci_warn(pdev, "containment event, status:%#06x: unmasked uncorrectable error detected\n",
++			 status);
++		if (dpc_get_aer_uncorrect_severity(pdev, &info) &&
++		    aer_get_device_error_info(pdev, &info)) {
++			aer_print_error(pdev, &info);
++			pci_aer_clear_nonfatal_status(pdev);
++			pci_aer_clear_fatal_status(pdev);
++		}
++		break;
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE:
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE:
++		pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID,
++				     &source);
++		pci_warn(pdev, "containment event, status:%#06x, %s received from %04x:%02x:%02x.%d\n",
++			 status,
++			 (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ?
++				"ERR_FATAL" : "ERR_NONFATAL",
++			 pci_domain_nr(pdev->bus), PCI_BUS_NUM(source),
++			 PCI_SLOT(source), PCI_FUNC(source));
++		break;
++	case PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT:
++		ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT;
++		pci_warn(pdev, "containment event, status:%#06x: %s detected\n",
++			 status,
++			 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ?
++			 "RP PIO error" :
++			 (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ?
++			 "software trigger" :
++			 "reserved error");
++		/* show detailed RP PIO error information */
++		if (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO &&
++		    pdev->dpc_rp_extensions)
++			dpc_process_rp_pio_error(pdev);
++		break;
+ 	}
+ }
+ 
+diff --git a/drivers/pci/pwrctrl/core.c b/drivers/pci/pwrctrl/core.c
+index 9cc7e2b7f2b560..6bdbfed584d6d7 100644
+--- a/drivers/pci/pwrctrl/core.c
++++ b/drivers/pci/pwrctrl/core.c
+@@ -101,6 +101,8 @@ EXPORT_SYMBOL_GPL(pci_pwrctrl_device_set_ready);
+  */
+ void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl)
+ {
++	cancel_work_sync(&pwrctrl->work);
++
+ 	/*
+ 	 * We don't have to delete the link here. Typically, this function
+ 	 * is only called when the power control device is being detached. If
+diff --git a/drivers/perf/amlogic/meson_ddr_pmu_core.c b/drivers/perf/amlogic/meson_ddr_pmu_core.c
+index 07446d784a1a64..c1e755c356a333 100644
+--- a/drivers/perf/amlogic/meson_ddr_pmu_core.c
++++ b/drivers/perf/amlogic/meson_ddr_pmu_core.c
+@@ -511,7 +511,7 @@ int meson_ddr_pmu_create(struct platform_device *pdev)
+ 
+ 	fmt_attr_fill(pmu->info.hw_info->fmt_attr);
+ 
+-	pmu->cpu = smp_processor_id();
++	pmu->cpu = raw_smp_processor_id();
+ 
+ 	name = devm_kasprintf(&pdev->dev, GFP_KERNEL, DDR_PERF_DEV_NAME);
+ 	if (!name)
+diff --git a/drivers/perf/arm-ni.c b/drivers/perf/arm-ni.c
+index fd7a5e60e96302..de7b6cce4d68a8 100644
+--- a/drivers/perf/arm-ni.c
++++ b/drivers/perf/arm-ni.c
+@@ -575,6 +575,23 @@ static int arm_ni_init_cd(struct arm_ni *ni, struct arm_ni_node *node, u64 res_s
+ 	return err;
+ }
+ 
++static void arm_ni_remove(struct platform_device *pdev)
++{
++	struct arm_ni *ni = platform_get_drvdata(pdev);
++
++	for (int i = 0; i < ni->num_cds; i++) {
++		struct arm_ni_cd *cd = ni->cds + i;
++
++		if (!cd->pmu_base)
++			continue;
++
++		writel_relaxed(0, cd->pmu_base + NI_PMCR);
++		writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENCLR);
++		perf_pmu_unregister(&cd->pmu);
++		cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);
++	}
++}
++
+ static void arm_ni_probe_domain(void __iomem *base, struct arm_ni_node *node)
+ {
+ 	u32 reg = readl_relaxed(base + NI_NODE_TYPE);
+@@ -643,6 +660,7 @@ static int arm_ni_probe(struct platform_device *pdev)
+ 	ni->num_cds = num_cds;
+ 	ni->part = part;
+ 	ni->id = atomic_fetch_inc(&id);
++	platform_set_drvdata(pdev, ni);
+ 
+ 	for (int v = 0; v < cfg.num_components; v++) {
+ 		reg = readl_relaxed(cfg.base + NI_CHILD_PTR(v));
+@@ -656,8 +674,11 @@ static int arm_ni_probe(struct platform_device *pdev)
+ 				reg = readl_relaxed(pd.base + NI_CHILD_PTR(c));
+ 				arm_ni_probe_domain(base + reg, &cd);
+ 				ret = arm_ni_init_cd(ni, &cd, res->start);
+-				if (ret)
++				if (ret) {
++					ni->cds[cd.id].pmu_base = NULL;
++					arm_ni_remove(pdev);
+ 					return ret;
++				}
+ 			}
+ 		}
+ 	}
+@@ -665,23 +686,6 @@ static int arm_ni_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static void arm_ni_remove(struct platform_device *pdev)
+-{
+-	struct arm_ni *ni = platform_get_drvdata(pdev);
+-
+-	for (int i = 0; i < ni->num_cds; i++) {
+-		struct arm_ni_cd *cd = ni->cds + i;
+-
+-		if (!cd->pmu_base)
+-			continue;
+-
+-		writel_relaxed(0, cd->pmu_base + NI_PMCR);
+-		writel_relaxed(U32_MAX, cd->pmu_base + NI_PMINTENCLR);
+-		perf_pmu_unregister(&cd->pmu);
+-		cpuhp_state_remove_instance_nocalls(arm_ni_hp_state, &cd->cpuhp_node);
+-	}
+-}
+-
+ #ifdef CONFIG_OF
+ static const struct of_device_id arm_ni_of_match[] = {
+ 	{ .compatible = "arm,ni-700" },
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index 78772157045752..ed646a7e705ba3 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -2106,12 +2106,16 @@ static void __iomem *qmp_usb_iomap(struct device *dev, struct device_node *np,
+ 					int index, bool exclusive)
+ {
+ 	struct resource res;
++	void __iomem *mem;
+ 
+ 	if (!exclusive) {
+ 		if (of_address_to_resource(np, index, &res))
+ 			return IOMEM_ERR_PTR(-EINVAL);
+ 
+-		return devm_ioremap(dev, res.start, resource_size(&res));
++		mem = devm_ioremap(dev, res.start, resource_size(&res));
++		if (!mem)
++			return IOMEM_ERR_PTR(-ENOMEM);
++		return mem;
+ 	}
+ 
+ 	return devm_of_iomap(dev, np, index, NULL);
+diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+index 1f5f7df14d5a2f..49c37c53b38e70 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
+@@ -151,21 +151,6 @@ static const struct qusb2_phy_init_tbl ipq6018_init_tbl[] = {
+ 	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_AUTOPGM_CTL1, 0x9F),
+ };
+ 
+-static const struct qusb2_phy_init_tbl ipq5424_init_tbl[] = {
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL, 0x14),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE1, 0x00),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE2, 0x53),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE4, 0xc3),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_TUNE, 0x30),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL1, 0x79),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_USER_CTL2, 0x21),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE5, 0x00),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_PWR_CTRL, 0x00),
+-	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TEST2, 0x14),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_TEST, 0x80),
+-	QUSB2_PHY_INIT_CFG(QUSB2PHY_PLL_AUTOPGM_CTL1, 0x9f),
+-};
+-
+ static const struct qusb2_phy_init_tbl qcs615_init_tbl[] = {
+ 	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE1, 0xc8),
+ 	QUSB2_PHY_INIT_CFG_L(QUSB2PHY_PORT_TUNE2, 0xb3),
+@@ -359,16 +344,6 @@ static const struct qusb2_phy_cfg ipq6018_phy_cfg = {
+ 	.autoresume_en   = BIT(0),
+ };
+ 
+-static const struct qusb2_phy_cfg ipq5424_phy_cfg = {
+-	.tbl            = ipq5424_init_tbl,
+-	.tbl_num        = ARRAY_SIZE(ipq5424_init_tbl),
+-	.regs           = ipq6018_regs_layout,
+-
+-	.disable_ctrl   = POWER_DOWN,
+-	.mask_core_ready = PLL_LOCKED,
+-	.autoresume_en   = BIT(0),
+-};
+-
+ static const struct qusb2_phy_cfg qcs615_phy_cfg = {
+ 	.tbl            = qcs615_init_tbl,
+ 	.tbl_num        = ARRAY_SIZE(qcs615_init_tbl),
+@@ -955,7 +930,7 @@ static const struct phy_ops qusb2_phy_gen_ops = {
+ static const struct of_device_id qusb2_phy_of_match_table[] = {
+ 	{
+ 		.compatible	= "qcom,ipq5424-qusb2-phy",
+-		.data		= &ipq5424_phy_cfg,
++		.data		= &ipq6018_phy_cfg,
+ 	}, {
+ 		.compatible	= "qcom,ipq6018-qusb2-phy",
+ 		.data		= &ipq6018_phy_cfg,
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 77236f012a1f75..61db514ce5cfb5 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -320,6 +320,7 @@
+ #define LN3_TX_SER_RATE_SEL_HBR2_MASK	BIT(3)
+ #define LN3_TX_SER_RATE_SEL_HBR3_MASK	BIT(2)
+ 
++#define HDMI14_MAX_RATE			340000000
+ #define HDMI20_MAX_RATE			600000000
+ 
+ enum dp_link_rate {
+@@ -1007,9 +1008,7 @@ static int rk_hdptx_ropll_tmds_cmn_config(struct rk_hdptx_phy *hdptx,
+ {
+ 	const struct ropll_config *cfg = NULL;
+ 	struct ropll_config rc = {0};
+-	int i;
+-
+-	hdptx->rate = rate * 100;
++	int ret, i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(ropll_tmds_cfg); i++)
+ 		if (rate == ropll_tmds_cfg[i].bit_rate) {
+@@ -1064,7 +1063,11 @@ static int rk_hdptx_ropll_tmds_cmn_config(struct rk_hdptx_phy *hdptx,
+ 	regmap_update_bits(hdptx->regmap, CMN_REG(0086), PLL_PCG_CLK_EN_MASK,
+ 			   FIELD_PREP(PLL_PCG_CLK_EN_MASK, 0x1));
+ 
+-	return rk_hdptx_post_enable_pll(hdptx);
++	ret = rk_hdptx_post_enable_pll(hdptx);
++	if (!ret)
++		hdptx->rate = rate * 100;
++
++	return ret;
+ }
+ 
+ static int rk_hdptx_ropll_tmds_mode_config(struct rk_hdptx_phy *hdptx,
+@@ -1074,7 +1077,7 @@ static int rk_hdptx_ropll_tmds_mode_config(struct rk_hdptx_phy *hdptx,
+ 
+ 	regmap_write(hdptx->regmap, LNTOP_REG(0200), 0x06);
+ 
+-	if (rate >= 3400000) {
++	if (rate > HDMI14_MAX_RATE / 100) {
+ 		/* For 1/40 bitrate clk */
+ 		rk_hdptx_multi_reg_write(hdptx, rk_hdtpx_tmds_lntop_highbr_seq);
+ 	} else {
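
The two hunks above share one idea: hdptx->rate is cached only after rk_hdptx_post_enable_pll() succeeds, so a failed configuration cannot leave stale state behind. A self-contained sketch of that commit-on-success ordering, with hypothetical names standing in for the driver's:

	struct pll_state {
		unsigned long rate;	/* last rate actually programmed */
	};

	/* Stand-in for the real PLL programming; may fail. */
	static int hw_apply_rate(struct pll_state *s, unsigned long rate)
	{
		return 0;
	}

	static int set_rate_committed(struct pll_state *s, unsigned long rate)
	{
		int ret = hw_apply_rate(s, rate);

		if (!ret)
			s->rate = rate;	/* commit cached state only on success */
		return ret;
	}
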
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c
+index c516c34aaaf603..e235a98ae7ee58 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.c
++++ b/drivers/pinctrl/mediatek/mtk-eint.c
+@@ -506,7 +506,7 @@ EXPORT_SYMBOL_GPL(mtk_eint_find_irq);
+ 
+ int mtk_eint_do_init(struct mtk_eint *eint, struct mtk_eint_pin *eint_pin)
+ {
+-	unsigned int size, i, port, inst = 0;
++	unsigned int size, i, port, virq, inst = 0;
+ 
+ 	/* If clients don't assign a specific regs, let's use generic one */
+ 	if (!eint->regs)
+@@ -580,7 +580,7 @@ int mtk_eint_do_init(struct mtk_eint *eint, struct mtk_eint_pin *eint_pin)
+ 		if (inst >= eint->nbase)
+ 			continue;
+ 		eint->pin_list[inst][eint->pins[i].index] = i;
+-		int virq = irq_create_mapping(eint->domain, i);
++		virq = irq_create_mapping(eint->domain, i);
+ 		irq_set_chip_and_handler(virq, &mtk_eint_irq_chip,
+ 					 handle_level_irq);
+ 		irq_set_chip_data(virq, eint);
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.h b/drivers/pinctrl/mediatek/mtk-eint.h
+index 23801d4b636f62..fc31a4c0c77bf2 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.h
++++ b/drivers/pinctrl/mediatek/mtk-eint.h
+@@ -66,7 +66,7 @@ struct mtk_eint_xt {
+ struct mtk_eint {
+ 	struct device *dev;
+ 	void __iomem **base;
+-	u8 nbase;
++	int nbase;
+ 	u16 *base_pin_num;
+ 	struct irq_domain *domain;
+ 	int irq;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index ba13558bfcd7bb..4918d38abfc29d 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -381,10 +381,13 @@ int mtk_build_eint(struct mtk_pinctrl *hw, struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	count_reg_names = of_property_count_strings(np, "reg-names");
+-	if (count_reg_names < hw->soc->nbase_names)
++	if (count_reg_names < 0)
++		return -EINVAL;
++
++	hw->eint->nbase = count_reg_names - (int)hw->soc->nbase_names;
++	if (hw->eint->nbase <= 0)
+ 		return -EINVAL;
+ 
+-	hw->eint->nbase = count_reg_names - hw->soc->nbase_names;
+ 	hw->eint->base = devm_kmalloc_array(&pdev->dev, hw->eint->nbase,
+ 					    sizeof(*hw->eint->base), GFP_KERNEL | __GFP_ZERO);
+ 	if (!hw->eint->base) {
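
The mtk-eint/common-v2 pair above stops trusting of_property_count_strings() blindly: it can return a negative errno, and the old code subtracted it from an unsigned count stored in a u8, hiding the error. A sketch of the validated arithmetic (helper name hypothetical):

	#include <linux/of.h>

	static int count_extra_reg_names(struct device_node *np,
					 unsigned int reserved)
	{
		int n = of_property_count_strings(np, "reg-names");

		if (n < 0)
			return -EINVAL;	/* property missing or malformed */

		n -= (int)reserved;
		if (n <= 0)
			return -EINVAL;	/* no names beyond the fixed set */
		return n;
	}
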
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 93ab277d9943cf..fbe74e4ef320c1 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1819,12 +1819,16 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 	struct at91_gpio_chip *at91_chip = NULL;
+ 	struct gpio_chip *chip;
+ 	struct pinctrl_gpio_range *range;
++	int alias_idx;
+ 	int ret = 0;
+ 	int irq, i;
+-	int alias_idx = of_alias_get_id(np, "gpio");
+ 	uint32_t ngpio;
+ 	char **names;
+ 
++	alias_idx = of_alias_get_id(np, "gpio");
++	if (alias_idx < 0)
++		return alias_idx;
++
+ 	BUG_ON(alias_idx >= ARRAY_SIZE(gpio_chips));
+ 	if (gpio_chips[alias_idx])
+ 		return dev_err_probe(dev, -EBUSY, "%d slot is occupied.\n", alias_idx);
+diff --git a/drivers/pinctrl/qcom/pinctrl-qcm2290.c b/drivers/pinctrl/qcom/pinctrl-qcm2290.c
+index ba699eac9ee8b2..20e9bccda4cd6d 100644
+--- a/drivers/pinctrl/qcom/pinctrl-qcm2290.c
++++ b/drivers/pinctrl/qcom/pinctrl-qcm2290.c
+@@ -165,6 +165,10 @@ static const struct pinctrl_pin_desc qcm2290_pins[] = {
+ 	PINCTRL_PIN(62, "GPIO_62"),
+ 	PINCTRL_PIN(63, "GPIO_63"),
+ 	PINCTRL_PIN(64, "GPIO_64"),
++	PINCTRL_PIN(65, "GPIO_65"),
++	PINCTRL_PIN(66, "GPIO_66"),
++	PINCTRL_PIN(67, "GPIO_67"),
++	PINCTRL_PIN(68, "GPIO_68"),
+ 	PINCTRL_PIN(69, "GPIO_69"),
+ 	PINCTRL_PIN(70, "GPIO_70"),
+ 	PINCTRL_PIN(71, "GPIO_71"),
+@@ -179,12 +183,17 @@ static const struct pinctrl_pin_desc qcm2290_pins[] = {
+ 	PINCTRL_PIN(80, "GPIO_80"),
+ 	PINCTRL_PIN(81, "GPIO_81"),
+ 	PINCTRL_PIN(82, "GPIO_82"),
++	PINCTRL_PIN(83, "GPIO_83"),
++	PINCTRL_PIN(84, "GPIO_84"),
++	PINCTRL_PIN(85, "GPIO_85"),
+ 	PINCTRL_PIN(86, "GPIO_86"),
+ 	PINCTRL_PIN(87, "GPIO_87"),
+ 	PINCTRL_PIN(88, "GPIO_88"),
+ 	PINCTRL_PIN(89, "GPIO_89"),
+ 	PINCTRL_PIN(90, "GPIO_90"),
+ 	PINCTRL_PIN(91, "GPIO_91"),
++	PINCTRL_PIN(92, "GPIO_92"),
++	PINCTRL_PIN(93, "GPIO_93"),
+ 	PINCTRL_PIN(94, "GPIO_94"),
+ 	PINCTRL_PIN(95, "GPIO_95"),
+ 	PINCTRL_PIN(96, "GPIO_96"),
+diff --git a/drivers/pinctrl/qcom/pinctrl-qcs615.c b/drivers/pinctrl/qcom/pinctrl-qcs615.c
+index 23015b055f6a92..17ca743c2210fc 100644
+--- a/drivers/pinctrl/qcom/pinctrl-qcs615.c
++++ b/drivers/pinctrl/qcom/pinctrl-qcs615.c
+@@ -1062,7 +1062,7 @@ static const struct msm_pinctrl_soc_data qcs615_tlmm = {
+ 	.nfunctions = ARRAY_SIZE(qcs615_functions),
+ 	.groups = qcs615_groups,
+ 	.ngroups = ARRAY_SIZE(qcs615_groups),
+-	.ngpios = 123,
++	.ngpios = 124,
+ 	.tiles = qcs615_tiles,
+ 	.ntiles = ARRAY_SIZE(qcs615_tiles),
+ 	.wakeirq_map = qcs615_pdc_map,
+diff --git a/drivers/pinctrl/qcom/pinctrl-qcs8300.c b/drivers/pinctrl/qcom/pinctrl-qcs8300.c
+index ba6de944a859a0..5f5f7c4ac644c4 100644
+--- a/drivers/pinctrl/qcom/pinctrl-qcs8300.c
++++ b/drivers/pinctrl/qcom/pinctrl-qcs8300.c
+@@ -1204,7 +1204,7 @@ static const struct msm_pinctrl_soc_data qcs8300_pinctrl = {
+ 	.nfunctions = ARRAY_SIZE(qcs8300_functions),
+ 	.groups = qcs8300_groups,
+ 	.ngroups = ARRAY_SIZE(qcs8300_groups),
+-	.ngpios = 133,
++	.ngpios = 134,
+ 	.wakeirq_map = qcs8300_pdc_map,
+ 	.nwakeirq_map = ARRAY_SIZE(qcs8300_pdc_map),
+ 	.egpio_func = 11,
+diff --git a/drivers/pinctrl/qcom/tlmm-test.c b/drivers/pinctrl/qcom/tlmm-test.c
+index fd02bf3a76cbcc..7b99e89e0f6703 100644
+--- a/drivers/pinctrl/qcom/tlmm-test.c
++++ b/drivers/pinctrl/qcom/tlmm-test.c
+@@ -547,6 +547,7 @@ static int tlmm_test_init(struct kunit *test)
+ 	struct tlmm_test_priv *priv;
+ 
+ 	priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);
++	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv);
+ 
+ 	atomic_set(&priv->intr_count, 0);
+ 	atomic_set(&priv->thread_count, 0);
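
The one-line tlmm-test fix above guards a kunit_kzalloc() that can return NULL. A minimal sketch of the pattern in a standalone KUnit init, not from the patch:

	#include <kunit/test.h>

	static int example_test_init(struct kunit *test)
	{
		int *priv = kunit_kzalloc(test, sizeof(*priv), GFP_KERNEL);

		/* Abort the test cleanly instead of dereferencing NULL. */
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, priv);

		*priv = 0;
		test->priv = priv;
		return 0;
	}
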
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index dd07720e32cc09..9fd894729a7b87 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -1419,8 +1419,8 @@ static const struct samsung_pin_ctrl exynosautov920_pin_ctrl[] = {
+ 		.pin_banks	= exynosautov920_pin_banks0,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks0),
+ 		.eint_wkup_init	= exynos_eint_wkup_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 		.retention_data	= &exynosautov920_retention_data,
+ 	}, {
+ 		/* pin-controller instance 1 AUD data */
+@@ -1431,43 +1431,43 @@ static const struct samsung_pin_ctrl exynosautov920_pin_ctrl[] = {
+ 		.pin_banks	= exynosautov920_pin_banks2,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks2),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 3 HSI1 data */
+ 		.pin_banks	= exynosautov920_pin_banks3,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks3),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 4 HSI2 data */
+ 		.pin_banks	= exynosautov920_pin_banks4,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks4),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 5 HSI2UFS data */
+ 		.pin_banks	= exynosautov920_pin_banks5,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks5),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 6 PERIC0 data */
+ 		.pin_banks	= exynosautov920_pin_banks6,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks6),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	}, {
+ 		/* pin-controller instance 7 PERIC1 data */
+ 		.pin_banks	= exynosautov920_pin_banks7,
+ 		.nr_banks	= ARRAY_SIZE(exynosautov920_pin_banks7),
+ 		.eint_gpio_init	= exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= exynosautov920_pinctrl_suspend,
++		.resume		= exynosautov920_pinctrl_resume,
+ 	},
+ };
+ 
+@@ -1762,15 +1762,15 @@ static const struct samsung_pin_ctrl gs101_pin_ctrl[] __initconst = {
+ 		.pin_banks	= gs101_pin_alive,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_alive),
+ 		.eint_wkup_init = exynos_eint_wkup_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (FAR_ALIVE) */
+ 		.pin_banks	= gs101_pin_far_alive,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_far_alive),
+ 		.eint_wkup_init = exynos_eint_wkup_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (GSACORE) */
+ 		.pin_banks	= gs101_pin_gsacore,
+@@ -1784,29 +1784,29 @@ static const struct samsung_pin_ctrl gs101_pin_ctrl[] __initconst = {
+ 		.pin_banks	= gs101_pin_peric0,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_peric0),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (PERIC1) */
+ 		.pin_banks	= gs101_pin_peric1,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_peric1),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume	= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (HSI1) */
+ 		.pin_banks	= gs101_pin_hsi1,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_hsi1),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	}, {
+ 		/* pin banks of gs101 pin-controller (HSI2) */
+ 		.pin_banks	= gs101_pin_hsi2,
+ 		.nr_banks	= ARRAY_SIZE(gs101_pin_hsi2),
+ 		.eint_gpio_init = exynos_eint_gpio_init,
+-		.suspend	= exynos_pinctrl_suspend,
+-		.resume		= exynos_pinctrl_resume,
++		.suspend	= gs101_pinctrl_suspend,
++		.resume		= gs101_pinctrl_resume,
+ 	},
+ };
+ 
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index 42093bae8bb793..0879684338c772 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -762,153 +762,187 @@ __init int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d)
+ 	return 0;
+ }
+ 
+-static void exynos_pinctrl_suspend_bank(
+-				struct samsung_pinctrl_drv_data *drvdata,
+-				struct samsung_pin_bank *bank)
++static void exynos_set_wakeup(struct samsung_pin_bank *bank)
+ {
+-	struct exynos_eint_gpio_save *save = bank->soc_priv;
+-	const void __iomem *regs = bank->eint_base;
++	struct exynos_irq_chip *irq_chip;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for saving state\n");
+-		return;
++	if (bank->irq_chip) {
++		irq_chip = bank->irq_chip;
++		irq_chip->set_eint_wakeup_mask(bank->drvdata, irq_chip);
+ 	}
+-
+-	save->eint_con = readl(regs + EXYNOS_GPIO_ECON_OFFSET
+-						+ bank->eint_offset);
+-	save->eint_fltcon0 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset);
+-	save->eint_fltcon1 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset + 4);
+-	save->eint_mask = readl(regs + bank->irq_chip->eint_mask
+-						+ bank->eint_offset);
+-
+-	clk_disable(bank->drvdata->pclk);
+-
+-	pr_debug("%s: save     con %#010x\n", bank->name, save->eint_con);
+-	pr_debug("%s: save fltcon0 %#010x\n", bank->name, save->eint_fltcon0);
+-	pr_debug("%s: save fltcon1 %#010x\n", bank->name, save->eint_fltcon1);
+-	pr_debug("%s: save    mask %#010x\n", bank->name, save->eint_mask);
+ }
+ 
+-static void exynosauto_pinctrl_suspend_bank(struct samsung_pinctrl_drv_data *drvdata,
+-					    struct samsung_pin_bank *bank)
++void exynos_pinctrl_suspend(struct samsung_pin_bank *bank)
+ {
+ 	struct exynos_eint_gpio_save *save = bank->soc_priv;
+ 	const void __iomem *regs = bank->eint_base;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for saving state\n");
+-		return;
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		save->eint_con = readl(regs + EXYNOS_GPIO_ECON_OFFSET
++				       + bank->eint_offset);
++		save->eint_fltcon0 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++					   + 2 * bank->eint_offset);
++		save->eint_fltcon1 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++					   + 2 * bank->eint_offset + 4);
++		save->eint_mask = readl(regs + bank->irq_chip->eint_mask
++					+ bank->eint_offset);
++
++		pr_debug("%s: save     con %#010x\n",
++			 bank->name, save->eint_con);
++		pr_debug("%s: save fltcon0 %#010x\n",
++			 bank->name, save->eint_fltcon0);
++		pr_debug("%s: save fltcon1 %#010x\n",
++			 bank->name, save->eint_fltcon1);
++		pr_debug("%s: save    mask %#010x\n",
++			 bank->name, save->eint_mask);
++	} else if (bank->eint_type == EINT_TYPE_WKUP) {
++		exynos_set_wakeup(bank);
+ 	}
+-
+-	save->eint_con = readl(regs + bank->pctl_offset + bank->eint_con_offset);
+-	save->eint_mask = readl(regs + bank->pctl_offset + bank->eint_mask_offset);
+-
+-	clk_disable(bank->drvdata->pclk);
+-
+-	pr_debug("%s: save     con %#010x\n", bank->name, save->eint_con);
+-	pr_debug("%s: save    mask %#010x\n", bank->name, save->eint_mask);
+ }
+ 
+-void exynos_pinctrl_suspend(struct samsung_pinctrl_drv_data *drvdata)
++void gs101_pinctrl_suspend(struct samsung_pin_bank *bank)
+ {
+-	struct samsung_pin_bank *bank = drvdata->pin_banks;
+-	struct exynos_irq_chip *irq_chip = NULL;
+-	int i;
++	struct exynos_eint_gpio_save *save = bank->soc_priv;
++	const void __iomem *regs = bank->eint_base;
+ 
+-	for (i = 0; i < drvdata->nr_banks; ++i, ++bank) {
+-		if (bank->eint_type == EINT_TYPE_GPIO) {
+-			if (bank->eint_con_offset)
+-				exynosauto_pinctrl_suspend_bank(drvdata, bank);
+-			else
+-				exynos_pinctrl_suspend_bank(drvdata, bank);
+-		}
+-		else if (bank->eint_type == EINT_TYPE_WKUP) {
+-			if (!irq_chip) {
+-				irq_chip = bank->irq_chip;
+-				irq_chip->set_eint_wakeup_mask(drvdata,
+-							       irq_chip);
+-			}
+-		}
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		save->eint_con = readl(regs + EXYNOS_GPIO_ECON_OFFSET
++				       + bank->eint_offset);
++
++		save->eint_fltcon0 = readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++					   + bank->eint_fltcon_offset);
++
++		/* fltcon1 register only exists for pins 4-7 */
++		if (bank->nr_pins > 4)
++			save->eint_fltcon1 = readl(regs +
++						EXYNOS_GPIO_EFLTCON_OFFSET
++						+ bank->eint_fltcon_offset + 4);
++
++		save->eint_mask = readl(regs + bank->irq_chip->eint_mask
++					+ bank->eint_offset);
++
++		pr_debug("%s: save     con %#010x\n",
++			 bank->name, save->eint_con);
++		pr_debug("%s: save fltcon0 %#010x\n",
++			 bank->name, save->eint_fltcon0);
++		if (bank->nr_pins > 4)
++			pr_debug("%s: save fltcon1 %#010x\n",
++				 bank->name, save->eint_fltcon1);
++		pr_debug("%s: save    mask %#010x\n",
++			 bank->name, save->eint_mask);
++	} else if (bank->eint_type == EINT_TYPE_WKUP) {
++		exynos_set_wakeup(bank);
+ 	}
+ }
+ 
+-static void exynos_pinctrl_resume_bank(
+-				struct samsung_pinctrl_drv_data *drvdata,
+-				struct samsung_pin_bank *bank)
++void exynosautov920_pinctrl_suspend(struct samsung_pin_bank *bank)
+ {
+ 	struct exynos_eint_gpio_save *save = bank->soc_priv;
+-	void __iomem *regs = bank->eint_base;
++	const void __iomem *regs = bank->eint_base;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for restoring state\n");
+-		return;
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		save->eint_con = readl(regs + bank->pctl_offset +
++				       bank->eint_con_offset);
++		save->eint_mask = readl(regs + bank->pctl_offset +
++					bank->eint_mask_offset);
++		pr_debug("%s: save     con %#010x\n",
++			 bank->name, save->eint_con);
++		pr_debug("%s: save    mask %#010x\n",
++			 bank->name, save->eint_mask);
++	} else if (bank->eint_type == EINT_TYPE_WKUP) {
++		exynos_set_wakeup(bank);
+ 	}
++}
+ 
+-	pr_debug("%s:     con %#010x => %#010x\n", bank->name,
+-			readl(regs + EXYNOS_GPIO_ECON_OFFSET
+-			+ bank->eint_offset), save->eint_con);
+-	pr_debug("%s: fltcon0 %#010x => %#010x\n", bank->name,
+-			readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-			+ 2 * bank->eint_offset), save->eint_fltcon0);
+-	pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
+-			readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-			+ 2 * bank->eint_offset + 4), save->eint_fltcon1);
+-	pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
+-			readl(regs + bank->irq_chip->eint_mask
+-			+ bank->eint_offset), save->eint_mask);
+-
+-	writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
+-						+ bank->eint_offset);
+-	writel(save->eint_fltcon0, regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset);
+-	writel(save->eint_fltcon1, regs + EXYNOS_GPIO_EFLTCON_OFFSET
+-						+ 2 * bank->eint_offset + 4);
+-	writel(save->eint_mask, regs + bank->irq_chip->eint_mask
+-						+ bank->eint_offset);
++void gs101_pinctrl_resume(struct samsung_pin_bank *bank)
++{
++	struct exynos_eint_gpio_save *save = bank->soc_priv;
+ 
+-	clk_disable(bank->drvdata->pclk);
++	void __iomem *regs = bank->eint_base;
++	void __iomem *eint_fltcfg0 = regs + EXYNOS_GPIO_EFLTCON_OFFSET
++		     + bank->eint_fltcon_offset;
++
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		pr_debug("%s:     con %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_ECON_OFFSET
++			       + bank->eint_offset), save->eint_con);
++
++		pr_debug("%s: fltcon0 %#010x => %#010x\n", bank->name,
++			 readl(eint_fltcfg0), save->eint_fltcon0);
++
++		/* fltcon1 register only exists for pins 4-7 */
++		if (bank->nr_pins > 4)
++			pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
++				 readl(eint_fltcfg0 + 4), save->eint_fltcon1);
++
++		pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->irq_chip->eint_mask
++			       + bank->eint_offset), save->eint_mask);
++
++		writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
++		       + bank->eint_offset);
++		writel(save->eint_fltcon0, eint_fltcfg0);
++
++		if (bank->nr_pins > 4)
++			writel(save->eint_fltcon1, eint_fltcfg0 + 4);
++		writel(save->eint_mask, regs + bank->irq_chip->eint_mask
++		       + bank->eint_offset);
++	}
+ }
+ 
+-static void exynosauto_pinctrl_resume_bank(struct samsung_pinctrl_drv_data *drvdata,
+-					   struct samsung_pin_bank *bank)
++void exynos_pinctrl_resume(struct samsung_pin_bank *bank)
+ {
+ 	struct exynos_eint_gpio_save *save = bank->soc_priv;
+ 	void __iomem *regs = bank->eint_base;
+ 
+-	if (clk_enable(bank->drvdata->pclk)) {
+-		dev_err(bank->gpio_chip.parent,
+-			"unable to enable clock for restoring state\n");
+-		return;
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		pr_debug("%s:     con %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_ECON_OFFSET
++			       + bank->eint_offset), save->eint_con);
++		pr_debug("%s: fltcon0 %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++			       + 2 * bank->eint_offset), save->eint_fltcon0);
++		pr_debug("%s: fltcon1 %#010x => %#010x\n", bank->name,
++			 readl(regs + EXYNOS_GPIO_EFLTCON_OFFSET
++			       + 2 * bank->eint_offset + 4),
++			 save->eint_fltcon1);
++		pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->irq_chip->eint_mask
++			       + bank->eint_offset), save->eint_mask);
++
++		writel(save->eint_con, regs + EXYNOS_GPIO_ECON_OFFSET
++		       + bank->eint_offset);
++		writel(save->eint_fltcon0, regs + EXYNOS_GPIO_EFLTCON_OFFSET
++		       + 2 * bank->eint_offset);
++		writel(save->eint_fltcon1, regs + EXYNOS_GPIO_EFLTCON_OFFSET
++		       + 2 * bank->eint_offset + 4);
++		writel(save->eint_mask, regs + bank->irq_chip->eint_mask
++		       + bank->eint_offset);
+ 	}
+-
+-	pr_debug("%s:     con %#010x => %#010x\n", bank->name,
+-		 readl(regs + bank->pctl_offset + bank->eint_con_offset), save->eint_con);
+-	pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
+-		 readl(regs + bank->pctl_offset + bank->eint_mask_offset), save->eint_mask);
+-
+-	writel(save->eint_con, regs + bank->pctl_offset + bank->eint_con_offset);
+-	writel(save->eint_mask, regs + bank->pctl_offset + bank->eint_mask_offset);
+-
+-	clk_disable(bank->drvdata->pclk);
+ }
+ 
+-void exynos_pinctrl_resume(struct samsung_pinctrl_drv_data *drvdata)
++void exynosautov920_pinctrl_resume(struct samsung_pin_bank *bank)
+ {
+-	struct samsung_pin_bank *bank = drvdata->pin_banks;
+-	int i;
++	struct exynos_eint_gpio_save *save = bank->soc_priv;
++	void __iomem *regs = bank->eint_base;
+ 
+-	for (i = 0; i < drvdata->nr_banks; ++i, ++bank)
+-		if (bank->eint_type == EINT_TYPE_GPIO) {
+-			if (bank->eint_con_offset)
+-				exynosauto_pinctrl_resume_bank(drvdata, bank);
+-			else
+-				exynos_pinctrl_resume_bank(drvdata, bank);
+-		}
++	if (bank->eint_type == EINT_TYPE_GPIO) {
++		/* exynosautov920 has eint_con_offset for all but one bank */
++		if (!bank->eint_con_offset)
++			exynos_pinctrl_resume(bank);
++
++		pr_debug("%s:     con %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->pctl_offset + bank->eint_con_offset),
++			 save->eint_con);
++		pr_debug("%s:    mask %#010x => %#010x\n", bank->name,
++			 readl(regs + bank->pctl_offset +
++			       bank->eint_mask_offset), save->eint_mask);
++
++		writel(save->eint_con,
++		       regs + bank->pctl_offset + bank->eint_con_offset);
++		writel(save->eint_mask,
++		       regs + bank->pctl_offset + bank->eint_mask_offset);
++	}
+ }
+ 
+ static void exynos_retention_enable(struct samsung_pinctrl_drv_data *drvdata)
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.h b/drivers/pinctrl/samsung/pinctrl-exynos.h
+index b483270ddc53c0..2bee52b61b9317 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.h
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.h
+@@ -240,8 +240,12 @@ struct exynos_muxed_weint_data {
+ 
+ int exynos_eint_gpio_init(struct samsung_pinctrl_drv_data *d);
+ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d);
+-void exynos_pinctrl_suspend(struct samsung_pinctrl_drv_data *drvdata);
+-void exynos_pinctrl_resume(struct samsung_pinctrl_drv_data *drvdata);
++void exynosautov920_pinctrl_resume(struct samsung_pin_bank *bank);
++void exynosautov920_pinctrl_suspend(struct samsung_pin_bank *bank);
++void exynos_pinctrl_suspend(struct samsung_pin_bank *bank);
++void exynos_pinctrl_resume(struct samsung_pin_bank *bank);
++void gs101_pinctrl_suspend(struct samsung_pin_bank *bank);
++void gs101_pinctrl_resume(struct samsung_pin_bank *bank);
+ struct samsung_retention_ctrl *
+ exynos_retention_init(struct samsung_pinctrl_drv_data *drvdata,
+ 		      const struct samsung_retention_data *data);
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 2896eb2de2c098..ef557217e173af 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1333,6 +1333,7 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ {
+ 	struct samsung_pinctrl_drv_data *drvdata = dev_get_drvdata(dev);
++	struct samsung_pin_bank *bank;
+ 	int i;
+ 
+ 	i = clk_enable(drvdata->pclk);
+@@ -1343,7 +1344,7 @@ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ 	}
+ 
+ 	for (i = 0; i < drvdata->nr_banks; i++) {
+-		struct samsung_pin_bank *bank = &drvdata->pin_banks[i];
++		bank = &drvdata->pin_banks[i];
+ 		const void __iomem *reg = bank->pctl_base + bank->pctl_offset;
+ 		const u8 *offs = bank->type->reg_offset;
+ 		const u8 *widths = bank->type->fld_width;
+@@ -1371,10 +1372,14 @@ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ 		}
+ 	}
+ 
++	for (i = 0; i < drvdata->nr_banks; i++) {
++		bank = &drvdata->pin_banks[i];
++		if (drvdata->suspend)
++			drvdata->suspend(bank);
++	}
++
+ 	clk_disable(drvdata->pclk);
+ 
+-	if (drvdata->suspend)
+-		drvdata->suspend(drvdata);
+ 	if (drvdata->retention_ctrl && drvdata->retention_ctrl->enable)
+ 		drvdata->retention_ctrl->enable(drvdata);
+ 
+@@ -1392,6 +1397,7 @@ static int __maybe_unused samsung_pinctrl_suspend(struct device *dev)
+ static int __maybe_unused samsung_pinctrl_resume(struct device *dev)
+ {
+ 	struct samsung_pinctrl_drv_data *drvdata = dev_get_drvdata(dev);
++	struct samsung_pin_bank *bank;
+ 	int ret;
+ 	int i;
+ 
+@@ -1406,11 +1412,14 @@ static int __maybe_unused samsung_pinctrl_resume(struct device *dev)
+ 		return ret;
+ 	}
+ 
+-	if (drvdata->resume)
+-		drvdata->resume(drvdata);
++	for (i = 0; i < drvdata->nr_banks; i++) {
++		bank = &drvdata->pin_banks[i];
++		if (drvdata->resume)
++			drvdata->resume(bank);
++	}
+ 
+ 	for (i = 0; i < drvdata->nr_banks; i++) {
+-		struct samsung_pin_bank *bank = &drvdata->pin_banks[i];
++		bank = &drvdata->pin_banks[i];
+ 		void __iomem *reg = bank->pctl_base + bank->pctl_offset;
+ 		const u8 *offs = bank->type->reg_offset;
+ 		const u8 *widths = bank->type->fld_width;
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.h b/drivers/pinctrl/samsung/pinctrl-samsung.h
+index 3cf758df7d6912..fcc57c244d167d 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.h
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.h
+@@ -285,8 +285,8 @@ struct samsung_pin_ctrl {
+ 	int		(*eint_gpio_init)(struct samsung_pinctrl_drv_data *);
+ 	int		(*eint_wkup_init)(struct samsung_pinctrl_drv_data *);
+ 	void		(*pud_value_init)(struct samsung_pinctrl_drv_data *drvdata);
+-	void		(*suspend)(struct samsung_pinctrl_drv_data *);
+-	void		(*resume)(struct samsung_pinctrl_drv_data *);
++	void		(*suspend)(struct samsung_pin_bank *bank);
++	void		(*resume)(struct samsung_pin_bank *bank);
+ };
+ 
+ /**
+@@ -335,8 +335,8 @@ struct samsung_pinctrl_drv_data {
+ 
+ 	struct samsung_retention_ctrl	*retention_ctrl;
+ 
+-	void (*suspend)(struct samsung_pinctrl_drv_data *);
+-	void (*resume)(struct samsung_pinctrl_drv_data *);
++	void (*suspend)(struct samsung_pin_bank *bank);
++	void (*resume)(struct samsung_pin_bank *bank);
+ };
+ 
+ /**
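
The three Samsung pinctrl files above carry one refactor: the suspend/resume hooks now take a single samsung_pin_bank, and the core driver walks the banks itself under one clk_enable()/clk_disable() pair instead of each hook re-enabling the clock per bank. A reduced sketch of the new division of labour (types simplified, names hypothetical):

	struct bank {
		int id;
	};

	struct drvdata {
		struct bank *banks;
		int nr_banks;
		void (*suspend)(struct bank *bank);	/* per-bank hook */
	};

	static void suspend_all_banks(struct drvdata *d)
	{
		for (int i = 0; i < d->nr_banks; i++)
			if (d->suspend)
				d->suspend(&d->banks[i]);
	}
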
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c b/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
+index 1833078f68776c..4e34b0cd3b73aa 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi-dt.c
+@@ -143,7 +143,7 @@ static struct sunxi_desc_pin *init_pins_table(struct device *dev,
+  */
+ static int prepare_function_table(struct device *dev, struct device_node *pnode,
+ 				  struct sunxi_desc_pin *pins, int npins,
+-				  const u8 *irq_bank_muxes)
++				  unsigned pin_base, const u8 *irq_bank_muxes)
+ {
+ 	struct device_node *node;
+ 	struct property *prop;
+@@ -166,7 +166,7 @@ static int prepare_function_table(struct device *dev, struct device_node *pnode,
+ 	 */
+ 	for (i = 0; i < npins; i++) {
+ 		struct sunxi_desc_pin *pin = &pins[i];
+-		int bank = pin->pin.number / PINS_PER_BANK;
++		int bank = (pin->pin.number - pin_base) / PINS_PER_BANK;
+ 
+ 		if (irq_bank_muxes[bank]) {
+ 			pin->variant++;
+@@ -211,7 +211,7 @@ static int prepare_function_table(struct device *dev, struct device_node *pnode,
+ 	last_bank = 0;
+ 	for (i = 0; i < npins; i++) {
+ 		struct sunxi_desc_pin *pin = &pins[i];
+-		int bank = pin->pin.number / PINS_PER_BANK;
++		int bank = (pin->pin.number - pin_base) / PINS_PER_BANK;
+ 		int lastfunc = pin->variant + 1;
+ 		int irq_mux = irq_bank_muxes[bank];
+ 
+@@ -353,7 +353,7 @@ int sunxi_pinctrl_dt_table_init(struct platform_device *pdev,
+ 		return PTR_ERR(pins);
+ 
+ 	ret = prepare_function_table(&pdev->dev, pnode, pins, desc->npins,
+-				     irq_bank_muxes);
++				     desc->pin_base, irq_bank_muxes);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index d2228720991ffe..7678e3d05fd36f 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -22,8 +22,10 @@
+ 
+ #define DRV_NAME "cros-ec-typec"
+ 
+-#define DP_PORT_VDO	(DP_CONF_SET_PIN_ASSIGN(BIT(DP_PIN_ASSIGN_C) | BIT(DP_PIN_ASSIGN_D)) | \
+-				DP_CAP_DFP_D | DP_CAP_RECEPTACLE)
++#define DP_PORT_VDO	(DP_CAP_DFP_D | DP_CAP_RECEPTACLE | \
++			 DP_CONF_SET_PIN_ASSIGN(BIT(DP_PIN_ASSIGN_C) | \
++						BIT(DP_PIN_ASSIGN_D) | \
++						BIT(DP_PIN_ASSIGN_E)))
+ 
+ static void cros_typec_role_switch_quirk(struct fwnode_handle *fwnode)
+ {
+diff --git a/drivers/power/reset/at91-reset.c b/drivers/power/reset/at91-reset.c
+index 036b18a1f90f80..511f5a8f8961ce 100644
+--- a/drivers/power/reset/at91-reset.c
++++ b/drivers/power/reset/at91-reset.c
+@@ -129,12 +129,11 @@ static int at91_reset(struct notifier_block *this, unsigned long mode,
+ 		"	str	%4, [%0, %6]\n\t"
+ 		/* Disable SDRAM1 accesses */
+ 		"1:	tst	%1, #0\n\t"
+-		"	beq	2f\n\t"
+ 		"	strne	%3, [%1, #" __stringify(AT91_DDRSDRC_RTR) "]\n\t"
+ 		/* Power down SDRAM1 */
+ 		"	strne	%4, [%1, %6]\n\t"
+ 		/* Reset CPU */
+-		"2:	str	%5, [%2, #" __stringify(AT91_RSTC_CR) "]\n\t"
++		"	str	%5, [%2, #" __stringify(AT91_RSTC_CR) "]\n\t"
+ 
+ 		"	b	.\n\t"
+ 		:
+@@ -145,7 +144,7 @@ static int at91_reset(struct notifier_block *this, unsigned long mode,
+ 		  "r" cpu_to_le32(AT91_DDRSDRC_LPCB_POWER_DOWN),
+ 		  "r" (reset->data->reset_args),
+ 		  "r" (reset->ramc_lpr)
+-		: "r4");
++	);
+ 
+ 	return NOTIFY_DONE;
+ }
+diff --git a/drivers/power/supply/max77705_charger.c b/drivers/power/supply/max77705_charger.c
+index eec5e9ef795efd..329b430d0e5065 100644
+--- a/drivers/power/supply/max77705_charger.c
++++ b/drivers/power/supply/max77705_charger.c
+@@ -545,20 +545,28 @@ static int max77705_charger_probe(struct i2c_client *i2c)
+ 		return dev_err_probe(dev, ret, "failed to add irq chip\n");
+ 
+ 	chg->wqueue = create_singlethread_workqueue(dev_name(dev));
+-	if (IS_ERR(chg->wqueue))
+-		return dev_err_probe(dev, PTR_ERR(chg->wqueue), "failed to create workqueue\n");
++	if (!chg->wqueue)
++		return dev_err_probe(dev, -ENOMEM, "failed to create workqueue\n");
+ 
+ 	ret = devm_work_autocancel(dev, &chg->chgin_work, max77705_chgin_isr_work);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "failed to initialize interrupt work\n");
++	if (ret) {
++		dev_err_probe(dev, ret, "failed to initialize interrupt work\n");
++		goto destroy_wq;
++	}
+ 
+ 	max77705_charger_initialize(chg);
+ 
+ 	ret = max77705_charger_enable(chg);
+-	if (ret)
+-		return dev_err_probe(dev, ret, "failed to enable charge\n");
++	if (ret) {
++		dev_err_probe(dev, ret, "failed to enable charge\n");
++		goto destroy_wq;
++	}
+ 
+ 	return devm_add_action_or_reset(dev, max77705_charger_disable, chg);
++
++destroy_wq:
++	destroy_workqueue(chg->wqueue);
++	return ret;
+ }
+ 
+ static const struct of_device_id max77705_charger_of_match[] = {
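
The max77705 probe fix above corrects two things at once: create_singlethread_workqueue() reports failure with NULL (so the old IS_ERR() test could never fire), and the workqueue is not devm-managed, so later probe failures must destroy it by hand. A sketch of that unwind, with a hypothetical follow-up step:

	#include <linux/device.h>
	#include <linux/workqueue.h>

	static int later_setup_step(struct device *dev)
	{
		return 0;	/* stand-in for devm_work_autocancel() etc. */
	}

	static int probe_sketch(struct device *dev, struct workqueue_struct **wq)
	{
		int ret;

		*wq = create_singlethread_workqueue(dev_name(dev));
		if (!*wq)		/* NULL on failure, never an ERR_PTR */
			return -ENOMEM;

		ret = later_setup_step(dev);
		if (ret) {
			destroy_workqueue(*wq);	/* manual unwind */
			return ret;
		}
		return 0;
	}
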
+diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
+index 18934e28469ee6..528d86a33f37de 100644
+--- a/drivers/ptp/ptp_private.h
++++ b/drivers/ptp/ptp_private.h
+@@ -98,17 +98,7 @@ static inline int queue_cnt(const struct timestamp_event_queue *q)
+ /* Check if ptp virtual clock is in use */
+ static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
+ {
+-	bool in_use = false;
+-
+-	if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
+-		return true;
+-
+-	if (!ptp->is_virtual_clock && ptp->n_vclocks)
+-		in_use = true;
+-
+-	mutex_unlock(&ptp->n_vclocks_mux);
+-
+-	return in_use;
++	return !ptp->is_virtual_clock;
+ }
+ 
+ /* Check if ptp clock shall be free running */
+diff --git a/drivers/regulator/max20086-regulator.c b/drivers/regulator/max20086-regulator.c
+index 198d45f8e88493..3d333b61fb18c8 100644
+--- a/drivers/regulator/max20086-regulator.c
++++ b/drivers/regulator/max20086-regulator.c
+@@ -5,6 +5,7 @@
+ // Copyright (C) 2022 Laurent Pinchart <laurent.pinchart@idesonboard.com>
+ // Copyright (C) 2018 Avnet, Inc.
+ 
++#include <linux/cleanup.h>
+ #include <linux/err.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+@@ -133,11 +134,11 @@ static int max20086_regulators_register(struct max20086 *chip)
+ static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
+ {
+ 	struct of_regulator_match *matches;
+-	struct device_node *node;
+ 	unsigned int i;
+ 	int ret;
+ 
+-	node = of_get_child_by_name(chip->dev->of_node, "regulators");
++	struct device_node *node __free(device_node) =
++		of_get_child_by_name(chip->dev->of_node, "regulators");
+ 	if (!node) {
+ 		dev_err(chip->dev, "regulators node not found\n");
+ 		return -ENODEV;
+@@ -153,7 +154,6 @@ static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
+ 
+ 	ret = of_regulator_match(chip->dev, node, matches,
+ 				 chip->info->num_outputs);
+-	of_node_put(node);
+ 	if (ret < 0) {
+ 		dev_err(chip->dev, "Failed to match regulators\n");
+ 		return -EINVAL;
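
The max20086 hunk above swaps an of_get_child_by_name()/of_node_put() pair for the scope-based cleanup helper from <linux/cleanup.h>. A minimal sketch of the idiom, not from the patch:

	#include <linux/cleanup.h>
	#include <linux/of.h>

	static int parse_regulators_node(struct device_node *parent)
	{
		/* The reference is dropped automatically when node goes
		 * out of scope, so every return path is covered without
		 * an explicit of_node_put().
		 */
		struct device_node *node __free(device_node) =
			of_get_child_by_name(parent, "regulators");

		if (!node)
			return -ENODEV;

		return 0;	/* node is put here and on any other exit */
	}
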
+diff --git a/drivers/remoteproc/qcom_wcnss_iris.c b/drivers/remoteproc/qcom_wcnss_iris.c
+index b989718776bdb5..2b52b403eb3f76 100644
+--- a/drivers/remoteproc/qcom_wcnss_iris.c
++++ b/drivers/remoteproc/qcom_wcnss_iris.c
+@@ -196,6 +196,7 @@ struct qcom_iris *qcom_iris_probe(struct device *parent, bool *use_48mhz_xo)
+ 
+ err_device_del:
+ 	device_del(&iris->dev);
++	put_device(&iris->dev);
+ 
+ 	return ERR_PTR(ret);
+ }
+@@ -203,4 +204,5 @@ struct qcom_iris *qcom_iris_probe(struct device *parent, bool *use_48mhz_xo)
+ void qcom_iris_remove(struct qcom_iris *iris)
+ {
+ 	device_del(&iris->dev);
++	put_device(&iris->dev);
+ }
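
Both wcnss_iris paths above gain a put_device() after device_del(): once a struct device has been initialised, its memory is owned by the refcount, so teardown must drop the final reference rather than free the object directly. A sketch of the pairing:

	#include <linux/device.h>

	static void teardown_child_device(struct device *dev)
	{
		device_del(dev);	/* unlink from the device hierarchy */
		put_device(dev);	/* drop the last reference; the
					 * release() callback frees it */
	}
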
+diff --git a/drivers/remoteproc/ti_k3_dsp_remoteproc.c b/drivers/remoteproc/ti_k3_dsp_remoteproc.c
+index a695890254ff76..35e8c3cc313c36 100644
+--- a/drivers/remoteproc/ti_k3_dsp_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_dsp_remoteproc.c
+@@ -115,10 +115,6 @@ static void k3_dsp_rproc_mbox_callback(struct mbox_client *client, void *data)
+ 	const char *name = kproc->rproc->name;
+ 	u32 msg = omap_mbox_message(data);
+ 
+-	/* Do not forward messages from a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	dev_dbg(dev, "mbox msg: 0x%x\n", msg);
+ 
+ 	switch (msg) {
+@@ -159,10 +155,6 @@ static void k3_dsp_rproc_kick(struct rproc *rproc, int vqid)
+ 	mbox_msg_t msg = (mbox_msg_t)vqid;
+ 	int ret;
+ 
+-	/* Do not forward messages to a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	/* send the index of the triggered virtqueue in the mailbox payload */
+ 	ret = mbox_send_message(kproc->mbox, (void *)msg);
+ 	if (ret < 0)
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index dbc513c5569cbf..ba082ca13e7508 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -194,10 +194,6 @@ static void k3_r5_rproc_mbox_callback(struct mbox_client *client, void *data)
+ 	const char *name = kproc->rproc->name;
+ 	u32 msg = omap_mbox_message(data);
+ 
+-	/* Do not forward message from a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	dev_dbg(dev, "mbox msg: 0x%x\n", msg);
+ 
+ 	switch (msg) {
+@@ -233,10 +229,6 @@ static void k3_r5_rproc_kick(struct rproc *rproc, int vqid)
+ 	mbox_msg_t msg = (mbox_msg_t)vqid;
+ 	int ret;
+ 
+-	/* Do not forward message to a detached core */
+-	if (kproc->rproc->state == RPROC_DETACHED)
+-		return;
+-
+ 	/* send the index of the triggered virtqueue in the mailbox payload */
+ 	ret = mbox_send_message(kproc->mbox, (void *)msg);
+ 	if (ret < 0)
+@@ -448,13 +440,36 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct k3_r5_core *core = kproc->core;
++	struct k3_r5_core *core = kproc->core, *core0, *core1;
+ 	struct device *dev = kproc->dev;
+ 	u32 ctrl = 0, cfg = 0, stat = 0;
+ 	u64 boot_vec = 0;
+ 	bool mem_init_dis;
+ 	int ret;
+ 
++	/*
++	 * R5 cores require to be powered on sequentially, core0 should be in
++	 * higher power state than core1 in a cluster. So, wait for core0 to
++	 * power up before proceeding to core1 and put timeout of 2sec. This
++	 * waiting mechanism is necessary because rproc_auto_boot_callback() for
++	 * core1 can be called before core0 due to thread execution order.
++	 *
++	 * By placing the wait mechanism here in .prepare() ops, this condition
++	 * is enforced for rproc boot requests from sysfs as well.
++	 */
++	core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem);
++	core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem);
++	if (cluster->mode == CLUSTER_MODE_SPLIT && core == core1 &&
++	    !core0->released_from_reset) {
++		ret = wait_event_interruptible_timeout(cluster->core_transition,
++						       core0->released_from_reset,
++						       msecs_to_jiffies(2000));
++		if (ret <= 0) {
++			dev_err(dev, "can not power up core1 before core0");
++			return -EPERM;
++		}
++	}
++
+ 	ret = ti_sci_proc_get_status(core->tsp, &boot_vec, &cfg, &ctrl, &stat);
+ 	if (ret < 0)
+ 		return ret;
+@@ -470,6 +485,14 @@ static int k3_r5_rproc_prepare(struct rproc *rproc)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * Notify all threads in the wait queue when core0 state has changed so
++	 * that threads waiting for this condition can be executed.
++	 */
++	core->released_from_reset = true;
++	if (core == core0)
++		wake_up_interruptible(&cluster->core_transition);
++
+ 	/*
+ 	 * Newer IP revisions like on J7200 SoCs support h/w auto-initialization
+ 	 * of TCMs, so there is no need to perform the s/w memzero. This bit is
+@@ -515,10 +538,30 @@ static int k3_r5_rproc_unprepare(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct k3_r5_core *core = kproc->core;
++	struct k3_r5_core *core = kproc->core, *core0, *core1;
+ 	struct device *dev = kproc->dev;
+ 	int ret;
+ 
++	/*
++	 * Ensure power-down of cores is sequential in split mode. Core1 must
++	 * power down before Core0 to maintain the expected state. By placing
++	 * the wait mechanism here in .unprepare() ops, this condition is
++	 * enforced for rproc stop or shutdown requests from sysfs and device
++	 * removal as well.
++	 */
++	core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem);
++	core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem);
++	if (cluster->mode == CLUSTER_MODE_SPLIT && core == core0 &&
++	    core1->released_from_reset) {
++		ret = wait_event_interruptible_timeout(cluster->core_transition,
++						       !core1->released_from_reset,
++						       msecs_to_jiffies(2000));
++		if (ret <= 0) {
++			dev_err(dev, "can not power down core0 before core1");
++			return -EPERM;
++		}
++	}
++
+ 	/* Re-use LockStep-mode reset logic for Single-CPU mode */
+ 	ret = (cluster->mode == CLUSTER_MODE_LOCKSTEP ||
+ 	       cluster->mode == CLUSTER_MODE_SINGLECPU) ?
+@@ -526,6 +569,14 @@ static int k3_r5_rproc_unprepare(struct rproc *rproc)
+ 	if (ret)
+ 		dev_err(dev, "unable to disable cores, ret = %d\n", ret);
+ 
++	/*
++	 * Notify all threads in the wait queue when core1 state has changed so
++	 * that threads waiting for this condition can be executed.
++	 */
++	core->released_from_reset = false;
++	if (core == core1)
++		wake_up_interruptible(&cluster->core_transition);
++
+ 	return ret;
+ }
+ 
+@@ -551,7 +602,7 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+ 	struct device *dev = kproc->dev;
+-	struct k3_r5_core *core0, *core;
++	struct k3_r5_core *core;
+ 	u32 boot_addr;
+ 	int ret;
+ 
+@@ -573,21 +624,9 @@ static int k3_r5_rproc_start(struct rproc *rproc)
+ 				goto unroll_core_run;
+ 		}
+ 	} else {
+-		/* do not allow core 1 to start before core 0 */
+-		core0 = list_first_entry(&cluster->cores, struct k3_r5_core,
+-					 elem);
+-		if (core != core0 && core0->rproc->state == RPROC_OFFLINE) {
+-			dev_err(dev, "%s: can not start core 1 before core 0\n",
+-				__func__);
+-			return -EPERM;
+-		}
+-
+ 		ret = k3_r5_core_run(core);
+ 		if (ret)
+ 			return ret;
+-
+-		core->released_from_reset = true;
+-		wake_up_interruptible(&cluster->core_transition);
+ 	}
+ 
+ 	return 0;
+@@ -628,8 +667,7 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ {
+ 	struct k3_r5_rproc *kproc = rproc->priv;
+ 	struct k3_r5_cluster *cluster = kproc->cluster;
+-	struct device *dev = kproc->dev;
+-	struct k3_r5_core *core1, *core = kproc->core;
++	struct k3_r5_core *core = kproc->core;
+ 	int ret;
+ 
+ 	/* halt all applicable cores */
+@@ -642,16 +680,6 @@ static int k3_r5_rproc_stop(struct rproc *rproc)
+ 			}
+ 		}
+ 	} else {
+-		/* do not allow core 0 to stop before core 1 */
+-		core1 = list_last_entry(&cluster->cores, struct k3_r5_core,
+-					elem);
+-		if (core != core1 && core1->rproc->state != RPROC_OFFLINE) {
+-			dev_err(dev, "%s: can not stop core 0 before core 1\n",
+-				__func__);
+-			ret = -EPERM;
+-			goto out;
+-		}
+-
+ 		ret = k3_r5_core_halt(core);
+ 		if (ret)
+ 			goto out;
+@@ -1279,26 +1307,6 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev)
+ 		    cluster->mode == CLUSTER_MODE_SINGLECPU ||
+ 		    cluster->mode == CLUSTER_MODE_SINGLECORE)
+ 			break;
+-
+-		/*
+-		 * R5 cores require to be powered on sequentially, core0
+-		 * should be in higher power state than core1 in a cluster
+-		 * So, wait for current core to power up before proceeding
+-		 * to next core and put timeout of 2sec for each core.
+-		 *
+-		 * This waiting mechanism is necessary because
+-		 * rproc_auto_boot_callback() for core1 can be called before
+-		 * core0 due to thread execution order.
+-		 */
+-		ret = wait_event_interruptible_timeout(cluster->core_transition,
+-						       core->released_from_reset,
+-						       msecs_to_jiffies(2000));
+-		if (ret <= 0) {
+-			dev_err(dev,
+-				"Timed out waiting for %s core to power up!\n",
+-				rproc->name);
+-			goto out;
+-		}
+ 	}
+ 
+ 	return 0;
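
The ti_k3_r5 rework above moves the core0-before-core1 ordering into .prepare()/.unprepare(), built on a wait queue with a 2-second timeout. A reduced sketch of the handshake (structure and fields hypothetical; the wait queue is assumed to be initialised with init_waitqueue_head()):

	#include <linux/jiffies.h>
	#include <linux/wait.h>

	struct cluster_sketch {
		wait_queue_head_t core_transition;
		bool core0_released;
	};

	/* Core1 side: block until core0 reports it is out of reset. */
	static int wait_for_core0(struct cluster_sketch *c)
	{
		long ret = wait_event_interruptible_timeout(c->core_transition,
							    c->core0_released,
							    msecs_to_jiffies(2000));

		return ret <= 0 ? -EPERM : 0;	/* timeout or signal */
	}

	/* Core0 side: publish the state change and wake any waiter. */
	static void mark_core0_released(struct cluster_sketch *c)
	{
		c->core0_released = true;
		wake_up_interruptible(&c->core_transition);
	}
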
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 40d386809d6b78..bb161def317533 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -746,7 +746,7 @@ static int __qcom_smd_send(struct qcom_smd_channel *channel, const void *data,
+ 	__le32 hdr[5] = { cpu_to_le32(len), };
+ 	int tlen = sizeof(hdr) + len;
+ 	unsigned long flags;
+-	int ret;
++	int ret = 0;
+ 
+ 	/* Word aligned channels only accept word size aligned data */
+ 	if (channel->info_word && len % 4)
+diff --git a/drivers/rtc/rtc-loongson.c b/drivers/rtc/rtc-loongson.c
+index 97e5625c064ceb..2ca7ffd5d7a92a 100644
+--- a/drivers/rtc/rtc-loongson.c
++++ b/drivers/rtc/rtc-loongson.c
+@@ -129,6 +129,14 @@ static u32 loongson_rtc_handler(void *id)
+ {
+ 	struct loongson_rtc_priv *priv = (struct loongson_rtc_priv *)id;
+ 
++	rtc_update_irq(priv->rtcdev, 1, RTC_AF | RTC_IRQF);
++
++	/*
++	 * The TOY_MATCH0_REG should be cleared 0 here,
++	 * otherwise the interrupt cannot be cleared.
++	 */
++	regmap_write(priv->regmap, TOY_MATCH0_REG, 0);
++
+ 	spin_lock(&priv->lock);
+ 	/* Disable RTC alarm wakeup and interrupt */
+ 	writel(readl(priv->pm_base + PM1_EN_REG) & ~RTC_EN,
+diff --git a/drivers/rtc/rtc-sh.c b/drivers/rtc/rtc-sh.c
+index 9ea40f40188f3e..3409f576422485 100644
+--- a/drivers/rtc/rtc-sh.c
++++ b/drivers/rtc/rtc-sh.c
+@@ -485,9 +485,15 @@ static int __init sh_rtc_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	rtc->periodic_irq = ret;
+-	rtc->carry_irq = platform_get_irq(pdev, 1);
+-	rtc->alarm_irq = platform_get_irq(pdev, 2);
++	if (!pdev->dev.of_node) {
++		rtc->periodic_irq = ret;
++		rtc->carry_irq = platform_get_irq(pdev, 1);
++		rtc->alarm_irq = platform_get_irq(pdev, 2);
++	} else {
++		rtc->alarm_irq = ret;
++		rtc->periodic_irq = platform_get_irq(pdev, 1);
++		rtc->carry_irq = platform_get_irq(pdev, 2);
++	}
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_IO, 0);
+ 	if (!res)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 944cf2fb05617d..d3981b6779316b 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1885,33 +1885,14 @@ static int hisi_sas_I_T_nexus_reset(struct domain_device *device)
+ 	}
+ 	hisi_sas_dereg_device(hisi_hba, device);
+ 
+-	rc = hisi_sas_debug_I_T_nexus_reset(device);
+-	if (rc == TMF_RESP_FUNC_COMPLETE && dev_is_sata(device)) {
+-		struct sas_phy *local_phy;
+-
++	if (dev_is_sata(device)) {
+ 		rc = hisi_sas_softreset_ata_disk(device);
+-		switch (rc) {
+-		case -ECOMM:
+-			rc = -ENODEV;
+-			break;
+-		case TMF_RESP_FUNC_FAILED:
+-		case -EMSGSIZE:
+-		case -EIO:
+-			local_phy = sas_get_local_phy(device);
+-			rc = sas_phy_enable(local_phy, 0);
+-			if (!rc) {
+-				local_phy->enabled = 0;
+-				dev_err(dev, "Disabled local phy of ATA disk %016llx due to softreset fail (%d)\n",
+-					SAS_ADDR(device->sas_addr), rc);
+-				rc = -ENODEV;
+-			}
+-			sas_put_local_phy(local_phy);
+-			break;
+-		default:
+-			break;
+-		}
++		if (rc == TMF_RESP_FUNC_FAILED)
++			dev_err(dev, "ata disk %016llx reset (%d)\n",
++				SAS_ADDR(device->sas_addr), rc);
+ 	}
+ 
++	rc = hisi_sas_debug_I_T_nexus_reset(device);
+ 	if ((rc == TMF_RESP_FUNC_COMPLETE) || (rc == -ENODEV))
+ 		hisi_sas_release_task(hisi_hba, device);
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 179be6c5a43e07..c2ec4db6728697 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -161,7 +161,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 	struct lpfc_hba   *phba;
+ 	struct lpfc_work_evt *evtp;
+ 	unsigned long iflags;
+-	bool nvme_reg = false;
++	bool drop_initial_node_ref = false;
+ 
+ 	ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode;
+ 	if (!ndlp)
+@@ -188,8 +188,13 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 		spin_lock_irqsave(&ndlp->lock, iflags);
+ 		ndlp->rport = NULL;
+ 
+-		if (ndlp->fc4_xpt_flags & NVME_XPT_REGD)
+-			nvme_reg = true;
++		/* Only 1 thread can drop the initial node reference.
++		 * If not registered for NVME and NLP_DROPPED flag is
++		 * clear, remove the initial reference.
++		 */
++		if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++			if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag))
++				drop_initial_node_ref = true;
+ 
+ 		/* The scsi_transport is done with the rport so lpfc cannot
+ 		 * call to unregister.
+@@ -200,13 +205,16 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 			/* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node,
+ 			 * unregister calls were made to the scsi and nvme
+ 			 * transports and refcnt was already decremented. Clear
+-			 * the NLP_XPT_REGD flag only if the NVME Rport is
++			 * the NLP_XPT_REGD flag only if the NVME nrport is
+ 			 * confirmed unregistered.
+ 			 */
+-			if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
+-				ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
++			if (ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
++				if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++					ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
+ 				spin_unlock_irqrestore(&ndlp->lock, iflags);
+-				lpfc_nlp_put(ndlp); /* may free ndlp */
++
++				/* Release scsi transport reference */
++				lpfc_nlp_put(ndlp);
+ 			} else {
+ 				spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 			}
+@@ -214,14 +222,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 			spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 		}
+ 
+-		/* Only 1 thread can drop the initial node reference.  If
+-		 * another thread has set NLP_DROPPED, this thread is done.
+-		 */
+-		if (nvme_reg || test_bit(NLP_DROPPED, &ndlp->nlp_flag))
+-			return;
+-
+-		set_bit(NLP_DROPPED, &ndlp->nlp_flag);
+-		lpfc_nlp_put(ndlp);
++		if (drop_initial_node_ref)
++			lpfc_nlp_put(ndlp);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_ctl.c b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+index 063b10dd82514e..02fc204b9bf7b2 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_ctl.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_ctl.c
+@@ -2869,8 +2869,9 @@ _ctl_get_mpt_mctp_passthru_adapter(int dev_index)
+ 		if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) {
+ 			if (count == dev_index) {
+ 				spin_unlock(&gioc_lock);
+-				return 0;
++				return ioc;
+ 			}
++			count++;
+ 		}
+ 	}
+ 	spin_unlock(&gioc_lock);
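
The mpt3sas fix above repairs an Nth-match lookup that returned 0 instead of the adapter pointer and never advanced its counter, so indices past the first could never match. A generic sketch of the corrected walk (types hypothetical):

	#include <linux/list.h>

	struct adapter {
		struct list_head node;
		bool mctp_passthru;
	};

	static struct adapter *nth_capable_adapter(struct list_head *head, int n)
	{
		struct adapter *a;
		int count = 0;

		list_for_each_entry(a, head, node) {
			if (!a->mctp_passthru)
				continue;
			if (count == n)
				return a;	/* was "return 0" */
			count++;		/* was missing entirely */
		}
		return NULL;
	}
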
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 436bd29d5ebae6..6b1ebab36fa35b 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -699,7 +699,7 @@ static u32 qedf_get_login_failures(void *cookie)
+ }
+ 
+ static struct qed_fcoe_cb_ops qedf_cb_ops = {
+-	{
++	.common = {
+ 		.link_update = qedf_link_update,
+ 		.bw_update = qedf_bw_update,
+ 		.schedule_recovery_handler = qedf_schedule_recovery_handler,
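
The qedf change above names the nested member being initialised (.common = { ... }) instead of positionally initialising the first field. A standalone sketch of why the designated form is safer (all types hypothetical):

	struct common_ops {
		void (*link_update)(void *dev);
	};

	struct fcoe_cb_ops {
		struct common_ops common;	/* happens to be first today */
	};

	static void my_link_update(void *dev)
	{
	}

	/* The designated form stays correct even if members are later
	 * added or reordered in the outer structure.
	 */
	static const struct fcoe_cb_ops ops = {
		.common = {
			.link_update = my_link_update,
		},
	};
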
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 0b8c91bf793fcb..c75a806496d674 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -3499,7 +3499,7 @@ static int iscsi_new_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.new_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_new_fnode;
+ 	}
+ 
+ 	index = transport->new_flashnode(shost, data, len);
+@@ -3509,7 +3509,6 @@ static int iscsi_new_flashnode(struct iscsi_transport *transport,
+ 	else
+ 		err = -EIO;
+ 
+-put_host:
+ 	scsi_host_put(shost);
+ 
+ exit_new_fnode:
+@@ -3534,7 +3533,7 @@ static int iscsi_del_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.del_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_del_fnode;
+ 	}
+ 
+ 	idx = ev->u.del_flashnode.flashnode_idx;
+@@ -3576,7 +3575,7 @@ static int iscsi_login_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.login_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_login_fnode;
+ 	}
+ 
+ 	idx = ev->u.login_flashnode.flashnode_idx;
+@@ -3628,7 +3627,7 @@ static int iscsi_logout_flashnode(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.logout_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_logout_fnode;
+ 	}
+ 
+ 	idx = ev->u.logout_flashnode.flashnode_idx;
+@@ -3678,7 +3677,7 @@ static int iscsi_logout_flashnode_sid(struct iscsi_transport *transport,
+ 		pr_err("%s could not find host no %u\n",
+ 		       __func__, ev->u.logout_flashnode.host_no);
+ 		err = -ENODEV;
+-		goto put_host;
++		goto exit_logout_sid;
+ 	}
+ 
+ 	session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid);
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 8a26eca4fdc9b8..6c9dec7e3128fd 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -5990,7 +5990,7 @@ static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info,
+ 			pqi_stream_data->next_lba = rmd.first_block +
+ 				rmd.block_cnt;
+ 			pqi_stream_data->last_accessed = jiffies;
+-			per_cpu_ptr(device->raid_io_stats, smp_processor_id())->write_stream_cnt++;
++			per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->write_stream_cnt++;
+ 			return true;
+ 		}
+ 
+@@ -6069,7 +6069,7 @@ static int pqi_scsi_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scm
+ 			rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, scmd, queue_group);
+ 			if (rc == 0 || rc == SCSI_MLQUEUE_HOST_BUSY) {
+ 				raid_bypassed = true;
+-				per_cpu_ptr(device->raid_io_stats, smp_processor_id())->raid_bypass_cnt++;
++				per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->raid_bypass_cnt++;
+ 			}
+ 		}
+ 		if (!raid_bypassed)
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index 9ab5ba9cf1d61c..ef8f355589a584 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -166,7 +166,7 @@ static int aspeed_lpc_snoop_config_irq(struct aspeed_lpc_snoop *lpc_snoop,
+ 	int rc;
+ 
+ 	lpc_snoop->irq = platform_get_irq(pdev, 0);
+-	if (!lpc_snoop->irq)
++	if (lpc_snoop->irq < 0)
+ 		return -ENODEV;
+ 
+ 	rc = devm_request_irq(dev, lpc_snoop->irq,
+@@ -200,11 +200,15 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 	lpc_snoop->chan[channel].miscdev.minor = MISC_DYNAMIC_MINOR;
+ 	lpc_snoop->chan[channel].miscdev.name =
+ 		devm_kasprintf(dev, GFP_KERNEL, "%s%d", DEVICE_NAME, channel);
++	if (!lpc_snoop->chan[channel].miscdev.name) {
++		rc = -ENOMEM;
++		goto err_free_fifo;
++	}
+ 	lpc_snoop->chan[channel].miscdev.fops = &snoop_fops;
+ 	lpc_snoop->chan[channel].miscdev.parent = dev;
+ 	rc = misc_register(&lpc_snoop->chan[channel].miscdev);
+ 	if (rc)
+-		return rc;
++		goto err_free_fifo;
+ 
+ 	/* Enable LPC snoop channel at requested port */
+ 	switch (channel) {
+@@ -221,7 +225,8 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 		hicrb_en = HICRB_ENSNP1D;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		rc = -EINVAL;
++		goto err_misc_deregister;
+ 	}
+ 
+ 	regmap_update_bits(lpc_snoop->regmap, HICR5, hicr5_en, hicr5_en);
+@@ -231,6 +236,12 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
+ 		regmap_update_bits(lpc_snoop->regmap, HICRB,
+ 				hicrb_en, hicrb_en);
+ 
++	return 0;
++
++err_misc_deregister:
++	misc_deregister(&lpc_snoop->chan[channel].miscdev);
++err_free_fifo:
++	kfifo_free(&lpc_snoop->chan[channel].fifo);
+ 	return rc;
+ }
+ 
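
The aspeed hunks above convert early returns into a single unwind ladder, so each failure point releases exactly what succeeded before it (misc_deregister() for the registered miscdev, kfifo_free() for the FIFO). A generic sketch of the shape, with hypothetical step/undo helpers standing in for the driver's calls:

static int enable_channel(struct device *dev)
{
	int rc;

	rc = alloc_fifo(dev);			/* hypothetical */
	if (rc)
		return rc;

	rc = register_miscdev(dev);		/* hypothetical */
	if (rc)
		goto err_free_fifo;

	rc = enable_hw(dev);			/* hypothetical */
	if (rc)
		goto err_deregister;

	return 0;

err_deregister:
	deregister_miscdev(dev);
err_free_fifo:
	free_fifo(dev);
	return rc;
}
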
+diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c
+index a3e88ced328a91..c9199d6fbe26ec 100644
+--- a/drivers/soc/qcom/smp2p.c
++++ b/drivers/soc/qcom/smp2p.c
+@@ -575,7 +575,7 @@ static int qcom_smp2p_probe(struct platform_device *pdev)
+ 	smp2p->mbox_client.knows_txdone = true;
+ 	smp2p->mbox_chan = mbox_request_channel(&smp2p->mbox_client, 0);
+ 	if (IS_ERR(smp2p->mbox_chan)) {
+-		if (PTR_ERR(smp2p->mbox_chan) != -ENODEV)
++		if (PTR_ERR(smp2p->mbox_chan) != -ENOENT)
+ 			return PTR_ERR(smp2p->mbox_chan);
+ 
+ 		smp2p->mbox_chan = NULL;
+diff --git a/drivers/soundwire/generic_bandwidth_allocation.c b/drivers/soundwire/generic_bandwidth_allocation.c
+index 1cfaccf43eac50..c18f0c16f92973 100644
+--- a/drivers/soundwire/generic_bandwidth_allocation.c
++++ b/drivers/soundwire/generic_bandwidth_allocation.c
+@@ -204,6 +204,13 @@ static void _sdw_compute_port_params(struct sdw_bus *bus,
+ 			port_bo = 1;
+ 
+ 			list_for_each_entry(m_rt, &bus->m_rt_list, bus_node) {
++				/*
++				 * Only runtimes with CONFIGURED, PREPARED, ENABLED, and DISABLED
++				 * states should be included in the bandwidth calculation.
++				 */
++				if (m_rt->stream->state > SDW_STREAM_DISABLED ||
++				    m_rt->stream->state < SDW_STREAM_CONFIGURED)
++					continue;
+ 				sdw_compute_master_ports(m_rt, &params[i], &port_bo, hstop);
+ 			}
+ 
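
The new filter relies on sdw_stream_state being an ordered enum, so a simple range test keeps only CONFIGURED, PREPARED, ENABLED and DISABLED runtimes in the bandwidth sum; DEPREPARED and RELEASED streams no longer inflate the computed bus clock. Reduced to a predicate (a sketch, not the driver's code):

static bool counts_for_bandwidth(enum sdw_stream_state s)
{
	/* ordered enum: CONFIGURED < PREPARED < ENABLED < DISABLED */
	return s >= SDW_STREAM_CONFIGURED && s <= SDW_STREAM_DISABLED;
}
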
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 244ac010686298..e7b61dc4ce6766 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -1436,22 +1436,17 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+-	pm_runtime_set_active(&pdev->dev);
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_get_noresume(&pdev->dev);
++	devm_pm_runtime_set_active_enabled(&pdev->dev);
++	devm_pm_runtime_get_noresume(&pdev->dev);
+ 
+ 	err = atmel_qspi_init(aq);
+ 	if (err)
+ 		goto dma_release;
+ 
+ 	err = spi_register_controller(ctrl);
+-	if (err) {
+-		pm_runtime_put_noidle(&pdev->dev);
+-		pm_runtime_disable(&pdev->dev);
+-		pm_runtime_set_suspended(&pdev->dev);
+-		pm_runtime_dont_use_autosuspend(&pdev->dev);
++	if (err)
+ 		goto dma_release;
+-	}
++
+ 	pm_runtime_mark_last_busy(&pdev->dev);
+ 	pm_runtime_put_autosuspend(&pdev->dev);
+ 
+@@ -1530,10 +1525,6 @@ static void atmel_qspi_remove(struct platform_device *pdev)
+ 		 */
+ 		dev_warn(&pdev->dev, "Failed to resume device on remove\n");
+ 	}
+-
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_dont_use_autosuspend(&pdev->dev);
+-	pm_runtime_put_noidle(&pdev->dev);
+ }
+ 
+ static int __maybe_unused atmel_qspi_suspend(struct device *dev)
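
The atmel-quadspi hunks lean on the devm_pm_runtime_*() helpers, whose devres actions undo the set-active/enable/get at driver detach; that is why probe's manual error unwind and the teardown block in remove() can simply be deleted. The idea, shown with the longer-standing devm_pm_runtime_enable() (a sketch; example_probe() is a hypothetical name):

static int example_probe(struct device *dev)
{
	int err;

	err = pm_runtime_set_active(dev);	/* mark active first */
	if (err)
		return err;

	err = devm_pm_runtime_enable(dev);	/* disabled again on detach */
	if (err)
		return err;

	/* ... register the controller; no manual pm_runtime_disable() ... */
	return 0;
}
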
+diff --git a/drivers/spi/spi-bcm63xx-hsspi.c b/drivers/spi/spi-bcm63xx-hsspi.c
+index 644b44d2aef24e..18261cbd413b49 100644
+--- a/drivers/spi/spi-bcm63xx-hsspi.c
++++ b/drivers/spi/spi-bcm63xx-hsspi.c
+@@ -745,7 +745,7 @@ static int bcm63xx_hsspi_probe(struct platform_device *pdev)
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	reset = devm_reset_control_get_optional_exclusive(dev, NULL);
++	reset = devm_reset_control_get_optional_shared(dev, NULL);
+ 	if (IS_ERR(reset))
+ 		return PTR_ERR(reset);
+ 
+diff --git a/drivers/spi/spi-bcm63xx.c b/drivers/spi/spi-bcm63xx.c
+index c8f64ec69344af..b56210734caafc 100644
+--- a/drivers/spi/spi-bcm63xx.c
++++ b/drivers/spi/spi-bcm63xx.c
+@@ -523,7 +523,7 @@ static int bcm63xx_spi_probe(struct platform_device *pdev)
+ 		return PTR_ERR(clk);
+ 	}
+ 
+-	reset = devm_reset_control_get_optional_exclusive(dev, NULL);
++	reset = devm_reset_control_get_optional_shared(dev, NULL);
+ 	if (IS_ERR(reset))
+ 		return PTR_ERR(reset);
+ 
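
Both bcm63xx controllers now request their reset line as shared rather than exclusive, since other peripherals on these SoCs reference the same line and an exclusive request would make the second consumer's probe fail. A sketch of the consumer side (example_get_reset() is a hypothetical name):

static int example_get_reset(struct device *dev)
{
	struct reset_control *reset;

	reset = devm_reset_control_get_optional_shared(dev, NULL);
	if (IS_ERR(reset))
		return PTR_ERR(reset);

	/* on a shared line, the reset pulse fires only for the first caller */
	return reset_control_reset(reset);
}
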
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 29c616e2c408cf..70bb74b3bd9c32 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -134,6 +134,7 @@ struct omap2_mcspi {
+ 	size_t			max_xfer_len;
+ 	u32			ref_clk_hz;
+ 	bool			use_multi_mode;
++	bool			last_msg_kept_cs;
+ };
+ 
+ struct omap2_mcspi_cs {
+@@ -1269,6 +1270,10 @@ static int omap2_mcspi_prepare_message(struct spi_controller *ctlr,
+ 	 * multi-mode is applicable.
+ 	 */
+ 	mcspi->use_multi_mode = true;
++
++	if (mcspi->last_msg_kept_cs)
++		mcspi->use_multi_mode = false;
++
+ 	list_for_each_entry(tr, &msg->transfers, transfer_list) {
+ 		if (!tr->bits_per_word)
+ 			bits_per_word = msg->spi->bits_per_word;
+@@ -1287,18 +1292,19 @@ static int omap2_mcspi_prepare_message(struct spi_controller *ctlr,
+ 			mcspi->use_multi_mode = false;
+ 		}
+ 
+-		/* Check if transfer asks to change the CS status after the transfer */
+-		if (!tr->cs_change)
+-			mcspi->use_multi_mode = false;
+-
+-		/*
+-		 * If at least one message is not compatible, switch back to single mode
+-		 *
+-		 * The bits_per_word of certain transfer can be different, but it will have no
+-		 * impact on the signal itself.
+-		 */
+-		if (!mcspi->use_multi_mode)
+-			break;
++		if (list_is_last(&tr->transfer_list, &msg->transfers)) {
++			/* Check if transfer asks to keep the CS status after the whole message */
++			if (tr->cs_change) {
++				mcspi->use_multi_mode = false;
++				mcspi->last_msg_kept_cs = true;
++			} else {
++				mcspi->last_msg_kept_cs = false;
++			}
++		} else {
++			/* Check if transfer asks to change the CS status after the transfer */
++			if (!tr->cs_change)
++				mcspi->use_multi_mode = false;
++		}
+ 	}
+ 
+ 	omap2_mcspi_set_mode(ctlr);
+diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c
+index 94948c8781e83f..44a8f58e46fe12 100644
+--- a/drivers/spi/spi-qpic-snand.c
++++ b/drivers/spi/spi-qpic-snand.c
+@@ -250,9 +250,11 @@ static const struct mtd_ooblayout_ops qcom_spi_ooblayout = {
+ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ {
+ 	struct qcom_nand_controller *snandc = nand_to_qcom_snand(nand);
++	struct nand_ecc_props *reqs = &nand->ecc.requirements;
++	struct nand_ecc_props *user = &nand->ecc.user_conf;
+ 	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+ 	struct mtd_info *mtd = nanddev_to_mtd(nand);
+-	int cwperpage, bad_block_byte;
++	int cwperpage, bad_block_byte, ret;
+ 	struct qpic_ecc *ecc_cfg;
+ 
+ 	cwperpage = mtd->writesize / NANDC_STEP_SIZE;
+@@ -261,11 +263,39 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 	ecc_cfg = kzalloc(sizeof(*ecc_cfg), GFP_KERNEL);
+ 	if (!ecc_cfg)
+ 		return -ENOMEM;
+-	snandc->qspi->oob_buf = kzalloc(mtd->writesize + mtd->oobsize,
++
++	if (user->step_size && user->strength) {
++		ecc_cfg->step_size = user->step_size;
++		ecc_cfg->strength = user->strength;
++	} else if (reqs->step_size && reqs->strength) {
++		ecc_cfg->step_size = reqs->step_size;
++		ecc_cfg->strength = reqs->strength;
++	} else {
++		/* use defaults */
++		ecc_cfg->step_size = NANDC_STEP_SIZE;
++		ecc_cfg->strength = 4;
++	}
++
++	if (ecc_cfg->step_size != NANDC_STEP_SIZE) {
++		dev_err(snandc->dev,
++			"only %u bytes ECC step size is supported\n",
++			NANDC_STEP_SIZE);
++		ret = -EOPNOTSUPP;
++		goto err_free_ecc_cfg;
++	}
++
++	if (ecc_cfg->strength != 4) {
++		dev_err(snandc->dev,
++			"only 4 bits ECC strength is supported\n");
++		ret = -EOPNOTSUPP;
++		goto err_free_ecc_cfg;
++	}
++
++	snandc->qspi->oob_buf = kmalloc(mtd->writesize + mtd->oobsize,
+ 					GFP_KERNEL);
+ 	if (!snandc->qspi->oob_buf) {
+-		kfree(ecc_cfg);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto err_free_ecc_cfg;
+ 	}
+ 
+ 	memset(snandc->qspi->oob_buf, 0xff, mtd->writesize + mtd->oobsize);
+@@ -280,8 +310,6 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 	ecc_cfg->bytes = ecc_cfg->ecc_bytes_hw + ecc_cfg->spare_bytes + ecc_cfg->bbm_size;
+ 
+ 	ecc_cfg->steps = 4;
+-	ecc_cfg->strength = 4;
+-	ecc_cfg->step_size = 512;
+ 	ecc_cfg->cw_data = 516;
+ 	ecc_cfg->cw_size = ecc_cfg->cw_data + ecc_cfg->bytes;
+ 	bad_block_byte = mtd->writesize - ecc_cfg->cw_size * (cwperpage - 1) + 1;
+@@ -339,6 +367,10 @@ static int qcom_spi_ecc_init_ctx_pipelined(struct nand_device *nand)
+ 		ecc_cfg->strength, ecc_cfg->step_size);
+ 
+ 	return 0;
++
++err_free_ecc_cfg:
++	kfree(ecc_cfg);
++	return ret;
+ }
+ 
+ static void qcom_spi_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 8a98c313548e37..7d8a7998f8ae73 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -918,6 +918,7 @@ static int sh_msiof_transfer_one(struct spi_controller *ctlr,
+ 	void *rx_buf = t->rx_buf;
+ 	unsigned int len = t->len;
+ 	unsigned int bits = t->bits_per_word;
++	unsigned int max_wdlen = 256;
+ 	unsigned int bytes_per_word;
+ 	unsigned int words;
+ 	int n;
+@@ -931,17 +932,17 @@ static int sh_msiof_transfer_one(struct spi_controller *ctlr,
+ 	if (!spi_controller_is_target(p->ctlr))
+ 		sh_msiof_spi_set_clk_regs(p, t);
+ 
++	if (tx_buf)
++		max_wdlen = min(max_wdlen, p->tx_fifo_size);
++	if (rx_buf)
++		max_wdlen = min(max_wdlen, p->rx_fifo_size);
++
+ 	while (ctlr->dma_tx && len > 15) {
+ 		/*
+ 		 *  DMA supports 32-bit words only, hence pack 8-bit and 16-bit
+ 		 *  words, with byte resp. word swapping.
+ 		 */
+-		unsigned int l = 0;
+-
+-		if (tx_buf)
+-			l = min(round_down(len, 4), p->tx_fifo_size * 4);
+-		if (rx_buf)
+-			l = min(round_down(len, 4), p->rx_fifo_size * 4);
++		unsigned int l = min(round_down(len, 4), max_wdlen * 4);
+ 
+ 		if (bits <= 8) {
+ 			copy32 = copy_bswap32;
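
The sh-msiof rework computes one clamp up front — 256 words at most, further limited by whichever FIFOs the transfer actually uses — where the old code let an rx-only limit silently overwrite the tx one inside the loop. The clamp arithmetic as a standalone sketch (hypothetical helper; len in bytes, 32-bit words on the DMA path):

static unsigned int dma_chunk_bytes(unsigned int len, bool tx, bool rx,
				    unsigned int tx_fifo_words,
				    unsigned int rx_fifo_words)
{
	unsigned int max_wdlen = 256;

	if (tx)
		max_wdlen = min(max_wdlen, tx_fifo_words);
	if (rx)
		max_wdlen = min(max_wdlen, rx_fifo_words);

	return min(round_down(len, 4), max_wdlen * 4);
}
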
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index 64e1b2f8a0001c..665c06e1473beb 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -134,7 +134,7 @@
+ #define QSPI_COMMAND_VALUE_SET(X)		(((x) & 0xFF) << 0)
+ 
+ #define QSPI_CMB_SEQ_CMD_CFG			0x1a0
+-#define QSPI_COMMAND_X1_X2_X4(x)		(((x) & 0x3) << 13)
++#define QSPI_COMMAND_X1_X2_X4(x)		((((x) >> 1) & 0x3) << 13)
+ #define QSPI_COMMAND_X1_X2_X4_MASK		(0x03 << 13)
+ #define QSPI_COMMAND_SDR_DDR			BIT(12)
+ #define QSPI_COMMAND_SIZE_SET(x)		(((x) & 0xFF) << 0)
+@@ -147,7 +147,7 @@
+ #define QSPI_ADDRESS_VALUE_SET(X)		(((x) & 0xFFFF) << 0)
+ 
+ #define QSPI_CMB_SEQ_ADDR_CFG			0x1ac
+-#define QSPI_ADDRESS_X1_X2_X4(x)		(((x) & 0x3) << 13)
++#define QSPI_ADDRESS_X1_X2_X4(x)		((((x) >> 1) & 0x3) << 13)
+ #define QSPI_ADDRESS_X1_X2_X4_MASK		(0x03 << 13)
+ #define QSPI_ADDRESS_SDR_DDR			BIT(12)
+ #define QSPI_ADDRESS_SIZE_SET(x)		(((x) & 0xFF) << 0)
+@@ -1036,10 +1036,6 @@ static u32 tegra_qspi_addr_config(bool is_ddr, u8 bus_width, u8 len)
+ {
+ 	u32 addr_config = 0;
+ 
+-	/* Extract Address configuration and value */
+-	is_ddr = 0; //Only SDR mode supported
+-	bus_width = 0; //X1 mode
+-
+ 	if (is_ddr)
+ 		addr_config |= QSPI_ADDRESS_SDR_DDR;
+ 	else
+@@ -1079,13 +1075,13 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
+ 		switch (transfer_phase) {
+ 		case CMD_TRANSFER:
+ 			/* X1 SDR mode */
+-			cmd_config = tegra_qspi_cmd_config(false, 0,
++			cmd_config = tegra_qspi_cmd_config(false, xfer->tx_nbits,
+ 							   xfer->len);
+ 			cmd_value = *((const u8 *)(xfer->tx_buf));
+ 			break;
+ 		case ADDR_TRANSFER:
+ 			/* X1 SDR mode */
+-			addr_config = tegra_qspi_addr_config(false, 0,
++			addr_config = tegra_qspi_addr_config(false, xfer->tx_nbits,
+ 							     xfer->len);
+ 			address_value = *((const u32 *)(xfer->tx_buf));
+ 			break;
+@@ -1163,26 +1159,22 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
+ 				ret = -EIO;
+ 				goto exit;
+ 			}
+-			if (!xfer->cs_change) {
+-				tegra_qspi_transfer_end(spi);
+-				spi_transfer_delay_exec(xfer);
+-			}
+ 			break;
+ 		default:
+ 			ret = -EINVAL;
+ 			goto exit;
+ 		}
+ 		msg->actual_length += xfer->len;
++		if (!xfer->cs_change && transfer_phase == DATA_TRANSFER) {
++			tegra_qspi_transfer_end(spi);
++			spi_transfer_delay_exec(xfer);
++		}
+ 		transfer_phase++;
+ 	}
+ 	ret = 0;
+ 
+ exit:
+ 	msg->status = ret;
+-	if (ret < 0) {
+-		tegra_qspi_transfer_end(spi);
+-		spi_transfer_delay_exec(xfer);
+-	}
+ 
+ 	return ret;
+ }
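
The tegra macro change encodes the SPI bus width into the two-bit X1/X2/X4 register field: tx_nbits is 1, 2 or 4 (SPI_NBITS_SINGLE/DUAL/QUAD) while the hardware expects 0, 1 or 2, so a right shift by one performs the mapping. A worked check (hypothetical helper name):

/* 1 >> 1 == 0 (X1), 2 >> 1 == 1 (X2), 4 >> 1 == 2 (X4) */
static inline u32 qspi_width_field(u8 nbits)
{
	return (nbits >> 1) & 0x3;
}
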
+diff --git a/drivers/staging/gpib/ines/ines_gpib.c b/drivers/staging/gpib/ines/ines_gpib.c
+index d93eb05dab9038..8e2375d8ddac24 100644
+--- a/drivers/staging/gpib/ines/ines_gpib.c
++++ b/drivers/staging/gpib/ines/ines_gpib.c
+@@ -1484,7 +1484,7 @@ static void __exit ines_exit_module(void)
+ 	gpib_unregister_driver(&ines_pci_unaccel_interface);
+ 	gpib_unregister_driver(&ines_pci_accel_interface);
+ 	gpib_unregister_driver(&ines_isa_interface);
+-#ifdef GPIB__PCMCIA
++#ifdef CONFIG_GPIB_PCMCIA
+ 	gpib_unregister_driver(&ines_pcmcia_interface);
+ 	gpib_unregister_driver(&ines_pcmcia_unaccel_interface);
+ 	gpib_unregister_driver(&ines_pcmcia_accel_interface);
+diff --git a/drivers/staging/gpib/uapi/gpib_user.h b/drivers/staging/gpib/uapi/gpib_user.h
+index 5ff4588686fde3..0fd32fb9e7a64d 100644
+--- a/drivers/staging/gpib/uapi/gpib_user.h
++++ b/drivers/staging/gpib/uapi/gpib_user.h
+@@ -178,7 +178,7 @@ static inline uint8_t MTA(unsigned int addr)
+ 
+ static inline uint8_t MSA(unsigned int addr)
+ {
+-	return gpib_address_restrict(addr) | SAD;
++	return (addr & 0x1f) | SAD;
+ }
+ 
+ static inline uint8_t PPE_byte(unsigned int dio_line, int sense)
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index f9bef5173bf25c..a9bfd5305410c2 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -213,8 +213,14 @@ static int rkvdec_enum_framesizes(struct file *file, void *priv,
+ 	if (!fmt)
+ 		return -EINVAL;
+ 
+-	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+-	fsize->stepwise = fmt->frmsize;
++	fsize->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
++	fsize->stepwise.min_width = 1;
++	fsize->stepwise.max_width = fmt->frmsize.max_width;
++	fsize->stepwise.step_width = 1;
++	fsize->stepwise.min_height = 1;
++	fsize->stepwise.max_height = fmt->frmsize.max_height;
++	fsize->stepwise.step_height = 1;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 088481d91e6e29..985925147ac068 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -213,6 +213,13 @@ static const struct debugfs_reg32 lvts_regs[] = {
+ 	LVTS_DEBUG_FS_REGS(LVTS_CLKEN),
+ };
+ 
++static void lvts_debugfs_exit(void *data)
++{
++	struct lvts_domain *lvts_td = data;
++
++	debugfs_remove_recursive(lvts_td->dom_dentry);
++}
++
+ static int lvts_debugfs_init(struct device *dev, struct lvts_domain *lvts_td)
+ {
+ 	struct debugfs_regset32 *regset;
+@@ -245,12 +252,7 @@ static int lvts_debugfs_init(struct device *dev, struct lvts_domain *lvts_td)
+ 		debugfs_create_regset32("registers", 0400, dentry, regset);
+ 	}
+ 
+-	return 0;
+-}
+-
+-static void lvts_debugfs_exit(struct lvts_domain *lvts_td)
+-{
+-	debugfs_remove_recursive(lvts_td->dom_dentry);
++	return devm_add_action_or_reset(dev, lvts_debugfs_exit, lvts_td);
+ }
+ 
+ #else
+@@ -261,8 +263,6 @@ static inline int lvts_debugfs_init(struct device *dev,
+ 	return 0;
+ }
+ 
+-static void lvts_debugfs_exit(struct lvts_domain *lvts_td) { }
+-
+ #endif
+ 
+ static int lvts_raw_to_temp(u32 raw_temp, int temp_factor)
+@@ -1374,8 +1374,6 @@ static void lvts_remove(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < lvts_td->num_lvts_ctrl; i++)
+ 		lvts_ctrl_set_enable(&lvts_td->lvts_ctrl[i], false);
+-
+-	lvts_debugfs_exit(lvts_td);
+ }
+ 
+ static const struct lvts_ctrl_data mt7988_lvts_ap_data_ctrl[] = {
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index e51d01671d8e7c..3e96f1afd4268e 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -440,10 +440,10 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags)
+ 			bool configured = val & PORT_CS_19_PC;
+ 			usb4 = port->usb4;
+ 
+-			if (((flags & TB_WAKE_ON_CONNECT) |
++			if (((flags & TB_WAKE_ON_CONNECT) &&
+ 			      device_may_wakeup(&usb4->dev)) && !configured)
+ 				val |= PORT_CS_19_WOC;
+-			if (((flags & TB_WAKE_ON_DISCONNECT) |
++			if (((flags & TB_WAKE_ON_DISCONNECT) &&
+ 			      device_may_wakeup(&usb4->dev)) && configured)
+ 				val |= PORT_CS_19_WOD;
+ 			if ((flags & TB_WAKE_ON_USB4) && configured)
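
The usb4 fix swaps bitwise OR for logical AND. With `|`, the inner expression was non-zero whenever device_may_wakeup() returned true, so the wake bit was armed even with the corresponding TB_WAKE_* flag clear. Illustrative fragment, with the flag clear and wakeup allowed:

/* flags == 0, device_may_wakeup() == true */
(0 /* flags & TB_WAKE_ON_CONNECT */) | 1;   /* == 1: wrongly arms WOC  */
(0 /* flags & TB_WAKE_ON_CONNECT */) && 1;  /* == 0: leaves WOC clear  */
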
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 2a0ce11f405d2d..72ae08d6204ff4 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1173,16 +1173,6 @@ static int omap_8250_tx_dma(struct uart_8250_port *p)
+ 		return 0;
+ 	}
+ 
+-	sg_init_table(&sg, 1);
+-	ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1,
+-					   UART_XMIT_SIZE, dma->tx_addr);
+-	if (ret != 1) {
+-		serial8250_clear_THRI(p);
+-		return 0;
+-	}
+-
+-	dma->tx_size = sg_dma_len(&sg);
+-
+ 	if (priv->habit & OMAP_DMA_TX_KICK) {
+ 		unsigned char c;
+ 		u8 tx_lvl;
+@@ -1207,18 +1197,22 @@ static int omap_8250_tx_dma(struct uart_8250_port *p)
+ 			ret = -EBUSY;
+ 			goto err;
+ 		}
+-		if (dma->tx_size < 4) {
++		if (kfifo_len(&tport->xmit_fifo) < 4) {
+ 			ret = -EINVAL;
+ 			goto err;
+ 		}
+-		if (!kfifo_get(&tport->xmit_fifo, &c)) {
++		if (!uart_fifo_out(&p->port, &c, 1)) {
+ 			ret = -EINVAL;
+ 			goto err;
+ 		}
+ 		skip_byte = c;
+-		/* now we need to recompute due to kfifo_get */
+-		kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1,
+-				UART_XMIT_SIZE, dma->tx_addr);
++	}
++
++	sg_init_table(&sg, 1);
++	ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1, UART_XMIT_SIZE, dma->tx_addr);
++	if (ret != 1) {
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	desc = dmaengine_prep_slave_sg(dma->txchan, &sg, 1, DMA_MEM_TO_DEV,
+@@ -1228,6 +1222,7 @@ static int omap_8250_tx_dma(struct uart_8250_port *p)
+ 		goto err;
+ 	}
+ 
++	dma->tx_size = sg_dma_len(&sg);
+ 	dma->tx_running = 1;
+ 
+ 	desc->callback = omap_8250_dma_tx_complete;
+diff --git a/drivers/tty/serial/milbeaut_usio.c b/drivers/tty/serial/milbeaut_usio.c
+index 059bea18dbab56..4e47dca2c4ed9c 100644
+--- a/drivers/tty/serial/milbeaut_usio.c
++++ b/drivers/tty/serial/milbeaut_usio.c
+@@ -523,7 +523,10 @@ static int mlb_usio_probe(struct platform_device *pdev)
+ 	}
+ 	port->membase = devm_ioremap(&pdev->dev, res->start,
+ 				resource_size(res));
+-
++	if (!port->membase) {
++		ret = -ENOMEM;
++		goto failed;
++	}
+ 	ret = platform_get_irq_byname(pdev, "rx");
+ 	mlb_usio_irq[index][RX] = ret;
+ 
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 4b91072f3a4e91..1f2bdd2e1cc593 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -1103,8 +1103,6 @@ long vt_compat_ioctl(struct tty_struct *tty,
+ 	case VT_WAITACTIVE:
+ 	case VT_RELDISP:
+ 	case VT_DISALLOCATE:
+-	case VT_RESIZE:
+-	case VT_RESIZEX:
+ 		return vt_ioctl(tty, cmd, arg);
+ 
+ 	/*
+diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
+index f1294c29f48491..1e50675772febb 100644
+--- a/drivers/ufs/core/ufs-mcq.c
++++ b/drivers/ufs/core/ufs-mcq.c
+@@ -674,7 +674,6 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
+ 	int tag = scsi_cmd_to_rq(cmd)->tag;
+ 	struct ufshcd_lrb *lrbp = &hba->lrb[tag];
+ 	struct ufs_hw_queue *hwq;
+-	unsigned long flags;
+ 	int err;
+ 
+ 	/* Skip task abort in case previous aborts failed and report failure */
+@@ -713,10 +712,5 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
+ 		return FAILED;
+ 	}
+ 
+-	spin_lock_irqsave(&hwq->cq_lock, flags);
+-	if (ufshcd_cmd_inflight(lrbp->cmd))
+-		ufshcd_release_scsi_cmd(hba, lrbp);
+-	spin_unlock_irqrestore(&hwq->cq_lock, flags);
+-
+ 	return SUCCESS;
+ }
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 7735421e399182..04f769d907a446 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -6587,9 +6587,14 @@ static void ufshcd_err_handler(struct work_struct *work)
+ 		up(&hba->host_sem);
+ 		return;
+ 	}
+-	ufshcd_set_eh_in_progress(hba);
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
++
+ 	ufshcd_err_handling_prepare(hba);
++
++	spin_lock_irqsave(hba->host->host_lock, flags);
++	ufshcd_set_eh_in_progress(hba);
++	spin_unlock_irqrestore(hba->host->host_lock, flags);
++
+ 	/* Complete requests that have door-bell cleared by h/w */
+ 	ufshcd_complete_requests(hba, false);
+ 	spin_lock_irqsave(hba->host->host_lock, flags);
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index c0761ccc1381e3..31649f908dd466 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -103,7 +103,9 @@ static const struct __ufs_qcom_bw_table {
+ };
+ 
+ static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
+-static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq);
++static unsigned long ufs_qcom_opp_freq_to_clk_freq(struct ufs_hba *hba,
++						   unsigned long freq, char *name);
++static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up, unsigned long freq);
+ 
+ static struct ufs_qcom_host *rcdev_to_ufs_host(struct reset_controller_dev *rcd)
+ {
+@@ -452,10 +454,9 @@ static int ufs_qcom_power_up_sequence(struct ufs_hba *hba)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (phy->power_count) {
++	if (phy->power_count)
+ 		phy_power_off(phy);
+-		phy_exit(phy);
+-	}
++
+ 
+ 	/* phy initialization - calibrate the phy */
+ 	ret = phy_init(phy);
+@@ -602,7 +603,7 @@ static int ufs_qcom_link_startup_notify(struct ufs_hba *hba,
+ 			return -EINVAL;
+ 		}
+ 
+-		err = ufs_qcom_set_core_clk_ctrl(hba, ULONG_MAX);
++		err = ufs_qcom_set_core_clk_ctrl(hba, true, ULONG_MAX);
+ 		if (err)
+ 			dev_err(hba->dev, "cfg core clk ctrl failed\n");
+ 		/*
+@@ -1360,29 +1361,46 @@ static int ufs_qcom_set_clk_40ns_cycles(struct ufs_hba *hba,
+ 	return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), reg);
+ }
+ 
+-static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq)
++static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up, unsigned long freq)
+ {
+ 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ 	struct list_head *head = &hba->clk_list_head;
+ 	struct ufs_clk_info *clki;
+ 	u32 cycles_in_1us = 0;
+ 	u32 core_clk_ctrl_reg;
++	unsigned long clk_freq;
+ 	int err;
+ 
++	if (hba->use_pm_opp && freq != ULONG_MAX) {
++		clk_freq = ufs_qcom_opp_freq_to_clk_freq(hba, freq, "core_clk_unipro");
++		if (clk_freq) {
++			cycles_in_1us = ceil(clk_freq, HZ_PER_MHZ);
++			goto set_core_clk_ctrl;
++		}
++	}
++
+ 	list_for_each_entry(clki, head, list) {
+ 		if (!IS_ERR_OR_NULL(clki->clk) &&
+ 		    !strcmp(clki->name, "core_clk_unipro")) {
+-			if (!clki->max_freq)
++			if (!clki->max_freq) {
+ 				cycles_in_1us = 150; /* default for backwards compatibility */
+-			else if (freq == ULONG_MAX)
++				break;
++			}
++
++			if (freq == ULONG_MAX) {
+ 				cycles_in_1us = ceil(clki->max_freq, HZ_PER_MHZ);
+-			else
+-				cycles_in_1us = ceil(freq, HZ_PER_MHZ);
++				break;
++			}
+ 
++			if (is_scale_up)
++				cycles_in_1us = ceil(clki->max_freq, HZ_PER_MHZ);
++			else
++				cycles_in_1us = ceil(clk_get_rate(clki->clk), HZ_PER_MHZ);
+ 			break;
+ 		}
+ 	}
+ 
++set_core_clk_ctrl:
+ 	err = ufshcd_dme_get(hba,
+ 			    UIC_ARG_MIB(DME_VS_CORE_CLK_CTRL),
+ 			    &core_clk_ctrl_reg);
+@@ -1425,7 +1443,7 @@ static int ufs_qcom_clk_scale_up_pre_change(struct ufs_hba *hba, unsigned long f
+ 		return ret;
+ 	}
+ 	/* set unipro core clock attributes and clear clock divider */
+-	return ufs_qcom_set_core_clk_ctrl(hba, freq);
++	return ufs_qcom_set_core_clk_ctrl(hba, true, freq);
+ }
+ 
+ static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba)
+@@ -1457,7 +1475,7 @@ static int ufs_qcom_clk_scale_down_pre_change(struct ufs_hba *hba)
+ static int ufs_qcom_clk_scale_down_post_change(struct ufs_hba *hba, unsigned long freq)
+ {
+ 	/* set unipro core clock attributes and clear clock divider */
+-	return ufs_qcom_set_core_clk_ctrl(hba, freq);
++	return ufs_qcom_set_core_clk_ctrl(hba, false, freq);
+ }
+ 
+ static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba, bool scale_up,
+@@ -1922,11 +1940,53 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
+ 	return ret;
+ }
+ 
++static unsigned long ufs_qcom_opp_freq_to_clk_freq(struct ufs_hba *hba,
++						   unsigned long freq, char *name)
++{
++	struct ufs_clk_info *clki;
++	struct dev_pm_opp *opp;
++	unsigned long clk_freq;
++	int idx = 0;
++	bool found = false;
++
++	opp = dev_pm_opp_find_freq_exact_indexed(hba->dev, freq, 0, true);
++	if (IS_ERR(opp)) {
++		dev_err(hba->dev, "Failed to find OPP for exact frequency %lu\n", freq);
++		return 0;
++	}
++
++	list_for_each_entry(clki, &hba->clk_list_head, list) {
++		if (!strcmp(clki->name, name)) {
++			found = true;
++			break;
++		}
++
++		idx++;
++	}
++
++	if (!found) {
++		dev_err(hba->dev, "Failed to find clock '%s' in clk list\n", name);
++		dev_pm_opp_put(opp);
++		return 0;
++	}
++
++	clk_freq = dev_pm_opp_get_freq_indexed(opp, idx);
++
++	dev_pm_opp_put(opp);
++
++	return clk_freq;
++}
++
+ static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
+ {
+-	u32 gear = 0;
++	u32 gear = UFS_HS_DONT_CHANGE;
++	unsigned long unipro_freq;
+ 
+-	switch (freq) {
++	if (!hba->use_pm_opp)
++		return gear;
++
++	unipro_freq = ufs_qcom_opp_freq_to_clk_freq(hba, freq, "core_clk_unipro");
++	switch (unipro_freq) {
+ 	case 403000000:
+ 		gear = UFS_HS_G5;
+ 		break;
+@@ -1946,10 +2006,10 @@ static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
+ 		break;
+ 	default:
+ 		dev_err(hba->dev, "%s: Unsupported clock freq : %lu\n", __func__, freq);
+-		break;
++		return UFS_HS_DONT_CHANGE;
+ 	}
+ 
+-	return gear;
++	return min_t(u32, gear, hba->max_pwr_info.info.gear_rx);
+ }
+ 
+ /*
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c
+index 4824a10df07e7c..55f95f41b3b4dd 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -29,7 +29,8 @@
+ unsigned int cdnsp_port_speed(unsigned int port_status)
+ {
+ 	/* Detect gadget speed based on PORTSC register */
+-	if (DEV_SUPERSPEEDPLUS(port_status))
++	if (DEV_SUPERSPEEDPLUS(port_status) ||
++	    DEV_SSP_GEN1x2(port_status) || DEV_SSP_GEN2x2(port_status))
+ 		return USB_SPEED_SUPER_PLUS;
+ 	else if (DEV_SUPERSPEED(port_status))
+ 		return USB_SPEED_SUPER;
+@@ -547,6 +548,7 @@ int cdnsp_wait_for_cmd_compl(struct cdnsp_device *pdev)
+ 	dma_addr_t cmd_deq_dma;
+ 	union cdnsp_trb *event;
+ 	u32 cycle_state;
++	u32 retry = 10;
+ 	int ret, val;
+ 	u64 cmd_dma;
+ 	u32  flags;
+@@ -578,8 +580,23 @@ int cdnsp_wait_for_cmd_compl(struct cdnsp_device *pdev)
+ 		flags = le32_to_cpu(event->event_cmd.flags);
+ 
+ 		/* Check the owner of the TRB. */
+-		if ((flags & TRB_CYCLE) != cycle_state)
++		if ((flags & TRB_CYCLE) != cycle_state) {
++			/*
++			 * Give the controller some extra time to finish the
++			 * command before returning an error code.
++			 * Checking CMD_RING_BUSY is not sufficient because
++			 * this bit is cleared to '0' when the Command
++			 * Descriptor has been executed by the controller,
++			 * not when the command completion event has been
++			 * added to the event ring.
++			 */
++			if (retry--) {
++				udelay(20);
++				continue;
++			}
++
+ 			return -EINVAL;
++		}
+ 
+ 		cmd_dma = le64_to_cpu(event->event_cmd.cmd_trb);
+ 
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index 12534be52f39df..2afa3e558f85ca 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -285,11 +285,15 @@ struct cdnsp_port_regs {
+ #define XDEV_HS			(0x3 << 10)
+ #define XDEV_SS			(0x4 << 10)
+ #define XDEV_SSP		(0x5 << 10)
++#define XDEV_SSP1x2		(0x6 << 10)
++#define XDEV_SSP2x2		(0x7 << 10)
+ #define DEV_UNDEFSPEED(p)	(((p) & DEV_SPEED_MASK) == (0x0 << 10))
+ #define DEV_FULLSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_FS)
+ #define DEV_HIGHSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_HS)
+ #define DEV_SUPERSPEED(p)	(((p) & DEV_SPEED_MASK) == XDEV_SS)
+ #define DEV_SUPERSPEEDPLUS(p)	(((p) & DEV_SPEED_MASK) == XDEV_SSP)
++#define DEV_SSP_GEN1x2(p)	(((p) & DEV_SPEED_MASK) == XDEV_SSP1x2)
++#define DEV_SSP_GEN2x2(p)	(((p) & DEV_SPEED_MASK) == XDEV_SSP2x2)
+ #define DEV_SUPERSPEED_ANY(p)	(((p) & DEV_SPEED_MASK) >= XDEV_SS)
+ #define DEV_PORT_SPEED(p)	(((p) >> 10) & 0x0f)
+ /* Port Link State Write Strobe - set this when changing link state */
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 66f3d9324ba2f3..75de29725a450c 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -565,14 +565,15 @@ static int usbtmc488_ioctl_read_stb(struct usbtmc_file_data *file_data,
+ 
+ 	rv = usbtmc_get_stb(file_data, &stb);
+ 
+-	if (rv > 0) {
+-		srq_asserted = atomic_xchg(&file_data->srq_asserted,
+-					srq_asserted);
+-		if (srq_asserted)
+-			stb |= 0x40; /* Set RQS bit */
++	if (rv < 0)
++		return rv;
++
++	srq_asserted = atomic_xchg(&file_data->srq_asserted, srq_asserted);
++	if (srq_asserted)
++		stb |= 0x40; /* Set RQS bit */
++
++	rv = put_user(stb, (__u8 __user *)arg);
+ 
+-		rv = put_user(stb, (__u8 __user *)arg);
+-	}
+ 	return rv;
+ 
+ }
+@@ -2201,7 +2202,7 @@ static long usbtmc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 
+ 	case USBTMC_IOCTL_GET_STB:
+ 		retval = usbtmc_get_stb(file_data, &tmp_byte);
+-		if (retval > 0)
++		if (!retval)
+ 			retval = put_user(tmp_byte, (__u8 __user *)arg);
+ 		break;
+ 
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 0e1dd6ef60a719..9f19fc7494e022 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -6133,6 +6133,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	struct usb_hub			*parent_hub;
+ 	struct usb_hcd			*hcd = bus_to_hcd(udev->bus);
+ 	struct usb_device_descriptor	descriptor;
++	struct usb_interface		*intf;
+ 	struct usb_host_bos		*bos;
+ 	int				i, j, ret = 0;
+ 	int				port1 = udev->portnum;
+@@ -6190,6 +6191,18 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	if (!udev->actconfig)
+ 		goto done;
+ 
++	/*
++	 * Some devices can't handle setting default altsetting 0 with a
++	 * Set-Interface request. Disable host-side endpoints of those
++	 * interfaces here. Enable and reset them back after the host has set
++	 * up its internal endpoint structures during usb_hcd_alloc_bandwidth()
++	 */
++	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
++		intf = udev->actconfig->interface[i];
++		if (intf->cur_altsetting->desc.bAlternateSetting == 0)
++			usb_disable_interface(udev, intf, true);
++	}
++
+ 	mutex_lock(hcd->bandwidth_mutex);
+ 	ret = usb_hcd_alloc_bandwidth(udev, udev->actconfig, NULL, NULL);
+ 	if (ret < 0) {
+@@ -6221,12 +6234,11 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	 */
+ 	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
+ 		struct usb_host_config *config = udev->actconfig;
+-		struct usb_interface *intf = config->interface[i];
+ 		struct usb_interface_descriptor *desc;
+ 
++		intf = config->interface[i];
+ 		desc = &intf->cur_altsetting->desc;
+ 		if (desc->bAlternateSetting == 0) {
+-			usb_disable_interface(udev, intf, true);
+ 			usb_enable_interface(udev, intf, true);
+ 			ret = 0;
+ 		} else {
+diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c
+index 935c0efea0b640..ea1ce8beb0cbb2 100644
+--- a/drivers/usb/core/usb-acpi.c
++++ b/drivers/usb/core/usb-acpi.c
+@@ -165,6 +165,8 @@ static int usb_acpi_add_usb4_devlink(struct usb_device *udev)
+ 		return 0;
+ 
+ 	hub = usb_hub_to_struct_hub(udev->parent);
++	if (!hub)
++		return 0;
+ 	port_dev = hub->ports[udev->portnum - 1];
+ 
+ 	struct fwnode_handle *nhi_fwnode __free(fwnode_handle) =
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 740311c4fa2496..c7a05f842745bc 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -144,8 +144,8 @@ static struct hid_descriptor hidg_desc = {
+ 	.bcdHID				= cpu_to_le16(0x0101),
+ 	.bCountryCode			= 0x00,
+ 	.bNumDescriptors		= 0x1,
+-	/*.desc[0].bDescriptorType	= DYNAMIC */
+-	/*.desc[0].wDescriptorLenght	= DYNAMIC */
++	/*.rpt_desc.bDescriptorType	= DYNAMIC */
++	/*.rpt_desc.wDescriptorLength	= DYNAMIC */
+ };
+ 
+ /* Super-Speed Support */
+@@ -939,8 +939,8 @@ static int hidg_setup(struct usb_function *f,
+ 			struct hid_descriptor hidg_desc_copy = hidg_desc;
+ 
+ 			VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: HID\n");
+-			hidg_desc_copy.desc[0].bDescriptorType = HID_DT_REPORT;
+-			hidg_desc_copy.desc[0].wDescriptorLength =
++			hidg_desc_copy.rpt_desc.bDescriptorType = HID_DT_REPORT;
++			hidg_desc_copy.rpt_desc.wDescriptorLength =
+ 				cpu_to_le16(hidg->report_desc_length);
+ 
+ 			length = min_t(unsigned short, length,
+@@ -1210,8 +1210,8 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	 * We can use the hidg_desc struct here, but we should not rely on
+ 	 * its content staying unchanged after returning from this function.
+ 	 */
+-	hidg_desc.desc[0].bDescriptorType = HID_DT_REPORT;
+-	hidg_desc.desc[0].wDescriptorLength =
++	hidg_desc.rpt_desc.bDescriptorType = HID_DT_REPORT;
++	hidg_desc.rpt_desc.wDescriptorLength =
+ 		cpu_to_le16(hidg->report_desc_length);
+ 
+ 	hidg_hs_in_ep_desc.bEndpointAddress =
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 4b3d5075621aa0..d709e24c1fd422 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1570,7 +1570,7 @@ static int gadget_match_driver(struct device *dev, const struct device_driver *d
+ {
+ 	struct usb_gadget *gadget = dev_to_usb_gadget(dev);
+ 	struct usb_udc *udc = gadget->udc;
+-	struct usb_gadget_driver *driver = container_of(drv,
++	const struct usb_gadget_driver *driver = container_of(drv,
+ 			struct usb_gadget_driver, driver);
+ 
+ 	/* If the driver specifies a udc_name, it must match the UDC's name */
+diff --git a/drivers/usb/misc/onboard_usb_dev.c b/drivers/usb/misc/onboard_usb_dev.c
+index f5372dfa241a9c..86f25bcb64253c 100644
+--- a/drivers/usb/misc/onboard_usb_dev.c
++++ b/drivers/usb/misc/onboard_usb_dev.c
+@@ -36,9 +36,10 @@
+ #define USB5744_CMD_CREG_ACCESS			0x99
+ #define USB5744_CMD_CREG_ACCESS_LSB		0x37
+ #define USB5744_CREG_MEM_ADDR			0x00
++#define USB5744_CREG_MEM_RD_ADDR		0x04
+ #define USB5744_CREG_WRITE			0x00
+-#define USB5744_CREG_RUNTIMEFLAGS2		0x41
+-#define USB5744_CREG_RUNTIMEFLAGS2_LSB		0x1D
++#define USB5744_CREG_READ			0x01
++#define USB5744_CREG_RUNTIMEFLAGS2		0x411D
+ #define USB5744_CREG_BYPASS_UDC_SUSPEND		BIT(3)
+ 
+ static void onboard_dev_attach_usb_driver(struct work_struct *work);
+@@ -309,11 +310,88 @@ static void onboard_dev_attach_usb_driver(struct work_struct *work)
+ 		pr_err("Failed to attach USB driver: %pe\n", ERR_PTR(err));
+ }
+ 
++#if IS_ENABLED(CONFIG_USB_ONBOARD_DEV_USB5744)
++static int onboard_dev_5744_i2c_read_byte(struct i2c_client *client, u16 addr, u8 *data)
++{
++	struct i2c_msg msg[2];
++	u8 rd_buf[3];
++	int ret;
++
++	u8 wr_buf[7] = {0, USB5744_CREG_MEM_ADDR, 4,
++			USB5744_CREG_READ, 1,
++			addr >> 8 & 0xff,
++			addr & 0xff};
++	msg[0].addr = client->addr;
++	msg[0].flags = 0;
++	msg[0].len = sizeof(wr_buf);
++	msg[0].buf = wr_buf;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	wr_buf[0] = USB5744_CMD_CREG_ACCESS;
++	wr_buf[1] = USB5744_CMD_CREG_ACCESS_LSB;
++	wr_buf[2] = 0;
++	msg[0].len = 3;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	wr_buf[0] = 0;
++	wr_buf[1] = USB5744_CREG_MEM_RD_ADDR;
++	msg[0].len = 2;
++
++	msg[1].addr = client->addr;
++	msg[1].flags = I2C_M_RD;
++	msg[1].len = 2;
++	msg[1].buf = rd_buf;
++
++	ret = i2c_transfer(client->adapter, msg, 2);
++	if (ret < 0)
++		return ret;
++	*data = rd_buf[1];
++
++	return 0;
++}
++
++static int onboard_dev_5744_i2c_write_byte(struct i2c_client *client, u16 addr, u8 data)
++{
++	struct i2c_msg msg[2];
++	int ret;
++
++	u8 wr_buf[8] = {0, USB5744_CREG_MEM_ADDR, 5,
++			USB5744_CREG_WRITE, 1,
++			addr >> 8 & 0xff,
++			addr & 0xff,
++			data};
++	msg[0].addr = client->addr;
++	msg[0].flags = 0;
++	msg[0].len = sizeof(wr_buf);
++	msg[0].buf = wr_buf;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	msg[0].len = 3;
++	wr_buf[0] = USB5744_CMD_CREG_ACCESS;
++	wr_buf[1] = USB5744_CMD_CREG_ACCESS_LSB;
++	wr_buf[2] = 0;
++
++	ret = i2c_transfer(client->adapter, msg, 1);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
+ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ {
+-#if IS_ENABLED(CONFIG_USB_ONBOARD_DEV_USB5744)
+ 	struct device *dev = &client->dev;
+ 	int ret;
++	u8 reg;
+ 
+ 	/*
+ 	 * Set BYPASS_UDC_SUSPEND bit to ensure MCU is always enabled
+@@ -321,20 +399,16 @@ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ 	 * The command writes 5 bytes to memory and single data byte in
+ 	 * configuration register.
+ 	 */
+-	char wr_buf[7] = {USB5744_CREG_MEM_ADDR, 5,
+-			  USB5744_CREG_WRITE, 1,
+-			  USB5744_CREG_RUNTIMEFLAGS2,
+-			  USB5744_CREG_RUNTIMEFLAGS2_LSB,
+-			  USB5744_CREG_BYPASS_UDC_SUSPEND};
+-
+-	ret = i2c_smbus_write_block_data(client, 0, sizeof(wr_buf), wr_buf);
++	ret = onboard_dev_5744_i2c_read_byte(client,
++					     USB5744_CREG_RUNTIMEFLAGS2, &reg);
+ 	if (ret)
+-		return dev_err_probe(dev, ret, "BYPASS_UDC_SUSPEND bit configuration failed\n");
++		return dev_err_probe(dev, ret, "CREG_RUNTIMEFLAGS2 read failed\n");
+ 
+-	ret = i2c_smbus_write_word_data(client, USB5744_CMD_CREG_ACCESS,
+-					USB5744_CMD_CREG_ACCESS_LSB);
++	reg |= USB5744_CREG_BYPASS_UDC_SUSPEND;
++	ret = onboard_dev_5744_i2c_write_byte(client,
++					      USB5744_CREG_RUNTIMEFLAGS2, reg);
+ 	if (ret)
+-		return dev_err_probe(dev, ret, "Configuration Register Access Command failed\n");
++		return dev_err_probe(dev, ret, "BYPASS_UDC_SUSPEND bit configuration failed\n");
+ 
+ 	/* Send SMBus command to boot hub. */
+ 	ret = i2c_smbus_write_word_data(client, USB5744_CMD_ATTACH,
+@@ -343,10 +417,13 @@ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ 		return dev_err_probe(dev, ret, "USB Attach with SMBus command failed\n");
+ 
+ 	return ret;
++}
+ #else
++static int onboard_dev_5744_i2c_init(struct i2c_client *client)
++{
+ 	return -ENODEV;
+-#endif
+ }
++#endif
+ 
+ static int onboard_dev_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/usb/renesas_usbhs/common.c b/drivers/usb/renesas_usbhs/common.c
+index 4b35ef216125c7..16692e72b73650 100644
+--- a/drivers/usb/renesas_usbhs/common.c
++++ b/drivers/usb/renesas_usbhs/common.c
+@@ -685,10 +685,29 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	INIT_DELAYED_WORK(&priv->notify_hotplug_work, usbhsc_notify_hotplug);
+ 	spin_lock_init(usbhs_priv_to_lock(priv));
+ 
++	/*
++	 * Acquire clocks and enable power management (PM) early in the
++	 * probe process, as the driver accesses registers during
++	 * initialization. Ensure the device is active before proceeding.
++	 */
++	pm_runtime_enable(dev);
++
++	ret = usbhsc_clk_get(dev, priv);
++	if (ret)
++		goto probe_pm_disable;
++
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret)
++		goto probe_clk_put;
++
++	ret = usbhsc_clk_prepare_enable(priv);
++	if (ret)
++		goto probe_pm_put;
++
+ 	/* call pipe and module init */
+ 	ret = usbhs_pipe_probe(priv);
+ 	if (ret < 0)
+-		return ret;
++		goto probe_clk_dis_unprepare;
+ 
+ 	ret = usbhs_fifo_probe(priv);
+ 	if (ret < 0)
+@@ -705,10 +724,6 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto probe_fail_rst;
+ 
+-	ret = usbhsc_clk_get(dev, priv);
+-	if (ret)
+-		goto probe_fail_clks;
+-
+ 	/*
+ 	 * device reset here because the
+ 	 * USB device might have been used in the boot loader.
+@@ -721,7 +736,7 @@ static int usbhs_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_warn(dev, "USB function not selected (GPIO)\n");
+ 			ret = -ENOTSUPP;
+-			goto probe_end_mod_exit;
++			goto probe_assert_rest;
+ 		}
+ 	}
+ 
+@@ -735,14 +750,19 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	ret = usbhs_platform_call(priv, hardware_init, pdev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "platform init failed.\n");
+-		goto probe_end_mod_exit;
++		goto probe_assert_rest;
+ 	}
+ 
+ 	/* reset phy for connection */
+ 	usbhs_platform_call(priv, phy_reset, pdev);
+ 
+-	/* power control */
+-	pm_runtime_enable(dev);
++	/*
++	 * Disable the clocks that were enabled earlier in the probe path,
++	 * and let the driver handle the clocks beyond this point.
++	 */
++	usbhsc_clk_disable_unprepare(priv);
++	pm_runtime_put(dev);
++
+ 	if (!usbhs_get_dparam(priv, runtime_pwctrl)) {
+ 		usbhsc_power_ctrl(priv, 1);
+ 		usbhs_mod_autonomy_mode(priv);
+@@ -759,9 +779,7 @@ static int usbhs_probe(struct platform_device *pdev)
+ 
+ 	return ret;
+ 
+-probe_end_mod_exit:
+-	usbhsc_clk_put(priv);
+-probe_fail_clks:
++probe_assert_rest:
+ 	reset_control_assert(priv->rsts);
+ probe_fail_rst:
+ 	usbhs_mod_remove(priv);
+@@ -769,6 +787,14 @@ static int usbhs_probe(struct platform_device *pdev)
+ 	usbhs_fifo_remove(priv);
+ probe_end_pipe_exit:
+ 	usbhs_pipe_remove(priv);
++probe_clk_dis_unprepare:
++	usbhsc_clk_disable_unprepare(priv);
++probe_pm_put:
++	pm_runtime_put(dev);
++probe_clk_put:
++	usbhsc_clk_put(priv);
++probe_pm_disable:
++	pm_runtime_disable(dev);
+ 
+ 	dev_info(dev, "probe failed (%d)\n", ret);
+ 
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index ae90688d23e400..a884cec9ab7e88 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -449,7 +449,7 @@ ATTRIBUTE_GROUPS(typec);
+ 
+ static int typec_match(struct device *dev, const struct device_driver *driver)
+ {
+-	struct typec_altmode_driver *drv = to_altmode_driver(driver);
++	const struct typec_altmode_driver *drv = to_altmode_driver(driver);
+ 	struct typec_altmode *altmode = to_typec_altmode(dev);
+ 	const struct typec_device_id *id;
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim_core.c b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+index fd1b8059336764..648311f5e3cf13 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim_core.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+@@ -166,7 +166,8 @@ static void process_rx(struct max_tcpci_chip *chip, u16 status)
+ 		return;
+ 	}
+ 
+-	if (count > sizeof(struct pd_message) || count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
++	if (count > sizeof(struct pd_message) + 1 ||
++	    count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
+ 		dev_err(chip->dev, "Invalid TCPC_RX_BYTE_CNT %d\n", count);
+ 		return;
+ 	}
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 8adf6f95463304..214d45f8e55c21 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -596,6 +596,15 @@ struct pd_rx_event {
+ 	enum tcpm_transmit_type rx_sop_type;
+ };
+ 
++struct altmode_vdm_event {
++	struct kthread_work work;
++	struct tcpm_port *port;
++	u32 header;
++	u32 *data;
++	int cnt;
++	enum tcpm_transmit_type tx_sop_type;
++};
++
+ static const char * const pd_rev[] = {
+ 	[PD_REV10]		= "rev1",
+ 	[PD_REV20]		= "rev2",
+@@ -1608,18 +1617,68 @@ static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
+ 	mod_vdm_delayed_work(port, 0);
+ }
+ 
+-static void tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
+-				    const u32 *data, int cnt, enum tcpm_transmit_type tx_sop_type)
++static void tcpm_queue_vdm_work(struct kthread_work *work)
+ {
+-	if (port->state != SRC_READY && port->state != SNK_READY &&
+-	    port->state != SRC_VDM_IDENTITY_REQUEST)
+-		return;
++	struct altmode_vdm_event *event = container_of(work,
++						       struct altmode_vdm_event,
++						       work);
++	struct tcpm_port *port = event->port;
+ 
+ 	mutex_lock(&port->lock);
+-	tcpm_queue_vdm(port, header, data, cnt, tx_sop_type);
++	if (port->state != SRC_READY && port->state != SNK_READY &&
++	    port->state != SRC_VDM_IDENTITY_REQUEST) {
++		tcpm_log_force(port, "dropping altmode_vdm_event");
++		goto port_unlock;
++	}
++
++	tcpm_queue_vdm(port, event->header, event->data, event->cnt, event->tx_sop_type);
++
++port_unlock:
++	kfree(event->data);
++	kfree(event);
+ 	mutex_unlock(&port->lock);
+ }
+ 
++static int tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
++				   const u32 *data, int cnt, enum tcpm_transmit_type tx_sop_type)
++{
++	struct altmode_vdm_event *event;
++	u32 *data_cpy;
++	int ret = -ENOMEM;
++
++	event = kzalloc(sizeof(*event), GFP_KERNEL);
++	if (!event)
++		goto err_event;
++
++	data_cpy = kcalloc(cnt, sizeof(u32), GFP_KERNEL);
++	if (!data_cpy)
++		goto err_data;
++
++	kthread_init_work(&event->work, tcpm_queue_vdm_work);
++	event->port = port;
++	event->header = header;
++	memcpy(data_cpy, data, sizeof(u32) * cnt);
++	event->data = data_cpy;
++	event->cnt = cnt;
++	event->tx_sop_type = tx_sop_type;
++
++	ret = kthread_queue_work(port->wq, &event->work);
++	if (!ret) {
++		ret = -EBUSY;
++		goto err_queue;
++	}
++
++	return 0;
++
++err_queue:
++	kfree(data_cpy);
++err_data:
++	kfree(event);
++err_event:
++	tcpm_log_force(port, "failed to queue altmode vdm, err:%d", ret);
++	return ret;
++}
++
+ static void svdm_consume_identity(struct tcpm_port *port, const u32 *p, int cnt)
+ {
+ 	u32 vdo = p[VDO_INDEX_IDH];
+@@ -2830,8 +2889,7 @@ static int tcpm_altmode_enter(struct typec_altmode *altmode, u32 *vdo)
+ 	header = VDO(altmode->svid, vdo ? 2 : 1, svdm_version, CMD_ENTER_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP);
+ }
+ 
+ static int tcpm_altmode_exit(struct typec_altmode *altmode)
+@@ -2847,8 +2905,7 @@ static int tcpm_altmode_exit(struct typec_altmode *altmode)
+ 	header = VDO(altmode->svid, 1, svdm_version, CMD_EXIT_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP);
+ }
+ 
+ static int tcpm_altmode_vdm(struct typec_altmode *altmode,
+@@ -2856,9 +2913,7 @@ static int tcpm_altmode_vdm(struct typec_altmode *altmode,
+ {
+ 	struct tcpm_port *port = typec_altmode_get_drvdata(altmode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP);
+-
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP);
+ }
+ 
+ static const struct typec_altmode_ops tcpm_altmode_ops = {
+@@ -2882,8 +2937,7 @@ static int tcpm_cable_altmode_enter(struct typec_altmode *altmode, enum typec_pl
+ 	header = VDO(altmode->svid, vdo ? 2 : 1, svdm_version, CMD_ENTER_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP_PRIME);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP_PRIME);
+ }
+ 
+ static int tcpm_cable_altmode_exit(struct typec_altmode *altmode, enum typec_plug_index sop)
+@@ -2899,8 +2953,7 @@ static int tcpm_cable_altmode_exit(struct typec_altmode *altmode, enum typec_plu
+ 	header = VDO(altmode->svid, 1, svdm_version, CMD_EXIT_MODE);
+ 	header |= VDO_OPOS(altmode->mode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP_PRIME);
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP_PRIME);
+ }
+ 
+ static int tcpm_cable_altmode_vdm(struct typec_altmode *altmode, enum typec_plug_index sop,
+@@ -2908,9 +2961,7 @@ static int tcpm_cable_altmode_vdm(struct typec_altmode *altmode, enum typec_plug
+ {
+ 	struct tcpm_port *port = typec_altmode_get_drvdata(altmode);
+ 
+-	tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP_PRIME);
+-
+-	return 0;
++	return tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP_PRIME);
+ }
+ 
+ static const struct typec_cable_ops tcpm_cable_ops = {
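
The tcpm rework above defers VDM queueing to the port's kthread worker instead of grabbing port->lock directly in the altmode hooks, and it copies the caller's VDO buffer because that buffer is not guaranteed to outlive the call. The defer-and-copy skeleton, reduced to a sketch with a hypothetical event type:

struct deferred_vdm {
	struct kthread_work work;
	u32 *data;		/* owned copy, freed by the worker */
	int cnt;
};

static void deferred_vdm_fn(struct kthread_work *work)
{
	struct deferred_vdm *ev = container_of(work, struct deferred_vdm, work);

	/* ... take the port lock and queue the VDM here ... */
	kfree(ev->data);
	kfree(ev);
}

static int queue_deferred_vdm(struct kthread_worker *wq, const u32 *vdo, int cnt)
{
	struct deferred_vdm *ev = kzalloc(sizeof(*ev), GFP_KERNEL);

	if (!ev)
		return -ENOMEM;
	ev->data = kmemdup(vdo, cnt * sizeof(u32), GFP_KERNEL);
	if (!ev->data) {
		kfree(ev);
		return -ENOMEM;
	}
	ev->cnt = cnt;
	kthread_init_work(&ev->work, deferred_vdm_fn);
	/* kthread_queue_work() returns false if the work was already queued */
	return kthread_queue_work(wq, &ev->work) ? 0 : -EBUSY;
}
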
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index 451c639299eb3b..d12a350440d3ca 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -350,6 +350,32 @@ static int vf_qm_func_stop(struct hisi_qm *qm)
+ 	return hisi_qm_mb(qm, QM_MB_CMD_PAUSE_QM, 0, 0, 0);
+ }
+ 
++static int vf_qm_version_check(struct acc_vf_data *vf_data, struct device *dev)
++{
++	switch (vf_data->acc_magic) {
++	case ACC_DEV_MAGIC_V2:
++		if (vf_data->major_ver != ACC_DRV_MAJOR_VER) {
++			dev_info(dev, "migration driver version<%u.%u> not match!\n",
++				 vf_data->major_ver, vf_data->minor_ver);
++			return -EINVAL;
++		}
++		break;
++	case ACC_DEV_MAGIC_V1:
++		/* Correct dma address */
++		vf_data->eqe_dma = vf_data->qm_eqc_dw[QM_XQC_ADDR_HIGH];
++		vf_data->eqe_dma <<= QM_XQC_ADDR_OFFSET;
++		vf_data->eqe_dma |= vf_data->qm_eqc_dw[QM_XQC_ADDR_LOW];
++		vf_data->aeqe_dma = vf_data->qm_aeqc_dw[QM_XQC_ADDR_HIGH];
++		vf_data->aeqe_dma <<= QM_XQC_ADDR_OFFSET;
++		vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[QM_XQC_ADDR_LOW];
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static int vf_qm_check_match(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 			     struct hisi_acc_vf_migration_file *migf)
+ {
+@@ -363,7 +389,8 @@ static int vf_qm_check_match(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	if (migf->total_length < QM_MATCH_SIZE || hisi_acc_vdev->match_done)
+ 		return 0;
+ 
+-	if (vf_data->acc_magic != ACC_DEV_MAGIC) {
++	ret = vf_qm_version_check(vf_data, dev);
++	if (ret) {
+ 		dev_err(dev, "failed to match ACC_DEV_MAGIC\n");
+ 		return -EINVAL;
+ 	}
+@@ -399,13 +426,6 @@ static int vf_qm_check_match(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = qm_write_regs(vf_qm, QM_VF_STATE, &vf_data->vf_qm_state, 1);
+-	if (ret) {
+-		dev_err(dev, "failed to write QM_VF_STATE\n");
+-		return ret;
+-	}
+-
+-	hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
+ 	hisi_acc_vdev->match_done = true;
+ 	return 0;
+ }
+@@ -418,7 +438,9 @@ static int vf_qm_get_match_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	int vf_id = hisi_acc_vdev->vf_id;
+ 	int ret;
+ 
+-	vf_data->acc_magic = ACC_DEV_MAGIC;
++	vf_data->acc_magic = ACC_DEV_MAGIC_V2;
++	vf_data->major_ver = ACC_DRV_MAJOR_VER;
++	vf_data->minor_ver = ACC_DRV_MINOR_VER;
+ 	/* Save device id */
+ 	vf_data->dev_id = hisi_acc_vdev->vf_dev->device;
+ 
+@@ -441,6 +463,19 @@ static int vf_qm_get_match_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	return 0;
+ }
+ 
++static void vf_qm_xeqc_save(struct hisi_qm *qm,
++			    struct hisi_acc_vf_migration_file *migf)
++{
++	struct acc_vf_data *vf_data = &migf->vf_data;
++	u16 eq_head, aeq_head;
++
++	eq_head = vf_data->qm_eqc_dw[0] & 0xFFFF;
++	qm_db(qm, 0, QM_DOORBELL_CMD_EQ, eq_head, 0);
++
++	aeq_head = vf_data->qm_aeqc_dw[0] & 0xFFFF;
++	qm_db(qm, 0, QM_DOORBELL_CMD_AEQ, aeq_head, 0);
++}
++
+ static int vf_qm_load_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 			   struct hisi_acc_vf_migration_file *migf)
+ {
+@@ -456,6 +491,20 @@ static int vf_qm_load_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	if (migf->total_length < sizeof(struct acc_vf_data))
+ 		return -EINVAL;
+ 
++	if (!vf_data->eqe_dma || !vf_data->aeqe_dma ||
++	    !vf_data->sqc_dma || !vf_data->cqc_dma) {
++		dev_info(dev, "resume dma addr is NULL!\n");
++		hisi_acc_vdev->vf_qm_state = QM_NOT_READY;
++		return 0;
++	}
++
++	ret = qm_write_regs(qm, QM_VF_STATE, &vf_data->vf_qm_state, 1);
++	if (ret) {
++		dev_err(dev, "failed to write QM_VF_STATE\n");
++		return -EINVAL;
++	}
++	hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
++
+ 	qm->eqe_dma = vf_data->eqe_dma;
+ 	qm->aeqe_dma = vf_data->aeqe_dma;
+ 	qm->sqc_dma = vf_data->sqc_dma;
+@@ -496,12 +545,12 @@ static int vf_qm_read_data(struct hisi_qm *vf_qm, struct acc_vf_data *vf_data)
+ 		return -EINVAL;
+ 
+ 	/* Every reg is 32 bit, the dma address is 64 bit. */
+-	vf_data->eqe_dma = vf_data->qm_eqc_dw[1];
++	vf_data->eqe_dma = vf_data->qm_eqc_dw[QM_XQC_ADDR_HIGH];
+ 	vf_data->eqe_dma <<= QM_XQC_ADDR_OFFSET;
+-	vf_data->eqe_dma |= vf_data->qm_eqc_dw[0];
+-	vf_data->aeqe_dma = vf_data->qm_aeqc_dw[1];
++	vf_data->eqe_dma |= vf_data->qm_eqc_dw[QM_XQC_ADDR_LOW];
++	vf_data->aeqe_dma = vf_data->qm_aeqc_dw[QM_XQC_ADDR_HIGH];
+ 	vf_data->aeqe_dma <<= QM_XQC_ADDR_OFFSET;
+-	vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[0];
++	vf_data->aeqe_dma |= vf_data->qm_aeqc_dw[QM_XQC_ADDR_LOW];
+ 
+ 	/* Through SQC_BT/CQC_BT to get sqc and cqc address */
+ 	ret = qm_get_sqc(vf_qm, &vf_data->sqc_dma);
+@@ -524,7 +573,6 @@ static int vf_qm_state_save(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ {
+ 	struct acc_vf_data *vf_data = &migf->vf_data;
+ 	struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm;
+-	struct device *dev = &vf_qm->pdev->dev;
+ 	int ret;
+ 
+ 	if (unlikely(qm_wait_dev_not_ready(vf_qm))) {
+@@ -538,17 +586,14 @@ static int vf_qm_state_save(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ 	vf_data->vf_qm_state = QM_READY;
+ 	hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
+ 
+-	ret = vf_qm_cache_wb(vf_qm);
+-	if (ret) {
+-		dev_err(dev, "failed to writeback QM Cache!\n");
+-		return ret;
+-	}
+-
+ 	ret = vf_qm_read_data(vf_qm, vf_data);
+ 	if (ret)
+ 		return -EINVAL;
+ 
+ 	migf->total_length = sizeof(struct acc_vf_data);
++	/* Save eqc and aeqc interrupt information */
++	vf_qm_xeqc_save(vf_qm, migf);
++
+ 	return 0;
+ }
+ 
+@@ -967,6 +1012,13 @@ static int hisi_acc_vf_stop_device(struct hisi_acc_vf_core_device *hisi_acc_vdev
+ 		dev_err(dev, "failed to check QM INT state!\n");
+ 		return ret;
+ 	}
++
++	ret = vf_qm_cache_wb(vf_qm);
++	if (ret) {
++		dev_err(dev, "failed to writeback QM cache!\n");
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1463,6 +1515,7 @@ static void hisi_acc_vfio_pci_close_device(struct vfio_device *core_vdev)
+ 	struct hisi_acc_vf_core_device *hisi_acc_vdev = hisi_acc_get_vf_dev(core_vdev);
+ 	struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm;
+ 
++	hisi_acc_vf_disable_fds(hisi_acc_vdev);
+ 	mutex_lock(&hisi_acc_vdev->open_mutex);
+ 	hisi_acc_vdev->dev_opened = false;
+ 	iounmap(vf_qm->io_base);
+@@ -1485,6 +1538,7 @@ static int hisi_acc_vfio_pci_migrn_init_dev(struct vfio_device *core_vdev)
+ 	hisi_acc_vdev->vf_id = pci_iov_vf_id(pdev) + 1;
+ 	hisi_acc_vdev->pf_qm = pf_qm;
+ 	hisi_acc_vdev->vf_dev = pdev;
++	hisi_acc_vdev->vf_qm_state = QM_NOT_READY;
+ 	mutex_init(&hisi_acc_vdev->state_mutex);
+ 	mutex_init(&hisi_acc_vdev->open_mutex);
+ 
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
+index 245d7537b2bcd4..91002ceeebc18a 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.h
+@@ -39,6 +39,9 @@
+ #define QM_REG_ADDR_OFFSET	0x0004
+ 
+ #define QM_XQC_ADDR_OFFSET	32U
++#define QM_XQC_ADDR_LOW	0x1
++#define QM_XQC_ADDR_HIGH	0x2
++
+ #define QM_VF_AEQ_INT_MASK	0x0004
+ #define QM_VF_EQ_INT_MASK	0x000c
+ #define QM_IFC_INT_SOURCE_V	0x0020
+@@ -50,10 +53,15 @@
+ #define QM_EQC_DW0		0X8000
+ #define QM_AEQC_DW0		0X8020
+ 
++#define ACC_DRV_MAJOR_VER 1
++#define ACC_DRV_MINOR_VER 0
++
++#define ACC_DEV_MAGIC_V1	0XCDCDCDCDFEEDAACC
++#define ACC_DEV_MAGIC_V2	0xAACCFEEDDECADEDE
++
+ struct acc_vf_data {
+ #define QM_MATCH_SIZE offsetofend(struct acc_vf_data, qm_rsv_state)
+ 	/* QM match information */
+-#define ACC_DEV_MAGIC	0XCDCDCDCDFEEDAACC
+ 	u64 acc_magic;
+ 	u32 qp_num;
+ 	u32 dev_id;
+@@ -61,7 +69,9 @@ struct acc_vf_data {
+ 	u32 qp_base;
+ 	u32 vf_qm_state;
+ 	/* QM reserved match information */
+-	u32 qm_rsv_state[3];
++	u16 major_ver;
++	u16 minor_ver;
++	u32 qm_rsv_state[2];
+ 
+ 	/* QM RW regs */
+ 	u32 aeq_int_mask;
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 0ac56072af9f23..ba5d91e576af16 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -293,7 +293,7 @@ static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+ 			struct rb_node *p;
+ 
+ 			for (p = rb_prev(n); p; p = rb_prev(p)) {
+-				struct vfio_dma *dma = rb_entry(n,
++				struct vfio_dma *dma = rb_entry(p,
+ 							struct vfio_dma, node);
+ 
+ 				vfio_dma_bitmap_free(dma);
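
The vfio one-character fix matters because the loop variable is p, not n: with rb_entry(n, ...) every pass resolved to the same starting node, freeing its bitmap repeatedly and leaking all the others. The corrected reverse walk, as a shape-only sketch (struct record is hypothetical):

struct record {
	struct rb_node node;
	unsigned long *bitmap;
};

static void free_preceding(struct rb_node *n)
{
	struct rb_node *p;

	for (p = rb_prev(n); p; p = rb_prev(p)) {
		struct record *rec = rb_entry(p, struct record, node);

		kfree(rec->bitmap);	/* entry derived from p, not n */
	}
}
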
+diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
+index 9afe701b2a1b64..a63bb42c8f8b03 100644
+--- a/drivers/video/backlight/qcom-wled.c
++++ b/drivers/video/backlight/qcom-wled.c
+@@ -1406,9 +1406,11 @@ static int wled_configure(struct wled *wled)
+ 	wled->ctrl_addr = be32_to_cpu(*prop_addr);
+ 
+ 	rc = of_property_read_string(dev->of_node, "label", &wled->name);
+-	if (rc)
++	if (rc) {
+ 		wled->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node);
+-
++		if (!wled->name)
++			return -ENOMEM;
++	}
+ 	switch (wled->version) {
+ 	case 3:
+ 		u32_opts = wled3_opts;
+diff --git a/drivers/video/fbdev/core/fbcvt.c b/drivers/video/fbdev/core/fbcvt.c
+index 64843464c66135..cd3821bd82e566 100644
+--- a/drivers/video/fbdev/core/fbcvt.c
++++ b/drivers/video/fbdev/core/fbcvt.c
+@@ -312,7 +312,7 @@ int fb_find_mode_cvt(struct fb_videomode *mode, int margins, int rb)
+ 	cvt.f_refresh = cvt.refresh;
+ 	cvt.interlace = 1;
+ 
+-	if (!cvt.xres || !cvt.yres || !cvt.refresh) {
++	if (!cvt.xres || !cvt.yres || !cvt.refresh || cvt.f_refresh > INT_MAX) {
+ 		printk(KERN_INFO "fbcvt: Invalid input parameters\n");
+ 		return 1;
+ 	}
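
The extra cvt.f_refresh > INT_MAX condition rejects refresh values that would wrap
negative once they reach signed arithmetic further down; that reasoning is inferred from
the guard itself. A tiny standalone model of the validation:

/* Illustrative model of the parameter check above: zero dimensions or
 * a refresh above INT_MAX are rejected before any timing math runs. */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

static int validate_cvt(uint32_t xres, uint32_t yres, uint32_t f_refresh)
{
	if (!xres || !yres || !f_refresh || f_refresh > INT_MAX) {
		fprintf(stderr, "fbcvt: Invalid input parameters\n");
		return 1;
	}
	return 0;
}

int main(void)
{
	printf("%d\n", validate_cvt(1920, 1080, 60));          /* 0: ok */
	printf("%d\n", validate_cvt(1920, 1080, 3000000000u)); /* 1: rejected */
	return 0;
}
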
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index d50fe030d82534..7182f43ed05515 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -48,6 +48,7 @@ void vp_modern_avq_done(struct virtqueue *vq)
+ {
+ 	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+ 	struct virtio_pci_admin_vq *admin_vq = &vp_dev->admin_vq;
++	unsigned int status_size = sizeof(struct virtio_admin_cmd_status);
+ 	struct virtio_admin_cmd *cmd;
+ 	unsigned long flags;
+ 	unsigned int len;
+@@ -56,7 +57,17 @@ void vp_modern_avq_done(struct virtqueue *vq)
+ 	do {
+ 		virtqueue_disable_cb(vq);
+ 		while ((cmd = virtqueue_get_buf(vq, &len))) {
+-			cmd->result_sg_size = len;
++			/* If the number of bytes written by the device is less
++			 * than the size of struct virtio_admin_cmd_status, the
++			 * remaining status bytes will remain zero-initialized,
++			 * since the buffer was zeroed during allocation.
++			 * In this case, set the size of command_specific_result
++			 * to 0.
++			 */
++			if (len < status_size)
++				cmd->result_sg_size = 0;
++			else
++				cmd->result_sg_size = len - status_size;
+ 			complete(&cmd->completion);
+ 		}
+ 	} while (!virtqueue_enable_cb(vq));
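
The comment block above spells out the accounting: the device writes a fixed status
header first, and only bytes beyond it are command-specific result. A userspace model of
that calculation; the struct is meant to mirror the 8-byte status layout from the virtio
spec, and result_size() is our own helper name:

/* Model of the length fix above: a short write carries no
 * command-specific result, otherwise the header is subtracted. */
#include <stdint.h>
#include <stdio.h>

struct virtio_admin_cmd_status {
	uint16_t status;
	uint16_t status_qualifier;
	uint8_t reserved[4];
};

static unsigned int result_size(unsigned int len)
{
	unsigned int status_size = sizeof(struct virtio_admin_cmd_status);

	/* Below status_size, the zero-initialized status stays in place
	 * and there is no command-specific result at all. */
	return len < status_size ? 0 : len - status_size;
}

int main(void)
{
	printf("%u\n", result_size(2));	 /* 0 */
	printf("%u\n", result_size(24)); /* 16 */
	return 0;
}
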
+diff --git a/drivers/watchdog/exar_wdt.c b/drivers/watchdog/exar_wdt.c
+index 7c61ff34327116..c2e3bb08df899a 100644
+--- a/drivers/watchdog/exar_wdt.c
++++ b/drivers/watchdog/exar_wdt.c
+@@ -221,7 +221,7 @@ static const struct watchdog_info exar_wdt_info = {
+ 	.options	= WDIOF_KEEPALIVEPING |
+ 			  WDIOF_SETTIMEOUT |
+ 			  WDIOF_MAGICCLOSE,
+-	.identity	= "Exar/MaxLinear XR28V38x Watchdog",
++	.identity	= "Exar XR28V38x Watchdog",
+ };
+ 
+ static const struct watchdog_ops exar_wdt_ops = {
+diff --git a/drivers/watchdog/lenovo_se30_wdt.c b/drivers/watchdog/lenovo_se30_wdt.c
+index 024b842499b368..1c73bb7eeeeed1 100644
+--- a/drivers/watchdog/lenovo_se30_wdt.c
++++ b/drivers/watchdog/lenovo_se30_wdt.c
+@@ -271,6 +271,8 @@ static int lenovo_se30_wdt_probe(struct platform_device *pdev)
+ 		return -EBUSY;
+ 
+ 	priv->shm_base_addr = devm_ioremap(dev, base_phys, SHM_WIN_SIZE);
++	if (!priv->shm_base_addr)
++		return -ENOMEM;
+ 
+ 	priv->wdt_cfg.mod = WDT_MODULE;
+ 	priv->wdt_cfg.idx = WDT_CFG_INDEX;
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 8c852807ba1c10..2de37dcd75566f 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -704,15 +704,18 @@ static int __init balloon_add_regions(void)
+ 
+ 		/*
+ 		 * Extra regions are accounted for in the physmap, but need
+-		 * decreasing from current_pages to balloon down the initial
+-		 * allocation, because they are already accounted for in
+-		 * total_pages.
++		 * decreasing from current_pages and target_pages to balloon
++		 * down the initial allocation, because they are already
++		 * accounted for in total_pages.
+ 		 */
+-		if (extra_pfn_end - start_pfn >= balloon_stats.current_pages) {
++		pages = extra_pfn_end - start_pfn;
++		if (pages >= balloon_stats.current_pages ||
++		    pages >= balloon_stats.target_pages) {
+ 			WARN(1, "Extra pages underflow current target");
+ 			return -ERANGE;
+ 		}
+-		balloon_stats.current_pages -= extra_pfn_end - start_pfn;
++		balloon_stats.current_pages -= pages;
++		balloon_stats.target_pages -= pages;
+ 	}
+ 
+ 	return 0;
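
Since extra-region pages are already counted in total_pages, the fix deducts them from
target_pages as well as current_pages, and bails out if either deduction would underflow.
A small standalone model of the arithmetic:

/* Sketch of the accounting change above; values are illustrative. */
#include <stdint.h>
#include <stdio.h>

struct balloon_stats { uint64_t current_pages, target_pages; };

static int deduct_extra(struct balloon_stats *s, uint64_t pages)
{
	if (pages >= s->current_pages || pages >= s->target_pages)
		return -34;	/* -ERANGE, as in the hunk */
	s->current_pages -= pages;
	s->target_pages -= pages;
	return 0;
}

int main(void)
{
	struct balloon_stats s = { 1000, 900 };

	printf("%d\n", deduct_extra(&s, 100));	/* 0: both decremented */
	printf("%d\n", deduct_extra(&s, 5000));	/* -34: would underflow */
	return 0;
}
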
+diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
+index 32619d146cbc19..862164181baca1 100644
+--- a/fs/9p/vfs_addr.c
++++ b/fs/9p/vfs_addr.c
+@@ -59,7 +59,7 @@ static void v9fs_issue_write(struct netfs_io_subrequest *subreq)
+ 	len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
+ 	if (len > 0)
+ 		__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+-	netfs_write_subrequest_terminated(subreq, len ?: err, false);
++	netfs_write_subrequest_terminated(subreq, len ?: err);
+ }
+ 
+ /**
+@@ -77,7 +77,8 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
+ 
+ 	/* if we just extended the file size, any portion not in
+ 	 * cache won't be on server and is zeroes */
+-	if (subreq->rreq->origin != NETFS_DIO_READ)
++	if (subreq->rreq->origin != NETFS_UNBUFFERED_READ &&
++	    subreq->rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 	if (pos + total >= i_size_read(rreq->inode))
+ 		__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
+@@ -164,4 +165,5 @@ const struct address_space_operations v9fs_addr_operations = {
+ 	.invalidate_folio	= netfs_invalidate_folio,
+ 	.direct_IO		= noop_direct_IO,
+ 	.writepages		= netfs_writepages,
++	.migrate_folio		= filemap_migrate_folio,
+ };
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 18b0a9f1615e44..2e7526ea883ae2 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -120,17 +120,17 @@ static void afs_issue_write_worker(struct work_struct *work)
+ 
+ #if 0 // Error injection
+ 	if (subreq->debug_index == 3)
+-		return netfs_write_subrequest_terminated(subreq, -ENOANO, false);
++		return netfs_write_subrequest_terminated(subreq, -ENOANO);
+ 
+ 	if (!subreq->retry_count) {
+ 		set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+-		return netfs_write_subrequest_terminated(subreq, -EAGAIN, false);
++		return netfs_write_subrequest_terminated(subreq, -EAGAIN);
+ 	}
+ #endif
+ 
+ 	op = afs_alloc_operation(wreq->netfs_priv, vnode->volume);
+ 	if (IS_ERR(op))
+-		return netfs_write_subrequest_terminated(subreq, -EAGAIN, false);
++		return netfs_write_subrequest_terminated(subreq, -EAGAIN);
+ 
+ 	afs_op_set_vnode(op, 0, vnode);
+ 	op->file[0].dv_delta	= 1;
+@@ -166,7 +166,7 @@ static void afs_issue_write_worker(struct work_struct *work)
+ 		break;
+ 	}
+ 
+-	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len, false);
++	netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len);
+ }
+ 
+ void afs_issue_write(struct netfs_io_subrequest *subreq)
+@@ -202,6 +202,7 @@ void afs_retry_request(struct netfs_io_request *wreq, struct netfs_io_stream *st
+ 	case NETFS_READ_GAPS:
+ 	case NETFS_READ_SINGLE:
+ 	case NETFS_READ_FOR_WRITE:
++	case NETFS_UNBUFFERED_READ:
+ 	case NETFS_DIO_READ:
+ 		return;
+ 	default:
+diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
+index 13de6af279e526..b5b44ea91f9996 100644
+--- a/fs/btrfs/extent-io-tree.c
++++ b/fs/btrfs/extent-io-tree.c
+@@ -1252,8 +1252,11 @@ static int __set_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
+ 		if (!prealloc)
+ 			goto search_again;
+ 		ret = split_state(tree, state, prealloc, end + 1);
+-		if (ret)
++		if (ret) {
+ 			extent_io_tree_panic(tree, state, "split", ret);
++			prealloc = NULL;
++			goto out;
++		}
+ 
+ 		set_state_bits(tree, prealloc, bits, changeset);
+ 		cache_state(prealloc, cached_state);
+@@ -1456,6 +1459,7 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
+ 		if (IS_ERR(inserted_state)) {
+ 			ret = PTR_ERR(inserted_state);
+ 			extent_io_tree_panic(tree, prealloc, "insert", ret);
++			goto out;
+ 		}
+ 		cache_state(inserted_state, cached_state);
+ 		if (inserted_state == prealloc)
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 71b8a825c4479a..22455fbcb29eba 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1862,7 +1862,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
+ 		if (reserved_space < fsize) {
+ 			end = page_start + reserved_space - 1;
+ 			btrfs_delalloc_release_space(BTRFS_I(inode),
+-					data_reserved, page_start,
++					data_reserved, end + 1,
+ 					fsize - reserved_space, true);
+ 		}
+ 	}
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 90f5da3c520ac3..8a3f44302788cd 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4849,8 +4849,11 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
+ 	folio = __filemap_get_folio(mapping, index,
+ 				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
+ 	if (IS_ERR(folio)) {
+-		btrfs_delalloc_release_space(inode, data_reserved, block_start,
+-					     blocksize, true);
++		if (only_release_metadata)
++			btrfs_delalloc_release_metadata(inode, blocksize, true);
++		else
++			btrfs_delalloc_release_space(inode, data_reserved,
++						     block_start, blocksize, true);
+ 		btrfs_delalloc_release_extents(inode, blocksize);
+ 		ret = -ENOMEM;
+ 		goto out;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index c3b2e29e3e019d..4c525a0408125d 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -153,12 +153,14 @@ struct scrub_stripe {
+ 	unsigned int init_nr_io_errors;
+ 	unsigned int init_nr_csum_errors;
+ 	unsigned int init_nr_meta_errors;
++	unsigned int init_nr_meta_gen_errors;
+ 
+ 	/*
+ 	 * The following error bitmaps are all for the current status.
+ 	 * Every time we submit a new read, these bitmaps may be updated.
+ 	 *
+-	 * error_bitmap = io_error_bitmap | csum_error_bitmap | meta_error_bitmap;
++	 * error_bitmap = io_error_bitmap | csum_error_bitmap |
++	 *		  meta_error_bitmap | meta_generation_bitmap;
+ 	 *
+ 	 * IO and csum errors can happen for both metadata and data.
+ 	 */
+@@ -166,6 +168,7 @@ struct scrub_stripe {
+ 	unsigned long io_error_bitmap;
+ 	unsigned long csum_error_bitmap;
+ 	unsigned long meta_error_bitmap;
++	unsigned long meta_gen_error_bitmap;
+ 
+ 	/* For writeback (repair or replace) error reporting. */
+ 	unsigned long write_error_bitmap;
+@@ -616,7 +619,7 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr
+ 	memcpy(on_disk_csum, header->csum, fs_info->csum_size);
+ 
+ 	if (logical != btrfs_stack_header_bytenr(header)) {
+-		bitmap_set(&stripe->csum_error_bitmap, sector_nr, sectors_per_tree);
++		bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree);
+ 		bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree);
+ 		btrfs_warn_rl(fs_info,
+ 		"tree block %llu mirror %u has bad bytenr, has %llu want %llu",
+@@ -672,7 +675,7 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr
+ 	}
+ 	if (stripe->sectors[sector_nr].generation !=
+ 	    btrfs_stack_header_generation(header)) {
+-		bitmap_set(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree);
++		bitmap_set(&stripe->meta_gen_error_bitmap, sector_nr, sectors_per_tree);
+ 		bitmap_set(&stripe->error_bitmap, sector_nr, sectors_per_tree);
+ 		btrfs_warn_rl(fs_info,
+ 		"tree block %llu mirror %u has bad generation, has %llu want %llu",
+@@ -684,6 +687,7 @@ static void scrub_verify_one_metadata(struct scrub_stripe *stripe, int sector_nr
+ 	bitmap_clear(&stripe->error_bitmap, sector_nr, sectors_per_tree);
+ 	bitmap_clear(&stripe->csum_error_bitmap, sector_nr, sectors_per_tree);
+ 	bitmap_clear(&stripe->meta_error_bitmap, sector_nr, sectors_per_tree);
++	bitmap_clear(&stripe->meta_gen_error_bitmap, sector_nr, sectors_per_tree);
+ }
+ 
+ static void scrub_verify_one_sector(struct scrub_stripe *stripe, int sector_nr)
+@@ -972,8 +976,22 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx,
+ 			if (__ratelimit(&rs) && dev)
+ 				scrub_print_common_warning("header error", dev, false,
+ 						     stripe->logical, physical);
++		if (test_bit(sector_nr, &stripe->meta_gen_error_bitmap))
++			if (__ratelimit(&rs) && dev)
++				scrub_print_common_warning("generation error", dev, false,
++						     stripe->logical, physical);
+ 	}
+ 
++	/* Update the device stats. */
++	for (int i = 0; i < stripe->init_nr_io_errors; i++)
++		btrfs_dev_stat_inc_and_print(stripe->dev, BTRFS_DEV_STAT_READ_ERRS);
++	for (int i = 0; i < stripe->init_nr_csum_errors; i++)
++		btrfs_dev_stat_inc_and_print(stripe->dev, BTRFS_DEV_STAT_CORRUPTION_ERRS);
++	/* Generation mismatch error is based on each metadata, not each block. */
++	for (int i = 0; i < stripe->init_nr_meta_gen_errors;
++	     i += (fs_info->nodesize >> fs_info->sectorsize_bits))
++		btrfs_dev_stat_inc_and_print(stripe->dev, BTRFS_DEV_STAT_GENERATION_ERRS);
++
+ 	spin_lock(&sctx->stat_lock);
+ 	sctx->stat.data_extents_scrubbed += stripe->nr_data_extents;
+ 	sctx->stat.tree_extents_scrubbed += stripe->nr_meta_extents;
+@@ -982,7 +1000,8 @@ static void scrub_stripe_report_errors(struct scrub_ctx *sctx,
+ 	sctx->stat.no_csum += nr_nodatacsum_sectors;
+ 	sctx->stat.read_errors += stripe->init_nr_io_errors;
+ 	sctx->stat.csum_errors += stripe->init_nr_csum_errors;
+-	sctx->stat.verify_errors += stripe->init_nr_meta_errors;
++	sctx->stat.verify_errors += stripe->init_nr_meta_errors +
++				    stripe->init_nr_meta_gen_errors;
+ 	sctx->stat.uncorrectable_errors +=
+ 		bitmap_weight(&stripe->error_bitmap, stripe->nr_sectors);
+ 	sctx->stat.corrected_errors += nr_repaired_sectors;
+@@ -1028,6 +1047,8 @@ static void scrub_stripe_read_repair_worker(struct work_struct *work)
+ 						    stripe->nr_sectors);
+ 	stripe->init_nr_meta_errors = bitmap_weight(&stripe->meta_error_bitmap,
+ 						    stripe->nr_sectors);
++	stripe->init_nr_meta_gen_errors = bitmap_weight(&stripe->meta_gen_error_bitmap,
++							stripe->nr_sectors);
+ 
+ 	if (bitmap_empty(&stripe->init_error_bitmap, stripe->nr_sectors))
+ 		goto out;
+@@ -1142,6 +1163,9 @@ static void scrub_write_endio(struct btrfs_bio *bbio)
+ 		bitmap_set(&stripe->write_error_bitmap, sector_nr,
+ 			   bio_size >> fs_info->sectorsize_bits);
+ 		spin_unlock_irqrestore(&stripe->write_error_lock, flags);
++		for (int i = 0; i < (bio_size >> fs_info->sectorsize_bits); i++)
++			btrfs_dev_stat_inc_and_print(stripe->dev,
++						     BTRFS_DEV_STAT_WRITE_ERRS);
+ 	}
+ 	bio_put(&bbio->bio);
+ 
+@@ -1508,10 +1532,12 @@ static void scrub_stripe_reset_bitmaps(struct scrub_stripe *stripe)
+ 	stripe->init_nr_io_errors = 0;
+ 	stripe->init_nr_csum_errors = 0;
+ 	stripe->init_nr_meta_errors = 0;
++	stripe->init_nr_meta_gen_errors = 0;
+ 	stripe->error_bitmap = 0;
+ 	stripe->io_error_bitmap = 0;
+ 	stripe->csum_error_bitmap = 0;
+ 	stripe->meta_error_bitmap = 0;
++	stripe->meta_gen_error_bitmap = 0;
+ }
+ 
+ /*
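
One subtlety in the stats hunk above: I/O and checksum errors are bumped once per sector,
but a generation mismatch is counted once per tree block, hence the loop stride of
nodesize >> sectorsize_bits. A worked standalone example of the stride:

/* With 16K nodes and 4K sectors, 8 flagged sectors span two tree
 * blocks, so the generation stat is bumped twice, not eight times. */
#include <stdio.h>

int main(void)
{
	unsigned int nodesize = 16384, sectorsize_bits = 12; /* 4K sectors */
	unsigned int sectors_per_node = nodesize >> sectorsize_bits; /* 4 */
	unsigned int gen_errors = 8;	/* sectors flagged */
	unsigned int stats = 0;

	for (unsigned int i = 0; i < gen_errors; i += sectors_per_node)
		stats++;

	printf("tree blocks counted: %u\n", stats);	/* 2 */
	return 0;
}
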
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 90dc094cfa5e5a..f5af11565b8760 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6583,6 +6583,19 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 		btrfs_log_get_delayed_items(inode, &delayed_ins_list,
+ 					    &delayed_del_list);
+ 
++	/*
++	 * If we are fsyncing a file with 0 hard links, then commit the delayed
++	 * inode because the last inode ref (or extref) item may still be in the
++	 * subvolume tree and if we log it the file will still exist after a log
++	 * replay. So commit the delayed inode to delete that last ref and we
++	 * skip logging it.
++	 */
++	if (inode->vfs_inode.i_nlink == 0) {
++		ret = btrfs_commit_inode_delayed_inode(inode);
++		if (ret)
++			goto out_unlock;
++	}
++
+ 	ret = copy_inode_items_to_log(trans, inode, &min_key, &max_key,
+ 				      path, dst_path, logged_isize,
+ 				      inode_only, ctx,
+@@ -7051,14 +7064,9 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	if (btrfs_root_generation(&root->root_item) == trans->transid)
+ 		return BTRFS_LOG_FORCE_COMMIT;
+ 
+-	/*
+-	 * Skip already logged inodes or inodes corresponding to tmpfiles
+-	 * (since logging them is pointless, a link count of 0 means they
+-	 * will never be accessible).
+-	 */
+-	if ((btrfs_inode_in_log(inode, trans->transid) &&
+-	     list_empty(&ctx->ordered_extents)) ||
+-	    inode->vfs_inode.i_nlink == 0)
++	/* Skip inodes already logged and with no new ordered extents. */
++	if (btrfs_inode_in_log(inode, trans->transid) &&
++	    list_empty(&ctx->ordered_extents))
+ 		return BTRFS_NO_LOG_SYNC;
+ 
+ 	ret = start_log_trans(trans, root, ctx);
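
The new i_nlink == 0 branch covers the open-unlink-fsync pattern: without committing the
delayed inode, the last inode ref item could survive log replay and resurrect the file.
The shape of a userspace trigger (the path is illustrative, and the mount is assumed to
be btrfs):

/* Minimal reproducer shape for the case the hunk handles. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/btrfs/victim", O_CREAT | O_WRONLY, 0644);

	if (fd < 0)
		return 1;
	unlink("/mnt/btrfs/victim");	/* i_nlink drops to 0 */
	fsync(fd);	/* now commits the delayed inode instead of logging */
	close(fd);
	return 0;
}
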
+diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
+index 92058ae4348826..c08e4a66ac07a7 100644
+--- a/fs/cachefiles/io.c
++++ b/fs/cachefiles/io.c
+@@ -63,7 +63,7 @@ static void cachefiles_read_complete(struct kiocb *iocb, long ret)
+ 				ret = -ESTALE;
+ 		}
+ 
+-		ki->term_func(ki->term_func_priv, ret, ki->was_async);
++		ki->term_func(ki->term_func_priv, ret);
+ 	}
+ 
+ 	cachefiles_put_kiocb(ki);
+@@ -188,7 +188,7 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
+ 
+ presubmission_error:
+ 	if (term_func)
+-		term_func(term_func_priv, ret < 0 ? ret : skipped, false);
++		term_func(term_func_priv, ret < 0 ? ret : skipped);
+ 	return ret;
+ }
+ 
+@@ -271,7 +271,7 @@ static void cachefiles_write_complete(struct kiocb *iocb, long ret)
+ 	atomic_long_sub(ki->b_writing, &object->volume->cache->b_writing);
+ 	set_bit(FSCACHE_COOKIE_HAVE_DATA, &object->cookie->flags);
+ 	if (ki->term_func)
+-		ki->term_func(ki->term_func_priv, ret, ki->was_async);
++		ki->term_func(ki->term_func_priv, ret);
+ 	cachefiles_put_kiocb(ki);
+ }
+ 
+@@ -301,7 +301,7 @@ int __cachefiles_write(struct cachefiles_object *object,
+ 	ki = kzalloc(sizeof(struct cachefiles_kiocb), GFP_KERNEL);
+ 	if (!ki) {
+ 		if (term_func)
+-			term_func(term_func_priv, -ENOMEM, false);
++			term_func(term_func_priv, -ENOMEM);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -366,7 +366,7 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
+ {
+ 	if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE)) {
+ 		if (term_func)
+-			term_func(term_func_priv, -ENOBUFS, false);
++			term_func(term_func_priv, -ENOBUFS);
+ 		trace_netfs_sreq(term_func_priv, netfs_sreq_trace_cache_nowrite);
+ 		return -ENOBUFS;
+ 	}
+@@ -665,7 +665,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+ 		pre = CACHEFILES_DIO_BLOCK_SIZE - off;
+ 		if (pre >= len) {
+ 			fscache_count_dio_misfit();
+-			netfs_write_subrequest_terminated(subreq, len, false);
++			netfs_write_subrequest_terminated(subreq, len);
+ 			return;
+ 		}
+ 		subreq->transferred += pre;
+@@ -691,7 +691,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+ 		len -= post;
+ 		if (len == 0) {
+ 			fscache_count_dio_misfit();
+-			netfs_write_subrequest_terminated(subreq, post, false);
++			netfs_write_subrequest_terminated(subreq, post);
+ 			return;
+ 		}
+ 		iov_iter_truncate(&subreq->io_iter, len);
+@@ -703,7 +703,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+ 					 &start, &len, len, true);
+ 	cachefiles_end_secure(cache, saved_cred);
+ 	if (ret < 0) {
+-		netfs_write_subrequest_terminated(subreq, ret, false);
++		netfs_write_subrequest_terminated(subreq, ret);
+ 		return;
+ 	}
+ 
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 29be367905a16f..b95c4cb21c13f0 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -238,6 +238,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
+ 		if (sparse && err > 0)
+ 			err = ceph_sparse_ext_map_end(op);
+ 		if (err < subreq->len &&
++		    subreq->rreq->origin != NETFS_UNBUFFERED_READ &&
+ 		    subreq->rreq->origin != NETFS_DIO_READ)
+ 			__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 		if (IS_ENCRYPTED(inode) && err > 0) {
+@@ -281,7 +282,8 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
+ 	size_t len;
+ 	int mode;
+ 
+-	if (rreq->origin != NETFS_DIO_READ)
++	if (rreq->origin != NETFS_UNBUFFERED_READ &&
++	    rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 	__clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
+ 
+@@ -539,7 +541,7 @@ static void ceph_set_page_fscache(struct page *page)
+ 	folio_start_private_2(page_folio(page)); /* [DEPRECATED] */
+ }
+ 
+-static void ceph_fscache_write_terminated(void *priv, ssize_t error, bool was_async)
++static void ceph_fscache_write_terminated(void *priv, ssize_t error)
+ {
+ 	struct inode *inode = priv;
+ 
+diff --git a/fs/dax.c b/fs/dax.c
+index 676303419e9e8a..f8d8b1afd23244 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -257,7 +257,7 @@ static void *wait_entry_unlocked_exclusive(struct xa_state *xas, void *entry)
+ 		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
+ 		prepare_to_wait_exclusive(wq, &ewait.wait,
+ 					TASK_UNINTERRUPTIBLE);
+-		xas_pause(xas);
++		xas_reset(xas);
+ 		xas_unlock_irq(xas);
+ 		schedule();
+ 		finish_wait(wq, &ewait.wait);
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 9c9129bca3460b..34517ca9df9157 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -102,8 +102,7 @@ static void erofs_fscache_req_io_put(struct erofs_fscache_io *io)
+ 		erofs_fscache_req_put(req);
+ }
+ 
+-static void erofs_fscache_req_end_io(void *priv,
+-		ssize_t transferred_or_error, bool was_async)
++static void erofs_fscache_req_end_io(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct erofs_fscache_io *io = priv;
+ 	struct erofs_fscache_rq *req = io->private;
+@@ -180,8 +179,7 @@ struct erofs_fscache_bio {
+ 	struct bio_vec bvecs[BIO_MAX_VECS];
+ };
+ 
+-static void erofs_fscache_bio_endio(void *priv,
+-		ssize_t transferred_or_error, bool was_async)
++static void erofs_fscache_bio_endio(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct erofs_fscache_bio *io = priv;
+ 
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index da6ee7c39290d5..6e57b9cc6ed2e0 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -165,8 +165,11 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
+ 				filp_open(dif->path, O_RDONLY | O_LARGEFILE, 0) :
+ 				bdev_file_open_by_path(dif->path,
+ 						BLK_OPEN_READ, sb->s_type, NULL);
+-		if (IS_ERR(file))
++		if (IS_ERR(file)) {
++			if (file == ERR_PTR(-ENOTBLK))
++				return -EINVAL;
+ 			return PTR_ERR(file);
++		}
+ 
+ 		if (!erofs_is_fileio_mode(sbi)) {
+ 			dif->dax_dev = fs_dax_get_by_bdev(file_bdev(file),
+@@ -510,24 +513,52 @@ static int erofs_fc_parse_param(struct fs_context *fc,
+ 	return 0;
+ }
+ 
+-static struct inode *erofs_nfs_get_inode(struct super_block *sb,
+-					 u64 ino, u32 generation)
++static int erofs_encode_fh(struct inode *inode, u32 *fh, int *max_len,
++			   struct inode *parent)
+ {
+-	return erofs_iget(sb, ino);
++	erofs_nid_t nid = EROFS_I(inode)->nid;
++	int len = parent ? 6 : 3;
++
++	if (*max_len < len) {
++		*max_len = len;
++		return FILEID_INVALID;
++	}
++
++	fh[0] = (u32)(nid >> 32);
++	fh[1] = (u32)(nid & 0xffffffff);
++	fh[2] = inode->i_generation;
++
++	if (parent) {
++		nid = EROFS_I(parent)->nid;
++
++		fh[3] = (u32)(nid >> 32);
++		fh[4] = (u32)(nid & 0xffffffff);
++		fh[5] = parent->i_generation;
++	}
++
++	*max_len = len;
++	return parent ? FILEID_INO64_GEN_PARENT : FILEID_INO64_GEN;
+ }
+ 
+ static struct dentry *erofs_fh_to_dentry(struct super_block *sb,
+ 		struct fid *fid, int fh_len, int fh_type)
+ {
+-	return generic_fh_to_dentry(sb, fid, fh_len, fh_type,
+-				    erofs_nfs_get_inode);
++	if ((fh_type != FILEID_INO64_GEN &&
++	     fh_type != FILEID_INO64_GEN_PARENT) || fh_len < 3)
++		return NULL;
++
++	return d_obtain_alias(erofs_iget(sb,
++		((u64)fid->raw[0] << 32) | fid->raw[1]));
+ }
+ 
+ static struct dentry *erofs_fh_to_parent(struct super_block *sb,
+ 		struct fid *fid, int fh_len, int fh_type)
+ {
+-	return generic_fh_to_parent(sb, fid, fh_len, fh_type,
+-				    erofs_nfs_get_inode);
++	if (fh_type != FILEID_INO64_GEN_PARENT || fh_len < 6)
++		return NULL;
++
++	return d_obtain_alias(erofs_iget(sb,
++		((u64)fid->raw[3] << 32) | fid->raw[4]));
+ }
+ 
+ static struct dentry *erofs_get_parent(struct dentry *child)
+@@ -543,7 +574,7 @@ static struct dentry *erofs_get_parent(struct dentry *child)
+ }
+ 
+ static const struct export_operations erofs_export_ops = {
+-	.encode_fh = generic_encode_ino32_fh,
++	.encode_fh = erofs_encode_fh,
+ 	.fh_to_dentry = erofs_fh_to_dentry,
+ 	.fh_to_parent = erofs_fh_to_parent,
+ 	.get_parent = erofs_get_parent,
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 54f89f0ee69b18..b0b8748ae287f4 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -53,8 +53,8 @@ bool f2fs_is_cp_guaranteed(struct page *page)
+ 	struct inode *inode;
+ 	struct f2fs_sb_info *sbi;
+ 
+-	if (!mapping)
+-		return false;
++	if (fscrypt_is_bounce_page(page))
++		return page_private_gcing(fscrypt_pagecache_page(page));
+ 
+ 	inode = mapping->host;
+ 	sbi = F2FS_I_SB(inode);
+@@ -3966,7 +3966,7 @@ static int check_swap_activate(struct swap_info_struct *sis,
+ 
+ 		if ((pblock - SM_I(sbi)->main_blkaddr) % blks_per_sec ||
+ 				nr_pblocks % blks_per_sec ||
+-				!f2fs_valid_pinned_area(sbi, pblock)) {
++				f2fs_is_sequential_zone_area(sbi, pblock)) {
+ 			bool last_extent = false;
+ 
+ 			not_aligned++;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index f1576dc6ec6797..4f34a7d9760a10 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1780,7 +1780,7 @@ struct f2fs_sb_info {
+ 	unsigned int dirty_device;		/* for checkpoint data flush */
+ 	spinlock_t dev_lock;			/* protect dirty_device */
+ 	bool aligned_blksize;			/* all devices has the same logical blksize */
+-	unsigned int first_zoned_segno;		/* first zoned segno */
++	unsigned int first_seq_zone_segno;	/* first segno in sequential zone */
+ 
+ 	/* For write statistics */
+ 	u64 sectors_written_start;
+@@ -2518,8 +2518,14 @@ static inline void dec_valid_block_count(struct f2fs_sb_info *sbi,
+ 	blkcnt_t sectors = count << F2FS_LOG_SECTORS_PER_BLOCK;
+ 
+ 	spin_lock(&sbi->stat_lock);
+-	f2fs_bug_on(sbi, sbi->total_valid_block_count < (block_t) count);
+-	sbi->total_valid_block_count -= (block_t)count;
++	if (unlikely(sbi->total_valid_block_count < count)) {
++		f2fs_warn(sbi, "Inconsistent total_valid_block_count:%u, ino:%lu, count:%u",
++			  sbi->total_valid_block_count, inode->i_ino, count);
++		sbi->total_valid_block_count = 0;
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++	} else {
++		sbi->total_valid_block_count -= count;
++	}
+ 	if (sbi->reserved_blocks &&
+ 		sbi->current_reserved_blocks < sbi->reserved_blocks)
+ 		sbi->current_reserved_blocks = min(sbi->reserved_blocks,
+@@ -4622,12 +4628,16 @@ F2FS_FEATURE_FUNCS(readonly, RO);
+ F2FS_FEATURE_FUNCS(device_alias, DEVICE_ALIAS);
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+-static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
+-				    block_t blkaddr)
++static inline bool f2fs_zone_is_seq(struct f2fs_sb_info *sbi, int devi,
++							unsigned int zone)
+ {
+-	unsigned int zno = blkaddr / sbi->blocks_per_blkz;
++	return test_bit(zone, FDEV(devi).blkz_seq);
++}
+ 
+-	return test_bit(zno, FDEV(devi).blkz_seq);
++static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
++								block_t blkaddr)
++{
++	return f2fs_zone_is_seq(sbi, devi, blkaddr / sbi->blocks_per_blkz);
+ }
+ #endif
+ 
+@@ -4699,15 +4709,31 @@ static inline bool f2fs_lfs_mode(struct f2fs_sb_info *sbi)
+ 	return F2FS_OPTION(sbi).fs_mode == FS_MODE_LFS;
+ }
+ 
+-static inline bool f2fs_valid_pinned_area(struct f2fs_sb_info *sbi,
++static inline bool f2fs_is_sequential_zone_area(struct f2fs_sb_info *sbi,
+ 					  block_t blkaddr)
+ {
+ 	if (f2fs_sb_has_blkzoned(sbi)) {
++#ifdef CONFIG_BLK_DEV_ZONED
+ 		int devi = f2fs_target_device_index(sbi, blkaddr);
+ 
+-		return !bdev_is_zoned(FDEV(devi).bdev);
++		if (!bdev_is_zoned(FDEV(devi).bdev))
++			return false;
++
++		if (f2fs_is_multi_device(sbi)) {
++			if (blkaddr < FDEV(devi).start_blk ||
++				blkaddr > FDEV(devi).end_blk) {
++				f2fs_err(sbi, "Invalid block %x", blkaddr);
++				return false;
++			}
++			blkaddr -= FDEV(devi).start_blk;
++		}
++
++		return f2fs_blkz_is_seq(sbi, devi, blkaddr);
++#else
++		return false;
++#endif
+ 	}
+-	return true;
++	return false;
+ }
+ 
+ static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi)
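
f2fs_is_sequential_zone_area() also inverts the old helper's sense: it answers "is this
block in a sequential zone" rather than "is this a valid pinned area", and on multi-device
setups it first rebases the global block address to the owning device. A userspace sketch
of that translation; the single-word bitmap lookup is a simplification of the per-device
blkz_seq bitmap:

/* Illustrative rebase-then-lookup, with toy device geometry. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dev { uint32_t start_blk, end_blk; uint64_t seq_zone_bitmap; };

static bool blk_in_seq_zone(const struct dev *d, uint32_t blkaddr,
			    uint32_t blocks_per_zone)
{
	if (blkaddr < d->start_blk || blkaddr > d->end_blk)
		return false;	/* invalid block for this device */
	blkaddr -= d->start_blk;	/* rebase to device-local address */
	return d->seq_zone_bitmap & (1ULL << (blkaddr / blocks_per_zone));
}

int main(void)
{
	struct dev d = { .start_blk = 4096, .end_blk = 131071,
			 .seq_zone_bitmap = 0x2 };	/* zone 1 sequential */

	printf("%d\n", blk_in_seq_zone(&d, 4096 + 600, 512));	/* 1 */
	printf("%d\n", blk_in_seq_zone(&d, 4096 + 100, 512));	/* 0 */
	return 0;
}
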
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 2b8f9239bede7c..8b5a55b72264dd 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -2066,6 +2066,9 @@ int f2fs_gc_range(struct f2fs_sb_info *sbi,
+ 			.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ 		};
+ 
++		if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno)))
++			continue;
++
+ 		do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
+ 		put_gc_inode(&gc_list);
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 8f8b9b843bdf4b..28137d499f8f65 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -414,7 +414,7 @@ static int f2fs_link(struct dentry *old_dentry, struct inode *dir,
+ 
+ 	if (is_inode_flag_set(dir, FI_PROJ_INHERIT) &&
+ 			(!projid_eq(F2FS_I(dir)->i_projid,
+-			F2FS_I(old_dentry->d_inode)->i_projid)))
++			F2FS_I(inode)->i_projid)))
+ 		return -EXDEV;
+ 
+ 	err = f2fs_dquot_initialize(dir);
+@@ -914,7 +914,7 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 
+ 	if (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
+ 			(!projid_eq(F2FS_I(new_dir)->i_projid,
+-			F2FS_I(old_dentry->d_inode)->i_projid)))
++			F2FS_I(old_inode)->i_projid)))
+ 		return -EXDEV;
+ 
+ 	/*
+@@ -1107,10 +1107,10 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	if ((is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
+ 			!projid_eq(F2FS_I(new_dir)->i_projid,
+-			F2FS_I(old_dentry->d_inode)->i_projid)) ||
+-	    (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
++			F2FS_I(old_inode)->i_projid)) ||
++	    (is_inode_flag_set(old_dir, FI_PROJ_INHERIT) &&
+ 			!projid_eq(F2FS_I(old_dir)->i_projid,
+-			F2FS_I(new_dentry->d_inode)->i_projid)))
++			F2FS_I(new_inode)->i_projid)))
+ 		return -EXDEV;
+ 
+ 	err = f2fs_dquot_initialize(old_dir);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 396ef71f41e359..41ca73622c8d46 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -424,7 +424,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+ 	if (need && excess_cached_nats(sbi))
+ 		f2fs_balance_fs_bg(sbi, false);
+ 
+-	if (!f2fs_is_checkpoint_ready(sbi))
++	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
+ 		return;
+ 
+ 	/*
+@@ -2777,7 +2777,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 		if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning)
+ 			segno = 0;
+ 		else
+-			segno = max(sbi->first_zoned_segno, *newseg);
++			segno = max(sbi->first_seq_zone_segno, *newseg);
+ 		hint = GET_SEC_FROM_SEG(sbi, segno);
+ 	}
+ #endif
+@@ -2789,7 +2789,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 	if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) {
+ 		/* Write only to sequential zones */
+ 		if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) {
+-			hint = GET_SEC_FROM_SEG(sbi, sbi->first_zoned_segno);
++			hint = GET_SEC_FROM_SEG(sbi, sbi->first_seq_zone_segno);
+ 			secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
+ 		} else
+ 			secno = find_first_zero_bit(free_i->free_secmap,
+@@ -2838,9 +2838,9 @@ static int get_new_segment(struct f2fs_sb_info *sbi,
+ 	/* set it as dirty segment in free segmap */
+ 	f2fs_bug_on(sbi, test_bit(segno, free_i->free_segmap));
+ 
+-	/* no free section in conventional zone */
++	/* no free section in conventional device or conventional zone */
+ 	if (new_sec && pinning &&
+-		!f2fs_valid_pinned_area(sbi, START_BLOCK(sbi, segno))) {
++		f2fs_is_sequential_zone_area(sbi, START_BLOCK(sbi, segno))) {
+ 		ret = -EAGAIN;
+ 		goto out_unlock;
+ 	}
+@@ -3311,7 +3311,7 @@ int f2fs_allocate_pinning_section(struct f2fs_sb_info *sbi)
+ 
+ 	if (f2fs_sb_has_blkzoned(sbi) && err == -EAGAIN && gc_required) {
+ 		f2fs_down_write(&sbi->gc_lock);
+-		err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk),
++		err = f2fs_gc_range(sbi, 0, sbi->first_seq_zone_segno - 1,
+ 				true, ZONED_PIN_SEC_REQUIRED_COUNT);
+ 		f2fs_up_write(&sbi->gc_lock);
+ 
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 0465dc00b349d2..503f6df690bf2b 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -429,7 +429,6 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
+ 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+ 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+ 	unsigned int next;
+-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
+ 
+ 	spin_lock(&free_i->segmap_lock);
+ 	clear_bit(segno, free_i->free_segmap);
+@@ -437,7 +436,7 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
+ 
+ 	next = find_next_bit(free_i->free_segmap,
+ 			start_segno + SEGS_PER_SEC(sbi), start_segno);
+-	if (next >= start_segno + usable_segs) {
++	if (next >= start_segno + f2fs_usable_segs_in_sec(sbi)) {
+ 		clear_bit(secno, free_i->free_secmap);
+ 		free_i->free_sections++;
+ 	}
+@@ -463,22 +462,36 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+ 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+ 	unsigned int next;
+-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
++	bool ret;
+ 
+ 	spin_lock(&free_i->segmap_lock);
+-	if (test_and_clear_bit(segno, free_i->free_segmap)) {
+-		free_i->free_segments++;
+-
+-		if (!inmem && IS_CURSEC(sbi, secno))
+-			goto skip_free;
+-		next = find_next_bit(free_i->free_segmap,
+-				start_segno + SEGS_PER_SEC(sbi), start_segno);
+-		if (next >= start_segno + usable_segs) {
+-			if (test_and_clear_bit(secno, free_i->free_secmap))
+-				free_i->free_sections++;
+-		}
+-	}
+-skip_free:
++	ret = test_and_clear_bit(segno, free_i->free_segmap);
++	if (!ret)
++		goto unlock_out;
++
++	free_i->free_segments++;
++
++	if (!inmem && IS_CURSEC(sbi, secno))
++		goto unlock_out;
++
++	/* check large section */
++	next = find_next_bit(free_i->free_segmap,
++			     start_segno + SEGS_PER_SEC(sbi), start_segno);
++	if (next < start_segno + f2fs_usable_segs_in_sec(sbi))
++		goto unlock_out;
++
++	ret = test_and_clear_bit(secno, free_i->free_secmap);
++	if (!ret)
++		goto unlock_out;
++
++	free_i->free_sections++;
++
++	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[BG_GC]) == secno)
++		sbi->next_victim_seg[BG_GC] = NULL_SEGNO;
++	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[FG_GC]) == secno)
++		sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
++
++unlock_out:
+ 	spin_unlock(&free_i->segmap_lock);
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index f087b2b71c8987..386326f7a440eb 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1882,9 +1882,9 @@ static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_fsid    = u64_to_fsid(id);
+ 
+ #ifdef CONFIG_QUOTA
+-	if (is_inode_flag_set(dentry->d_inode, FI_PROJ_INHERIT) &&
++	if (is_inode_flag_set(d_inode(dentry), FI_PROJ_INHERIT) &&
+ 			sb_has_quota_limits_enabled(sb, PRJQUOTA)) {
+-		f2fs_statfs_project(sb, F2FS_I(dentry->d_inode)->i_projid, buf);
++		f2fs_statfs_project(sb, F2FS_I(d_inode(dentry))->i_projid, buf);
+ 	}
+ #endif
+ 	return 0;
+@@ -4304,14 +4304,35 @@ static void f2fs_record_error_work(struct work_struct *work)
+ 	f2fs_record_stop_reason(sbi);
+ }
+ 
+-static inline unsigned int get_first_zoned_segno(struct f2fs_sb_info *sbi)
++static inline unsigned int get_first_seq_zone_segno(struct f2fs_sb_info *sbi)
+ {
++#ifdef CONFIG_BLK_DEV_ZONED
++	unsigned int zoneno, total_zones;
+ 	int devi;
+ 
+-	for (devi = 0; devi < sbi->s_ndevs; devi++)
+-		if (bdev_is_zoned(FDEV(devi).bdev))
+-			return GET_SEGNO(sbi, FDEV(devi).start_blk);
+-	return 0;
++	if (!f2fs_sb_has_blkzoned(sbi))
++		return NULL_SEGNO;
++
++	for (devi = 0; devi < sbi->s_ndevs; devi++) {
++		if (!bdev_is_zoned(FDEV(devi).bdev))
++			continue;
++
++		total_zones = GET_ZONE_FROM_SEG(sbi, FDEV(devi).total_segments);
++
++		for (zoneno = 0; zoneno < total_zones; zoneno++) {
++			unsigned int segs, blks;
++
++			if (!f2fs_zone_is_seq(sbi, devi, zoneno))
++				continue;
++
++			segs = GET_SEG_FROM_SEC(sbi,
++					zoneno * sbi->secs_per_zone);
++			blks = SEGS_TO_BLKS(sbi, segs);
++			return GET_SEGNO(sbi, FDEV(devi).start_blk + blks);
++		}
++	}
++#endif
++	return NULL_SEGNO;
+ }
+ 
+ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+@@ -4348,6 +4369,14 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ #endif
+ 
+ 	for (i = 0; i < max_devices; i++) {
++		if (max_devices == 1) {
++			FDEV(i).total_segments =
++				le32_to_cpu(raw_super->segment_count_main);
++			FDEV(i).start_blk = 0;
++			FDEV(i).end_blk = FDEV(i).total_segments *
++						BLKS_PER_SEG(sbi);
++		}
++
+ 		if (i == 0)
+ 			FDEV(0).bdev_file = sbi->sb->s_bdev_file;
+ 		else if (!RDEV(i).path[0])
+@@ -4718,7 +4747,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
+ 	sbi->sectors_written_start = f2fs_get_sectors_written(sbi);
+ 
+ 	/* get segno of first zoned block device */
+-	sbi->first_zoned_segno = get_first_zoned_segno(sbi);
++	sbi->first_seq_zone_segno = get_first_seq_zone_segno(sbi);
+ 
+ 	/* Read accumulated write IO statistics if exists */
+ 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
+diff --git a/fs/filesystems.c b/fs/filesystems.c
+index 58b9067b2391ce..95e5256821a534 100644
+--- a/fs/filesystems.c
++++ b/fs/filesystems.c
+@@ -156,15 +156,19 @@ static int fs_index(const char __user * __name)
+ static int fs_name(unsigned int index, char __user * buf)
+ {
+ 	struct file_system_type * tmp;
+-	int len, res;
++	int len, res = -EINVAL;
+ 
+ 	read_lock(&file_systems_lock);
+-	for (tmp = file_systems; tmp; tmp = tmp->next, index--)
+-		if (index <= 0 && try_module_get(tmp->owner))
++	for (tmp = file_systems; tmp; tmp = tmp->next, index--) {
++		if (index == 0) {
++			if (try_module_get(tmp->owner))
++				res = 0;
+ 			break;
++		}
++	}
+ 	read_unlock(&file_systems_lock);
+-	if (!tmp)
+-		return -EINVAL;
++	if (res)
++		return res;
+ 
+ 	/* OK, we got the reference, so we can safely block */
+ 	len = strlen(tmp->name) + 1;
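
The old loop decremented an unsigned index past zero whenever try_module_get() failed on
the matching entry, then relied on walking off the end of the list; the rewrite stops at
index 0 and returns -EINVAL unless the reference was actually taken. That interface is
visible from userspace through the obsolete sysfs(2) syscall (option 2 translates an
index into a filesystem type name), on architectures that still wire it up:

/* Enumerate registered filesystem types via sysfs(2), option 2. */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	char buf[64];

	for (unsigned int i = 0; syscall(SYS_sysfs, 2, i, buf) == 0; i++)
		printf("%u: %s\n", i, buf);
	return 0;
}
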
+diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
+index 68fc8af14700d3..eb4270e82ef8ee 100644
+--- a/fs/gfs2/aops.c
++++ b/fs/gfs2/aops.c
+@@ -37,27 +37,6 @@
+ #include "aops.h"
+ 
+ 
+-void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio,
+-			     size_t from, size_t len)
+-{
+-	struct buffer_head *head = folio_buffers(folio);
+-	unsigned int bsize = head->b_size;
+-	struct buffer_head *bh;
+-	size_t to = from + len;
+-	size_t start, end;
+-
+-	for (bh = head, start = 0; bh != head || !start;
+-	     bh = bh->b_this_page, start = end) {
+-		end = start + bsize;
+-		if (end <= from)
+-			continue;
+-		if (start >= to)
+-			break;
+-		set_buffer_uptodate(bh);
+-		gfs2_trans_add_data(ip->i_gl, bh);
+-	}
+-}
+-
+ /**
+  * gfs2_get_block_noalloc - Fills in a buffer head with details about a block
+  * @inode: The inode
+@@ -133,11 +112,42 @@ static int __gfs2_jdata_write_folio(struct folio *folio,
+ 					inode->i_sb->s_blocksize,
+ 					BIT(BH_Dirty)|BIT(BH_Uptodate));
+ 		}
+-		gfs2_trans_add_databufs(ip, folio, 0, folio_size(folio));
++		gfs2_trans_add_databufs(ip->i_gl, folio, 0, folio_size(folio));
+ 	}
+ 	return gfs2_write_jdata_folio(folio, wbc);
+ }
+ 
++/**
++ * gfs2_jdata_writeback - Write jdata folios to the log
++ * @mapping: The mapping to write
++ * @wbc: The writeback control
++ *
++ * Returns: errno
++ */
++int gfs2_jdata_writeback(struct address_space *mapping, struct writeback_control *wbc)
++{
++	struct inode *inode = mapping->host;
++	struct gfs2_inode *ip = GFS2_I(inode);
++	struct gfs2_sbd *sdp = GFS2_SB(mapping->host);
++	struct folio *folio = NULL;
++	int error;
++
++	BUG_ON(current->journal_info);
++	if (gfs2_assert_withdraw(sdp, ip->i_gl->gl_state == LM_ST_EXCLUSIVE))
++		return 0;
++
++	while ((folio = writeback_iter(mapping, wbc, folio, &error))) {
++		if (folio_test_checked(folio)) {
++			folio_redirty_for_writepage(wbc, folio);
++			folio_unlock(folio);
++			continue;
++		}
++		error = __gfs2_jdata_write_folio(folio, wbc);
++	}
++
++	return error;
++}
++
+ /**
+  * gfs2_writepages - Write a bunch of dirty pages back to disk
+  * @mapping: The mapping to write
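
gfs2_jdata_writeback() above is a thin writeback_iter() loop: folios carrying the
"checked" flag are redirtied for the regular writeback path and skipped here, while
everything else goes through __gfs2_jdata_write_folio(). A toy standalone model of that
skip/redirty shape; every type below is a stand-in for the folio machinery:

/* Illustrative skip-or-write loop matching the shape above. */
#include <stdbool.h>
#include <stdio.h>

struct item { bool checked; int id; };

static int write_item(struct item *it)
{
	printf("writing %d\n", it->id);
	return 0;
}

int main(void)
{
	struct item items[] = { { false, 1 }, { true, 2 }, { false, 3 } };
	int error = 0;

	for (unsigned int i = 0; i < 3; i++) {
		if (items[i].checked) {
			printf("redirtying %d\n", items[i].id);
			continue;	/* like the folio_test_checked() skip */
		}
		error = write_item(&items[i]);
	}
	return error;
}
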
+diff --git a/fs/gfs2/aops.h b/fs/gfs2/aops.h
+index a10c4334d24893..bf002522a78220 100644
+--- a/fs/gfs2/aops.h
++++ b/fs/gfs2/aops.h
+@@ -9,7 +9,6 @@
+ #include "incore.h"
+ 
+ void adjust_fs_space(struct inode *inode);
+-void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio,
+-			     size_t from, size_t len);
++int gfs2_jdata_writeback(struct address_space *mapping, struct writeback_control *wbc);
+ 
+ #endif /* __AOPS_DOT_H__ */
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 366516b98b3f31..b81984def58ec3 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -988,7 +988,8 @@ static void gfs2_iomap_put_folio(struct inode *inode, loff_t pos,
+ 	struct gfs2_sbd *sdp = GFS2_SB(inode);
+ 
+ 	if (!gfs2_is_stuffed(ip))
+-		gfs2_trans_add_databufs(ip, folio, offset_in_folio(folio, pos),
++		gfs2_trans_add_databufs(ip->i_gl, folio,
++					offset_in_folio(folio, pos),
+ 					copied);
+ 
+ 	folio_unlock(folio);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index d7220a6fe8f55e..ba25b884169e50 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1166,7 +1166,6 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ 		   const struct gfs2_glock_operations *glops, int create,
+ 		   struct gfs2_glock **glp)
+ {
+-	struct super_block *s = sdp->sd_vfs;
+ 	struct lm_lockname name = { .ln_number = number,
+ 				    .ln_type = glops->go_type,
+ 				    .ln_sbd = sdp };
+@@ -1229,7 +1228,7 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ 	mapping = gfs2_glock2aspace(gl);
+ 	if (mapping) {
+                 mapping->a_ops = &gfs2_meta_aops;
+-		mapping->host = s->s_bdev->bd_mapping->host;
++		mapping->host = sdp->sd_inode;
+ 		mapping->flags = 0;
+ 		mapping_set_gfp_mask(mapping, GFP_NOFS);
+ 		mapping->i_private_data = NULL;
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index eb4714f299efb6..116efe335c3212 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -168,7 +168,7 @@ void gfs2_ail_flush(struct gfs2_glock *gl, bool fsync)
+ static int gfs2_rgrp_metasync(struct gfs2_glock *gl)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+-	struct address_space *metamapping = &sdp->sd_aspace;
++	struct address_space *metamapping = gfs2_aspace(sdp);
+ 	struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl);
+ 	const unsigned bsize = sdp->sd_sb.sb_bsize;
+ 	loff_t start = (rgd->rd_addr * bsize) & PAGE_MASK;
+@@ -225,7 +225,7 @@ static int rgrp_go_sync(struct gfs2_glock *gl)
+ static void rgrp_go_inval(struct gfs2_glock *gl, int flags)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+-	struct address_space *mapping = &sdp->sd_aspace;
++	struct address_space *mapping = gfs2_aspace(sdp);
+ 	struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl);
+ 	const unsigned bsize = sdp->sd_sb.sb_bsize;
+ 	loff_t start, end;
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index 74abbd4970f80b..0a41c4e76b3267 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -795,7 +795,7 @@ struct gfs2_sbd {
+ 
+ 	/* Log stuff */
+ 
+-	struct address_space sd_aspace;
++	struct inode *sd_inode;
+ 
+ 	spinlock_t sd_log_lock;
+ 
+@@ -851,6 +851,13 @@ struct gfs2_sbd {
+ 	unsigned long sd_glock_dqs_held;
+ };
+ 
++#define GFS2_BAD_INO 1
++
++static inline struct address_space *gfs2_aspace(struct gfs2_sbd *sdp)
++{
++	return sdp->sd_inode->i_mapping;
++}
++
+ static inline void gfs2_glstats_inc(struct gfs2_glock *gl, int which)
+ {
+ 	gl->gl_stats.stats[which]++;
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 198a8cbaf5e5ad..8fd81444ffea00 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -439,6 +439,74 @@ static int alloc_dinode(struct gfs2_inode *ip, u32 flags, unsigned *dblocks)
+ 	return error;
+ }
+ 
++static void gfs2_final_release_pages(struct gfs2_inode *ip)
++{
++	struct inode *inode = &ip->i_inode;
++	struct gfs2_glock *gl = ip->i_gl;
++
++	if (unlikely(!gl)) {
++		/* This can only happen during incomplete inode creation. */
++		BUG_ON(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags));
++		return;
++	}
++
++	truncate_inode_pages(gfs2_glock2aspace(gl), 0);
++	truncate_inode_pages(&inode->i_data, 0);
++
++	if (atomic_read(&gl->gl_revokes) == 0) {
++		clear_bit(GLF_LFLUSH, &gl->gl_flags);
++		clear_bit(GLF_DIRTY, &gl->gl_flags);
++	}
++}
++
++int gfs2_dinode_dealloc(struct gfs2_inode *ip)
++{
++	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++	struct gfs2_rgrpd *rgd;
++	struct gfs2_holder gh;
++	int error;
++
++	if (gfs2_get_inode_blocks(&ip->i_inode) != 1) {
++		gfs2_consist_inode(ip);
++		return -EIO;
++	}
++
++	gfs2_rindex_update(sdp);
++
++	error = gfs2_quota_hold(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE);
++	if (error)
++		return error;
++
++	rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr, 1);
++	if (!rgd) {
++		gfs2_consist_inode(ip);
++		error = -EIO;
++		goto out_qs;
++	}
++
++	error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
++				   LM_FLAG_NODE_SCOPE, &gh);
++	if (error)
++		goto out_qs;
++
++	error = gfs2_trans_begin(sdp, RES_RG_BIT + RES_STATFS + RES_QUOTA,
++				 sdp->sd_jdesc->jd_blocks);
++	if (error)
++		goto out_rg_gunlock;
++
++	gfs2_free_di(rgd, ip);
++
++	gfs2_final_release_pages(ip);
++
++	gfs2_trans_end(sdp);
++
++out_rg_gunlock:
++	gfs2_glock_dq_uninit(&gh);
++out_qs:
++	gfs2_quota_unhold(ip);
++	return error;
++}
++
+ static void gfs2_init_dir(struct buffer_head *dibh,
+ 			  const struct gfs2_inode *parent)
+ {
+@@ -629,10 +697,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	struct gfs2_inode *dip = GFS2_I(dir), *ip;
+ 	struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode);
+ 	struct gfs2_glock *io_gl;
+-	int error;
++	int error, dealloc_error;
+ 	u32 aflags = 0;
+ 	unsigned blocks = 1;
+ 	struct gfs2_diradd da = { .bh = NULL, .save_loc = 1, };
++	bool xattr_initialized = false;
+ 
+ 	if (!name->len || name->len > GFS2_FNAMESIZE)
+ 		return -ENAMETOOLONG;
+@@ -659,7 +728,8 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	if (!IS_ERR(inode)) {
+ 		if (S_ISDIR(inode->i_mode)) {
+ 			iput(inode);
+-			inode = ERR_PTR(-EISDIR);
++			inode = NULL;
++			error = -EISDIR;
+ 			goto fail_gunlock;
+ 		}
+ 		d_instantiate(dentry, inode);
+@@ -744,11 +814,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 
+ 	error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl);
+ 	if (error)
+-		goto fail_free_inode;
++		goto fail_dealloc_inode;
+ 
+ 	error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_iopen_glops, CREATE, &io_gl);
+ 	if (error)
+-		goto fail_free_inode;
++		goto fail_dealloc_inode;
+ 	gfs2_cancel_delete_work(io_gl);
+ 	io_gl->gl_no_formal_ino = ip->i_no_formal_ino;
+ 
+@@ -772,8 +842,10 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	if (error)
+ 		goto fail_gunlock3;
+ 
+-	if (blocks > 1)
++	if (blocks > 1) {
+ 		gfs2_init_xattr(ip);
++		xattr_initialized = true;
++	}
+ 	init_dinode(dip, ip, symname);
+ 	gfs2_trans_end(sdp);
+ 
+@@ -828,6 +900,18 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	gfs2_glock_dq_uninit(&ip->i_iopen_gh);
+ fail_gunlock2:
+ 	gfs2_glock_put(io_gl);
++fail_dealloc_inode:
++	set_bit(GIF_ALLOC_FAILED, &ip->i_flags);
++	dealloc_error = 0;
++	if (ip->i_eattr)
++		dealloc_error = gfs2_ea_dealloc(ip, xattr_initialized);
++	clear_nlink(inode);
++	mark_inode_dirty(inode);
++	if (!dealloc_error)
++		dealloc_error = gfs2_dinode_dealloc(ip);
++	if (dealloc_error)
++		fs_warn(sdp, "%s: %d\n", __func__, dealloc_error);
++	ip->i_no_addr = 0;
+ fail_free_inode:
+ 	if (ip->i_gl) {
+ 		gfs2_glock_put(ip->i_gl);
+@@ -842,10 +926,6 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ 	gfs2_dir_no_add(&da);
+ 	gfs2_glock_dq_uninit(&d_gh);
+ 	if (!IS_ERR_OR_NULL(inode)) {
+-		set_bit(GIF_ALLOC_FAILED, &ip->i_flags);
+-		clear_nlink(inode);
+-		if (ip->i_no_addr)
+-			mark_inode_dirty(inode);
+ 		if (inode->i_state & I_NEW)
+ 			iget_failed(inode);
+ 		else
+diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h
+index 9e5e1622d50a60..eafe123617e698 100644
+--- a/fs/gfs2/inode.h
++++ b/fs/gfs2/inode.h
+@@ -92,6 +92,7 @@ struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned type,
+ struct inode *gfs2_lookup_by_inum(struct gfs2_sbd *sdp, u64 no_addr,
+ 				  u64 no_formal_ino,
+ 				  unsigned int blktype);
++int gfs2_dinode_dealloc(struct gfs2_inode *ip);
+ 
+ struct inode *gfs2_lookupi(struct inode *dir, const struct qstr *name,
+ 			   int is_root);
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index f9c5089783d24c..115c4ac457e90a 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -31,6 +31,7 @@
+ #include "dir.h"
+ #include "trace_gfs2.h"
+ #include "trans.h"
++#include "aops.h"
+ 
+ static void gfs2_log_shutdown(struct gfs2_sbd *sdp);
+ 
+@@ -131,7 +132,11 @@ __acquires(&sdp->sd_ail_lock)
+ 		if (!mapping)
+ 			continue;
+ 		spin_unlock(&sdp->sd_ail_lock);
+-		ret = mapping->a_ops->writepages(mapping, wbc);
++		BUG_ON(GFS2_SB(mapping->host) != sdp);
++		if (gfs2_is_jdata(GFS2_I(mapping->host)))
++			ret = gfs2_jdata_writeback(mapping, wbc);
++		else
++			ret = mapping->a_ops->writepages(mapping, wbc);
+ 		if (need_resched()) {
+ 			blk_finish_plug(plug);
+ 			cond_resched();
+diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
+index 198cc705663755..9dc8885c95d072 100644
+--- a/fs/gfs2/meta_io.c
++++ b/fs/gfs2/meta_io.c
+@@ -132,7 +132,7 @@ struct buffer_head *gfs2_getbuf(struct gfs2_glock *gl, u64 blkno, int create)
+ 	unsigned int bufnum;
+ 
+ 	if (mapping == NULL)
+-		mapping = &sdp->sd_aspace;
++		mapping = gfs2_aspace(sdp);
+ 
+ 	shift = PAGE_SHIFT - sdp->sd_sb.sb_bsize_shift;
+ 	index = blkno >> shift;             /* convert block to page */
+diff --git a/fs/gfs2/meta_io.h b/fs/gfs2/meta_io.h
+index 831d988c2ceb74..b7c8a6684d0249 100644
+--- a/fs/gfs2/meta_io.h
++++ b/fs/gfs2/meta_io.h
+@@ -44,9 +44,7 @@ static inline struct gfs2_sbd *gfs2_mapping2sbd(struct address_space *mapping)
+ 		struct gfs2_glock_aspace *gla =
+ 			container_of(mapping, struct gfs2_glock_aspace, mapping);
+ 		return gla->glock.gl_name.ln_sbd;
+-	} else if (mapping->a_ops == &gfs2_rgrp_aops)
+-		return container_of(mapping, struct gfs2_sbd, sd_aspace);
+-	else
++	} else
+ 		return inode->i_sb->s_fs_info;
+ }
+ 
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index e83d293c361423..4a0f7de41b2b2f 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -64,15 +64,17 @@ static void gfs2_tune_init(struct gfs2_tune *gt)
+ 
+ void free_sbd(struct gfs2_sbd *sdp)
+ {
++	struct super_block *sb = sdp->sd_vfs;
++
+ 	if (sdp->sd_lkstats)
+ 		free_percpu(sdp->sd_lkstats);
++	sb->s_fs_info = NULL;
+ 	kfree(sdp);
+ }
+ 
+ static struct gfs2_sbd *init_sbd(struct super_block *sb)
+ {
+ 	struct gfs2_sbd *sdp;
+-	struct address_space *mapping;
+ 
+ 	sdp = kzalloc(sizeof(struct gfs2_sbd), GFP_KERNEL);
+ 	if (!sdp)
+@@ -109,16 +111,6 @@ static struct gfs2_sbd *init_sbd(struct super_block *sb)
+ 
+ 	INIT_LIST_HEAD(&sdp->sd_sc_inodes_list);
+ 
+-	mapping = &sdp->sd_aspace;
+-
+-	address_space_init_once(mapping);
+-	mapping->a_ops = &gfs2_rgrp_aops;
+-	mapping->host = sb->s_bdev->bd_mapping->host;
+-	mapping->flags = 0;
+-	mapping_set_gfp_mask(mapping, GFP_NOFS);
+-	mapping->i_private_data = NULL;
+-	mapping->writeback_index = 0;
+-
+ 	spin_lock_init(&sdp->sd_log_lock);
+ 	atomic_set(&sdp->sd_log_pinned, 0);
+ 	INIT_LIST_HEAD(&sdp->sd_log_revokes);
+@@ -1135,6 +1127,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	int silent = fc->sb_flags & SB_SILENT;
+ 	struct gfs2_sbd *sdp;
+ 	struct gfs2_holder mount_gh;
++	struct address_space *mapping;
+ 	int error;
+ 
+ 	sdp = init_sbd(sb);
+@@ -1156,6 +1149,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	sb->s_flags |= SB_NOSEC;
+ 	sb->s_magic = GFS2_MAGIC;
+ 	sb->s_op = &gfs2_super_ops;
++
+ 	sb->s_d_op = &gfs2_dops;
+ 	sb->s_export_op = &gfs2_export_ops;
+ 	sb->s_qcop = &gfs2_quotactl_ops;
+@@ -1181,9 +1175,21 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		sdp->sd_tune.gt_statfs_quantum = 30;
+ 	}
+ 
++	/* Set up an address space for metadata writes */
++	sdp->sd_inode = new_inode(sb);
++	error = -ENOMEM;
++	if (!sdp->sd_inode)
++		goto fail_free;
++	sdp->sd_inode->i_ino = GFS2_BAD_INO;
++	sdp->sd_inode->i_size = OFFSET_MAX;
++
++	mapping = gfs2_aspace(sdp);
++	mapping->a_ops = &gfs2_rgrp_aops;
++	mapping_set_gfp_mask(mapping, GFP_NOFS);
++
+ 	error = init_names(sdp, silent);
+ 	if (error)
+-		goto fail_free;
++		goto fail_iput;
+ 
+ 	snprintf(sdp->sd_fsname, sizeof(sdp->sd_fsname), "%s", sdp->sd_table_name);
+ 
+@@ -1192,7 +1198,7 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0,
+ 			sdp->sd_fsname);
+ 	if (!sdp->sd_glock_wq)
+-		goto fail_free;
++		goto fail_iput;
+ 
+ 	sdp->sd_delete_wq = alloc_workqueue("gfs2-delete/%s",
+ 			WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, sdp->sd_fsname);
+@@ -1309,9 +1315,10 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ fail_glock_wq:
+ 	if (sdp->sd_glock_wq)
+ 		destroy_workqueue(sdp->sd_glock_wq);
++fail_iput:
++	iput(sdp->sd_inode);
+ fail_free:
+ 	free_sbd(sdp);
+-	sb->s_fs_info = NULL;
+ 	return error;
+ }
+ 
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 44e5658b896c88..0bd7827e6371e2 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -648,7 +648,7 @@ static void gfs2_put_super(struct super_block *sb)
+ 	gfs2_jindex_free(sdp);
+ 	/*  Take apart glock structures and buffer lists  */
+ 	gfs2_gl_hash_clear(sdp);
+-	truncate_inode_pages_final(&sdp->sd_aspace);
++	iput(sdp->sd_inode);
+ 	gfs2_delete_debugfs_file(sdp);
+ 
+ 	gfs2_sys_fs_del(sdp);
+@@ -674,7 +674,7 @@ static int gfs2_sync_fs(struct super_block *sb, int wait)
+ 	return sdp->sd_log_error;
+ }
+ 
+-static int gfs2_do_thaw(struct gfs2_sbd *sdp)
++static int gfs2_do_thaw(struct gfs2_sbd *sdp, enum freeze_holder who)
+ {
+ 	struct super_block *sb = sdp->sd_vfs;
+ 	int error;
+@@ -682,7 +682,7 @@ static int gfs2_do_thaw(struct gfs2_sbd *sdp)
+ 	error = gfs2_freeze_lock_shared(sdp);
+ 	if (error)
+ 		goto fail;
+-	error = thaw_super(sb, FREEZE_HOLDER_USERSPACE);
++	error = thaw_super(sb, who);
+ 	if (!error)
+ 		return 0;
+ 
+@@ -710,7 +710,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ 	gfs2_freeze_unlock(sdp);
+ 	set_bit(SDF_FROZEN, &sdp->sd_flags);
+ 
+-	error = gfs2_do_thaw(sdp);
++	error = gfs2_do_thaw(sdp, FREEZE_HOLDER_USERSPACE);
+ 	if (error)
+ 		goto out;
+ 
+@@ -728,6 +728,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ /**
+  * gfs2_freeze_super - prevent further writes to the filesystem
+  * @sb: the VFS structure for the filesystem
++ * @who: freeze flags
+  *
+  */
+ 
+@@ -744,7 +745,7 @@ static int gfs2_freeze_super(struct super_block *sb, enum freeze_holder who)
+ 	}
+ 
+ 	for (;;) {
+-		error = freeze_super(sb, FREEZE_HOLDER_USERSPACE);
++		error = freeze_super(sb, who);
+ 		if (error) {
+ 			fs_info(sdp, "GFS2: couldn't freeze filesystem: %d\n",
+ 				error);
+@@ -758,7 +759,7 @@ static int gfs2_freeze_super(struct super_block *sb, enum freeze_holder who)
+ 			break;
+ 		}
+ 
+-		error = gfs2_do_thaw(sdp);
++		error = gfs2_do_thaw(sdp, who);
+ 		if (error)
+ 			goto out;
+ 
+@@ -796,6 +797,7 @@ static int gfs2_freeze_fs(struct super_block *sb)
+ /**
+  * gfs2_thaw_super - reallow writes to the filesystem
+  * @sb: the VFS structure for the filesystem
++ * @who: freeze flags
+  *
+  */
+ 
+@@ -814,7 +816,7 @@ static int gfs2_thaw_super(struct super_block *sb, enum freeze_holder who)
+ 	atomic_inc(&sb->s_active);
+ 	gfs2_freeze_unlock(sdp);
+ 
+-	error = gfs2_do_thaw(sdp);
++	error = gfs2_do_thaw(sdp, who);
+ 
+ 	if (!error) {
+ 		clear_bit(SDF_FREEZE_INITIATOR, &sdp->sd_flags);
+@@ -1173,74 +1175,6 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root)
+ 	return 0;
+ }
+ 
+-static void gfs2_final_release_pages(struct gfs2_inode *ip)
+-{
+-	struct inode *inode = &ip->i_inode;
+-	struct gfs2_glock *gl = ip->i_gl;
+-
+-	if (unlikely(!gl)) {
+-		/* This can only happen during incomplete inode creation. */
+-		BUG_ON(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags));
+-		return;
+-	}
+-
+-	truncate_inode_pages(gfs2_glock2aspace(gl), 0);
+-	truncate_inode_pages(&inode->i_data, 0);
+-
+-	if (atomic_read(&gl->gl_revokes) == 0) {
+-		clear_bit(GLF_LFLUSH, &gl->gl_flags);
+-		clear_bit(GLF_DIRTY, &gl->gl_flags);
+-	}
+-}
+-
+-static int gfs2_dinode_dealloc(struct gfs2_inode *ip)
+-{
+-	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+-	struct gfs2_rgrpd *rgd;
+-	struct gfs2_holder gh;
+-	int error;
+-
+-	if (gfs2_get_inode_blocks(&ip->i_inode) != 1) {
+-		gfs2_consist_inode(ip);
+-		return -EIO;
+-	}
+-
+-	gfs2_rindex_update(sdp);
+-
+-	error = gfs2_quota_hold(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE);
+-	if (error)
+-		return error;
+-
+-	rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr, 1);
+-	if (!rgd) {
+-		gfs2_consist_inode(ip);
+-		error = -EIO;
+-		goto out_qs;
+-	}
+-
+-	error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE,
+-				   LM_FLAG_NODE_SCOPE, &gh);
+-	if (error)
+-		goto out_qs;
+-
+-	error = gfs2_trans_begin(sdp, RES_RG_BIT + RES_STATFS + RES_QUOTA,
+-				 sdp->sd_jdesc->jd_blocks);
+-	if (error)
+-		goto out_rg_gunlock;
+-
+-	gfs2_free_di(rgd, ip);
+-
+-	gfs2_final_release_pages(ip);
+-
+-	gfs2_trans_end(sdp);
+-
+-out_rg_gunlock:
+-	gfs2_glock_dq_uninit(&gh);
+-out_qs:
+-	gfs2_quota_unhold(ip);
+-	return error;
+-}
+-
+ /**
+  * gfs2_glock_put_eventually
+  * @gl:	The glock to put
+@@ -1326,9 +1260,6 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+ 	struct gfs2_sbd *sdp = sb->s_fs_info;
+ 	int ret;
+ 
+-	if (unlikely(test_bit(GIF_ALLOC_FAILED, &ip->i_flags)))
+-		goto should_delete;
+-
+ 	if (gfs2_holder_initialized(&ip->i_iopen_gh) &&
+ 	    test_bit(GLF_DEFER_DELETE, &ip->i_iopen_gh.gh_gl->gl_flags))
+ 		return EVICT_SHOULD_DEFER_DELETE;
+@@ -1358,7 +1289,6 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+ 	if (inode->i_nlink)
+ 		return EVICT_SHOULD_SKIP_DELETE;
+ 
+-should_delete:
+ 	if (gfs2_holder_initialized(&ip->i_iopen_gh) &&
+ 	    test_bit(HIF_HOLDER, &ip->i_iopen_gh.gh_iflags))
+ 		return gfs2_upgrade_iopen_glock(inode);
+@@ -1382,7 +1312,7 @@ static int evict_unlinked_inode(struct inode *inode)
+ 	}
+ 
+ 	if (ip->i_eattr) {
+-		ret = gfs2_ea_dealloc(ip);
++		ret = gfs2_ea_dealloc(ip, true);
+ 		if (ret)
+ 			goto out;
+ 	}
+diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c
+index ecc699f8d9fcaa..6286183021022a 100644
+--- a/fs/gfs2/sys.c
++++ b/fs/gfs2/sys.c
+@@ -764,7 +764,6 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
+ 	fs_err(sdp, "error %d adding sysfs files\n", error);
+ 	kobject_put(&sdp->sd_kobj);
+ 	wait_for_completion(&sdp->sd_kobj_unregister);
+-	sb->s_fs_info = NULL;
+ 	return error;
+ }
+ 
+diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c
+index f8ae2c666fd609..075f7e9abe47ca 100644
+--- a/fs/gfs2/trans.c
++++ b/fs/gfs2/trans.c
+@@ -226,6 +226,27 @@ void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head *bh)
+ 	unlock_buffer(bh);
+ }
+ 
++void gfs2_trans_add_databufs(struct gfs2_glock *gl, struct folio *folio,
++			     size_t from, size_t len)
++{
++	struct buffer_head *head = folio_buffers(folio);
++	unsigned int bsize = head->b_size;
++	struct buffer_head *bh;
++	size_t to = from + len;
++	size_t start, end;
++
++	for (bh = head, start = 0; bh != head || !start;
++	     bh = bh->b_this_page, start = end) {
++		end = start + bsize;
++		if (end <= from)
++			continue;
++		if (start >= to)
++			break;
++		set_buffer_uptodate(bh);
++		gfs2_trans_add_data(gl, bh);
++	}
++}
++
+ void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh)
+ {
+ 
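
The gfs2_trans_add_databufs() helper added above walks a folio's
buffer-head ring and pulls into the transaction only the blocks that
intersect the dirtied byte range [from, from + len).  A minimal
userspace sketch of the same interval arithmetic follows; the block and
folio sizes and the per-block action are hypothetical stand-ins, not
gfs2 API:

    #include <stdio.h>
    #include <stddef.h>

    static void mark_block(size_t index)   /* stands in for gfs2_trans_add_data() */
    {
        printf("block %zu joins the transaction\n", index);
    }

    static void add_databufs(size_t folio_size, size_t bsize,
                             size_t from, size_t len)
    {
        size_t to = from + len;
        size_t start, end, index = 0;

        for (start = 0; start < folio_size; start = end, index++) {
            end = start + bsize;
            if (end <= from)
                continue;   /* block ends before the range */
            if (start >= to)
                break;      /* block starts after the range */
            mark_block(index);
        }
    }

    int main(void)
    {
        /* 4 KiB folio, 512-byte blocks, dirty bytes [1000, 2500) */
        add_databufs(4096, 512, 1000, 1500);
        return 0;
    }

With those numbers blocks 1 through 4 are marked: the first and last of
them only partially overlap the range, which is why the kernel loop
tests both edges before adding a buffer to the transaction.
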
+diff --git a/fs/gfs2/trans.h b/fs/gfs2/trans.h
+index f8ce5302280d31..790c55f59e6121 100644
+--- a/fs/gfs2/trans.h
++++ b/fs/gfs2/trans.h
+@@ -42,6 +42,8 @@ int gfs2_trans_begin(struct gfs2_sbd *sdp, unsigned int blocks,
+ 
+ void gfs2_trans_end(struct gfs2_sbd *sdp);
+ void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head *bh);
++void gfs2_trans_add_databufs(struct gfs2_glock *gl, struct folio *folio,
++			     size_t from, size_t len);
+ void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh);
+ void gfs2_trans_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd);
+ void gfs2_trans_remove_revoke(struct gfs2_sbd *sdp, u64 blkno, unsigned int len);
+diff --git a/fs/gfs2/xattr.c b/fs/gfs2/xattr.c
+index 17ae5070a90e67..df9c93de94c793 100644
+--- a/fs/gfs2/xattr.c
++++ b/fs/gfs2/xattr.c
+@@ -1383,7 +1383,7 @@ static int ea_dealloc_indirect(struct gfs2_inode *ip)
+ 	return error;
+ }
+ 
+-static int ea_dealloc_block(struct gfs2_inode *ip)
++static int ea_dealloc_block(struct gfs2_inode *ip, bool initialized)
+ {
+ 	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
+ 	struct gfs2_rgrpd *rgd;
+@@ -1416,7 +1416,7 @@ static int ea_dealloc_block(struct gfs2_inode *ip)
+ 	ip->i_eattr = 0;
+ 	gfs2_add_inode_blocks(&ip->i_inode, -1);
+ 
+-	if (likely(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) {
++	if (initialized) {
+ 		error = gfs2_meta_inode_buffer(ip, &dibh);
+ 		if (!error) {
+ 			gfs2_trans_add_meta(ip->i_gl, dibh);
+@@ -1435,11 +1435,12 @@ static int ea_dealloc_block(struct gfs2_inode *ip)
+ /**
+  * gfs2_ea_dealloc - deallocate the extended attribute fork
+  * @ip: the inode
++ * @initialized: xattrs have been initialized
+  *
+  * Returns: errno
+  */
+ 
+-int gfs2_ea_dealloc(struct gfs2_inode *ip)
++int gfs2_ea_dealloc(struct gfs2_inode *ip, bool initialized)
+ {
+ 	int error;
+ 
+@@ -1451,7 +1452,7 @@ int gfs2_ea_dealloc(struct gfs2_inode *ip)
+ 	if (error)
+ 		return error;
+ 
+-	if (likely(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) {
++	if (initialized) {
+ 		error = ea_foreach(ip, ea_dealloc_unstuffed, NULL);
+ 		if (error)
+ 			goto out_quota;
+@@ -1463,7 +1464,7 @@ int gfs2_ea_dealloc(struct gfs2_inode *ip)
+ 		}
+ 	}
+ 
+-	error = ea_dealloc_block(ip);
++	error = ea_dealloc_block(ip, initialized);
+ 
+ out_quota:
+ 	gfs2_quota_unhold(ip);
+diff --git a/fs/gfs2/xattr.h b/fs/gfs2/xattr.h
+index eb12eb7e37c194..3c9788e0e13750 100644
+--- a/fs/gfs2/xattr.h
++++ b/fs/gfs2/xattr.h
+@@ -54,7 +54,7 @@ int __gfs2_xattr_set(struct inode *inode, const char *name,
+ 		     const void *value, size_t size,
+ 		     int flags, int type);
+ ssize_t gfs2_listxattr(struct dentry *dentry, char *buffer, size_t size);
+-int gfs2_ea_dealloc(struct gfs2_inode *ip);
++int gfs2_ea_dealloc(struct gfs2_inode *ip, bool initialized);
+ 
+ /* Exported to acl.c */
+ 
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 5b08bd417b2872..0ac474888a02e9 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1675,6 +1675,8 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+ 		ioend_flags |= IOMAP_IOEND_UNWRITTEN;
+ 	if (wpc->iomap.flags & IOMAP_F_SHARED)
+ 		ioend_flags |= IOMAP_IOEND_SHARED;
++	if (folio_test_dropbehind(folio))
++		ioend_flags |= IOMAP_IOEND_DONTCACHE;
+ 	if (pos == wpc->iomap.offset && (wpc->iomap.flags & IOMAP_F_BOUNDARY))
+ 		ioend_flags |= IOMAP_IOEND_BOUNDARY;
+ 
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index fc70d72c3fe805..43487fa83eaea1 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -1580,8 +1580,9 @@ void kernfs_break_active_protection(struct kernfs_node *kn)
+  * invoked before finishing the kernfs operation.  Note that while this
+  * function restores the active reference, it doesn't and can't actually
+  * restore the active protection - @kn may already or be in the process of
+- * being removed.  Once kernfs_break_active_protection() is invoked, that
+- * protection is irreversibly gone for the kernfs operation instance.
++ * being drained and removed.  Once kernfs_break_active_protection() is
++ * invoked, that protection is irreversibly gone for the kernfs operation
++ * instance.
+  *
+  * While this function may be called at any point after
+  * kernfs_break_active_protection() is invoked, its most useful location
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index 66fe8fe41f0605..a6c692cac61659 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -778,8 +778,9 @@ bool kernfs_should_drain_open_files(struct kernfs_node *kn)
+ 	/*
+ 	 * @kn being deactivated guarantees that @kn->attr.open can't change
+ 	 * beneath us making the lockless test below safe.
++	 * Callers that have gone through kernfs_unbreak_active_protection()
++	 * may be counted in kn->active by now; do not WARN_ON because of them.
+ 	 */
+-	WARN_ON_ONCE(atomic_read(&kn->active) != KN_DEACTIVATED_BIAS);
+ 
+ 	rcu_read_lock();
+ 	on = rcu_dereference(kn->attr.open);
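
For context on the assertion dropped above: kernfs_break_active_protection()
exists so that an operation running with an active reference can remove its
own node without deadlocking against kernfs_drain(), and after the matching
kernfs_unbreak_active_protection() the caller is counted in kn->active
again - precisely the state the old WARN_ON_ONCE() refused to accept.  A
schematic of the usual calling pattern, not taken from this patch
(teardown_object() is a hypothetical stand-in):

    static ssize_t remove_store(struct kobject *kobj, struct kobj_attribute *attr,
                                const char *buf, size_t count)
    {
        struct kernfs_node *kn;

        kn = kernfs_find_and_get(kobj->sd, attr->attr.name);
        if (!kn)
            return -ENODEV;

        /* Step out of the active-reference count so that draining the
         * node does not wait for this very handler to return. */
        kernfs_break_active_protection(kn);
        teardown_object(kobj);          /* may remove kn itself */
        kernfs_unbreak_active_protection(kn);

        kernfs_put(kn);
        return count;
    }
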
+diff --git a/fs/mount.h b/fs/mount.h
+index 7aecf2a6047232..ad7173037924a8 100644
+--- a/fs/mount.h
++++ b/fs/mount.h
+@@ -7,10 +7,6 @@
+ 
+ extern struct list_head notify_list;
+ 
+-typedef __u32 __bitwise mntns_flags_t;
+-
+-#define MNTNS_PROPAGATING	((__force mntns_flags_t)(1 << 0))
+-
+ struct mnt_namespace {
+ 	struct ns_common	ns;
+ 	struct mount *	root;
+@@ -37,7 +33,6 @@ struct mnt_namespace {
+ 	struct rb_node		mnt_ns_tree_node; /* node in the mnt_ns_tree */
+ 	struct list_head	mnt_ns_list; /* entry in the sequential list of mounts namespace */
+ 	refcount_t		passive; /* number references not pinning @mounts */
+-	mntns_flags_t		mntns_flags;
+ } __randomize_layout;
+ 
+ struct mnt_pcp {
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1b466c54a357d1..d6ac7e533b0212 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2424,7 +2424,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ 	namespace_unlock();
+ }
+ 
+-bool has_locked_children(struct mount *mnt, struct dentry *dentry)
++static bool __has_locked_children(struct mount *mnt, struct dentry *dentry)
+ {
+ 	struct mount *child;
+ 
+@@ -2438,6 +2438,16 @@ bool has_locked_children(struct mount *mnt, struct dentry *dentry)
+ 	return false;
+ }
+ 
++bool has_locked_children(struct mount *mnt, struct dentry *dentry)
++{
++	bool res;
++
++	read_seqlock_excl(&mount_lock);
++	res = __has_locked_children(mnt, dentry);
++	read_sequnlock_excl(&mount_lock);
++	return res;
++}
++
+ /*
+  * Check that there aren't references to earlier/same mount namespaces in the
+  * specified subtree.  Such references can act as pins for mount namespaces
+@@ -2482,23 +2492,27 @@ struct vfsmount *clone_private_mount(const struct path *path)
+ 	if (IS_MNT_UNBINDABLE(old_mnt))
+ 		return ERR_PTR(-EINVAL);
+ 
+-	if (mnt_has_parent(old_mnt)) {
+-		if (!check_mnt(old_mnt))
+-			return ERR_PTR(-EINVAL);
+-	} else {
+-		if (!is_mounted(&old_mnt->mnt))
+-			return ERR_PTR(-EINVAL);
+-
+-		/* Make sure this isn't something purely kernel internal. */
+-		if (!is_anon_ns(old_mnt->mnt_ns))
++	/*
++	 * Make sure the source mount is acceptable.
++	 * Anything mounted in our mount namespace is allowed.
++	 * Otherwise, it must be the root of an anonymous mount
++	 * namespace, and we need to make sure no namespace
++	 * loops get created.
++	 */
++	if (!check_mnt(old_mnt)) {
++		if (!is_mounted(&old_mnt->mnt) ||
++			!is_anon_ns(old_mnt->mnt_ns) ||
++			mnt_has_parent(old_mnt))
+ 			return ERR_PTR(-EINVAL);
+ 
+-		/* Make sure we don't create mount namespace loops. */
+ 		if (!check_for_nsfs_mounts(old_mnt))
+ 			return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if (has_locked_children(old_mnt, path->dentry))
++        if (!ns_capable(old_mnt->mnt_ns->user_ns, CAP_SYS_ADMIN))
++        if (!ns_capable(old_mnt->mnt_ns->user_ns, CAP_SYS_ADMIN))
++
++	if (__has_locked_children(old_mnt, path->dentry))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	new_mnt = clone_mnt(old_mnt, path->dentry, CL_PRIVATE);
+@@ -2944,6 +2958,10 @@ static int do_change_type(struct path *path, int ms_flags)
+ 		return -EINVAL;
+ 
+ 	namespace_lock();
++	if (!check_mnt(mnt)) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
+ 	if (type == MS_SHARED) {
+ 		err = invent_group_ids(mnt, recurse);
+ 		if (err)
+@@ -3035,7 +3053,7 @@ static struct mount *__do_loopback(struct path *old_path, int recurse)
+ 	if (!may_copy_tree(old_path))
+ 		return mnt;
+ 
+-	if (!recurse && has_locked_children(old, old_path->dentry))
++	if (!recurse && __has_locked_children(old, old_path->dentry))
+ 		return mnt;
+ 
+ 	if (recurse)
+@@ -3428,7 +3446,7 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+ 		goto out;
+ 
+ 	/* From mount should not have locked children in place of To's root */
+-	if (has_locked_children(from, to->mnt.mnt_root))
++	if (__has_locked_children(from, to->mnt.mnt_root))
+ 		goto out;
+ 
+ 	/* Setting sharing groups is only allowed on private mounts */
+@@ -3442,7 +3460,7 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+ 	if (IS_MNT_SLAVE(from)) {
+ 		struct mount *m = from->mnt_master;
+ 
+-		list_add(&to->mnt_slave, &m->mnt_slave_list);
++		list_add(&to->mnt_slave, &from->mnt_slave);
+ 		to->mnt_master = m;
+ 	}
+ 
+@@ -3467,18 +3485,25 @@ static int do_set_group(struct path *from_path, struct path *to_path)
+  * Check if path is overmounted, i.e., if there's a mount on top of
+  * @path->mnt with @path->dentry as mountpoint.
+  *
+- * Context: This function expects namespace_lock() to be held.
++ * Context: namespace_sem must be held at least shared.
++ * MUST NOT be called under lock_mount_hash() (there one should just
++ * call __lookup_mnt() and check if it returns NULL).
+  * Return: If path is overmounted true is returned, false if not.
+  */
+ static inline bool path_overmounted(const struct path *path)
+ {
++	unsigned seq = read_seqbegin(&mount_lock);
++	bool no_child;
++
+ 	rcu_read_lock();
+-	if (unlikely(__lookup_mnt(path->mnt, path->dentry))) {
+-		rcu_read_unlock();
+-		return true;
+-	}
++	no_child = !__lookup_mnt(path->mnt, path->dentry);
+ 	rcu_read_unlock();
+-	return false;
++	if (need_seqretry(&mount_lock, seq)) {
++		read_seqlock_excl(&mount_lock);
++		no_child = !__lookup_mnt(path->mnt, path->dentry);
++		read_sequnlock_excl(&mount_lock);
++	}
++	return unlikely(!no_child);
+ }
+ 
+ /**
+@@ -3637,46 +3662,41 @@ static int do_move_mount(struct path *old_path,
+ 	ns = old->mnt_ns;
+ 
+ 	err = -EINVAL;
+-	if (!may_use_mount(p))
+-		goto out;
+-
+ 	/* The thing moved must be mounted... */
+ 	if (!is_mounted(&old->mnt))
+ 		goto out;
+ 
+-	/* ... and either ours or the root of anon namespace */
+-	if (!(attached ? check_mnt(old) : is_anon_ns(ns)))
+-		goto out;
+-
+-	if (is_anon_ns(ns)) {
++	if (check_mnt(old)) {
++		/* if the source is in our namespace... */
++		/* ... it should be detachable from parent */
++		if (!mnt_has_parent(old) || IS_MNT_LOCKED(old))
++			goto out;
++		/* ... and the target should be in our namespace */
++		if (!check_mnt(p))
++			goto out;
++	} else {
+ 		/*
+-		 * Ending up with two files referring to the root of the
+-		 * same anonymous mount namespace would cause an error
+-		 * as this would mean trying to move the same mount
+-		 * twice into the mount tree which would be rejected
+-		 * later. But be explicit about it right here.
++		 * otherwise the source must be the root of some anon namespace.
++		 * AV: check for mount being root of an anon namespace is worth
++		 * an inlined predicate...
+ 		 */
+-		if ((is_anon_ns(p->mnt_ns) && ns == p->mnt_ns))
++		if (!is_anon_ns(ns) || mnt_has_parent(old))
+ 			goto out;
+-
+ 		/*
+-		 * If this is an anonymous mount tree ensure that mount
+-		 * propagation can detect mounts that were just
+-		 * propagated to the target mount tree so we don't
+-		 * propagate onto them.
++		 * Bail out early if the target is within the same namespace -
++		 * subsequent checks would've rejected that, but they lose
++		 * some corner cases if we check it early.
+ 		 */
+-		ns->mntns_flags |= MNTNS_PROPAGATING;
+-	} else if (is_anon_ns(p->mnt_ns)) {
++		if (ns == p->mnt_ns)
++			goto out;
+ 		/*
+-		 * Don't allow moving an attached mount tree to an
+-		 * anonymous mount tree.
++		 * Target should be either in our namespace or in an acceptable
++		 * anon namespace, sensu check_anonymous_mnt().
+ 		 */
+-		goto out;
++		if (!may_use_mount(p))
++			goto out;
+ 	}
+ 
+-	if (old->mnt.mnt_flags & MNT_LOCKED)
+-		goto out;
+-
+ 	if (!path_mounted(old_path))
+ 		goto out;
+ 
+@@ -3722,8 +3742,6 @@ static int do_move_mount(struct path *old_path,
+ 	if (attached)
+ 		put_mountpoint(old_mp);
+ out:
+-	if (is_anon_ns(ns))
+-		ns->mntns_flags &= ~MNTNS_PROPAGATING;
+ 	unlock_mount(mp);
+ 	if (!err) {
+ 		if (attached) {
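
The reworked path_overmounted() is a textbook seqlock pattern: one
lockless lookup under the mount_lock sequence count, then a retry under
the exclusive lock only if a writer raced with the read.  A
self-contained userspace model of that retry shape follows; the counter,
mutex and shared value are illustrative stand-ins, not the kernel's
seqlock API, and a production seqlock would guard non-atomic data with
explicit fences:

    #include <stdatomic.h>
    #include <pthread.h>

    static _Atomic unsigned seqcount;   /* even = stable, odd = writer active */
    static pthread_mutex_t excl = PTHREAD_MUTEX_INITIALIZER;
    static _Atomic int shared_value;    /* stands in for the mount hash */

    void write_value(int v)
    {
        pthread_mutex_lock(&excl);
        atomic_fetch_add_explicit(&seqcount, 1, memory_order_relaxed);
        atomic_store_explicit(&shared_value, v, memory_order_relaxed);
        atomic_fetch_add_explicit(&seqcount, 1, memory_order_release);
        pthread_mutex_unlock(&excl);
    }

    int read_value(void)
    {
        unsigned seq = atomic_load_explicit(&seqcount, memory_order_acquire);
        int v = atomic_load_explicit(&shared_value, memory_order_relaxed);

        /* Writer active or the count moved: redo under the lock. */
        if ((seq & 1) ||
            atomic_load_explicit(&seqcount, memory_order_acquire) != seq) {
            pthread_mutex_lock(&excl);
            v = atomic_load_explicit(&shared_value, memory_order_relaxed);
            pthread_mutex_unlock(&excl);
        }
        return v;
    }
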
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 0d1b6d35ff3b80..fd4619275801be 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -262,9 +262,9 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ 				if (ret < 0) {
+ 					subreq->error = ret;
+ 					/* Not queued - release both refs. */
+-					netfs_put_subrequest(subreq, false,
++					netfs_put_subrequest(subreq,
+ 							     netfs_sreq_trace_put_cancel);
+-					netfs_put_subrequest(subreq, false,
++					netfs_put_subrequest(subreq,
+ 							     netfs_sreq_trace_put_cancel);
+ 					break;
+ 				}
+@@ -297,8 +297,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ 			subreq->error = ret;
+ 			trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
+ 			/* Not queued - release both refs. */
+-			netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
+-			netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++			netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
++			netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ 			break;
+ 		}
+ 		size -= slice;
+@@ -312,7 +312,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ 	if (unlikely(size > 0)) {
+ 		smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 		set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+-		netfs_wake_read_collector(rreq);
++		netfs_wake_collector(rreq);
+ 	}
+ 
+ 	/* Defer error return as we may need to wait for outstanding I/O. */
+@@ -365,12 +365,10 @@ void netfs_readahead(struct readahead_control *ractl)
+ 		goto cleanup_free;
+ 	netfs_read_to_pagecache(rreq);
+ 
+-	netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
+-	return;
++	return netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
+-	return;
++	return netfs_put_request(rreq, netfs_rreq_trace_put_failed);
+ }
+ EXPORT_SYMBOL(netfs_readahead);
+ 
+@@ -470,11 +468,11 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
+ 		folio_mark_uptodate(folio);
+ 	}
+ 	folio_unlock(folio);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
++	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -530,11 +528,11 @@ int netfs_read_folio(struct file *file, struct folio *folio)
+ 
+ 	netfs_read_to_pagecache(rreq);
+ 	ret = netfs_wait_for_read(rreq);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret < 0 ? ret : 0;
+ 
+ discard:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
++	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ alloc_error:
+ 	folio_unlock(folio);
+ 	return ret;
+@@ -689,7 +687,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	ret = netfs_wait_for_read(rreq);
+ 	if (ret < 0)
+ 		goto error;
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 
+ have_folio:
+ 	ret = folio_wait_private_2_killable(folio);
+@@ -701,7 +699,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
+ 	return 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
++	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
+ error:
+ 	if (folio) {
+ 		folio_unlock(folio);
+@@ -752,11 +750,11 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 
+ 	netfs_read_to_pagecache(rreq);
+ 	ret = netfs_wait_for_read(rreq);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret < 0 ? ret : 0;
+ 
+ error_put:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
++	netfs_put_request(rreq, netfs_rreq_trace_put_discard);
+ error:
+ 	_leave(" = %d", ret);
+ 	return ret;
+diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
+index b4826360a41112..dbb544e183d13d 100644
+--- a/fs/netfs/buffered_write.c
++++ b/fs/netfs/buffered_write.c
+@@ -386,7 +386,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
+ 		wbc_detach_inode(&wbc);
+ 		if (ret2 == -EIOCBQUEUED)
+ 			return ret2;
+-		if (ret == 0)
++		if (ret == 0 && ret2 < 0)
+ 			ret = ret2;
+ 	}
+ 
+diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
+index 5e3f0aeb51f31f..9902766195d7b2 100644
+--- a/fs/netfs/direct_read.c
++++ b/fs/netfs/direct_read.c
+@@ -85,7 +85,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ 		if (rreq->netfs_ops->prepare_read) {
+ 			ret = rreq->netfs_ops->prepare_read(subreq);
+ 			if (ret < 0) {
+-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++				netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ 				break;
+ 			}
+ 		}
+@@ -103,7 +103,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ 		rreq->netfs_ops->issue_read(subreq);
+ 
+ 		if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
+-			netfs_wait_for_pause(rreq);
++			netfs_wait_for_paused_read(rreq);
+ 		if (test_bit(NETFS_RREQ_FAILED, &rreq->flags))
+ 			break;
+ 		if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) &&
+@@ -115,7 +115,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ 	if (unlikely(size > 0)) {
+ 		smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 		set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+-		netfs_wake_read_collector(rreq);
++		netfs_wake_collector(rreq);
+ 	}
+ 
+ 	return ret;
+@@ -144,7 +144,7 @@ static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ 	ret = netfs_dispatch_unbuffered_reads(rreq);
+ 
+ 	if (!rreq->submitted) {
+-		netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit);
++		netfs_put_request(rreq, netfs_rreq_trace_put_no_submit);
+ 		inode_dio_end(rreq->inode);
+ 		ret = 0;
+ 		goto out;
+@@ -188,7 +188,8 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 
+ 	rreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
+ 				   iocb->ki_pos, orig_count,
+-				   NETFS_DIO_READ);
++				   iocb->ki_flags & IOCB_DIRECT ?
++				   NETFS_DIO_READ : NETFS_UNBUFFERED_READ);
+ 	if (IS_ERR(rreq))
+ 		return PTR_ERR(rreq);
+ 
+@@ -236,7 +237,7 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
+ 	}
+ 
+ out:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	if (ret > 0)
+ 		orig_count -= ret;
+ 	return ret;
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index 42ce53cc216e9d..fa9a5bf3c6d512 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -87,6 +87,8 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 	}
+ 
+ 	__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
++	if (async)
++		__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
+ 
+ 	/* Copy the data into the bounce buffer and encrypt it. */
+ 	// TODO
+@@ -105,19 +107,15 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ 
+ 	if (!async) {
+ 		trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip);
+-		wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
+-			    TASK_UNINTERRUPTIBLE);
+-		ret = wreq->error;
+-		if (ret == 0) {
+-			ret = wreq->transferred;
++		ret = netfs_wait_for_write(wreq);
++		if (ret > 0)
+ 			iocb->ki_pos += ret;
+-		}
+ 	} else {
+ 		ret = -EIOCBQUEUED;
+ 	}
+ 
+ out:
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_unbuffered_write_iter_locked);
+diff --git a/fs/netfs/fscache_io.c b/fs/netfs/fscache_io.c
+index b1722a82c03d3d..e4308457633ca3 100644
+--- a/fs/netfs/fscache_io.c
++++ b/fs/netfs/fscache_io.c
+@@ -192,8 +192,7 @@ EXPORT_SYMBOL(__fscache_clear_page_bits);
+ /*
+  * Deal with the completion of writing the data to the cache.
+  */
+-static void fscache_wreq_done(void *priv, ssize_t transferred_or_error,
+-			      bool was_async)
++static void fscache_wreq_done(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct fscache_write_request *wreq = priv;
+ 
+@@ -202,8 +201,7 @@ static void fscache_wreq_done(void *priv, ssize_t transferred_or_error,
+ 					wreq->set_bits);
+ 
+ 	if (wreq->term_func)
+-		wreq->term_func(wreq->term_func_priv, transferred_or_error,
+-				was_async);
++		wreq->term_func(wreq->term_func_priv, transferred_or_error);
+ 	fscache_end_operation(&wreq->cache_resources);
+ 	kfree(wreq);
+ }
+@@ -255,14 +253,14 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
+ 	return;
+ 
+ abandon_end:
+-	return fscache_wreq_done(wreq, ret, false);
++	return fscache_wreq_done(wreq, ret);
+ abandon_free:
+ 	kfree(wreq);
+ abandon:
+ 	if (using_pgpriv2)
+ 		fscache_clear_page_bits(mapping, start, len, cond);
+ 	if (term_func)
+-		term_func(term_func_priv, ret, false);
++		term_func(term_func_priv, ret);
+ }
+ EXPORT_SYMBOL(__fscache_write_to_cache);
+ 
+diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
+index 1c4f953c3d683b..e2ee9183392b93 100644
+--- a/fs/netfs/internal.h
++++ b/fs/netfs/internal.h
+@@ -23,7 +23,7 @@
+ /*
+  * buffered_read.c
+  */
+-void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async);
++void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
+ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
+ 			     size_t offset, size_t len);
+ 
+@@ -62,6 +62,14 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
+ struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq,
+ 					    enum netfs_folioq_trace trace);
+ void netfs_reset_iter(struct netfs_io_subrequest *subreq);
++void netfs_wake_collector(struct netfs_io_request *rreq);
++void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq);
++void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
++				       struct netfs_io_stream *stream);
++ssize_t netfs_wait_for_read(struct netfs_io_request *rreq);
++ssize_t netfs_wait_for_write(struct netfs_io_request *rreq);
++void netfs_wait_for_paused_read(struct netfs_io_request *rreq);
++void netfs_wait_for_paused_write(struct netfs_io_request *rreq);
+ 
+ /*
+  * objects.c
+@@ -71,9 +79,8 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 					     loff_t start, size_t len,
+ 					     enum netfs_io_origin origin);
+ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+-void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async);
+-void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+-		       enum netfs_rreq_ref_trace what);
++void netfs_clear_subrequests(struct netfs_io_request *rreq);
++void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+ 
+ static inline void netfs_see_request(struct netfs_io_request *rreq,
+@@ -92,11 +99,9 @@ static inline void netfs_see_subrequest(struct netfs_io_subrequest *subreq,
+ /*
+  * read_collect.c
+  */
++bool netfs_read_collection(struct netfs_io_request *rreq);
+ void netfs_read_collection_worker(struct work_struct *work);
+-void netfs_wake_read_collector(struct netfs_io_request *rreq);
+-void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async);
+-ssize_t netfs_wait_for_read(struct netfs_io_request *rreq);
+-void netfs_wait_for_pause(struct netfs_io_request *rreq);
++void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
+ 
+ /*
+  * read_pgpriv2.c
+@@ -176,8 +181,8 @@ static inline void netfs_stat_d(atomic_t *stat)
+  * write_collect.c
+  */
+ int netfs_folio_written_back(struct folio *folio);
++bool netfs_write_collection(struct netfs_io_request *wreq);
+ void netfs_write_collection_worker(struct work_struct *work);
+-void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async);
+ 
+ /*
+  * write_issue.c
+@@ -198,8 +203,8 @@ struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len
+ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
+ 			       struct folio *folio, size_t copied, bool to_page_end,
+ 			       struct folio **writethrough_cache);
+-int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
+-			   struct folio *writethrough_cache);
++ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
++			       struct folio *writethrough_cache);
+ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len);
+ 
+ /*
+@@ -254,6 +259,21 @@ static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
+ 		netfs_group->free(netfs_group);
+ }
+ 
++/*
++ * Clear and wake up a NETFS_RREQ_* flag bit on a request.
++ */
++static inline void netfs_wake_rreq_flag(struct netfs_io_request *rreq,
++					unsigned int rreq_flag,
++					enum netfs_rreq_trace trace)
++{
++	if (test_bit(rreq_flag, &rreq->flags)) {
++		trace_netfs_rreq(rreq, trace);
++		clear_bit_unlock(rreq_flag, &rreq->flags);
++		smp_mb__after_atomic(); /* Set flag before task state */
++		wake_up(&rreq->waitq);
++	}
++}
++
+ /*
+  * fscache-cache.c
+  */
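
The new netfs_wake_rreq_flag() helper encodes a wake-up protocol: the
flag is cleared with release semantics, a full barrier orders the clear
against the waiter's task-state write, and only then is the wait queue
woken - otherwise the waiter could re-test the flag, still see it set,
and sleep through the wake-up.  Userspace conventionally closes the same
lost-wakeup window with a mutex/condvar pair instead of barrier pairing;
a sketch of the equivalent protocol (names are illustrative):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t waitq = PTHREAD_COND_INITIALIZER;
    static atomic_bool in_progress = true;

    /* Waker: the moral equivalent of netfs_wake_rreq_flag(). */
    void complete_request(void)
    {
        atomic_store_explicit(&in_progress, false, memory_order_release);
        pthread_mutex_lock(&lock);      /* orders the store before the wake */
        pthread_cond_broadcast(&waitq);
        pthread_mutex_unlock(&lock);
    }

    /* Waiter: always re-test the condition before sleeping. */
    void wait_request_done(void)
    {
        pthread_mutex_lock(&lock);
        while (atomic_load_explicit(&in_progress, memory_order_acquire))
            pthread_cond_wait(&waitq, &lock);
        pthread_mutex_unlock(&lock);
    }
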
+diff --git a/fs/netfs/main.c b/fs/netfs/main.c
+index 70ecc8f5f21034..3db401d269e7b3 100644
+--- a/fs/netfs/main.c
++++ b/fs/netfs/main.c
+@@ -39,6 +39,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
+ 	[NETFS_READ_GAPS]		= "RG",
+ 	[NETFS_READ_SINGLE]		= "R1",
+ 	[NETFS_READ_FOR_WRITE]		= "RW",
++	[NETFS_UNBUFFERED_READ]		= "UR",
+ 	[NETFS_DIO_READ]		= "DR",
+ 	[NETFS_WRITEBACK]		= "WB",
+ 	[NETFS_WRITEBACK_SINGLE]	= "W1",
+diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
+index 7099aa07737ac0..43b67a28a8fa07 100644
+--- a/fs/netfs/misc.c
++++ b/fs/netfs/misc.c
+@@ -313,3 +313,222 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
+ 	return true;
+ }
+ EXPORT_SYMBOL(netfs_release_folio);
++
++/*
++ * Wake the collection work item.
++ */
++void netfs_wake_collector(struct netfs_io_request *rreq)
++{
++	if (test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags) &&
++	    !test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) {
++		queue_work(system_unbound_wq, &rreq->work);
++	} else {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wake_queue);
++		wake_up(&rreq->waitq);
++	}
++}
++
++/*
++ * Mark a subrequest as no longer being in progress and, if need be, wake the
++ * collector.
++ */
++void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq)
++{
++	struct netfs_io_request *rreq = subreq->rreq;
++	struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr];
++
++	clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
++	smp_mb__after_atomic(); /* Clear IN_PROGRESS before task state */
++
++	/* If we are at the head of the queue, wake up the collector. */
++	if (list_is_first(&subreq->rreq_link, &stream->subrequests) ||
++	    test_bit(NETFS_RREQ_RETRYING, &rreq->flags))
++		netfs_wake_collector(rreq);
++}
++
++/*
++ * Wait for all outstanding I/O in a stream to quiesce.
++ */
++void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
++				       struct netfs_io_stream *stream)
++{
++	struct netfs_io_subrequest *subreq;
++	DEFINE_WAIT(myself);
++
++	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
++		if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
++			continue;
++
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
++		for (;;) {
++			prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
++
++			if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
++				break;
++
++			trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
++			schedule();
++			trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
++		}
++	}
++
++	finish_wait(&rreq->waitq, &myself);
++}
++
++/*
++ * Perform collection in app thread if not offloaded to workqueue.
++ */
++static int netfs_collect_in_app(struct netfs_io_request *rreq,
++				bool (*collector)(struct netfs_io_request *rreq))
++{
++	bool need_collect = false, inactive = true;
++
++	for (int i = 0; i < NR_IO_STREAMS; i++) {
++		struct netfs_io_subrequest *subreq;
++		struct netfs_io_stream *stream = &rreq->io_streams[i];
++
++		if (!stream->active)
++			continue;
++		inactive = false;
++		trace_netfs_collect_stream(rreq, stream);
++		subreq = list_first_entry_or_null(&stream->subrequests,
++						  struct netfs_io_subrequest,
++						  rreq_link);
++		if (subreq &&
++		    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
++		     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
++			need_collect = true;
++			break;
++		}
++	}
++
++	if (!need_collect && !inactive)
++		return 0; /* Sleep */
++
++	__set_current_state(TASK_RUNNING);
++	if (collector(rreq)) {
++		/* Drop the ref from the NETFS_RREQ_IN_PROGRESS flag. */
++		netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
++		return 1; /* Done */
++	}
++
++	if (inactive) {
++		WARN(true, "Failed to collect inactive req R=%08x\n",
++		     rreq->debug_id);
++		cond_resched();
++	}
++	return 2; /* Again */
++}
++
++/*
++ * Wait for a request to complete, successfully or otherwise.
++ */
++static ssize_t netfs_wait_for_request(struct netfs_io_request *rreq,
++				      bool (*collector)(struct netfs_io_request *rreq))
++{
++	DEFINE_WAIT(myself);
++	ssize_t ret;
++
++	for (;;) {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
++		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
++
++		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
++			switch (netfs_collect_in_app(rreq, collector)) {
++			case 0:
++				break;
++			case 1:
++				goto all_collected;
++			case 2:
++				continue;
++			}
++		}
++
++		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
++			break;
++
++		schedule();
++		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
++	}
++
++all_collected:
++	finish_wait(&rreq->waitq, &myself);
++
++	ret = rreq->error;
++	if (ret == 0) {
++		ret = rreq->transferred;
++		switch (rreq->origin) {
++		case NETFS_DIO_READ:
++		case NETFS_DIO_WRITE:
++		case NETFS_READ_SINGLE:
++		case NETFS_UNBUFFERED_READ:
++		case NETFS_UNBUFFERED_WRITE:
++			break;
++		default:
++			if (rreq->submitted < rreq->len) {
++				trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
++				ret = -EIO;
++			}
++			break;
++		}
++	}
++
++	return ret;
++}
++
++ssize_t netfs_wait_for_read(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_request(rreq, netfs_read_collection);
++}
++
++ssize_t netfs_wait_for_write(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_request(rreq, netfs_write_collection);
++}
++
++/*
++ * Wait for a paused operation to unpause or complete in some manner.
++ */
++static void netfs_wait_for_pause(struct netfs_io_request *rreq,
++				 bool (*collector)(struct netfs_io_request *rreq))
++{
++	DEFINE_WAIT(myself);
++
++	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause);
++
++	for (;;) {
++		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
++		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
++
++		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
++			switch (netfs_collect_in_app(rreq, collector)) {
++			case 0:
++				break;
++			case 1:
++				goto all_collected;
++			case 2:
++				continue;
++			}
++		}
++
++		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags) ||
++		    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
++			break;
++
++		schedule();
++		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
++	}
++
++all_collected:
++	finish_wait(&rreq->waitq, &myself);
++}
++
++void netfs_wait_for_paused_read(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_pause(rreq, netfs_read_collection);
++}
++
++void netfs_wait_for_paused_write(struct netfs_io_request *rreq)
++{
++	return netfs_wait_for_pause(rreq, netfs_write_collection);
++}
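
All the wait loops consolidated above share one shape: before sleeping,
the waiting thread offers to run the collector itself, and the tri-state
result of netfs_collect_in_app() - sleep, done, or again - drives the
loop.  A stripped-down model of that control flow, with toy stand-ins
for the request state and the sleep primitive (a real implementation
parks on a wait queue rather than yielding):

    #include <sched.h>

    enum collect { COLLECT_SLEEP, COLLECT_DONE, COLLECT_AGAIN };

    struct toy_request {
        int outstanding;    /* subrequests still in flight */
        int collectable;    /* completed but not yet folded in */
    };

    /* Stand-in for netfs_collect_in_app(): fold in finished work, if any. */
    static enum collect try_collect(struct toy_request *rq)
    {
        if (!rq->collectable)
            return COLLECT_SLEEP;       /* nothing collectable: sleep */
        rq->collectable = 0;            /* "collect" the finished work */
        return rq->outstanding ? COLLECT_AGAIN : COLLECT_DONE;
    }

    /* Shape of netfs_wait_for_request(): collect in-thread, else sleep. */
    static void wait_for_request(struct toy_request *rq)
    {
        for (;;) {
            switch (try_collect(rq)) {
            case COLLECT_DONE:
                return;                 /* all streams drained */
            case COLLECT_AGAIN:
                continue;               /* made progress, reassess */
            case COLLECT_SLEEP:
                sched_yield();          /* toy stand-in for schedule() */
                break;
            }
        }
    }
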
+diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
+index dc6b41ef18b097..31fa0c81e2a43e 100644
+--- a/fs/netfs/objects.c
++++ b/fs/netfs/objects.c
+@@ -10,6 +10,8 @@
+ #include <linux/delay.h>
+ #include "internal.h"
+ 
++static void netfs_free_request(struct work_struct *work);
++
+ /*
+  * Allocate an I/O request and initialise it.
+  */
+@@ -34,6 +36,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 	}
+ 
+ 	memset(rreq, 0, kmem_cache_size(cache));
++	INIT_WORK(&rreq->cleanup_work, netfs_free_request);
+ 	rreq->start	= start;
+ 	rreq->len	= len;
+ 	rreq->origin	= origin;
+@@ -49,13 +52,14 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 	INIT_LIST_HEAD(&rreq->io_streams[0].subrequests);
+ 	INIT_LIST_HEAD(&rreq->io_streams[1].subrequests);
+ 	init_waitqueue_head(&rreq->waitq);
+-	refcount_set(&rreq->ref, 1);
++	refcount_set(&rreq->ref, 2);
+ 
+ 	if (origin == NETFS_READAHEAD ||
+ 	    origin == NETFS_READPAGE ||
+ 	    origin == NETFS_READ_GAPS ||
+ 	    origin == NETFS_READ_SINGLE ||
+ 	    origin == NETFS_READ_FOR_WRITE ||
++	    origin == NETFS_UNBUFFERED_READ ||
+ 	    origin == NETFS_DIO_READ) {
+ 		INIT_WORK(&rreq->work, netfs_read_collection_worker);
+ 		rreq->io_streams[0].avail = true;
+@@ -63,7 +67,9 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 		INIT_WORK(&rreq->work, netfs_write_collection_worker);
+ 	}
+ 
++	/* The IN_PROGRESS flag comes with a ref. */
+ 	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
++
+ 	if (file && file->f_flags & O_NONBLOCK)
+ 		__set_bit(NETFS_RREQ_NONBLOCK, &rreq->flags);
+ 	if (rreq->netfs_ops->init_request) {
+@@ -75,7 +81,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+ 	}
+ 
+ 	atomic_inc(&ctx->io_count);
+-	trace_netfs_rreq_ref(rreq->debug_id, 1, netfs_rreq_trace_new);
++	trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), netfs_rreq_trace_new);
+ 	netfs_proc_add_rreq(rreq);
+ 	netfs_stat(&netfs_n_rh_rreq);
+ 	return rreq;
+@@ -89,7 +95,7 @@ void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace
+ 	trace_netfs_rreq_ref(rreq->debug_id, r + 1, what);
+ }
+ 
+-void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
++void netfs_clear_subrequests(struct netfs_io_request *rreq)
+ {
+ 	struct netfs_io_subrequest *subreq;
+ 	struct netfs_io_stream *stream;
+@@ -101,8 +107,7 @@ void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
+ 			subreq = list_first_entry(&stream->subrequests,
+ 						  struct netfs_io_subrequest, rreq_link);
+ 			list_del(&subreq->rreq_link);
+-			netfs_put_subrequest(subreq, was_async,
+-					     netfs_sreq_trace_put_clear);
++			netfs_put_subrequest(subreq, netfs_sreq_trace_put_clear);
+ 		}
+ 	}
+ }
+@@ -118,13 +123,19 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)
+ static void netfs_free_request(struct work_struct *work)
+ {
+ 	struct netfs_io_request *rreq =
+-		container_of(work, struct netfs_io_request, work);
++		container_of(work, struct netfs_io_request, cleanup_work);
+ 	struct netfs_inode *ictx = netfs_inode(rreq->inode);
+ 	unsigned int i;
+ 
+ 	trace_netfs_rreq(rreq, netfs_rreq_trace_free);
++
++	/* Cancel/flush the result collection worker.  That does not carry a
++	 * ref of its own, so we must wait for it somewhere.
++	 */
++	cancel_work_sync(&rreq->work);
++
+ 	netfs_proc_del_rreq(rreq);
+-	netfs_clear_subrequests(rreq, false);
++	netfs_clear_subrequests(rreq);
+ 	if (rreq->netfs_ops->free_request)
+ 		rreq->netfs_ops->free_request(rreq);
+ 	if (rreq->cache_resources.ops)
+@@ -145,8 +156,7 @@ static void netfs_free_request(struct work_struct *work)
+ 	call_rcu(&rreq->rcu, netfs_free_request_rcu);
+ }
+ 
+-void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+-		       enum netfs_rreq_ref_trace what)
++void netfs_put_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what)
+ {
+ 	unsigned int debug_id;
+ 	bool dead;
+@@ -156,15 +166,8 @@ void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+ 		debug_id = rreq->debug_id;
+ 		dead = __refcount_dec_and_test(&rreq->ref, &r);
+ 		trace_netfs_rreq_ref(debug_id, r - 1, what);
+-		if (dead) {
+-			if (was_async) {
+-				rreq->work.func = netfs_free_request;
+-				if (!queue_work(system_unbound_wq, &rreq->work))
+-					WARN_ON(1);
+-			} else {
+-				netfs_free_request(&rreq->work);
+-			}
+-		}
++		if (dead)
++			WARN_ON(!queue_work(system_unbound_wq, &rreq->cleanup_work));
+ 	}
+ }
+ 
+@@ -206,8 +209,7 @@ void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
+ 			     what);
+ }
+ 
+-static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
+-				  bool was_async)
++static void netfs_free_subrequest(struct netfs_io_subrequest *subreq)
+ {
+ 	struct netfs_io_request *rreq = subreq->rreq;
+ 
+@@ -216,10 +218,10 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
+ 		rreq->netfs_ops->free_subrequest(subreq);
+ 	mempool_free(subreq, rreq->netfs_ops->subrequest_pool ?: &netfs_subrequest_pool);
+ 	netfs_stat_d(&netfs_n_rh_sreq);
+-	netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq);
++	netfs_put_request(rreq, netfs_rreq_trace_put_subreq);
+ }
+ 
+-void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async,
++void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+ 			  enum netfs_sreq_ref_trace what)
+ {
+ 	unsigned int debug_index = subreq->debug_index;
+@@ -230,5 +232,5 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async,
+ 	dead = __refcount_dec_and_test(&subreq->ref, &r);
+ 	trace_netfs_sreq_ref(debug_id, debug_index, r - 1, what);
+ 	if (dead)
+-		netfs_free_subrequest(subreq, was_async);
++		netfs_free_subrequest(subreq);
+ }
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index 23c75755ad4ed9..bad677e58a4237 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -280,9 +280,13 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 			stream->need_retry = true;
+ 			notes |= NEED_RETRY | MADE_PROGRESS;
+ 			break;
++		} else if (test_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags)) {
++			notes |= MADE_PROGRESS;
+ 		} else {
+ 			if (!stream->failed)
+-				stream->transferred = stream->collected_to - rreq->start;
++				stream->transferred += transferred;
++			if (front->transferred < front->len)
++				set_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags);
+ 			notes |= MADE_PROGRESS;
+ 		}
+ 
+@@ -297,7 +301,7 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 						 struct netfs_io_subrequest, rreq_link);
+ 		stream->front = front;
+ 		spin_unlock(&rreq->lock);
+-		netfs_put_subrequest(remove, false,
++		netfs_put_subrequest(remove,
+ 				     notes & ABANDON_SREQ ?
+ 				     netfs_sreq_trace_put_abandon :
+ 				     netfs_sreq_trace_put_done);
+@@ -311,14 +315,8 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+ 
+ 	if (notes & NEED_RETRY)
+ 		goto need_retry;
+-	if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &rreq->flags)) {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_unpause);
+-		clear_bit_unlock(NETFS_RREQ_PAUSE, &rreq->flags);
+-		smp_mb__after_atomic(); /* Set PAUSE before task state */
+-		wake_up(&rreq->waitq);
+-	}
+-
+ 	if (notes & MADE_PROGRESS) {
++		netfs_wake_rreq_flag(rreq, NETFS_RREQ_PAUSE, netfs_rreq_trace_unpause);
+ 		//cond_resched();
+ 		goto reassess;
+ 	}
+@@ -342,24 +340,10 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
+  */
+ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_subrequest *subreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+ 	unsigned int i;
+ 
+-	/* Collect unbuffered reads and direct reads, adding up the transfer
+-	 * sizes until we find the first short or failed subrequest.
+-	 */
+-	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+-		rreq->transferred += subreq->transferred;
+-
+-		if (subreq->transferred < subreq->len ||
+-		    test_bit(NETFS_SREQ_FAILED, &subreq->flags)) {
+-			rreq->error = subreq->error;
+-			break;
+-		}
+-	}
+-
+-	if (rreq->origin == NETFS_DIO_READ) {
++	if (rreq->origin == NETFS_UNBUFFERED_READ ||
++	    rreq->origin == NETFS_DIO_READ) {
+ 		for (i = 0; i < rreq->direct_bv_count; i++) {
+ 			flush_dcache_page(rreq->direct_bv[i].bv_page);
+ 			// TODO: cifs marks pages in the destination buffer
+@@ -377,7 +361,8 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
+ 	}
+ 	if (rreq->netfs_ops->done)
+ 		rreq->netfs_ops->done(rreq);
+-	if (rreq->origin == NETFS_DIO_READ)
++	if (rreq->origin == NETFS_UNBUFFERED_READ ||
++	    rreq->origin == NETFS_DIO_READ)
+ 		inode_dio_end(rreq->inode);
+ }
+ 
+@@ -410,7 +395,7 @@ static void netfs_rreq_assess_single(struct netfs_io_request *rreq)
+  * Note that we're in normal kernel thread context at this point, possibly
+  * running on a workqueue.
+  */
+-static void netfs_read_collection(struct netfs_io_request *rreq)
++bool netfs_read_collection(struct netfs_io_request *rreq)
+ {
+ 	struct netfs_io_stream *stream = &rreq->io_streams[0];
+ 
+@@ -420,11 +405,11 @@ static void netfs_read_collection(struct netfs_io_request *rreq)
+ 	 * queue is empty.
+ 	 */
+ 	if (!test_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags))
+-		return;
++		return false;
+ 	smp_rmb(); /* Read ALL_QUEUED before subreq lists. */
+ 
+ 	if (!list_empty(&stream->subrequests))
+-		return;
++		return false;
+ 
+ 	/* Okay, declare that all I/O is complete. */
+ 	rreq->transferred = stream->transferred;
+@@ -433,6 +418,7 @@ static void netfs_read_collection(struct netfs_io_request *rreq)
+ 	//netfs_rreq_is_still_valid(rreq);
+ 
+ 	switch (rreq->origin) {
++	case NETFS_UNBUFFERED_READ:
+ 	case NETFS_DIO_READ:
+ 	case NETFS_READ_GAPS:
+ 		netfs_rreq_assess_dio(rreq);
+@@ -445,14 +431,15 @@ static void netfs_read_collection(struct netfs_io_request *rreq)
+ 	}
+ 	task_io_account_read(rreq->transferred);
+ 
+-	trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
+-	clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
++	netfs_wake_rreq_flag(rreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
++	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+ 
+ 	trace_netfs_rreq(rreq, netfs_rreq_trace_done);
+-	netfs_clear_subrequests(rreq, false);
++	netfs_clear_subrequests(rreq);
+ 	netfs_unlock_abandoned_read_pages(rreq);
+ 	if (unlikely(rreq->copy_to_cache))
+ 		netfs_pgpriv2_end_copy_to_cache(rreq);
++	return true;
+ }
+ 
+ void netfs_read_collection_worker(struct work_struct *work)
+@@ -460,26 +447,12 @@ void netfs_read_collection_worker(struct work_struct *work)
+ 	struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work);
+ 
+ 	netfs_see_request(rreq, netfs_rreq_trace_see_work);
+-	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
+-		netfs_read_collection(rreq);
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_work);
+-}
+-
+-/*
+- * Wake the collection work item.
+- */
+-void netfs_wake_read_collector(struct netfs_io_request *rreq)
+-{
+-	if (test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags) &&
+-	    !test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) {
+-		if (!work_pending(&rreq->work)) {
+-			netfs_get_request(rreq, netfs_rreq_trace_get_work);
+-			if (!queue_work(system_unbound_wq, &rreq->work))
+-				netfs_put_request(rreq, true, netfs_rreq_trace_put_work_nq);
+-		}
+-	} else {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wake_queue);
+-		wake_up(&rreq->waitq);
++	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
++		if (netfs_read_collection(rreq))
++			/* Drop the ref from the IN_PROGRESS flag. */
++			netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
++		else
++			netfs_see_request(rreq, netfs_rreq_trace_see_work_complete);
+ 	}
+ }
+ 
+@@ -511,7 +484,7 @@ void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq)
+ 	    list_is_first(&subreq->rreq_link, &stream->subrequests)
+ 	    ) {
+ 		__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+-		netfs_wake_read_collector(rreq);
++		netfs_wake_collector(rreq);
+ 	}
+ }
+ EXPORT_SYMBOL(netfs_read_subreq_progress);
+@@ -535,7 +508,6 @@ EXPORT_SYMBOL(netfs_read_subreq_progress);
+ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
+ {
+ 	struct netfs_io_request *rreq = subreq->rreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+ 
+ 	switch (subreq->source) {
+ 	case NETFS_READ_FROM_CACHE:
+@@ -582,23 +554,15 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
+ 	}
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
+-
+-	clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+-	smp_mb__after_atomic(); /* Clear IN_PROGRESS before task state */
+-
+-	/* If we are at the head of the queue, wake up the collector. */
+-	if (list_is_first(&subreq->rreq_link, &stream->subrequests) ||
+-	    test_bit(NETFS_RREQ_RETRYING, &rreq->flags))
+-		netfs_wake_read_collector(rreq);
+-
+-	netfs_put_subrequest(subreq, true, netfs_sreq_trace_put_terminated);
++	netfs_subreq_clear_in_progress(subreq);
++	netfs_put_subrequest(subreq, netfs_sreq_trace_put_terminated);
+ }
+ EXPORT_SYMBOL(netfs_read_subreq_terminated);
+ 
+ /*
+  * Handle termination of a read from the cache.
+  */
+-void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool was_async)
++void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error)
+ {
+ 	struct netfs_io_subrequest *subreq = priv;
+ 
+@@ -613,94 +577,3 @@ void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error, bool
+ 	}
+ 	netfs_read_subreq_terminated(subreq);
+ }
+-
+-/*
+- * Wait for the read operation to complete, successfully or otherwise.
+- */
+-ssize_t netfs_wait_for_read(struct netfs_io_request *rreq)
+-{
+-	struct netfs_io_subrequest *subreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+-	DEFINE_WAIT(myself);
+-	ssize_t ret;
+-
+-	for (;;) {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+-		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+-
+-		subreq = list_first_entry_or_null(&stream->subrequests,
+-						  struct netfs_io_subrequest, rreq_link);
+-		if (subreq &&
+-		    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
+-		     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
+-			__set_current_state(TASK_RUNNING);
+-			netfs_read_collection(rreq);
+-			continue;
+-		}
+-
+-		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
+-			break;
+-
+-		schedule();
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
+-	}
+-
+-	finish_wait(&rreq->waitq, &myself);
+-
+-	ret = rreq->error;
+-	if (ret == 0) {
+-		ret = rreq->transferred;
+-		switch (rreq->origin) {
+-		case NETFS_DIO_READ:
+-		case NETFS_READ_SINGLE:
+-			ret = rreq->transferred;
+-			break;
+-		default:
+-			if (rreq->submitted < rreq->len) {
+-				trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
+-				ret = -EIO;
+-			}
+-			break;
+-		}
+-	}
+-
+-	return ret;
+-}
+-
+-/*
+- * Wait for a paused read operation to unpause or complete in some manner.
+- */
+-void netfs_wait_for_pause(struct netfs_io_request *rreq)
+-{
+-	struct netfs_io_subrequest *subreq;
+-	struct netfs_io_stream *stream = &rreq->io_streams[0];
+-	DEFINE_WAIT(myself);
+-
+-	trace_netfs_rreq(rreq, netfs_rreq_trace_wait_pause);
+-
+-	for (;;) {
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+-		prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+-
+-		if (!test_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags)) {
+-			subreq = list_first_entry_or_null(&stream->subrequests,
+-							  struct netfs_io_subrequest, rreq_link);
+-			if (subreq &&
+-			    (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags) ||
+-			     test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags))) {
+-				__set_current_state(TASK_RUNNING);
+-				netfs_read_collection(rreq);
+-				continue;
+-			}
+-		}
+-
+-		if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags) ||
+-		    !test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
+-			break;
+-
+-		schedule();
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
+-	}
+-
+-	finish_wait(&rreq->waitq, &myself);
+-}
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index cf7727060215ad..5bbe906a551d57 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -116,7 +116,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
+ 	return creq;
+ 
+ cancel_put:
+-	netfs_put_request(creq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(creq, netfs_rreq_trace_put_return);
+ cancel:
+ 	rreq->copy_to_cache = ERR_PTR(-ENOBUFS);
+ 	clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
+@@ -155,7 +155,7 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
+ 	smp_wmb(); /* Write lists before ALL_QUEUED. */
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
+ 
+-	netfs_put_request(creq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(creq, netfs_rreq_trace_put_return);
+ 	creq->copy_to_cache = NULL;
+ }
+ 
+diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
+index 0f294b26e08c96..b99e84a8170af2 100644
+--- a/fs/netfs/read_retry.c
++++ b/fs/netfs/read_retry.c
+@@ -173,7 +173,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+ 						      &stream->subrequests, rreq_link) {
+ 				trace_netfs_sreq(subreq, netfs_sreq_trace_superfluous);
+ 				list_del(&subreq->rreq_link);
+-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
++				netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+ 				if (subreq == to)
+ 					break;
+ 			}
+@@ -257,35 +257,15 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+  */
+ void netfs_retry_reads(struct netfs_io_request *rreq)
+ {
+-	struct netfs_io_subrequest *subreq;
+ 	struct netfs_io_stream *stream = &rreq->io_streams[0];
+-	DEFINE_WAIT(myself);
+ 
+ 	netfs_stat(&netfs_n_rh_retry_read_req);
+ 
+-	set_bit(NETFS_RREQ_RETRYING, &rreq->flags);
+-
+ 	/* Wait for all outstanding I/O to quiesce before performing retries as
+ 	 * we may need to renegotiate the I/O sizes.
+ 	 */
+-	list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+-		if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
+-			continue;
+-
+-		trace_netfs_rreq(rreq, netfs_rreq_trace_wait_queue);
+-		for (;;) {
+-			prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+-
+-			if (!test_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags))
+-				break;
+-
+-			trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
+-			schedule();
+-			trace_netfs_rreq(rreq, netfs_rreq_trace_woke_queue);
+-		}
+-
+-		finish_wait(&rreq->waitq, &myself);
+-	}
++	set_bit(NETFS_RREQ_RETRYING, &rreq->flags);
++	netfs_wait_for_in_progress_stream(rreq, stream);
+ 	clear_bit(NETFS_RREQ_RETRYING, &rreq->flags);
+ 
+ 	trace_netfs_rreq(rreq, netfs_rreq_trace_resubmit);
+diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
+index fea0ecdecc5397..fa622a6cd56da3 100644
+--- a/fs/netfs/read_single.c
++++ b/fs/netfs/read_single.c
+@@ -142,7 +142,7 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+ 	return ret;
+ cancel:
+-	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++	netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ 	return ret;
+ }
+ 
+@@ -185,11 +185,11 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
+ 	netfs_single_dispatch_read(rreq);
+ 
+ 	ret = netfs_wait_for_read(rreq);
+-	netfs_put_request(rreq, true, netfs_rreq_trace_put_return);
++	netfs_put_request(rreq, netfs_rreq_trace_put_return);
+ 	return ret;
+ 
+ cleanup_free:
+-	netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
++	netfs_put_request(rreq, netfs_rreq_trace_put_failed);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netfs_read_single);
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 3fca59e6475d1c..0ce7b53e7fe83f 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -280,7 +280,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ 							 struct netfs_io_subrequest, rreq_link);
+ 			stream->front = front;
+ 			spin_unlock(&wreq->lock);
+-			netfs_put_subrequest(remove, false,
++			netfs_put_subrequest(remove,
+ 					     notes & SAW_FAILURE ?
+ 					     netfs_sreq_trace_put_cancel :
+ 					     netfs_sreq_trace_put_done);
+@@ -321,18 +321,14 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ 
+ 	if (notes & NEED_RETRY)
+ 		goto need_retry;
+-	if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
+-		trace_netfs_rreq(wreq, netfs_rreq_trace_unpause);
+-		clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags);
+-		smp_mb__after_atomic(); /* Set PAUSE before task state */
+-		wake_up(&wreq->waitq);
+-	}
+ 
+-	if (notes & NEED_REASSESS) {
++	if (notes & MADE_PROGRESS) {
++		netfs_wake_rreq_flag(wreq, NETFS_RREQ_PAUSE, netfs_rreq_trace_unpause);
+ 		//cond_resched();
+ 		goto reassess_streams;
+ 	}
+-	if (notes & MADE_PROGRESS) {
++
++	if (notes & NEED_REASSESS) {
+ 		//cond_resched();
+ 		goto reassess_streams;
+ 	}
+@@ -356,30 +352,21 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ /*
+  * Perform the collection of subrequests, folios and encryption buffers.
+  */
+-void netfs_write_collection_worker(struct work_struct *work)
++bool netfs_write_collection(struct netfs_io_request *wreq)
+ {
+-	struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work);
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+ 	size_t transferred;
+ 	int s;
+ 
+ 	_enter("R=%x", wreq->debug_id);
+ 
+-	netfs_see_request(wreq, netfs_rreq_trace_see_work);
+-	if (!test_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags)) {
+-		netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+-		return;
+-	}
+-
+ 	netfs_collect_write_results(wreq);
+ 
+ 	/* We're done when the app thread has finished posting subreqs and all
+ 	 * the queues in all the streams are empty.
+ 	 */
+-	if (!test_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags)) {
+-		netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+-		return;
+-	}
++	if (!test_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags))
++		return false;
+ 	smp_rmb(); /* Read ALL_QUEUED before lists. */
+ 
+ 	transferred = LONG_MAX;
+@@ -387,10 +374,8 @@ void netfs_write_collection_worker(struct work_struct *work)
+ 		struct netfs_io_stream *stream = &wreq->io_streams[s];
+ 		if (!stream->active)
+ 			continue;
+-		if (!list_empty(&stream->subrequests)) {
+-			netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+-			return;
+-		}
++		if (!list_empty(&stream->subrequests))
++			return false;
+ 		if (stream->transferred < transferred)
+ 			transferred = stream->transferred;
+ 	}
+@@ -428,8 +413,8 @@ void netfs_write_collection_worker(struct work_struct *work)
+ 		inode_dio_end(wreq->inode);
+ 
+ 	_debug("finished");
+-	trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
+-	clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
++	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
++	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+ 
+ 	if (wreq->iocb) {
+ 		size_t written = min(wreq->transferred, wreq->len);
+@@ -440,19 +425,21 @@ void netfs_write_collection_worker(struct work_struct *work)
+ 		wreq->iocb = VFS_PTR_POISON;
+ 	}
+ 
+-	netfs_clear_subrequests(wreq, false);
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_work_complete);
++	netfs_clear_subrequests(wreq);
++	return true;
+ }
+ 
+-/*
+- * Wake the collection work item.
+- */
+-void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
++void netfs_write_collection_worker(struct work_struct *work)
+ {
+-	if (!work_pending(&wreq->work)) {
+-		netfs_get_request(wreq, netfs_rreq_trace_get_work);
+-		if (!queue_work(system_unbound_wq, &wreq->work))
+-			netfs_put_request(wreq, was_async, netfs_rreq_trace_put_work_nq);
++	struct netfs_io_request *rreq = container_of(work, struct netfs_io_request, work);
++
++	netfs_see_request(rreq, netfs_rreq_trace_see_work);
++	if (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags)) {
++		if (netfs_write_collection(rreq))
++			/* Drop the ref from the IN_PROGRESS flag. */
++			netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
++		else
++			netfs_see_request(rreq, netfs_rreq_trace_see_work_complete);
+ 	}
+ }
+ 
+@@ -460,7 +447,6 @@ void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
+  * netfs_write_subrequest_terminated - Note the termination of a write operation.
+  * @_op: The I/O request that has terminated.
+  * @transferred_or_error: The amount of data transferred or an error code.
+- * @was_async: The termination was asynchronous
+  *
+  * This tells the library that a contributory write I/O operation has
+  * terminated, one way or another, and that it should collect the results.
+@@ -470,21 +456,16 @@ void netfs_wake_write_collector(struct netfs_io_request *wreq, bool was_async)
+  * negative error code.  The library will look after reissuing I/O operations
+  * as appropriate and writing downloaded data to the cache.
+  *
+- * If @was_async is true, the caller might be running in softirq or interrupt
+- * context and we can't sleep.
+- *
+  * When this is called, ownership of the subrequest is transferred back to the
+  * library, along with a ref.
+  *
+  * Note that %_op is a void* so that the function can be passed to
+  * kiocb::term_func without the need for a casting wrapper.
+  */
+-void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+-				       bool was_async)
++void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error)
+ {
+ 	struct netfs_io_subrequest *subreq = _op;
+ 	struct netfs_io_request *wreq = subreq->rreq;
+-	struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
+ 
+ 	_enter("%x[%x] %zd", wreq->debug_id, subreq->debug_index, transferred_or_error);
+ 
+@@ -536,15 +517,7 @@ void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+ 	}
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
+-
+-	clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+-
+-	/* If we are at the head of the queue, wake up the collector,
+-	 * transferring a ref to it if we were the ones to do so.
+-	 */
+-	if (list_is_first(&subreq->rreq_link, &stream->subrequests))
+-		netfs_wake_write_collector(wreq, was_async);
+-
+-	netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);
++	netfs_subreq_clear_in_progress(subreq);
++	netfs_put_subrequest(subreq, netfs_sreq_trace_put_terminated);
+ }
+ EXPORT_SYMBOL(netfs_write_subrequest_terminated);
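The collection rework above leans on a "flag pins a ref" convention: while
NETFS_RREQ_IN_PROGRESS is set, one reference to the request is held on the
flag's behalf, and whoever clears the flag inherits that reference. A minimal
sketch of the idiom, assuming a kernel-style object with a flags word and a
put helper (the names here are illustrative, not the netfs API):

	/* The object holds one ref while MY_IN_PROGRESS is set; clearing
	 * the bit transfers that ref to the clearer, who must drop it.
	 */
	if (test_and_clear_bit(MY_IN_PROGRESS, &req->flags)) {
		wake_up_bit(&req->flags, MY_IN_PROGRESS);
		req_put(req);	/* drop the ref the flag was pinning */
	}
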
+diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
+index 77279fc5b5a7cb..50bee2c4130d1e 100644
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -134,7 +134,7 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
+ 	return wreq;
+ nomem:
+ 	wreq->error = -ENOMEM;
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_failed);
++	netfs_put_request(wreq, netfs_rreq_trace_put_failed);
+ 	return ERR_PTR(-ENOMEM);
+ }
+ 
+@@ -233,7 +233,7 @@ static void netfs_do_issue_write(struct netfs_io_stream *stream,
+ 	_enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len);
+ 
+ 	if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+-		return netfs_write_subrequest_terminated(subreq, subreq->error, false);
++		return netfs_write_subrequest_terminated(subreq, subreq->error);
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ 	stream->issue_write(subreq);
+@@ -542,7 +542,7 @@ static void netfs_end_issue_write(struct netfs_io_request *wreq)
+ 	}
+ 
+ 	if (needs_poke)
+-		netfs_wake_write_collector(wreq, false);
++		netfs_wake_collector(wreq);
+ }
+ 
+ /*
+@@ -576,6 +576,7 @@ int netfs_writepages(struct address_space *mapping,
+ 		goto couldnt_start;
+ 	}
+ 
++	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
+ 	trace_netfs_write(wreq, netfs_write_trace_writeback);
+ 	netfs_stat(&netfs_n_wh_writepages);
+ 
+@@ -599,8 +600,9 @@ int netfs_writepages(struct address_space *mapping,
+ 	netfs_end_issue_write(wreq);
+ 
+ 	mutex_unlock(&ictx->wb_lock);
++	netfs_wake_collector(wreq);
+ 
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	_leave(" = %d", error);
+ 	return error;
+ 
+@@ -673,11 +675,11 @@ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_c
+ /*
+  * End a write operation used when writing through the pagecache.
+  */
+-int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
+-			   struct folio *writethrough_cache)
++ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
++			       struct folio *writethrough_cache)
+ {
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+-	int ret;
++	ssize_t ret;
+ 
+ 	_enter("R=%x", wreq->debug_id);
+ 
+@@ -688,13 +690,11 @@ int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_contr
+ 
+ 	mutex_unlock(&ictx->wb_lock);
+ 
+-	if (wreq->iocb) {
++	if (wreq->iocb)
+ 		ret = -EIOCBQUEUED;
+-	} else {
+-		wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, TASK_UNINTERRUPTIBLE);
+-		ret = wreq->error;
+-	}
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	else
++		ret = netfs_wait_for_write(wreq);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	return ret;
+ }
+ 
+@@ -722,10 +722,8 @@ int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t
+ 		start += part;
+ 		len -= part;
+ 		rolling_buffer_advance(&wreq->buffer, part);
+-		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
+-			trace_netfs_rreq(wreq, netfs_rreq_trace_wait_pause);
+-			wait_event(wreq->waitq, !test_bit(NETFS_RREQ_PAUSE, &wreq->flags));
+-		}
++		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags))
++			netfs_wait_for_paused_write(wreq);
+ 		if (test_bit(NETFS_RREQ_FAILED, &wreq->flags))
+ 			break;
+ 	}
+@@ -885,7 +883,8 @@ int netfs_writeback_single(struct address_space *mapping,
+ 		goto couldnt_start;
+ 	}
+ 
+-	trace_netfs_write(wreq, netfs_write_trace_writeback);
++	__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
++	trace_netfs_write(wreq, netfs_write_trace_writeback_single);
+ 	netfs_stat(&netfs_n_wh_writepages);
+ 
+ 	if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
+@@ -914,8 +913,9 @@ int netfs_writeback_single(struct address_space *mapping,
+ 	set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+ 
+ 	mutex_unlock(&ictx->wb_lock);
++	netfs_wake_collector(wreq);
+ 
+-	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++	netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ 	_leave(" = %d", ret);
+ 	return ret;
+ 
+diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
+index 545d33079a77d0..9d1d8a8bab7261 100644
+--- a/fs/netfs/write_retry.c
++++ b/fs/netfs/write_retry.c
+@@ -39,9 +39,10 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+ 			if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+ 				break;
+ 			if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+-				struct iov_iter source = subreq->io_iter;
++				struct iov_iter source;
+ 
+-				iov_iter_revert(&source, subreq->len - source.count);
++				netfs_reset_iter(subreq);
++				source = subreq->io_iter;
+ 				netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+ 				netfs_reissue_write(stream, subreq, &source);
+ 			}
+@@ -131,7 +132,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+ 						      &stream->subrequests, rreq_link) {
+ 				trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
+ 				list_del(&subreq->rreq_link);
+-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
++				netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+ 				if (subreq == to)
+ 					break;
+ 			}
+@@ -199,7 +200,6 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+  */
+ void netfs_retry_writes(struct netfs_io_request *wreq)
+ {
+-	struct netfs_io_subrequest *subreq;
+ 	struct netfs_io_stream *stream;
+ 	int s;
+ 
+@@ -208,16 +208,13 @@ void netfs_retry_writes(struct netfs_io_request *wreq)
+ 	/* Wait for all outstanding I/O to quiesce before performing retries as
+ 	 * we may need to renegotiate the I/O sizes.
+ 	 */
++	set_bit(NETFS_RREQ_RETRYING, &wreq->flags);
+ 	for (s = 0; s < NR_IO_STREAMS; s++) {
+ 		stream = &wreq->io_streams[s];
+-		if (!stream->active)
+-			continue;
+-
+-		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+-			wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS,
+-				    TASK_UNINTERRUPTIBLE);
+-		}
++		if (stream->active)
++			netfs_wait_for_in_progress_stream(wreq, stream);
+ 	}
++	clear_bit(NETFS_RREQ_RETRYING, &wreq->flags);
+ 
+ 	// TODO: Enc: Fetch changed partial pages
+ 	// TODO: Enc: Reencrypt content if needed.
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index e278a1ad1ca3e8..8b07851787312b 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -367,6 +367,7 @@ void nfs_netfs_read_completion(struct nfs_pgio_header *hdr)
+ 
+ 	sreq = netfs->sreq;
+ 	if (test_bit(NFS_IOHDR_EOF, &hdr->flags) &&
++	    sreq->rreq->origin != NETFS_UNBUFFERED_READ &&
+ 	    sreq->rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &sreq->flags);
+ 
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 4ec952f9f47dde..e6d36b3d3fc059 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -207,14 +207,16 @@ void nfs_local_probe_async(struct nfs_client *clp)
+ }
+ EXPORT_SYMBOL_GPL(nfs_local_probe_async);
+ 
+-static inline struct nfsd_file *nfs_local_file_get(struct nfsd_file *nf)
++static inline void nfs_local_file_put(struct nfsd_file *localio)
+ {
+-	return nfs_to->nfsd_file_get(nf);
+-}
++	/* nfs_to_nfsd_file_put_local() expects an __rcu pointer
++	 * but we have a __kernel pointer.  It is always safe
++	 * to cast a __kernel pointer to an __rcu pointer
++	 * because the cast only weakens what is known about the pointer.
++	 */
++	struct nfsd_file __rcu *nf = (struct nfsd_file __rcu*) localio;
+ 
+-static inline void nfs_local_file_put(struct nfsd_file *nf)
+-{
+-	nfs_to->nfsd_file_put(nf);
++	nfs_to_nfsd_file_put_local(&nf);
+ }
+ 
+ /*
+@@ -226,12 +228,13 @@ static inline void nfs_local_file_put(struct nfsd_file *nf)
+ static struct nfsd_file *
+ __nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ 		    struct nfs_fh *fh, struct nfs_file_localio *nfl,
++		    struct nfsd_file __rcu **pnf,
+ 		    const fmode_t mode)
+ {
+ 	struct nfsd_file *localio;
+ 
+ 	localio = nfs_open_local_fh(&clp->cl_uuid, clp->cl_rpcclient,
+-				    cred, fh, nfl, mode);
++				    cred, fh, nfl, pnf, mode);
+ 	if (IS_ERR(localio)) {
+ 		int status = PTR_ERR(localio);
+ 		trace_nfs_local_open_fh(fh, mode, status);
+@@ -258,7 +261,7 @@ nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ 		  struct nfs_fh *fh, struct nfs_file_localio *nfl,
+ 		  const fmode_t mode)
+ {
+-	struct nfsd_file *nf, *new, __rcu **pnf;
++	struct nfsd_file *nf, __rcu **pnf;
+ 
+ 	if (!nfs_server_is_local(clp))
+ 		return NULL;
+@@ -270,29 +273,9 @@ nfs_local_open_fh(struct nfs_client *clp, const struct cred *cred,
+ 	else
+ 		pnf = &nfl->ro_file;
+ 
+-	new = NULL;
+-	rcu_read_lock();
+-	nf = rcu_dereference(*pnf);
+-	if (!nf) {
+-		rcu_read_unlock();
+-		new = __nfs_local_open_fh(clp, cred, fh, nfl, mode);
+-		if (IS_ERR(new))
+-			return NULL;
+-		rcu_read_lock();
+-		/* try to swap in the pointer */
+-		spin_lock(&clp->cl_uuid.lock);
+-		nf = rcu_dereference_protected(*pnf, 1);
+-		if (!nf) {
+-			nf = new;
+-			new = NULL;
+-			rcu_assign_pointer(*pnf, nf);
+-		}
+-		spin_unlock(&clp->cl_uuid.lock);
+-	}
+-	nf = nfs_local_file_get(nf);
+-	rcu_read_unlock();
+-	if (new)
+-		nfs_to_nfsd_file_put_local(new);
++	nf = __nfs_local_open_fh(clp, cred, fh, nfl, pnf, mode);
++	if (IS_ERR(nf))
++		return NULL;
+ 	return nf;
+ }
+ EXPORT_SYMBOL_GPL(nfs_local_open_fh);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index b1d2122bd5a749..4b123bca65e12d 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5164,13 +5164,15 @@ static int nfs4_do_create(struct inode *dir, struct dentry *dentry, struct nfs4_
+ }
+ 
+ static struct dentry *nfs4_do_mkdir(struct inode *dir, struct dentry *dentry,
+-				    struct nfs4_createdata *data)
++				    struct nfs4_createdata *data, int *statusp)
+ {
+-	int status = nfs4_call_sync(NFS_SERVER(dir)->client, NFS_SERVER(dir), &data->msg,
++	struct dentry *ret;
++
++	*statusp = nfs4_call_sync(NFS_SERVER(dir)->client, NFS_SERVER(dir), &data->msg,
+ 				    &data->arg.seq_args, &data->res.seq_res, 1);
+ 
+-	if (status)
+-		return ERR_PTR(status);
++	if (*statusp)
++		return NULL;
+ 
+ 	spin_lock(&dir->i_lock);
+ 	/* Creating a directory bumps nlink in the parent */
+@@ -5179,7 +5181,11 @@ static struct dentry *nfs4_do_mkdir(struct inode *dir, struct dentry *dentry,
+ 				      data->res.fattr->time_start,
+ 				      NFS_INO_INVALID_DATA);
+ 	spin_unlock(&dir->i_lock);
+-	return nfs_add_or_obtain(dentry, data->res.fh, data->res.fattr);
++	ret = nfs_add_or_obtain(dentry, data->res.fh, data->res.fattr);
++	if (!IS_ERR(ret))
++		return ret;
++	*statusp = PTR_ERR(ret);
++	return NULL;
+ }
+ 
+ static void nfs4_free_createdata(struct nfs4_createdata *data)
+@@ -5240,17 +5246,18 @@ static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
+ 
+ static struct dentry *_nfs4_proc_mkdir(struct inode *dir, struct dentry *dentry,
+ 				       struct iattr *sattr,
+-				       struct nfs4_label *label)
++				       struct nfs4_label *label, int *statusp)
+ {
+ 	struct nfs4_createdata *data;
+-	struct dentry *ret = ERR_PTR(-ENOMEM);
++	struct dentry *ret = NULL;
+ 
++	*statusp = -ENOMEM;
+ 	data = nfs4_alloc_createdata(dir, &dentry->d_name, sattr, NF4DIR);
+ 	if (data == NULL)
+ 		goto out;
+ 
+ 	data->arg.label = label;
+-	ret = nfs4_do_mkdir(dir, dentry, data);
++	ret = nfs4_do_mkdir(dir, dentry, data, statusp);
+ 
+ 	nfs4_free_createdata(data);
+ out:
+@@ -5273,11 +5280,12 @@ static struct dentry *nfs4_proc_mkdir(struct inode *dir, struct dentry *dentry,
+ 	if (!(server->attr_bitmask[2] & FATTR4_WORD2_MODE_UMASK))
+ 		sattr->ia_mode &= ~current_umask();
+ 	do {
+-		alias = _nfs4_proc_mkdir(dir, dentry, sattr, label);
+-		err = PTR_ERR_OR_ZERO(alias);
++		alias = _nfs4_proc_mkdir(dir, dentry, sattr, label, &err);
+ 		trace_nfs4_mkdir(dir, &dentry->d_name, err);
+-		err = nfs4_handle_exception(NFS_SERVER(dir), err,
+-				&exception);
++		if (err)
++			alias = ERR_PTR(nfs4_handle_exception(NFS_SERVER(dir),
++							      err,
++							      &exception));
+ 	} while (exception.retry);
+ 	nfs4_label_release_security(label);
+ 
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 9eea9e62afc9c3..91b5503b6f74d7 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -1051,6 +1051,16 @@ int nfs_reconfigure(struct fs_context *fc)
+ 
+ 	sync_filesystem(sb);
+ 
++	/*
++	 * The SB_RDONLY flag has been removed from the superblock during
++	 * mounts to prevent interference between different filesystems.
++	 * Similarly, it is also necessary to ignore the SB_RDONLY flag
++	 * during reconfiguration; otherwise, it may also result in the
++	 * creation of redundant superblocks when mounting a directory with
++	 * different rw and ro flags multiple times.
++	 */
++	fc->sb_flags_mask &= ~SB_RDONLY;
++
+ 	/*
+ 	 * Userspace mount programs that send binary options generally send
+ 	 * them populated with default values. We have no way to know which
+@@ -1308,8 +1318,17 @@ int nfs_get_tree_common(struct fs_context *fc)
+ 	if (IS_ERR(server))
+ 		return PTR_ERR(server);
+ 
++	/*
++	 * When NFS_MOUNT_UNSHARED is not set, NFS forces the sharing of a
++	 * superblock among each filesystem that mounts sub-directories
++	 * belonging to a single exported root path.
++	 * To prevent interference between different filesystems, the
++	 * SB_RDONLY flag should be removed from the superblock.
++	 */
+ 	if (server->flags & NFS_MOUNT_UNSHARED)
+ 		compare_super = NULL;
++	else
++		fc->sb_flags &= ~SB_RDONLY;
+ 
+ 	/* -o noac implies -o sync */
+ 	if (server->flags & NFS_MOUNT_NOAC)
+diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c
+index 6a0bdea6d6449f..05c7c16e37ab4c 100644
+--- a/fs/nfs_common/nfslocalio.c
++++ b/fs/nfs_common/nfslocalio.c
+@@ -151,8 +151,7 @@ EXPORT_SYMBOL_GPL(nfs_localio_enable_client);
+  */
+ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ {
+-	LIST_HEAD(local_files);
+-	struct nfs_file_localio *nfl, *tmp;
++	struct nfs_file_localio *nfl;
+ 
+ 	spin_lock(&nfs_uuid->lock);
+ 	if (unlikely(!rcu_access_pointer(nfs_uuid->net))) {
+@@ -166,17 +165,42 @@ static bool nfs_uuid_put(nfs_uuid_t *nfs_uuid)
+ 		nfs_uuid->dom = NULL;
+ 	}
+ 
+-	list_splice_init(&nfs_uuid->files, &local_files);
+-	spin_unlock(&nfs_uuid->lock);
+-
+ 	/* Walk list of files and ensure their last references dropped */
+-	list_for_each_entry_safe(nfl, tmp, &local_files, list) {
+-		nfs_close_local_fh(nfl);
++
++	while ((nfl = list_first_entry_or_null(&nfs_uuid->files,
++					       struct nfs_file_localio,
++					       list)) != NULL) {
++		/* If nfs_uuid is already NULL, nfs_close_local_fh is
++		 * closing and we must wait, else we unlink and close.
++		 */
++		if (rcu_access_pointer(nfl->nfs_uuid) == NULL) {
++			/* nfs_close_local_fh() is doing the
++			 * close and we must wait until it unlinks.
++			 */
++			wait_var_event_spinlock(nfl,
++						list_first_entry_or_null(
++							&nfs_uuid->files,
++							struct nfs_file_localio,
++							list) != nfl,
++						&nfs_uuid->lock);
++			continue;
++		}
++
++		/* Remove nfl from nfs_uuid->files list */
++		list_del_init(&nfl->list);
++		spin_unlock(&nfs_uuid->lock);
++
++		nfs_to_nfsd_file_put_local(&nfl->ro_file);
++		nfs_to_nfsd_file_put_local(&nfl->rw_file);
+ 		cond_resched();
+-	}
+ 
+-	spin_lock(&nfs_uuid->lock);
+-	BUG_ON(!list_empty(&nfs_uuid->files));
++		spin_lock(&nfs_uuid->lock);
++		/* Now we can allow racing nfs_close_local_fh() to
++		 * skip the locking.
++		 */
++		RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
++		wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++	}
+ 
+ 	/* Remove client from nn->local_clients */
+ 	if (nfs_uuid->list_lock) {
+@@ -237,6 +261,7 @@ static void nfs_uuid_add_file(nfs_uuid_t *nfs_uuid, struct nfs_file_localio *nfl
+ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ 		   struct rpc_clnt *rpc_clnt, const struct cred *cred,
+ 		   const struct nfs_fh *nfs_fh, struct nfs_file_localio *nfl,
++		   struct nfsd_file __rcu **pnf,
+ 		   const fmode_t fmode)
+ {
+ 	struct net *net;
+@@ -261,10 +286,9 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ 	rcu_read_unlock();
+ 	/* We have an implied reference to net thanks to nfsd_net_try_get */
+ 	localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt,
+-					     cred, nfs_fh, fmode);
+-	if (IS_ERR(localio))
+-		nfs_to_nfsd_net_put(net);
+-	else
++					     cred, nfs_fh, pnf, fmode);
++	nfs_to_nfsd_net_put(net);
++	if (!IS_ERR(localio))
+ 		nfs_uuid_add_file(uuid, nfl);
+ 
+ 	return localio;
+@@ -273,8 +297,6 @@ EXPORT_SYMBOL_GPL(nfs_open_local_fh);
+ 
+ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ {
+-	struct nfsd_file *ro_nf = NULL;
+-	struct nfsd_file *rw_nf = NULL;
+ 	nfs_uuid_t *nfs_uuid;
+ 
+ 	rcu_read_lock();
+@@ -285,28 +307,39 @@ void nfs_close_local_fh(struct nfs_file_localio *nfl)
+ 		return;
+ 	}
+ 
+-	ro_nf = rcu_access_pointer(nfl->ro_file);
+-	rw_nf = rcu_access_pointer(nfl->rw_file);
+-	if (ro_nf || rw_nf) {
+-		spin_lock(&nfs_uuid->lock);
+-		if (ro_nf)
+-			ro_nf = rcu_dereference_protected(xchg(&nfl->ro_file, NULL), 1);
+-		if (rw_nf)
+-			rw_nf = rcu_dereference_protected(xchg(&nfl->rw_file, NULL), 1);
+-
+-		/* Remove nfl from nfs_uuid->files list */
+-		RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
+-		list_del_init(&nfl->list);
++	spin_lock(&nfs_uuid->lock);
++	if (!rcu_access_pointer(nfl->nfs_uuid)) {
++		/* nfs_uuid_put has finished here */
+ 		spin_unlock(&nfs_uuid->lock);
+ 		rcu_read_unlock();
+-
+-		if (ro_nf)
+-			nfs_to_nfsd_file_put_local(ro_nf);
+-		if (rw_nf)
+-			nfs_to_nfsd_file_put_local(rw_nf);
+ 		return;
+ 	}
++	if (list_empty(&nfs_uuid->files)) {
++		/* nfs_uuid_put() has started closing files, wait for it
++		 * to finish.
++		 */
++		spin_unlock(&nfs_uuid->lock);
++		rcu_read_unlock();
++		wait_var_event(&nfl->nfs_uuid,
++			       rcu_access_pointer(nfl->nfs_uuid) == NULL);
++		return;
++	}
++	/* tell nfs_uuid_put() to wait for us */
++	RCU_INIT_POINTER(nfl->nfs_uuid, NULL);
++	spin_unlock(&nfs_uuid->lock);
+ 	rcu_read_unlock();
++
++	nfs_to_nfsd_file_put_local(&nfl->ro_file);
++	nfs_to_nfsd_file_put_local(&nfl->rw_file);
++
++	/* Remove nfl from nfs_uuid->files list and signal nfs_uuid_put()
++	 * that we are done.  The moment we drop the spinlock the
++	 * nfs_uuid could be freed.
++	 */
++	spin_lock(&nfs_uuid->lock);
++	list_del_init(&nfl->list);
++	wake_up_var_locked(&nfl->nfs_uuid, &nfs_uuid->lock);
++	spin_unlock(&nfs_uuid->lock);
+ }
+ EXPORT_SYMBOL_GPL(nfs_close_local_fh);
+ 
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index ab85e6a2454f4c..e108b6c705b459 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -378,14 +378,40 @@ nfsd_file_put(struct nfsd_file *nf)
+  * the reference of the nfsd_file.
+  */
+ struct net *
+-nfsd_file_put_local(struct nfsd_file *nf)
++nfsd_file_put_local(struct nfsd_file __rcu **pnf)
+ {
+-	struct net *net = nf->nf_net;
++	struct nfsd_file *nf;
++	struct net *net = NULL;
+ 
+-	nfsd_file_put(nf);
++	nf = unrcu_pointer(xchg(pnf, NULL));
++	if (nf) {
++		net = nf->nf_net;
++		nfsd_file_put(nf);
++	}
+ 	return net;
+ }
+ 
++/**
++ * nfsd_file_get_local - get nfsd_file reference and reference to net
++ * @nf: nfsd_file of which to get the reference
++ *
++ * Get reference to both the nfsd_file and nf->nf_net.
++ */
++struct nfsd_file *
++nfsd_file_get_local(struct nfsd_file *nf)
++{
++	struct net *net = nf->nf_net;
++
++	if (nfsd_net_try_get(net)) {
++		nf = nfsd_file_get(nf);
++		if (!nf)
++			nfsd_net_put(net);
++	} else {
++		nf = NULL;
++	}
++	return nf;
++}
++
+ /**
+  * nfsd_file_file - get the backing file of an nfsd_file
+  * @nf: nfsd_file of which to access the backing file.
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index 5865f9c7271214..722b26c71e454a 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -62,7 +62,8 @@ void nfsd_file_cache_shutdown(void);
+ int nfsd_file_cache_start_net(struct net *net);
+ void nfsd_file_cache_shutdown_net(struct net *net);
+ void nfsd_file_put(struct nfsd_file *nf);
+-struct net *nfsd_file_put_local(struct nfsd_file *nf);
++struct net *nfsd_file_put_local(struct nfsd_file __rcu **nf);
++struct nfsd_file *nfsd_file_get_local(struct nfsd_file *nf);
+ struct nfsd_file *nfsd_file_get(struct nfsd_file *nf);
+ struct file *nfsd_file_file(struct nfsd_file *nf);
+ void nfsd_file_close_inode_sync(struct inode *inode);
+diff --git a/fs/nfsd/localio.c b/fs/nfsd/localio.c
+index 238647fa379e32..80d9ff6608a7b9 100644
+--- a/fs/nfsd/localio.c
++++ b/fs/nfsd/localio.c
+@@ -24,21 +24,6 @@
+ #include "filecache.h"
+ #include "cache.h"
+ 
+-static const struct nfsd_localio_operations nfsd_localio_ops = {
+-	.nfsd_net_try_get  = nfsd_net_try_get,
+-	.nfsd_net_put  = nfsd_net_put,
+-	.nfsd_open_local_fh = nfsd_open_local_fh,
+-	.nfsd_file_put_local = nfsd_file_put_local,
+-	.nfsd_file_get = nfsd_file_get,
+-	.nfsd_file_put = nfsd_file_put,
+-	.nfsd_file_file = nfsd_file_file,
+-};
+-
+-void nfsd_localio_ops_init(void)
+-{
+-	nfs_to = &nfsd_localio_ops;
+-}
+-
+ /**
+  * nfsd_open_local_fh - lookup a local filehandle @nfs_fh and map to nfsd_file
+  *
+@@ -47,6 +32,7 @@ void nfsd_localio_ops_init(void)
+  * @rpc_clnt: rpc_clnt that the client established
+  * @cred: cred that the client established
+  * @nfs_fh: filehandle to lookup
++ * @pnf: place to find the nfsd_file, or store it if it was non-NULL
+  * @fmode: fmode_t to use for open
+  *
+  * This function maps a local fh to a path on a local filesystem.
+@@ -57,10 +43,11 @@ void nfsd_localio_ops_init(void)
+  * set. Caller (NFS client) is responsible for calling nfsd_net_put and
+  * nfsd_file_put (via nfs_to_nfsd_file_put_local).
+  */
+-struct nfsd_file *
++static struct nfsd_file *
+ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 		   struct rpc_clnt *rpc_clnt, const struct cred *cred,
+-		   const struct nfs_fh *nfs_fh, const fmode_t fmode)
++		   const struct nfs_fh *nfs_fh, struct nfsd_file __rcu **pnf,
++		   const fmode_t fmode)
+ {
+ 	int mayflags = NFSD_MAY_LOCALIO;
+ 	struct svc_cred rq_cred;
+@@ -71,6 +58,15 @@ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 	if (nfs_fh->size > NFS4_FHSIZE)
+ 		return ERR_PTR(-EINVAL);
+ 
++	if (!nfsd_net_try_get(net))
++		return ERR_PTR(-ENXIO);
++
++	rcu_read_lock();
++	localio = nfsd_file_get(rcu_dereference(*pnf));
++	rcu_read_unlock();
++	if (localio)
++		return localio;
++
+ 	/* nfs_fh -> svc_fh */
+ 	fh_init(&fh, NFS4_FHSIZE);
+ 	fh.fh_handle.fh_size = nfs_fh->size;
+@@ -92,9 +88,47 @@ nfsd_open_local_fh(struct net *net, struct auth_domain *dom,
+ 	if (rq_cred.cr_group_info)
+ 		put_group_info(rq_cred.cr_group_info);
+ 
++	if (!IS_ERR(localio)) {
++		struct nfsd_file *new;
++		if (!nfsd_net_try_get(net)) {
++			nfsd_file_put(localio);
++			nfsd_net_put(net);
++			return ERR_PTR(-ENXIO);
++		}
++		nfsd_file_get(localio);
++	again:
++		new = unrcu_pointer(cmpxchg(pnf, NULL, RCU_INITIALIZER(localio)));
++		if (new) {
++			/* Some other thread installed an nfsd_file */
++			if (nfsd_file_get(new) == NULL)
++				goto again;
++			/*
++			 * Drop the ref we were going to install and the
++			 * one we were going to return.
++			 */
++			nfsd_file_put(localio);
++			nfsd_file_put(localio);
++			localio = new;
++		}
++	} else
++		nfsd_net_put(net);
++
+ 	return localio;
+ }
+-EXPORT_SYMBOL_GPL(nfsd_open_local_fh);
++
++static const struct nfsd_localio_operations nfsd_localio_ops = {
++	.nfsd_net_try_get  = nfsd_net_try_get,
++	.nfsd_net_put  = nfsd_net_put,
++	.nfsd_open_local_fh = nfsd_open_local_fh,
++	.nfsd_file_put_local = nfsd_file_put_local,
++	.nfsd_file_get_local = nfsd_file_get_local,
++	.nfsd_file_file = nfsd_file_file,
++};
++
++void nfsd_localio_ops_init(void)
++{
++	nfs_to = &nfsd_localio_ops;
++}
+ 
+ /*
+  * UUID_IS_LOCAL XDR functions
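nfsd_open_local_fh() now doubles as a cache lookup: it first tries
nfsd_file_get() on *pnf under RCU and only performs the real open on a miss,
publishing the result with cmpxchg() so that exactly one racer's file lands
in the slot; a loser takes a reference on the winner's file (retrying if that
file's refcount already hit zero) and drops the two references it was holding
on its own. A minimal C11 sketch of the install-or-reuse idiom, with the
dead-refcount retry omitted and the free path elided (not the nfsd code
itself):

	#include <stdatomic.h>
	#include <stddef.h>

	struct obj { _Atomic int refs; };

	static struct obj *obj_get(struct obj *o)
	{
		if (o)
			atomic_fetch_add(&o->refs, 1);
		return o;
	}

	static void obj_put(struct obj *o)
	{
		if (o && atomic_fetch_sub(&o->refs, 1) == 1)
			;	/* last ref: free the object (elided) */
	}

	static struct obj *install_or_reuse(_Atomic(struct obj *) *slot,
					    struct obj *candidate)
	{
		struct obj *expected = NULL;

		if (atomic_compare_exchange_strong(slot, &expected, candidate))
			return candidate;	/* we published ours */

		/* Someone else won the race: reuse theirs, drop ours. */
		obj_get(expected);
		obj_put(candidate);
		return expected;
	}
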
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 0d8f7fb15c2e54..dd0c8e560ef6a2 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -2102,11 +2102,13 @@ static int nilfs_btree_propagate(struct nilfs_bmap *btree,
+ 
+ 	ret = nilfs_btree_do_lookup(btree, path, key, NULL, level + 1, 0);
+ 	if (ret < 0) {
+-		if (unlikely(ret == -ENOENT))
++		if (unlikely(ret == -ENOENT)) {
+ 			nilfs_crit(btree->b_inode->i_sb,
+ 				   "writing node/leaf block does not appear in b-tree (ino=%lu) at key=%llu, level=%d",
+ 				   btree->b_inode->i_ino,
+ 				   (unsigned long long)key, level);
++			ret = -EINVAL;
++		}
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/nilfs2/direct.c b/fs/nilfs2/direct.c
+index 893ab36824cc2b..2d8dc6b35b5477 100644
+--- a/fs/nilfs2/direct.c
++++ b/fs/nilfs2/direct.c
+@@ -273,6 +273,9 @@ static int nilfs_direct_propagate(struct nilfs_bmap *bmap,
+ 	dat = nilfs_bmap_get_dat(bmap);
+ 	key = nilfs_bmap_data_get_key(bmap, bh);
+ 	ptr = nilfs_direct_get_ptr(bmap, key);
++	if (ptr == NILFS_BMAP_INVALID_PTR)
++		return -EINVAL;
++
+ 	if (!buffer_nilfs_volatile(bh)) {
+ 		oldreq.pr_entry_nr = ptr;
+ 		newreq.pr_entry_nr = ptr;
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 78d20e4baa2c9a..1bf2a6593dec66 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -2182,6 +2182,10 @@ static int indx_get_entry_to_replace(struct ntfs_index *indx,
+ 
+ 		e = hdr_first_de(&n->index->ihdr);
+ 		fnd_push(fnd, n, e);
++		if (!e) {
++			err = -EINVAL;
++			goto out;
++		}
+ 
+ 		if (!de_is_last(e)) {
+ 			/*
+@@ -2203,6 +2207,10 @@ static int indx_get_entry_to_replace(struct ntfs_index *indx,
+ 
+ 	n = fnd->nodes[level];
+ 	te = hdr_first_de(&n->index->ihdr);
++	if (!te) {
++		err = -EINVAL;
++		goto out;
++	}
+ 	/* Copy the candidate entry into the replacement entry buffer. */
+ 	re = kmalloc(le16_to_cpu(te->size) + sizeof(u64), GFP_NOFS);
+ 	if (!re) {
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 3e2957a1e3605c..0f0d27d4644a9b 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -805,6 +805,10 @@ static ssize_t ntfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 		ret = 0;
+ 		goto out;
+ 	}
++	if (is_compressed(ni)) {
++		ret = 0;
++		goto out;
++	}
+ 
+ 	ret = blockdev_direct_IO(iocb, inode, iter,
+ 				 wr ? ntfs_get_block_direct_IO_W :
+@@ -2068,5 +2072,6 @@ const struct address_space_operations ntfs_aops_cmpr = {
+ 	.read_folio	= ntfs_read_folio,
+ 	.readahead	= ntfs_readahead,
+ 	.dirty_folio	= block_dirty_folio,
++	.direct_IO	= ntfs_direct_IO,
+ };
+ // clang-format on
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index e272429da3db34..de7f12858729ac 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -674,7 +674,7 @@ int ocfs2_finish_quota_recovery(struct ocfs2_super *osb,
+ 			break;
+ 	}
+ out:
+-	kfree(rec);
++	ocfs2_free_quota_recovery(rec);
+ 	return status;
+ }
+ 
+diff --git a/fs/pidfs.c b/fs/pidfs.c
+index 50e69a9e104a60..87a53d2ae4bb78 100644
+--- a/fs/pidfs.c
++++ b/fs/pidfs.c
+@@ -336,7 +336,7 @@ static long pidfd_info(struct file *file, unsigned int cmd, unsigned long arg)
+ 	kinfo.pid = task_pid_vnr(task);
+ 	kinfo.mask |= PIDFD_INFO_PID;
+ 
+-	if (kinfo.pid == 0 || kinfo.tgid == 0 || (kinfo.ppid == 0 && kinfo.pid != 1))
++	if (kinfo.pid == 0 || kinfo.tgid == 0)
+ 		return -ESRCH;
+ 
+ copy_out:
+diff --git a/fs/pnode.c b/fs/pnode.c
+index fb77427df39e2e..ffd429b760d5d4 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -231,8 +231,8 @@ static int propagate_one(struct mount *m, struct mountpoint *dest_mp)
+ 	/* skip if mountpoint isn't visible in m */
+ 	if (!is_subdir(dest_mp->m_dentry, m->mnt.mnt_root))
+ 		return 0;
+-	/* skip if m is in the anon_ns we are emptying */
+-	if (m->mnt_ns->mntns_flags & MNTNS_PROPAGATING)
++	/* skip if m is in the anon_ns */
++	if (is_anon_ns(m->mnt_ns))
+ 		return 0;
+ 
+ 	if (peers(m, last_dest)) {
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index ecf774a8f1ca01..66093fa78aed7d 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -151,8 +151,7 @@ extern bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 eof,
+ 				   bool from_readdir);
+ extern void cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset,
+ 			    unsigned int bytes_written);
+-void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result,
+-				      bool was_async);
++void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result);
+ extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *, int);
+ extern int cifs_get_writable_file(struct cifsInodeInfo *cifs_inode,
+ 				  int flags,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index f55457b4b82e36..a3ba3346ed313f 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -1725,7 +1725,7 @@ cifs_writev_callback(struct mid_q_entry *mid)
+ 			      server->credits, server->in_flight,
+ 			      0, cifs_trace_rw_credits_write_response_clear);
+ 	wdata->credits.value = 0;
+-	cifs_write_subrequest_terminated(wdata, result, true);
++	cifs_write_subrequest_terminated(wdata, result);
+ 	release_mid(mid);
+ 	trace_smb3_rw_credits(credits.rreq_debug_id, credits.rreq_debug_index, 0,
+ 			      server->credits, server->in_flight,
+@@ -1813,7 +1813,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
+ out:
+ 	if (rc) {
+ 		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+-		cifs_write_subrequest_terminated(wdata, rc, false);
++		cifs_write_subrequest_terminated(wdata, rc);
+ 	}
+ }
+ 
+@@ -2753,10 +2753,10 @@ int cifs_query_reparse_point(const unsigned int xid,
+ 
+ 	io_req->TotalParameterCount = 0;
+ 	io_req->TotalDataCount = 0;
+-	io_req->MaxParameterCount = cpu_to_le32(2);
++	io_req->MaxParameterCount = cpu_to_le32(0);
+ 	/* BB find exact data count max from sess structure BB */
+ 	io_req->MaxDataCount = cpu_to_le32(CIFSMaxBufSize & 0xFFFFFF00);
+-	io_req->MaxSetupCount = 4;
++	io_req->MaxSetupCount = 1;
+ 	io_req->Reserved = 0;
+ 	io_req->ParameterOffset = 0;
+ 	io_req->DataCount = 0;
+@@ -2783,6 +2783,22 @@ int cifs_query_reparse_point(const unsigned int xid,
+ 		goto error;
+ 	}
+ 
++	/* SetupCount must be 1, otherwise offset to ByteCount is incorrect. */
++	if (io_rsp->SetupCount != 1) {
++		rc = -EIO;
++		goto error;
++	}
++
++	/*
++	 * ReturnedDataLen is output length of executed IOCTL.
++	 * DataCount is output length transferred over network.
++	 * Check that we have the full FSCTL_GET_REPARSE_POINT buffer.
++	 */
++	if (data_count != le16_to_cpu(io_rsp->ReturnedDataLen)) {
++		rc = -EIO;
++		goto error;
++	}
++
+ 	end = 2 + get_bcc(&io_rsp->hdr) + (__u8 *)&io_rsp->ByteCount;
+ 	start = (__u8 *)&io_rsp->hdr.Protocol + data_offset;
+ 	if (start >= end) {
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 950aa4f912f5cd..9835672267d277 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -130,7 +130,7 @@ static void cifs_issue_write(struct netfs_io_subrequest *subreq)
+ 	else
+ 		trace_netfs_sreq(subreq, netfs_sreq_trace_fail);
+ 	add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+-	cifs_write_subrequest_terminated(wdata, rc, false);
++	cifs_write_subrequest_terminated(wdata, rc);
+ 	goto out;
+ }
+ 
+@@ -219,7 +219,8 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
+ 			goto failed;
+ 	}
+ 
+-	if (subreq->rreq->origin != NETFS_DIO_READ)
++	if (subreq->rreq->origin != NETFS_UNBUFFERED_READ &&
++	    subreq->rreq->origin != NETFS_DIO_READ)
+ 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ 
+ 	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+@@ -998,15 +999,18 @@ int cifs_open(struct inode *inode, struct file *file)
+ 		rc = cifs_get_readable_path(tcon, full_path, &cfile);
+ 	}
+ 	if (rc == 0) {
+-		if (file->f_flags == cfile->f_flags) {
++		unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
++		unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
++
++		if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) &&
++		    (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) {
+ 			file->private_data = cfile;
+ 			spin_lock(&CIFS_I(inode)->deferred_lock);
+ 			cifs_del_deferred_close(cfile);
+ 			spin_unlock(&CIFS_I(inode)->deferred_lock);
+ 			goto use_cache;
+-		} else {
+-			_cifsFileInfo_put(cfile, true, false);
+ 		}
++		_cifsFileInfo_put(cfile, true, false);
+ 	} else {
+ 		/* hard link on the defeered close file */
+ 		rc = cifs_get_hardlink_path(tcon, inode, file);
+@@ -2423,8 +2427,7 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *flock)
+ 	return rc;
+ }
+ 
+-void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result,
+-				      bool was_async)
++void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result)
+ {
+ 	struct netfs_io_request *wreq = wdata->rreq;
+ 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+@@ -2441,7 +2444,7 @@ void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t
+ 			netfs_resize_file(ictx, wrend, true);
+ 	}
+ 
+-	netfs_write_subrequest_terminated(&wdata->subreq, result, was_async);
++	netfs_write_subrequest_terminated(&wdata->subreq, result);
+ }
+ 
+ struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 4e28632b5fd661..399185ca7cacb0 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4888,7 +4888,7 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ 			      0, cifs_trace_rw_credits_write_response_clear);
+ 	wdata->credits.value = 0;
+ 	trace_netfs_sreq(&wdata->subreq, netfs_sreq_trace_io_progress);
+-	cifs_write_subrequest_terminated(wdata, result ?: written, true);
++	cifs_write_subrequest_terminated(wdata, result ?: written);
+ 	release_mid(mid);
+ 	trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, 0,
+ 			      server->credits, server->in_flight,
+@@ -5061,7 +5061,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
+ 				      -(int)wdata->credits.value,
+ 				      cifs_trace_rw_credits_write_response_clear);
+ 		add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+-		cifs_write_subrequest_terminated(wdata, rc, true);
++		cifs_write_subrequest_terminated(wdata, rc);
+ 	}
+ }
+ 
+diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
+index 67c55fe32ce88d..992ea0e372572f 100644
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -202,6 +202,11 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ 	msblk->panic_on_errors = (opts->errors == Opt_errors_panic);
+ 
+ 	msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
++	if (!msblk->devblksize) {
++		errorf(fc, "squashfs: unable to set blocksize\n");
++		return -EINVAL;
++	}
++
+ 	msblk->devblksize_log2 = ffz(~msblk->devblksize);
+ 
+ 	mutex_init(&msblk->meta_index_mutex);
+diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
+index 26a04a78348967..63151feb9c3fd5 100644
+--- a/fs/xfs/xfs_aops.c
++++ b/fs/xfs/xfs_aops.c
+@@ -436,6 +436,25 @@ xfs_map_blocks(
+ 	return 0;
+ }
+ 
++static bool
++xfs_ioend_needs_wq_completion(
++	struct iomap_ioend	*ioend)
++{
++	/* Changing inode size requires a transaction. */
++	if (xfs_ioend_is_append(ioend))
++		return true;
++
++	/* Extent manipulation requires a transaction. */
++	if (ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED))
++		return true;
++
++	/* Page cache invalidation cannot be done in irq context. */
++	if (ioend->io_flags & IOMAP_IOEND_DONTCACHE)
++		return true;
++
++	return false;
++}
++
+ static int
+ xfs_submit_ioend(
+ 	struct iomap_writepage_ctx *wpc,
+@@ -460,8 +479,7 @@ xfs_submit_ioend(
+ 	memalloc_nofs_restore(nofs_flag);
+ 
+ 	/* send ioends that might require a transaction to the completion wq */
+-	if (xfs_ioend_is_append(ioend) ||
+-	    (ioend->io_flags & (IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_SHARED)))
++	if (xfs_ioend_needs_wq_completion(ioend))
+ 		ioend->io_bio.bi_end_io = xfs_end_bio;
+ 
+ 	if (status)
+diff --git a/fs/xfs/xfs_discard.c b/fs/xfs/xfs_discard.c
+index c1a306268ae439..94d0873bcd6289 100644
+--- a/fs/xfs/xfs_discard.c
++++ b/fs/xfs/xfs_discard.c
+@@ -167,6 +167,14 @@ xfs_discard_extents(
+ 	return error;
+ }
+ 
++/*
++ * Care must be taken setting up the trim cursor as the perags may not have been
++ * initialised when the cursor is initialised. e.g. a clean mount which hasn't
++ * read in AGFs and the first operation run on the mounted fs is a trim. This
++ * can result in perag fields that aren't initialised until
++ * xfs_trim_gather_extents() calls xfs_alloc_read_agf() to lock down the AG for
++ * the free space search.
++ */
+ struct xfs_trim_cur {
+ 	xfs_agblock_t	start;
+ 	xfs_extlen_t	count;
+@@ -204,6 +212,14 @@ xfs_trim_gather_extents(
+ 	if (error)
+ 		goto out_trans_cancel;
+ 
++	/*
++	 * First time through tcur->count will not have been initialised as
++	 * pag->pagf_longest is not guaranteed to be valid before we read
++	 * the AGF buffer above.
++	 */
++	if (!tcur->count)
++		tcur->count = pag->pagf_longest;
++
+ 	if (tcur->by_bno) {
+ 		/* sub-AG discard request always starts at tcur->start */
+ 		cur = xfs_bnobt_init_cursor(mp, tp, agbp, pag);
+@@ -350,7 +366,6 @@ xfs_trim_perag_extents(
+ {
+ 	struct xfs_trim_cur	tcur = {
+ 		.start		= start,
+-		.count		= pag->pagf_longest,
+ 		.end		= end,
+ 		.minlen		= minlen,
+ 	};
+diff --git a/include/crypto/sig.h b/include/crypto/sig.h
+index 11024708c06929..fa6dafafab3f0d 100644
+--- a/include/crypto/sig.h
++++ b/include/crypto/sig.h
+@@ -128,7 +128,7 @@ static inline void crypto_free_sig(struct crypto_sig *tfm)
+ /**
+  * crypto_sig_keysize() - Get key size
+  *
+- * Function returns the key size in bytes.
++ * Function returns the key size in bits.
+  * Function assumes that the key is already set in the transformation. If this
+  * function is called without a setkey or with a failed setkey, you may end up
+  * in a NULL dereference.
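Since the helper reports bits rather than bytes, callers sizing signature
buffers need an explicit conversion, along these lines (a sketch using the
kernel's BITS_TO_BYTES() helper; the variable name is illustrative):

	max_sig_bytes = BITS_TO_BYTES(crypto_sig_keysize(tfm));
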
+diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
+index abf0bd76e3703a..6f5976aca3e860 100644
+--- a/include/hyperv/hvgdk_mini.h
++++ b/include/hyperv/hvgdk_mini.h
+@@ -475,7 +475,7 @@ union hv_vp_assist_msr_contents {	 /* HV_REGISTER_VP_ASSIST_PAGE */
+ #define HVCALL_CREATE_PORT				0x0095
+ #define HVCALL_CONNECT_PORT				0x0096
+ #define HVCALL_START_VP					0x0099
+-#define HVCALL_GET_VP_ID_FROM_APIC_ID			0x009a
++#define HVCALL_GET_VP_INDEX_FROM_APIC_ID			0x009a
+ #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE	0x00af
+ #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST	0x00b0
+ #define HVCALL_SIGNAL_EVENT_DIRECT			0x00c0
+diff --git a/include/kunit/clk.h b/include/kunit/clk.h
+index 0afae7688157b8..f226044cc78d11 100644
+--- a/include/kunit/clk.h
++++ b/include/kunit/clk.h
+@@ -6,6 +6,7 @@ struct clk;
+ struct clk_hw;
+ struct device;
+ struct device_node;
++struct of_phandle_args;
+ struct kunit;
+ 
+ struct clk *
+diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
+index 255701e1251b4a..f652a5028b5907 100644
+--- a/include/linux/arm_sdei.h
++++ b/include/linux/arm_sdei.h
+@@ -46,12 +46,12 @@ int sdei_unregister_ghes(struct ghes *ghes);
+ /* For use by arch code when CPU hotplug notifiers are not appropriate. */
+ int sdei_mask_local_cpu(void);
+ int sdei_unmask_local_cpu(void);
+-void __init sdei_init(void);
++void __init acpi_sdei_init(void);
+ void sdei_handler_abort(void);
+ #else
+ static inline int sdei_mask_local_cpu(void) { return 0; }
+ static inline int sdei_unmask_local_cpu(void) { return 0; }
+-static inline void sdei_init(void) { }
++static inline void acpi_sdei_init(void) { }
+ static inline void sdei_handler_abort(void) { }
+ #endif /* CONFIG_ARM_SDE_INTERFACE */
+ 
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index b786ec5bcc81d0..b474a47ec7eefe 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -291,7 +291,7 @@ static inline void bio_first_folio(struct folio_iter *fi, struct bio *bio,
+ 
+ 	fi->folio = page_folio(bvec->bv_page);
+ 	fi->offset = bvec->bv_offset +
+-			PAGE_SIZE * (bvec->bv_page - &fi->folio->page);
++			PAGE_SIZE * folio_page_idx(fi->folio, bvec->bv_page);
+ 	fi->_seg_count = bvec->bv_len;
+ 	fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count);
+ 	fi->_next = folio_next(fi->folio);
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 9734544b6957cf..d1f02f8e3e55f4 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -356,7 +356,11 @@ enum {
+ 	INSN_F_SPI_MASK = 0x3f, /* 6 bits */
+ 	INSN_F_SPI_SHIFT = 3, /* shifted 3 bits to the left */
+ 
+-	INSN_F_STACK_ACCESS = BIT(9), /* we need 10 bits total */
++	INSN_F_STACK_ACCESS = BIT(9),
++
++	INSN_F_DST_REG_STACK = BIT(10), /* dst_reg is PTR_TO_STACK */
++	INSN_F_SRC_REG_STACK = BIT(11), /* src_reg is PTR_TO_STACK */
++	/* total 12 bits are used now. */
+ };
+ 
+ static_assert(INSN_F_FRAMENO_MASK + 1 >= MAX_CALL_FRAMES);
+@@ -365,9 +369,9 @@ static_assert(INSN_F_SPI_MASK + 1 >= MAX_BPF_STACK / 8);
+ struct bpf_insn_hist_entry {
+ 	u32 idx;
+ 	/* insn idx can't be bigger than 1 million */
+-	u32 prev_idx : 22;
+-	/* special flags, e.g., whether insn is doing register stack spill/load */
+-	u32 flags : 10;
++	u32 prev_idx : 20;
++	/* special INSN_F_xxx flags */
++	u32 flags : 12;
+ 	/* additional registers that need precision tracking when this
+ 	 * jump is backtracked, vector of six 10-bit records
+ 	 */
+diff --git a/include/linux/bvec.h b/include/linux/bvec.h
+index 204b22a99c4ba6..0a80e1f9aa201c 100644
+--- a/include/linux/bvec.h
++++ b/include/linux/bvec.h
+@@ -57,9 +57,12 @@ static inline void bvec_set_page(struct bio_vec *bv, struct page *page,
+  * @offset:	offset into the folio
+  */
+ static inline void bvec_set_folio(struct bio_vec *bv, struct folio *folio,
+-		unsigned int len, unsigned int offset)
++		size_t len, size_t offset)
+ {
+-	bvec_set_page(bv, &folio->page, len, offset);
++	unsigned long nr = offset / PAGE_SIZE;
++
++	WARN_ON_ONCE(len > UINT_MAX);
++	bvec_set_page(bv, folio_page(folio, nr), len, offset % PAGE_SIZE);
+ }
+ 
+ /**
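The widened bvec_set_folio() splits a large-folio offset into a
constituent-page index and an in-page remainder, so the resulting bio_vec
never points past a single page. A worked example, assuming 4 KiB pages:

	/* offset = 9000 bytes into the folio:
	 *   nr             = 9000 / 4096 = 2   (third page of the folio)
	 *   in-page offset = 9000 % 4096 = 808
	 */
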
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index d79a242b271d6e..cfcf6e4707ed94 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -723,7 +723,7 @@ coresight_find_output_type(struct coresight_platform_data *pdata,
+ 			   union coresight_dev_subtype subtype);
+ 
+ int coresight_init_driver(const char *drv, struct amba_driver *amba_drv,
+-			  struct platform_driver *pdev_drv);
++			  struct platform_driver *pdev_drv, struct module *owner);
+ 
+ void coresight_remove_driver(struct amba_driver *amba_drv,
+ 			     struct platform_driver *pdev_drv);
+diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
+index fc93f0abf513cd..25c4a5afbd4432 100644
+--- a/include/linux/exportfs.h
++++ b/include/linux/exportfs.h
+@@ -314,6 +314,9 @@ static inline bool exportfs_can_decode_fh(const struct export_operations *nop)
+ static inline bool exportfs_can_encode_fh(const struct export_operations *nop,
+ 					  int fh_flags)
+ {
++	if (!nop)
++		return false;
++
+ 	/*
+ 	 * If a non-decodeable file handle was requested, we only need to make
+ 	 * sure that filesystem did not opt-out of encoding fid.
+@@ -321,6 +324,13 @@ static inline bool exportfs_can_encode_fh(const struct export_operations *nop,
+ 	if (fh_flags & EXPORT_FH_FID)
+ 		return exportfs_can_encode_fid(nop);
+ 
++	/*
++	 * If a connectable file handle was requested, we need to make sure that
++	 * filesystem can also decode connected file handles.
++	 */
++	if ((fh_flags & EXPORT_FH_CONNECTABLE) && !nop->fh_to_parent)
++		return false;
++
+ 	/*
+ 	 * If a decodeable file handle was requested, we need to make sure that
+ 	 * filesystem can also decode file handles.
+diff --git a/include/linux/fscache.h b/include/linux/fscache.h
+index 9de27643607fb1..266e6c9e6f83ad 100644
+--- a/include/linux/fscache.h
++++ b/include/linux/fscache.h
+@@ -628,7 +628,7 @@ static inline void fscache_write_to_cache(struct fscache_cookie *cookie,
+ 					 term_func, term_func_priv,
+ 					 using_pgpriv2, caching);
+ 	else if (term_func)
+-		term_func(term_func_priv, -ENOBUFS, false);
++		term_func(term_func_priv, -ENOBUFS);
+ 
+ }
+ 
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index ef9a90ca0fbd6a..daae1d6d11a744 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -740,8 +740,9 @@ struct hid_descriptor {
+ 	__le16 bcdHID;
+ 	__u8  bCountryCode;
+ 	__u8  bNumDescriptors;
++	struct hid_class_descriptor rpt_desc;
+ 
+-	struct hid_class_descriptor desc[1];
++	struct hid_class_descriptor opt_descs[];
+ } __attribute__ ((packed));
+ 
+ #define HID_DEVICE(b, g, ven, prod)					\
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 457b4fba88bd00..7edc3fb0641cba 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -111,6 +111,8 @@
+ 
+ /* bits unique to S1G beacon */
+ #define IEEE80211_S1G_BCN_NEXT_TBTT	0x100
++#define IEEE80211_S1G_BCN_CSSID		0x200
++#define IEEE80211_S1G_BCN_ANO		0x400
+ 
+ /* see 802.11ah-2016 9.9 NDP CMAC frames */
+ #define IEEE80211_S1G_1MHZ_NDP_BITS	25
+@@ -153,9 +155,6 @@
+ 
+ #define IEEE80211_ANO_NETTYPE_WILD              15
+ 
+-/* bits unique to S1G beacon */
+-#define IEEE80211_S1G_BCN_NEXT_TBTT    0x100
+-
+ /* control extension - for IEEE80211_FTYPE_CTL | IEEE80211_STYPE_CTL_EXT */
+ #define IEEE80211_CTL_EXT_POLL		0x2000
+ #define IEEE80211_CTL_EXT_SPR		0x3000
+@@ -627,6 +626,42 @@ static inline bool ieee80211_is_s1g_beacon(__le16 fc)
+ 	       cpu_to_le16(IEEE80211_FTYPE_EXT | IEEE80211_STYPE_S1G_BEACON);
+ }
+ 
++/**
++ * ieee80211_s1g_has_next_tbtt - check if IEEE80211_S1G_BCN_NEXT_TBTT is set
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: whether or not the frame contains the optional fixed-length
++ *	next TBTT field
++ */
++static inline bool ieee80211_s1g_has_next_tbtt(__le16 fc)
++{
++	return ieee80211_is_s1g_beacon(fc) &&
++		(fc & cpu_to_le16(IEEE80211_S1G_BCN_NEXT_TBTT));
++}
++
++/**
++ * ieee80211_s1g_has_ano - check if IEEE80211_S1G_BCN_ANO is set
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: whether or not the frame contains the optional fixed-length
++ *	ANO field
++ */
++static inline bool ieee80211_s1g_has_ano(__le16 fc)
++{
++	return ieee80211_is_s1g_beacon(fc) &&
++		(fc & cpu_to_le16(IEEE80211_S1G_BCN_ANO));
++}
++
++/**
++ * ieee80211_s1g_has_cssid - check if IEEE80211_S1G_BCN_CSSID is set
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: whether or not the frame contains the optional fixed-length
++ *	compressed SSID field
++ */
++static inline bool ieee80211_s1g_has_cssid(__le16 fc)
++{
++	return ieee80211_is_s1g_beacon(fc) &&
++		(fc & cpu_to_le16(IEEE80211_S1G_BCN_CSSID));
++}
++
+ /**
+  * ieee80211_is_s1g_short_beacon - check if frame is an S1G short beacon
+  * @fc: frame control bytes in little-endian byteorder
+@@ -1245,16 +1280,40 @@ struct ieee80211_ext {
+ 			u8 change_seq;
+ 			u8 variable[0];
+ 		} __packed s1g_beacon;
+-		struct {
+-			u8 sa[ETH_ALEN];
+-			__le32 timestamp;
+-			u8 change_seq;
+-			u8 next_tbtt[3];
+-			u8 variable[0];
+-		} __packed s1g_short_beacon;
+ 	} u;
+ } __packed __aligned(2);
+ 
++/**
++ * ieee80211_s1g_optional_len - determine length of optional S1G beacon fields
++ * @fc: frame control bytes in little-endian byteorder
++ * Return: total length in bytes of the optional fixed-length fields
++ *
++ * S1G beacons may contain up to three optional fixed-length fields that
++ * precede the variable-length elements. Whether these fields are present
++ * is indicated by flags in the frame control field.
++ *
++ * From IEEE 802.11-2024 section 9.3.4.3:
++ *  - Next TBTT field may be 0 or 3 bytes
++ *  - Short SSID field may be 0 or 4 bytes
++ *  - Access Network Options (ANO) field may be 0 or 1 byte
++ */
++static inline size_t
++ieee80211_s1g_optional_len(__le16 fc)
++{
++	size_t len = 0;
++
++	if (ieee80211_s1g_has_next_tbtt(fc))
++		len += 3;
++
++	if (ieee80211_s1g_has_cssid(fc))
++		len += 4;
++
++	if (ieee80211_s1g_has_ano(fc))
++		len += 1;
++
++	return len;
++}
++
+ #define IEEE80211_TWT_CONTROL_NDP			BIT(0)
+ #define IEEE80211_TWT_CONTROL_RESP_MODE			BIT(1)
+ #define IEEE80211_TWT_CONTROL_NEG_TYPE_BROADCAST	BIT(3)
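Together with the flag helpers, ieee80211_s1g_optional_len() lets a parser
jump straight to the variable-length elements: a beacon advertising Next TBTT
and a compressed SSID but no ANO, for instance, carries 3 + 4 = 7 optional
bytes. A usage sketch in kernel context (the local variable is illustrative):

	/* Start of the variable-length elements in an S1G beacon. */
	elem_start = ext->u.s1g_beacon.variable +
		     ieee80211_s1g_optional_len(ext->frame_control);
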
+diff --git a/include/linux/iomap.h b/include/linux/iomap.h
+index 68416b135151d7..522644d62f30f0 100644
+--- a/include/linux/iomap.h
++++ b/include/linux/iomap.h
+@@ -377,13 +377,16 @@ sector_t iomap_bmap(struct address_space *mapping, sector_t bno,
+ #define IOMAP_IOEND_BOUNDARY		(1U << 2)
+ /* is direct I/O */
+ #define IOMAP_IOEND_DIRECT		(1U << 3)
++/* is DONTCACHE I/O */
++#define IOMAP_IOEND_DONTCACHE		(1U << 4)
+ 
+ /*
+  * Flags that if set on either ioend prevent the merge of two ioends.
+  * (IOMAP_IOEND_BOUNDARY also prevents merges, but only one-way)
+  */
+ #define IOMAP_IOEND_NOMERGE_FLAGS \
+-	(IOMAP_IOEND_SHARED | IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_DIRECT)
++	(IOMAP_IOEND_SHARED | IOMAP_IOEND_UNWRITTEN | IOMAP_IOEND_DIRECT | \
++	 IOMAP_IOEND_DONTCACHE)
+ 
+ /*
+  * Structure for writeback I/O completions.
+diff --git a/include/linux/mdio.h b/include/linux/mdio.h
+index 3c3deac57894ef..e43ff9f980a46b 100644
+--- a/include/linux/mdio.h
++++ b/include/linux/mdio.h
+@@ -45,10 +45,7 @@ struct mdio_device {
+ 	unsigned int reset_deassert_delay;
+ };
+ 
+-static inline struct mdio_device *to_mdio_device(const struct device *dev)
+-{
+-	return container_of(dev, struct mdio_device, dev);
+-}
++#define to_mdio_device(__dev)	container_of_const(__dev, struct mdio_device, dev)
+ 
+ /* struct mdio_driver_common: Common to all MDIO drivers */
+ struct mdio_driver_common {
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index d1dfbad9a44730..e6ba8f4f4bd1f4 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -398,6 +398,7 @@ struct mlx5_core_rsc_common {
+ 	enum mlx5_res_type	res;
+ 	refcount_t		refcount;
+ 	struct completion	free;
++	bool			invalid;
+ };
+ 
+ struct mlx5_uars_page {
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index fdda6b16263b35..e51dba8398f747 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -4265,4 +4265,62 @@ int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
+ #define VM_SEALED_SYSMAP	VM_NONE
+ #endif
+ 
++/*
++ * DMA mapping IDs for page_pool
++ *
++ * When DMA-mapping a page, page_pool allocates an ID (from an xarray) and
++ * stashes it in the upper bits of page->pp_magic. We always want to be able to
++ * unambiguously identify page pool pages (using page_pool_page_is_pp()). Non-PP
++ * pages can have arbitrary kernel pointers stored in the same field as pp_magic
++ * (since it overlaps with page->lru.next), so we must ensure that we cannot
++ * mistake a valid kernel pointer for any of the values we write into this
++ * field.
++ *
++ * On architectures that set POISON_POINTER_DELTA, this is already ensured,
++ * since this value becomes part of PP_SIGNATURE; meaning we can just use the
++ * space between the PP_SIGNATURE value (without POISON_POINTER_DELTA), and the
++ * lowest bits of POISON_POINTER_DELTA. On arches where POISON_POINTER_DELTA is
++ * 0, we make sure that we leave the two topmost bits empty, as that guarantees
++ * we won't mistake a valid kernel pointer for a value we set, regardless of the
++ * VMSPLIT setting.
++ *
++ * Altogether, this means that the number of bits available is constrained by
++ * the size of an unsigned long (at the upper end, subtracting two bits per the
++ * above), and the definition of PP_SIGNATURE (with or without
++ * POISON_POINTER_DELTA).
++ */
++#define PP_DMA_INDEX_SHIFT (1 + __fls(PP_SIGNATURE - POISON_POINTER_DELTA))
++#if POISON_POINTER_DELTA > 0
++/* PP_SIGNATURE includes POISON_POINTER_DELTA, so limit the size of the DMA
++ * index to not overlap with that if set
++ */
++#define PP_DMA_INDEX_BITS MIN(32, __ffs(POISON_POINTER_DELTA) - PP_DMA_INDEX_SHIFT)
++#else
++/* Always leave out the topmost two; see above. */
++#define PP_DMA_INDEX_BITS MIN(32, BITS_PER_LONG - PP_DMA_INDEX_SHIFT - 2)
++#endif
++
++#define PP_DMA_INDEX_MASK GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
++				  PP_DMA_INDEX_SHIFT)
++
++/* Mask used for checking in page_pool_page_is_pp() below. page->pp_magic is
++ * OR'ed with PP_SIGNATURE after the allocation in order to preserve bit 0 for
++ * the head page of compound page and bit 1 for pfmemalloc page, as well as the
++ * bits used for the DMA index. page_is_pfmemalloc() is checked in
++ * __page_pool_put_page() to avoid recycling the pfmemalloc page.
++ */
++#define PP_MAGIC_MASK ~(PP_DMA_INDEX_MASK | 0x3UL)
++
++#ifdef CONFIG_PAGE_POOL
++static inline bool page_pool_page_is_pp(struct page *page)
++{
++	return (page->pp_magic & PP_MAGIC_MASK) == PP_SIGNATURE;
++}
++#else
++static inline bool page_pool_page_is_pp(struct page *page)
++{
++	return false;
++}
++#endif
++
+ #endif /* _LINUX_MM_H */
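
The bit arithmetic is easier to follow with concrete numbers. A minimal
worked example, assuming a 64-bit build with POISON_POINTER_DELTA == 0 (so
PP_SIGNATURE == 0x40, taking the second branch above):

#include <stdio.h>

#define BITS_PER_LONG		64
#define POISON_POINTER_DELTA	0UL
#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)

/* __fls(): 0-based index of the most significant set bit */
static unsigned int __fls(unsigned long x)
{
	return BITS_PER_LONG - 1 - __builtin_clzl(x);
}

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define GENMASK(h, l) \
	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

int main(void)
{
	unsigned int shift = 1 + __fls(PP_SIGNATURE - POISON_POINTER_DELTA);
	unsigned int bits  = MIN(32, BITS_PER_LONG - shift - 2);
	unsigned long mask = GENMASK(bits + shift - 1, shift);

	/* shift=7 bits=32 mask=0x7fffffff80 (bits 38..7) */
	printf("shift=%u bits=%u mask=%#lx\n", shift, bits, mask);
	return 0;
}
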
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index dcc17ce8a959e0..1a3136e53eaa07 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -22,48 +22,52 @@ struct fs_context;
+ struct file;
+ struct path;
+ 
+-#define MNT_NOSUID	0x01
+-#define MNT_NODEV	0x02
+-#define MNT_NOEXEC	0x04
+-#define MNT_NOATIME	0x08
+-#define MNT_NODIRATIME	0x10
+-#define MNT_RELATIME	0x20
+-#define MNT_READONLY	0x40	/* does the user want this to be r/o? */
+-#define MNT_NOSYMFOLLOW	0x80
+-
+-#define MNT_SHRINKABLE	0x100
+-#define MNT_WRITE_HOLD	0x200
+-
+-#define MNT_SHARED	0x1000	/* if the vfsmount is a shared mount */
+-#define MNT_UNBINDABLE	0x2000	/* if the vfsmount is a unbindable mount */
+-/*
+- * MNT_SHARED_MASK is the set of flags that should be cleared when a
+- * mount becomes shared.  Currently, this is only the flag that says a
+- * mount cannot be bind mounted, since this is how we create a mount
+- * that shares events with another mount.  If you add a new MNT_*
+- * flag, consider how it interacts with shared mounts.
+- */
+-#define MNT_SHARED_MASK	(MNT_UNBINDABLE)
+-#define MNT_USER_SETTABLE_MASK  (MNT_NOSUID | MNT_NODEV | MNT_NOEXEC \
+-				 | MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME \
+-				 | MNT_READONLY | MNT_NOSYMFOLLOW)
+-#define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME )
+-
+-#define MNT_INTERNAL_FLAGS (MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | \
+-			    MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED)
+-
+-#define MNT_INTERNAL	0x4000
+-
+-#define MNT_LOCK_ATIME		0x040000
+-#define MNT_LOCK_NOEXEC		0x080000
+-#define MNT_LOCK_NOSUID		0x100000
+-#define MNT_LOCK_NODEV		0x200000
+-#define MNT_LOCK_READONLY	0x400000
+-#define MNT_LOCKED		0x800000
+-#define MNT_DOOMED		0x1000000
+-#define MNT_SYNC_UMOUNT		0x2000000
+-#define MNT_MARKED		0x4000000
+-#define MNT_UMOUNT		0x8000000
++enum mount_flags {
++	MNT_NOSUID	= 0x01,
++	MNT_NODEV	= 0x02,
++	MNT_NOEXEC	= 0x04,
++	MNT_NOATIME	= 0x08,
++	MNT_NODIRATIME	= 0x10,
++	MNT_RELATIME	= 0x20,
++	MNT_READONLY	= 0x40, /* does the user want this to be r/o? */
++	MNT_NOSYMFOLLOW	= 0x80,
++
++	MNT_SHRINKABLE	= 0x100,
++	MNT_WRITE_HOLD	= 0x200,
++
++	MNT_SHARED	= 0x1000, /* if the vfsmount is a shared mount */
++	MNT_UNBINDABLE	= 0x2000, /* if the vfsmount is a unbindable mount */
++
++	MNT_INTERNAL	= 0x4000,
++
++	MNT_LOCK_ATIME		= 0x040000,
++	MNT_LOCK_NOEXEC		= 0x080000,
++	MNT_LOCK_NOSUID		= 0x100000,
++	MNT_LOCK_NODEV		= 0x200000,
++	MNT_LOCK_READONLY	= 0x400000,
++	MNT_LOCKED		= 0x800000,
++	MNT_DOOMED		= 0x1000000,
++	MNT_SYNC_UMOUNT		= 0x2000000,
++	MNT_MARKED		= 0x4000000,
++	MNT_UMOUNT		= 0x8000000,
++
++	/*
++	 * MNT_SHARED_MASK is the set of flags that should be cleared when a
++	 * mount becomes shared.  Currently, this is only the flag that says a
++	 * mount cannot be bind mounted, since this is how we create a mount
++	 * that shares events with another mount.  If you add a new MNT_*
++	 * flag, consider how it interacts with shared mounts.
++	 */
++	MNT_SHARED_MASK	= MNT_UNBINDABLE,
++	MNT_USER_SETTABLE_MASK  = MNT_NOSUID | MNT_NODEV | MNT_NOEXEC
++				  | MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME
++				  | MNT_READONLY | MNT_NOSYMFOLLOW,
++	MNT_ATIME_MASK = MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME,
++
++	MNT_INTERNAL_FLAGS = MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL |
++			     MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED |
++			     MNT_LOCKED,
++};
+ 
+ struct vfsmount {
+ 	struct dentry *mnt_root;	/* root of the mounted tree */
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 7ea022750e4e09..33338a233cc724 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1012,9 +1012,13 @@ struct netdev_bpf {
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+ struct xfrmdev_ops {
+-	int	(*xdo_dev_state_add) (struct xfrm_state *x, struct netlink_ext_ack *extack);
+-	void	(*xdo_dev_state_delete) (struct xfrm_state *x);
+-	void	(*xdo_dev_state_free) (struct xfrm_state *x);
++	int	(*xdo_dev_state_add)(struct net_device *dev,
++				     struct xfrm_state *x,
++				     struct netlink_ext_ack *extack);
++	void	(*xdo_dev_state_delete)(struct net_device *dev,
++					struct xfrm_state *x);
++	void	(*xdo_dev_state_free)(struct net_device *dev,
++				      struct xfrm_state *x);
+ 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
+ 				       struct xfrm_state *x);
+ 	void	(*xdo_dev_state_advance_esn) (struct xfrm_state *x);
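
The new signatures hand drivers the offloading device explicitly, so the
callbacks keep working when the xfrm state is owned by a bonding master. A
hedged sketch of the driver side; the driver name and its helpers are
hypothetical:

static int mydrv_xdo_dev_state_add(struct net_device *dev,
				   struct xfrm_state *x,
				   struct netlink_ext_ack *extack)
{
	struct mydrv_priv *priv = netdev_priv(dev);	/* hypothetical */

	/* Act on the 'dev' argument rather than x->xso.dev or
	 * x->xso.real_dev, which may name the bond instead of the
	 * device doing the offload. */
	return mydrv_install_sa(priv, x, extack);	/* hypothetical */
}

static void mydrv_xdo_dev_state_delete(struct net_device *dev,
				       struct xfrm_state *x)
{
	mydrv_remove_sa(netdev_priv(dev), x);		/* hypothetical */
}
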
+diff --git a/include/linux/netfs.h b/include/linux/netfs.h
+index c86a11cfc4a36a..1464b3a104989d 100644
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -51,8 +51,7 @@ enum netfs_io_source {
+ 	NETFS_INVALID_WRITE,
+ } __mode(byte);
+ 
+-typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error,
+-				      bool was_async);
++typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error);
+ 
+ /*
+  * Per-inode context.  This wraps the VFS inode.
+@@ -207,6 +206,7 @@ enum netfs_io_origin {
+ 	NETFS_READ_GAPS,		/* This read is a synchronous read to fill gaps */
+ 	NETFS_READ_SINGLE,		/* This read should be treated as a single object */
+ 	NETFS_READ_FOR_WRITE,		/* This read is to prepare a write */
++	NETFS_UNBUFFERED_READ,		/* This is an unbuffered read */
+ 	NETFS_DIO_READ,			/* This is a direct I/O read */
+ 	NETFS_WRITEBACK,		/* This write was triggered by writepages */
+ 	NETFS_WRITEBACK_SINGLE,		/* This monolithic write was triggered by writepages */
+@@ -223,9 +223,10 @@ enum netfs_io_origin {
+  */
+ struct netfs_io_request {
+ 	union {
+-		struct work_struct work;
++		struct work_struct cleanup_work; /* Deferred cleanup work */
+ 		struct rcu_head rcu;
+ 	};
++	struct work_struct	work;		/* Result collector work */
+ 	struct inode		*inode;		/* The file being accessed */
+ 	struct address_space	*mapping;	/* The mapping being accessed */
+ 	struct kiocb		*iocb;		/* AIO completion vector */
+@@ -270,7 +271,7 @@ struct netfs_io_request {
+ #define NETFS_RREQ_NO_UNLOCK_FOLIO	2	/* Don't unlock no_unlock_folio on completion */
+ #define NETFS_RREQ_DONT_UNLOCK_FOLIOS	3	/* Don't unlock the folios on completion */
+ #define NETFS_RREQ_FAILED		4	/* The request failed */
+-#define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
++#define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes (has ref) */
+ #define NETFS_RREQ_FOLIO_COPY_TO_CACHE	6	/* Copy current folio to cache from read */
+ #define NETFS_RREQ_UPLOAD_TO_SERVER	8	/* Need to write to the server */
+ #define NETFS_RREQ_NONBLOCK		9	/* Don't block if possible (O_NONBLOCK) */
+@@ -279,6 +280,7 @@ struct netfs_io_request {
+ #define NETFS_RREQ_USE_IO_ITER		12	/* Use ->io_iter rather than ->i_pages */
+ #define NETFS_RREQ_ALL_QUEUED		13	/* All subreqs are now queued */
+ #define NETFS_RREQ_RETRYING		14	/* Set if we're in the retry path */
++#define NETFS_RREQ_SHORT_TRANSFER	15	/* Set if we have a short transfer */
+ #define NETFS_RREQ_USE_PGPRIV2		31	/* [DEPRECATED] Use PG_private_2 to mark
+ 						 * write to cache on read */
+ 	const struct netfs_request_ops *netfs_ops;
+@@ -439,15 +441,14 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);
+ void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
+ 			  enum netfs_sreq_ref_trace what);
+ void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+-			  bool was_async, enum netfs_sreq_ref_trace what);
++			  enum netfs_sreq_ref_trace what);
+ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
+ 				struct iov_iter *new,
+ 				iov_iter_extraction_t extraction_flags);
+ size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
+ 			size_t max_size, size_t max_segs);
+ void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq);
+-void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+-				       bool was_async);
++void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error);
+ void netfs_queue_write_request(struct netfs_io_subrequest *subreq);
+ 
+ int netfs_start_io_read(struct inode *inode);
+diff --git a/include/linux/nfslocalio.h b/include/linux/nfslocalio.h
+index 9aa8a43843d717..5c7c92659e736f 100644
+--- a/include/linux/nfslocalio.h
++++ b/include/linux/nfslocalio.h
+@@ -50,10 +50,6 @@ void nfs_localio_invalidate_clients(struct list_head *nn_local_clients,
+ 				    spinlock_t *nn_local_clients_lock);
+ 
+ /* localio needs to map filehandle -> struct nfsd_file */
+-extern struct nfsd_file *
+-nfsd_open_local_fh(struct net *, struct auth_domain *, struct rpc_clnt *,
+-		   const struct cred *, const struct nfs_fh *,
+-		   const fmode_t) __must_hold(rcu);
+ void nfs_close_local_fh(struct nfs_file_localio *);
+ 
+ struct nfsd_localio_operations {
+@@ -64,10 +60,10 @@ struct nfsd_localio_operations {
+ 						struct rpc_clnt *,
+ 						const struct cred *,
+ 						const struct nfs_fh *,
++						struct nfsd_file __rcu **pnf,
+ 						const fmode_t);
+-	struct net *(*nfsd_file_put_local)(struct nfsd_file *);
+-	struct nfsd_file *(*nfsd_file_get)(struct nfsd_file *);
+-	void (*nfsd_file_put)(struct nfsd_file *);
++	struct net *(*nfsd_file_put_local)(struct nfsd_file __rcu **);
++	struct nfsd_file *(*nfsd_file_get_local)(struct nfsd_file *);
+ 	struct file *(*nfsd_file_file)(struct nfsd_file *);
+ } ____cacheline_aligned;
+ 
+@@ -77,6 +73,7 @@ extern const struct nfsd_localio_operations *nfs_to;
+ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *,
+ 		   struct rpc_clnt *, const struct cred *,
+ 		   const struct nfs_fh *, struct nfs_file_localio *,
++		   struct nfsd_file __rcu **pnf,
+ 		   const fmode_t);
+ 
+ static inline void nfs_to_nfsd_net_put(struct net *net)
+@@ -91,16 +88,19 @@ static inline void nfs_to_nfsd_net_put(struct net *net)
+ 	rcu_read_unlock();
+ }
+ 
+-static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
++static inline void nfs_to_nfsd_file_put_local(struct nfsd_file __rcu **localio)
+ {
+ 	/*
+-	 * Must not hold RCU otherwise nfsd_file_put() can easily trigger:
+-	 * "Voluntary context switch within RCU read-side critical section!"
+-	 * by scheduling deep in underlying filesystem (e.g. XFS).
++	 * Either *localio must be guaranteed to be non-NULL, or the caller
++	 * must prevent nfsd shutdown from completing, as nfs_close_local_fh()
++	 * does, by blocking the nfs_uuid from being finally put.
+ 	 */
+-	struct net *net = nfs_to->nfsd_file_put_local(localio);
++	struct net *net;
+ 
+-	nfs_to_nfsd_net_put(net);
++	net = nfs_to->nfsd_file_put_local(localio);
++
++	if (net)
++		nfs_to_nfsd_net_put(net);
+ }
+ 
+ #else   /* CONFIG_NFS_LOCALIO */
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index 2479ed10f53e37..5d7afb6079f1d8 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -2094,7 +2094,7 @@ enum {
+ 	NVME_SC_BAD_ATTRIBUTES		= 0x180,
+ 	NVME_SC_INVALID_PI		= 0x181,
+ 	NVME_SC_READ_ONLY		= 0x182,
+-	NVME_SC_ONCS_NOT_SUPPORTED	= 0x183,
++	NVME_SC_CMD_SIZE_LIM_EXCEEDED	= 0x183,
+ 
+ 	/*
+ 	 * I/O Command Set Specific - Fabrics commands:
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 0c7e3dcfe8670c..89e9d604988351 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -389,24 +389,37 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+ 	struct_size((type *)NULL, member, count)
+ 
+ /**
+- * _DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
+- * Enables caller macro to pass (different) initializer.
++ * __DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
++ * Enables caller macro to pass arbitrary trailing expressions
+  *
+  * @type: structure type name, including "struct" keyword.
+  * @name: Name for a variable to define.
+  * @member: Name of the array member.
+  * @count: Number of elements in the array; must be compile-time const.
+- * @initializer: initializer expression (could be empty for no init).
++ * @trailer: Trailing expressions for attributes and/or initializers.
+  */
+-#define _DEFINE_FLEX(type, name, member, count, initializer...)			\
++#define __DEFINE_FLEX(type, name, member, count, trailer...)			\
+ 	_Static_assert(__builtin_constant_p(count),				\
+ 		       "onstack flex array members require compile-time const count"); \
+ 	union {									\
+ 		u8 bytes[struct_size_t(type, member, count)];			\
+ 		type obj;							\
+-	} name##_u initializer;							\
++	} name##_u trailer;							\
+ 	type *name = (type *)&name##_u
+ 
++/**
++ * _DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
++ * Enables caller macro to pass (different) initializer.
++ *
++ * @type: structure type name, including "struct" keyword.
++ * @name: Name for a variable to define.
++ * @member: Name of the array member.
++ * @count: Number of elements in the array; must be compile-time const.
++ * @initializer: Initializer expression (e.g., pass `= { }` at minimum).
++ */
++#define _DEFINE_FLEX(type, name, member, count, initializer...)			\
++	__DEFINE_FLEX(type, name, member, count, = { .obj initializer })
++
+ /**
+  * DEFINE_RAW_FLEX() - Define an on-stack instance of structure with a trailing
+  * flexible array member, when it does not have a __counted_by annotation.
+@@ -421,7 +434,7 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+  * Use __struct_size(@name) to get compile-time size of it afterwards.
+  */
+ #define DEFINE_RAW_FLEX(type, name, member, count)	\
+-	_DEFINE_FLEX(type, name, member, count, = {})
++	__DEFINE_FLEX(type, name, member, count, = { })
+ 
+ /**
+  * DEFINE_FLEX() - Define an on-stack instance of structure with a trailing
+@@ -438,6 +451,6 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
+  * Use __struct_size(@NAME) to get compile-time size of it afterwards.
+  */
+ #define DEFINE_FLEX(TYPE, NAME, MEMBER, COUNTER, COUNT)	\
+-	_DEFINE_FLEX(TYPE, NAME, MEMBER, COUNT, = { .obj.COUNTER = COUNT, })
++	_DEFINE_FLEX(TYPE, NAME, MEMBER, COUNT, = { .COUNTER = COUNT, })
+ 
+ #endif /* __LINUX_OVERFLOW_H */
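
The refactor separates the on-stack union trick (__DEFINE_FLEX, which takes
an arbitrary trailer) from the initializer plumbing (_DEFINE_FLEX), so that
DEFINE_FLEX callers can write '.COUNTER = COUNT' without naming .obj. The
union trick in isolation, as a GNU C userspace demo:

#include <stdio.h>

struct msg {
	unsigned char count;
	int vals[];		/* flexible array member */
};

#define DEFINE_MSG(name, n)						\
	union {								\
		unsigned char bytes[sizeof(struct msg) + (n) * sizeof(int)]; \
		struct msg obj;						\
	} name##_u = { .obj = { .count = (n) } };			\
	struct msg *name = &name##_u.obj

int main(void)
{
	DEFINE_MSG(m, 4);	/* stack storage for 4 trailing ints */

	for (int i = 0; i < m->count; i++)
		m->vals[i] = i * i;
	printf("%d\n", m->vals[3]);	/* 9 */
	return 0;
}
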
+diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h
+index 879d19cebd4fc6..749cee0bcf2cc0 100644
+--- a/include/linux/pci-epf.h
++++ b/include/linux/pci-epf.h
+@@ -114,6 +114,8 @@ struct pci_epf_driver {
+  * @phys_addr: physical address that should be mapped to the BAR
+  * @addr: virtual address corresponding to the @phys_addr
+  * @size: the size of the address space present in BAR
++ * @aligned_size: the size actually allocated to accommodate the iATU alignment
++ *                requirement
+  * @barno: BAR number
+  * @flags: flags that are set for the BAR
+  */
+@@ -121,6 +123,7 @@ struct pci_epf_bar {
+ 	dma_addr_t	phys_addr;
+ 	void		*addr;
+ 	size_t		size;
++	size_t		aligned_size;
+ 	enum pci_barno	barno;
+ 	int		flags;
+ };
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 51e2bd6405cda5..081e5c0a3ddf4e 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1850,6 +1850,14 @@ static inline bool pcie_aspm_support_enabled(void) { return false; }
+ static inline bool pcie_aspm_enabled(struct pci_dev *pdev) { return false; }
+ #endif
+ 
++#ifdef CONFIG_HOTPLUG_PCI
++void pci_hp_ignore_link_change(struct pci_dev *pdev);
++void pci_hp_unignore_link_change(struct pci_dev *pdev);
++#else
++static inline void pci_hp_ignore_link_change(struct pci_dev *pdev) { }
++static inline void pci_hp_unignore_link_change(struct pci_dev *pdev) { }
++#endif
++
+ #ifdef CONFIG_PCIEAER
+ bool pci_aer_available(void);
+ #else
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index a2bfae80c44975..bef68f6af99a92 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -744,10 +744,7 @@ struct phy_device {
+ #define PHY_F_NO_IRQ		0x80000000
+ #define PHY_F_RXC_ALWAYS_ON	0x40000000
+ 
+-static inline struct phy_device *to_phy_device(const struct device *dev)
+-{
+-	return container_of(to_mdio_device(dev), struct phy_device, mdio);
+-}
++#define to_phy_device(__dev)	container_of_const(to_mdio_device(__dev), struct phy_device, mdio)
+ 
+ /**
+  * struct phy_tdr_config - Configuration of a TDR raw test
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index 7fb5a459847ef3..756b842dcd3091 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -96,7 +96,9 @@ extern void pm_runtime_new_link(struct device *dev);
+ extern void pm_runtime_drop_link(struct device_link *link);
+ extern void pm_runtime_release_supplier(struct device_link *link);
+ 
++int devm_pm_runtime_set_active_enabled(struct device *dev);
+ extern int devm_pm_runtime_enable(struct device *dev);
++int devm_pm_runtime_get_noresume(struct device *dev);
+ 
+ /**
+  * pm_suspend_ignore_children - Set runtime PM behavior regarding children.
+@@ -294,7 +296,9 @@ static inline bool pm_runtime_blocked(struct device *dev) { return true; }
+ static inline void pm_runtime_allow(struct device *dev) {}
+ static inline void pm_runtime_forbid(struct device *dev) {}
+ 
++static inline int devm_pm_runtime_set_active_enabled(struct device *dev) { return 0; }
+ static inline int devm_pm_runtime_enable(struct device *dev) { return 0; }
++static inline int devm_pm_runtime_get_noresume(struct device *dev) { return 0; }
+ 
+ static inline void pm_suspend_ignore_children(struct device *dev, bool enable) {}
+ static inline void pm_runtime_get_noresume(struct device *dev) {}
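
A hedged sketch of how a probe routine might use the two new devres
helpers; 'mydrv' and its hardware-init call are hypothetical, and the
ordering follows the usual get_noresume/set_active/enable idiom that these
helpers wrap with automatic undo on driver unbind:

static int mydrv_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	ret = devm_pm_runtime_get_noresume(dev);	/* hold a usage ref */
	if (ret)
		return ret;

	ret = devm_pm_runtime_set_active_enabled(dev);	/* active + enabled */
	if (ret)
		return ret;

	return mydrv_init_hw(dev);	/* hypothetical */
}
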
+diff --git a/include/linux/poison.h b/include/linux/poison.h
+index 331a9a996fa874..8ca2235f78d5d9 100644
+--- a/include/linux/poison.h
++++ b/include/linux/poison.h
+@@ -70,6 +70,10 @@
+ #define KEY_DESTROY		0xbd
+ 
+ /********** net/core/page_pool.c **********/
++/*
++ * page_pool uses additional free bits within this value to store data; see the
++ * definition of PP_DMA_INDEX_MASK in mm.h
++ */
+ #define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
+ 
+ /********** net/core/skbuff.c **********/
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index 0387d64e2c66c6..36fb3edfa403d9 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -140,6 +140,7 @@ struct virtio_vsock_sock {
+ 	u32 last_fwd_cnt;
+ 	u32 rx_bytes;
+ 	u32 buf_alloc;
++	u32 buf_used;
+ 	struct sk_buff_head rx_queue;
+ 	u32 msg_count;
+ };
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 797992019f9ee5..521a9d0acac692 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -557,7 +557,8 @@ enum {
+ #define ESCO_LINK	0x02
+ /* Low Energy links do not have defined link type. Use invented one */
+ #define LE_LINK		0x80
+-#define ISO_LINK	0x82
++#define CIS_LINK	0x82
++#define BIS_LINK	0x83
+ #define INVALID_LINK	0xff
+ 
+ /* LMP features */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 54bfeeaa09959b..d15316bffd70bb 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -545,6 +545,7 @@ struct hci_dev {
+ 	struct hci_conn_hash	conn_hash;
+ 
+ 	struct list_head	mesh_pending;
++	struct mutex		mgmt_pending_lock;
+ 	struct list_head	mgmt_pending;
+ 	struct list_head	reject_list;
+ 	struct list_head	accept_list;
+@@ -996,7 +997,8 @@ static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c)
+ 	case ESCO_LINK:
+ 		h->sco_num++;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		h->iso_num++;
+ 		break;
+ 	}
+@@ -1022,7 +1024,8 @@ static inline void hci_conn_hash_del(struct hci_dev *hdev, struct hci_conn *c)
+ 	case ESCO_LINK:
+ 		h->sco_num--;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		h->iso_num--;
+ 		break;
+ 	}
+@@ -1039,7 +1042,8 @@ static inline unsigned int hci_conn_num(struct hci_dev *hdev, __u8 type)
+ 	case SCO_LINK:
+ 	case ESCO_LINK:
+ 		return h->sco_num;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		return h->iso_num;
+ 	default:
+ 		return 0;
+@@ -1100,7 +1104,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (bacmp(&c->dst, ba) || c->type != ISO_LINK)
++		if (bacmp(&c->dst, ba) || c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (c->iso_qos.bcast.bis == bis) {
+@@ -1122,7 +1126,7 @@ hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags))
+@@ -1148,8 +1152,8 @@ hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (bacmp(&c->dst, ba) || c->type != ISO_LINK ||
+-			!test_bit(HCI_CONN_PER_ADV, &c->flags))
++		if (bacmp(&c->dst, ba) || c->type != BIS_LINK ||
++		    !test_bit(HCI_CONN_PER_ADV, &c->flags))
+ 			continue;
+ 
+ 		if (c->iso_qos.bcast.big == big &&
+@@ -1238,7 +1242,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK || !bacmp(&c->dst, BDADDR_ANY))
++		if (c->type != CIS_LINK)
+ 			continue;
+ 
+ 		/* Match CIG ID if set */
+@@ -1270,7 +1274,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cig(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK || !bacmp(&c->dst, BDADDR_ANY))
++		if (c->type != CIS_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.ucast.cig) {
+@@ -1293,17 +1297,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
+-			continue;
+-
+-		/* An ISO_LINK hcon with BDADDR_ANY as destination
+-		 * address is a Broadcast connection. A Broadcast
+-		 * slave connection is associated with a PA train,
+-		 * so the sync_handle can be used to differentiate
+-		 * from unicast.
+-		 */
+-		if (bacmp(&c->dst, BDADDR_ANY) &&
+-		    c->sync_handle == HCI_SYNC_HANDLE_INVALID)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+@@ -1327,7 +1321,7 @@ hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) {
+@@ -1350,8 +1344,8 @@ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle,  __u16 state)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (bacmp(&c->dst, BDADDR_ANY) || c->type != ISO_LINK ||
+-			c->state != state)
++		if (c->type != BIS_LINK || bacmp(&c->dst, BDADDR_ANY) ||
++		    c->state != state)
+ 			continue;
+ 
+ 		if (handle == c->iso_qos.bcast.big) {
+@@ -1374,8 +1368,8 @@ hci_conn_hash_lookup_pa_sync_big_handle(struct hci_dev *hdev, __u8 big)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK ||
+-			!test_bit(HCI_CONN_PA_SYNC, &c->flags))
++		if (c->type != BIS_LINK ||
++		    !test_bit(HCI_CONN_PA_SYNC, &c->flags))
+ 			continue;
+ 
+ 		if (c->iso_qos.bcast.big == big) {
+@@ -1397,7 +1391,7 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != ISO_LINK)
++		if (c->type != BIS_LINK)
+ 			continue;
+ 
+ 		/* Ignore the listen hcon, we are looking
+@@ -2012,7 +2006,8 @@ static inline int hci_proto_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	case ESCO_LINK:
+ 		return sco_connect_ind(hdev, bdaddr, flags);
+ 
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		return iso_connect_ind(hdev, bdaddr, flags);
+ 
+ 	default:
+@@ -2403,7 +2398,6 @@ void mgmt_advertising_added(struct sock *sk, struct hci_dev *hdev,
+ 			    u8 instance);
+ void mgmt_advertising_removed(struct sock *sk, struct hci_dev *hdev,
+ 			      u8 instance);
+-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle);
+ int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip);
+ void mgmt_adv_monitor_device_lost(struct hci_dev *hdev, u16 handle,
+ 				  bdaddr_t *bdaddr, u8 addr_type);
+diff --git a/include/net/checksum.h b/include/net/checksum.h
+index 243f972267b8d1..be9356d4b67a1e 100644
+--- a/include/net/checksum.h
++++ b/include/net/checksum.h
+@@ -164,7 +164,7 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
+ 			       const __be32 *from, const __be32 *to,
+ 			       bool pseudohdr);
+ void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
+-				     __wsum diff, bool pseudohdr);
++				     __wsum diff, bool pseudohdr, bool ipv6);
+ 
+ static __always_inline
+ void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
+diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h
+index 6e202ed5e63f3c..7370fba844efcf 100644
+--- a/include/net/netfilter/nft_fib.h
++++ b/include/net/netfilter/nft_fib.h
+@@ -2,6 +2,7 @@
+ #ifndef _NFT_FIB_H_
+ #define _NFT_FIB_H_
+ 
++#include <net/l3mdev.h>
+ #include <net/netfilter/nf_tables.h>
+ 
+ struct nft_fib {
+@@ -39,6 +40,14 @@ static inline bool nft_fib_can_skip(const struct nft_pktinfo *pkt)
+ 	return nft_fib_is_loopback(pkt->skb, indev);
+ }
+ 
++static inline int nft_fib_l3mdev_master_ifindex_rcu(const struct nft_pktinfo *pkt,
++						    const struct net_device *iif)
++{
++	const struct net_device *dev = iif ? iif : pkt->skb->dev;
++
++	return l3mdev_master_ifindex_rcu(dev);
++}
++
+ int nft_fib_dump(struct sk_buff *skb, const struct nft_expr *expr, bool reset);
+ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		 const struct nlattr * const tb[]);
+diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
+index 36eb57d73abc6c..431b593de70937 100644
+--- a/include/net/page_pool/types.h
++++ b/include/net/page_pool/types.h
+@@ -6,6 +6,7 @@
+ #include <linux/dma-direction.h>
+ #include <linux/ptr_ring.h>
+ #include <linux/types.h>
++#include <linux/xarray.h>
+ #include <net/netmem.h>
+ 
+ #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
+@@ -33,6 +34,9 @@
+ #define PP_FLAG_ALL		(PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | \
+ 				 PP_FLAG_SYSTEM_POOL | PP_FLAG_ALLOW_UNREADABLE_NETMEM)
+ 
++/* Index limit to stay within PP_DMA_INDEX_BITS for DMA indices */
++#define PP_DMA_INDEX_LIMIT XA_LIMIT(1, BIT(PP_DMA_INDEX_BITS) - 1)
++
+ /*
+  * Fast allocation side cache array/stack
+  *
+@@ -221,6 +225,8 @@ struct page_pool {
+ 	void *mp_priv;
+ 	const struct memory_provider_ops *mp_ops;
+ 
++	struct xarray dma_mapped;
++
+ #ifdef CONFIG_PAGE_POOL_STATS
+ 	/* recycle stats are per-cpu to avoid locking */
+ 	struct page_pool_recycle_stats __percpu *recycle_stats;
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 694f954258d437..99470c6d24de8b 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2978,8 +2978,11 @@ int sock_ioctl_inout(struct sock *sk, unsigned int cmd,
+ int sk_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
+ static inline bool sk_is_readable(struct sock *sk)
+ {
+-	if (sk->sk_prot->sock_is_readable)
+-		return sk->sk_prot->sock_is_readable(sk);
++	const struct proto *prot = READ_ONCE(sk->sk_prot);
++
++	if (prot->sock_is_readable)
++		return prot->sock_is_readable(sk);
++
+ 	return false;
+ }
+ #endif	/* _SOCK_H */
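
sk->sk_prot can be swapped while a lockless reader is inside
sk_is_readable() (ULPs such as kTLS replace it), so the fix samples the
pointer once and uses that one snapshot for both the test and the indirect
call. The same pattern as a self-contained C11 model, with an atomic load
standing in for READ_ONCE():

#include <stdatomic.h>
#include <stdbool.h>

struct sock;

struct proto {
	bool (*sock_is_readable)(struct sock *sk);
};

struct sock {
	_Atomic(const struct proto *) sk_prot;
};

static bool sk_is_readable(struct sock *sk)
{
	const struct proto *prot =
		atomic_load_explicit(&sk->sk_prot, memory_order_relaxed);

	/* Re-reading sk->sk_prot here could observe a different proto
	 * than the one whose ->sock_is_readable we just tested. */
	if (prot->sock_is_readable)
		return prot->sock_is_readable(sk);
	return false;
}

static bool always_ready(struct sock *sk) { return true; }

int main(void)
{
	static const struct proto tcp_like = {
		.sock_is_readable = always_ready,
	};
	struct sock sk;

	atomic_init(&sk.sk_prot, &tcp_like);
	return sk_is_readable(&sk) ? 0 : 1;
}
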
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 06ab2a3d2ebd10..1f1861c57e2ad0 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -147,8 +147,19 @@ enum {
+ };
+ 
+ struct xfrm_dev_offload {
++	/* The device for this offload.
++	 * Device drivers should not use this directly, as that will prevent
++	 * them from working with bonding device. Instead, the device passed
++	 * to the add/delete callbacks should be used.
++	 */
+ 	struct net_device	*dev;
+ 	netdevice_tracker	dev_tracker;
++	/* This is a private pointer used by the bonding driver (and eventually
++	 * should be moved there). Device drivers should not use it.
++	 * Protected by xfrm_state.lock AND bond.ipsec_lock in most cases,
++	 * except in the .xdo_dev_state_del() flow, where only xfrm_state.lock
++	 * is held.
++	 */
+ 	struct net_device	*real_dev;
+ 	unsigned long		offload_handle;
+ 	u8			dir : 2;
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index b098ceadbe74bf..9a70048adbc069 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -223,7 +223,7 @@ struct hdac_driver {
+ 	struct device_driver driver;
+ 	int type;
+ 	const struct hda_device_id *id_table;
+-	int (*match)(struct hdac_device *dev, struct hdac_driver *drv);
++	int (*match)(struct hdac_device *dev, const struct hdac_driver *drv);
+ 	void (*unsol_event)(struct hdac_device *dev, unsigned int event);
+ 
+ 	/* fields used by ext bus APIs */
+@@ -235,7 +235,7 @@ struct hdac_driver {
+ #define drv_to_hdac_driver(_drv) container_of(_drv, struct hdac_driver, driver)
+ 
+ const struct hda_device_id *
+-hdac_get_device_id(struct hdac_device *hdev, struct hdac_driver *drv);
++hdac_get_device_id(struct hdac_device *hdev, const struct hdac_driver *drv);
+ 
+ /*
+  * Bus verb operators
+diff --git a/include/sound/hdaudio_ext.h b/include/sound/hdaudio_ext.h
+index 4c7a40e149a594..7de390022ac268 100644
+--- a/include/sound/hdaudio_ext.h
++++ b/include/sound/hdaudio_ext.h
+@@ -22,6 +22,7 @@ void snd_hdac_ext_bus_ppcap_enable(struct hdac_bus *chip, bool enable);
+ void snd_hdac_ext_bus_ppcap_int_enable(struct hdac_bus *chip, bool enable);
+ 
+ int snd_hdac_ext_bus_get_ml_capabilities(struct hdac_bus *bus);
++struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_id(struct hdac_bus *bus, u32 id);
+ struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_addr(struct hdac_bus *bus, int addr);
+ struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_name(struct hdac_bus *bus,
+ 							 const char *codec_name);
+@@ -97,12 +98,17 @@ struct hdac_ext_link {
+ 	void __iomem *ml_addr; /* link output stream reg pointer */
+ 	u32 lcaps;   /* link capablities */
+ 	u16 lsdiid;  /* link sdi identifier */
++	u32 id;
++	u8 slcount;
+ 
+ 	int ref_count;
+ 
+ 	struct list_head list;
+ };
+ 
++#define hdac_ext_link_alt(link)		((link)->lcaps & AZX_ML_HDA_LCAP_ALT)
++#define hdac_ext_link_ofls(link)	((link)->lcaps & AZX_ML_HDA_LCAP_OFLS)
++
+ int snd_hdac_ext_bus_link_power_up(struct hdac_ext_link *hlink);
+ int snd_hdac_ext_bus_link_power_down(struct hdac_ext_link *hlink);
+ int snd_hdac_ext_bus_link_power_up_all(struct hdac_bus *bus);
+diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
+index f880835f7695ed..4175eec40048ad 100644
+--- a/include/trace/events/netfs.h
++++ b/include/trace/events/netfs.h
+@@ -30,6 +30,7 @@
+ 	EM(netfs_write_trace_dio_write,		"DIO-WRITE")	\
+ 	EM(netfs_write_trace_unbuffered_write,	"UNB-WRITE")	\
+ 	EM(netfs_write_trace_writeback,		"WRITEBACK")	\
++	EM(netfs_write_trace_writeback_single,	"WB-SINGLE") \
+ 	E_(netfs_write_trace_writethrough,	"WRITETHRU")
+ 
+ #define netfs_rreq_origins					\
+@@ -38,6 +39,7 @@
+ 	EM(NETFS_READ_GAPS,			"RG")		\
+ 	EM(NETFS_READ_SINGLE,			"R1")		\
+ 	EM(NETFS_READ_FOR_WRITE,		"RW")		\
++	EM(NETFS_UNBUFFERED_READ,		"UR")		\
+ 	EM(NETFS_DIO_READ,			"DR")		\
+ 	EM(NETFS_WRITEBACK,			"WB")		\
+ 	EM(NETFS_WRITEBACK_SINGLE,		"W1")		\
+@@ -128,17 +130,15 @@
+ #define netfs_rreq_ref_traces					\
+ 	EM(netfs_rreq_trace_get_for_outstanding,"GET OUTSTND")	\
+ 	EM(netfs_rreq_trace_get_subreq,		"GET SUBREQ ")	\
+-	EM(netfs_rreq_trace_get_work,		"GET WORK   ")	\
+ 	EM(netfs_rreq_trace_put_complete,	"PUT COMPLT ")	\
+ 	EM(netfs_rreq_trace_put_discard,	"PUT DISCARD")	\
+ 	EM(netfs_rreq_trace_put_failed,		"PUT FAILED ")	\
+ 	EM(netfs_rreq_trace_put_no_submit,	"PUT NO-SUBM")	\
+ 	EM(netfs_rreq_trace_put_return,		"PUT RETURN ")	\
+ 	EM(netfs_rreq_trace_put_subreq,		"PUT SUBREQ ")	\
+-	EM(netfs_rreq_trace_put_work,		"PUT WORK   ")	\
+-	EM(netfs_rreq_trace_put_work_complete,	"PUT WORK CP")	\
+-	EM(netfs_rreq_trace_put_work_nq,	"PUT WORK NQ")	\
++	EM(netfs_rreq_trace_put_work_ip,	"PUT WORK IP ")	\
+ 	EM(netfs_rreq_trace_see_work,		"SEE WORK   ")	\
++	EM(netfs_rreq_trace_see_work_complete,	"SEE WORK CP")	\
+ 	E_(netfs_rreq_trace_new,		"NEW        ")
+ 
+ #define netfs_sreq_ref_traces					\
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index 616916985e3f30..5e5442d03f8789 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -1206,6 +1206,11 @@ struct drm_xe_vm_bind {
+  *    there is no need to explicitly set that. When a queue of type
+  *    %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
+  *    (%XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if isn't already running.
++ *    The user is expected to query the PXP status via the query ioctl (see
++ *    %DRM_XE_DEVICE_QUERY_PXP_STATUS) and to wait for PXP to be ready before
++ *    attempting to create a queue with this property. When a queue is created
++ *    before PXP is ready, the ioctl will return -EBUSY if init is still in
++ *    progress or -EIO if init failed.
+  *    Given that going into a power-saving state kills PXP HWDRM sessions,
+  *    runtime PM will be blocked while queues of this type are alive.
+  *    All PXP queues will be killed if a PXP invalidation event occurs.
+diff --git a/include/uapi/linux/bits.h b/include/uapi/linux/bits.h
+index 682b406e10679d..a04afef9efca42 100644
+--- a/include/uapi/linux/bits.h
++++ b/include/uapi/linux/bits.h
+@@ -4,9 +4,9 @@
+ #ifndef _UAPI_LINUX_BITS_H
+ #define _UAPI_LINUX_BITS_H
+ 
+-#define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h))))
++#define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h))))
+ 
+-#define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h))))
++#define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h))))
+ 
+ #define __GENMASK_U128(h, l) \
+ 	((_BIT128((h)) << 1) - (_BIT128(l)))
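
In a uapi header only the double-underscore __BITS_PER_LONG and
__BITS_PER_LONG_LONG spellings from asm/bitsperlong.h are guaranteed to be
visible, hence the rename. A worked expansion of the corrected macro under
the usual 64-bit assumption:

#include <stdio.h>

#define __BITS_PER_LONG 64
#define _UL(x) (x##UL)

#define __GENMASK(h, l) \
	(((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h))))

int main(void)
{
	printf("%#lx\n", __GENMASK(7, 4));	/* 0xf0: bits 7..4 */
	printf("%#lx\n", __GENMASK(38, 7));	/* 0x7fffffff80: bits 38..7 */
	return 0;
}
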
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index fd404729b1154b..fe5df2a9fe8ee6 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -2051,7 +2051,8 @@ union bpf_attr {
+  * 		untouched (unless **BPF_F_MARK_ENFORCE** is added as well), and
+  * 		for updates resulting in a null checksum the value is set to
+  * 		**CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
+- * 		the checksum is to be computed against a pseudo-header.
++ * 		that the modified header field is part of the pseudo-header.
++ * 		Flag **BPF_F_IPV6** should be set for IPv6 packets.
+  *
+  * 		This helper works in combination with **bpf_csum_diff**\ (),
+  * 		which does not update the checksum in-place, but offers more
+@@ -6068,6 +6069,7 @@ enum {
+ 	BPF_F_PSEUDO_HDR		= (1ULL << 4),
+ 	BPF_F_MARK_MANGLED_0		= (1ULL << 5),
+ 	BPF_F_MARK_ENFORCE		= (1ULL << 6),
++	BPF_F_IPV6			= (1ULL << 7),
+ };
+ 
+ /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
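
A hedged sketch of a tc BPF program using the new flag: when a rewritten
value feeds the IPv6 pseudo-header, BPF_F_PSEUDO_HDR and BPF_F_IPV6 are
passed together, per the updated helper documentation, so the helper knows
it is updating an IPv6 packet. The offsets assume a plain Ethernet + IPv6 +
TCP frame with no extension headers.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* 14 (eth) + 40 (ipv6) + 16 (offset of 'check' in tcphdr) */
#define TCP_CSUM_OFF	(14 + 40 + 16)

SEC("tc")
int rewrite_csum(struct __sk_buff *skb)
{
	__u64 old_word = 0x1;	/* illustrative old/new 4-byte values */
	__u64 new_word = 0x2;

	bpf_l4_csum_replace(skb, TCP_CSUM_OFF, old_word, new_word,
			    4 /* field size */ |
			    BPF_F_PSEUDO_HDR | BPF_F_IPV6);
	return 0;
}

char _license[] SEC("license") = "GPL";
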
+diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
+index e0d6a59a89fa1b..f948917f7f7071 100644
+--- a/io_uring/fdinfo.c
++++ b/io_uring/fdinfo.c
+@@ -172,18 +172,26 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
+ 
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		struct io_sq_data *sq = ctx->sq_data;
++		struct task_struct *tsk;
+ 
++		rcu_read_lock();
++		tsk = rcu_dereference(sq->thread);
+ 		/*
+ 		 * sq->thread might be NULL if we raced with the sqpoll
+ 		 * thread termination.
+ 		 */
+-		if (sq->thread) {
++		if (tsk) {
++			get_task_struct(tsk);
++			rcu_read_unlock();
++			getrusage(tsk, RUSAGE_SELF, &sq_usage);
++			put_task_struct(tsk);
+ 			sq_pid = sq->task_pid;
+ 			sq_cpu = sq->sq_cpu;
+-			getrusage(sq->thread, RUSAGE_SELF, &sq_usage);
+ 			sq_total_time = (sq_usage.ru_stime.tv_sec * 1000000
+ 					 + sq_usage.ru_stime.tv_usec);
+ 			sq_work_time = sq->work_time;
++		} else {
++			rcu_read_unlock();
+ 		}
+ 	}
+ 
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index edda31a15c6e65..e5466f65682699 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -537,18 +537,30 @@ void io_req_queue_iowq(struct io_kiocb *req)
+ 	io_req_task_work_add(req);
+ }
+ 
++static bool io_drain_defer_seq(struct io_kiocb *req, u32 seq)
++{
++	struct io_ring_ctx *ctx = req->ctx;
++
++	return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
++}
++
+ static __cold noinline void io_queue_deferred(struct io_ring_ctx *ctx)
+ {
++	bool drain_seen = false, first = true;
++
+ 	spin_lock(&ctx->completion_lock);
+ 	while (!list_empty(&ctx->defer_list)) {
+ 		struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
+ 						struct io_defer_entry, list);
+ 
+-		if (req_need_defer(de->req, de->seq))
++		drain_seen |= de->req->flags & REQ_F_IO_DRAIN;
++		if ((drain_seen || first) && io_drain_defer_seq(de->req, de->seq))
+ 			break;
++
+ 		list_del_init(&de->list);
+ 		io_req_task_queue(de->req);
+ 		kfree(de);
++		first = false;
+ 	}
+ 	spin_unlock(&ctx->completion_lock);
+ }
+@@ -2901,7 +2913,7 @@ static __cold void io_ring_exit_work(struct work_struct *work)
+ 			struct task_struct *tsk;
+ 
+ 			io_sq_thread_park(sqd);
+-			tsk = sqd->thread;
++			tsk = sqpoll_task_locked(sqd);
+ 			if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
+ 				io_wq_cancel_cb(tsk->io_uring->io_wq,
+ 						io_cancel_ctx_cb, ctx, true);
+@@ -3138,7 +3150,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
+ 	s64 inflight;
+ 	DEFINE_WAIT(wait);
+ 
+-	WARN_ON_ONCE(sqd && sqd->thread != current);
++	WARN_ON_ONCE(sqd && sqpoll_task_locked(sqd) != current);
+ 
+ 	if (!current->io_uring)
+ 		return;
+diff --git a/io_uring/register.c b/io_uring/register.c
+index cc23a4c205cd43..a59589249fce7a 100644
+--- a/io_uring/register.c
++++ b/io_uring/register.c
+@@ -273,6 +273,8 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		sqd = ctx->sq_data;
+ 		if (sqd) {
++			struct task_struct *tsk;
++
+ 			/*
+ 			 * Observe the correct sqd->lock -> ctx->uring_lock
+ 			 * ordering. Fine to drop uring_lock here, we hold
+@@ -282,8 +284,9 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+ 			mutex_unlock(&ctx->uring_lock);
+ 			mutex_lock(&sqd->lock);
+ 			mutex_lock(&ctx->uring_lock);
+-			if (sqd->thread)
+-				tctx = sqd->thread->io_uring;
++			tsk = sqpoll_task_locked(sqd);
++			if (tsk)
++				tctx = tsk->io_uring;
+ 		}
+ 	} else {
+ 		tctx = current->io_uring;
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 03c699493b5ab6..268d2fbe6160c2 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -30,7 +30,7 @@ enum {
+ void io_sq_thread_unpark(struct io_sq_data *sqd)
+ 	__releases(&sqd->lock)
+ {
+-	WARN_ON_ONCE(sqd->thread == current);
++	WARN_ON_ONCE(sqpoll_task_locked(sqd) == current);
+ 
+ 	/*
+ 	 * Do the dance but not conditional clear_bit() because it'd race with
+@@ -46,24 +46,32 @@ void io_sq_thread_unpark(struct io_sq_data *sqd)
+ void io_sq_thread_park(struct io_sq_data *sqd)
+ 	__acquires(&sqd->lock)
+ {
+-	WARN_ON_ONCE(data_race(sqd->thread) == current);
++	struct task_struct *tsk;
+ 
+ 	atomic_inc(&sqd->park_pending);
+ 	set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
+ 	mutex_lock(&sqd->lock);
+-	if (sqd->thread)
+-		wake_up_process(sqd->thread);
++
++	tsk = sqpoll_task_locked(sqd);
++	if (tsk) {
++		WARN_ON_ONCE(tsk == current);
++		wake_up_process(tsk);
++	}
+ }
+ 
+ void io_sq_thread_stop(struct io_sq_data *sqd)
+ {
+-	WARN_ON_ONCE(sqd->thread == current);
++	struct task_struct *tsk;
++
+ 	WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
+ 
+ 	set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
+ 	mutex_lock(&sqd->lock);
+-	if (sqd->thread)
+-		wake_up_process(sqd->thread);
++	tsk = sqpoll_task_locked(sqd);
++	if (tsk) {
++		WARN_ON_ONCE(tsk == current);
++		wake_up_process(tsk);
++	}
+ 	mutex_unlock(&sqd->lock);
+ 	wait_for_completion(&sqd->exited);
+ }
+@@ -270,7 +278,8 @@ static int io_sq_thread(void *data)
+ 	/* offload context creation failed, just exit */
+ 	if (!current->io_uring) {
+ 		mutex_lock(&sqd->lock);
+-		sqd->thread = NULL;
++		rcu_assign_pointer(sqd->thread, NULL);
++		put_task_struct(current);
+ 		mutex_unlock(&sqd->lock);
+ 		goto err_out;
+ 	}
+@@ -379,7 +388,8 @@ static int io_sq_thread(void *data)
+ 		io_sq_tw(&retry_list, UINT_MAX);
+ 
+ 	io_uring_cancel_generic(true, sqd);
+-	sqd->thread = NULL;
++	rcu_assign_pointer(sqd->thread, NULL);
++	put_task_struct(current);
+ 	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+ 		atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
+ 	io_run_task_work();
+@@ -484,7 +494,10 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 			goto err_sqpoll;
+ 		}
+ 
+-		sqd->thread = tsk;
++		mutex_lock(&sqd->lock);
++		rcu_assign_pointer(sqd->thread, tsk);
++		mutex_unlock(&sqd->lock);
++
+ 		task_to_put = get_task_struct(tsk);
+ 		ret = io_uring_alloc_task_context(tsk, ctx);
+ 		wake_up_new_task(tsk);
+@@ -495,9 +508,6 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}
+-
+-	if (task_to_put)
+-		put_task_struct(task_to_put);
+ 	return 0;
+ err_sqpoll:
+ 	complete(&ctx->sq_data->exited);
+@@ -515,10 +525,13 @@ __cold int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx,
+ 	int ret = -EINVAL;
+ 
+ 	if (sqd) {
++		struct task_struct *tsk;
++
+ 		io_sq_thread_park(sqd);
+ 		/* Don't set affinity for a dying thread */
+-		if (sqd->thread)
+-			ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
++		tsk = sqpoll_task_locked(sqd);
++		if (tsk)
++			ret = io_wq_cpu_affinity(tsk->io_uring, mask);
+ 		io_sq_thread_unpark(sqd);
+ 	}
+ 
+diff --git a/io_uring/sqpoll.h b/io_uring/sqpoll.h
+index 4171666b1cf4cc..b83dcdec9765fd 100644
+--- a/io_uring/sqpoll.h
++++ b/io_uring/sqpoll.h
+@@ -8,7 +8,7 @@ struct io_sq_data {
+ 	/* ctx's that are using this sqd */
+ 	struct list_head	ctx_list;
+ 
+-	struct task_struct	*thread;
++	struct task_struct __rcu *thread;
+ 	struct wait_queue_head	wait;
+ 
+ 	unsigned		sq_thread_idle;
+@@ -29,3 +29,9 @@ void io_sq_thread_unpark(struct io_sq_data *sqd);
+ void io_put_sq_data(struct io_sq_data *sqd);
+ void io_sqpoll_wait_sq(struct io_ring_ctx *ctx);
+ int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx, cpumask_var_t mask);
++
++static inline struct task_struct *sqpoll_task_locked(struct io_sq_data *sqd)
++{
++	return rcu_dereference_protected(sqd->thread,
++					 lockdep_is_held(&sqd->lock));
++}
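
The __rcu conversion creates two access disciplines, condensed below (a
sketch assembled from the hunks above, not a complete kernel excerpt):

/* Lockless side (as in fdinfo): sample under RCU, pin the task. */
static struct task_struct *sqpoll_task_get(struct io_sq_data *sqd)
{
	struct task_struct *tsk;

	rcu_read_lock();
	tsk = rcu_dereference(sqd->thread);
	if (tsk)
		get_task_struct(tsk);
	rcu_read_unlock();
	return tsk;	/* caller drops the ref with put_task_struct() */
}

/* Locked side: sqpoll_task_locked() wraps rcu_dereference_protected()
 * keyed to sqd->lock, so lockdep verifies the mutex is really held. */
static void sqpoll_wake(struct io_sq_data *sqd)
{
	struct task_struct *tsk;

	mutex_lock(&sqd->lock);
	tsk = sqpoll_task_locked(sqd);
	if (tsk)
		wake_up_process(tsk);
	mutex_unlock(&sqd->lock);
}
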
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index ba6b6118cf5040..c20babbf998f4e 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2358,8 +2358,8 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ 	return 0;
+ }
+ 
+-bool bpf_prog_map_compatible(struct bpf_map *map,
+-			     const struct bpf_prog *fp)
++static bool __bpf_prog_map_compatible(struct bpf_map *map,
++				      const struct bpf_prog *fp)
+ {
+ 	enum bpf_prog_type prog_type = resolve_prog_type(fp);
+ 	bool ret;
+@@ -2368,14 +2368,6 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ 	if (fp->kprobe_override)
+ 		return false;
+ 
+-	/* XDP programs inserted into maps are not guaranteed to run on
+-	 * a particular netdev (and can run outside driver context entirely
+-	 * in the case of devmap and cpumap). Until device checks
+-	 * are implemented, prohibit adding dev-bound programs to program maps.
+-	 */
+-	if (bpf_prog_is_dev_bound(aux))
+-		return false;
+-
+ 	spin_lock(&map->owner.lock);
+ 	if (!map->owner.type) {
+ 		/* There's no owner yet where we could check for
+@@ -2409,6 +2401,19 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
+ 	return ret;
+ }
+ 
++bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp)
++{
++	/* XDP programs inserted into maps are not guaranteed to run on
++	 * a particular netdev (and can run outside driver context entirely
++	 * in the case of devmap and cpumap). Until device checks
++	 * are implemented, prohibit adding dev-bound programs to program maps.
++	 */
++	if (bpf_prog_is_dev_bound(fp->aux))
++		return false;
++
++	return __bpf_prog_map_compatible(map, fp);
++}
++
+ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ {
+ 	struct bpf_prog_aux *aux = fp->aux;
+@@ -2421,7 +2426,7 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ 		if (!map_type_contains_progs(map))
+ 			continue;
+ 
+-		if (!bpf_prog_map_compatible(map, fp)) {
++		if (!__bpf_prog_map_compatible(map, fp)) {
+ 			ret = -EINVAL;
+ 			goto out;
+ 		}
+@@ -2469,7 +2474,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ 	/* In case of BPF to BPF calls, verifier did all the prep
+ 	 * work with regards to JITing, etc.
+ 	 */
+-	bool jit_needed = false;
++	bool jit_needed = fp->jit_requested;
+ 
+ 	if (fp->bpf_func)
+ 		goto finalize;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 54c6953a8b84c2..efa70141171290 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4413,8 +4413,10 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
+ 			 * before it would be equally necessary to
+ 			 * propagate it to dreg.
+ 			 */
+-			bt_set_reg(bt, dreg);
+-			bt_set_reg(bt, sreg);
++			if (!hist || !(hist->flags & INSN_F_SRC_REG_STACK))
++				bt_set_reg(bt, sreg);
++			if (!hist || !(hist->flags & INSN_F_DST_REG_STACK))
++				bt_set_reg(bt, dreg);
+ 		} else if (BPF_SRC(insn->code) == BPF_K) {
+ 			 /* dreg <cond> K
+ 			  * Only dreg still needs precision before
+@@ -16377,6 +16379,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 	struct bpf_reg_state *eq_branch_regs;
+ 	struct linked_regs linked_regs = {};
+ 	u8 opcode = BPF_OP(insn->code);
++	int insn_flags = 0;
+ 	bool is_jmp32;
+ 	int pred = -1;
+ 	int err;
+@@ -16435,6 +16438,9 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 				insn->src_reg);
+ 			return -EACCES;
+ 		}
++
++		if (src_reg->type == PTR_TO_STACK)
++			insn_flags |= INSN_F_SRC_REG_STACK;
+ 	} else {
+ 		if (insn->src_reg != BPF_REG_0) {
+ 			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+@@ -16446,6 +16452,14 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		__mark_reg_known(src_reg, insn->imm);
+ 	}
+ 
++	if (dst_reg->type == PTR_TO_STACK)
++		insn_flags |= INSN_F_DST_REG_STACK;
++	if (insn_flags) {
++		err = push_insn_history(env, this_branch, insn_flags, 0);
++		if (err)
++			return err;
++	}
++
+ 	is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+ 	pred = is_branch_taken(dst_reg, src_reg, opcode, is_jmp32);
+ 	if (pred >= 0) {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 95e703891b24f8..e97bc9220fd1a8 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6239,6 +6239,9 @@ static int perf_event_set_output(struct perf_event *event,
+ static int perf_event_set_filter(struct perf_event *event, void __user *arg);
+ static int perf_copy_attr(struct perf_event_attr __user *uattr,
+ 			  struct perf_event_attr *attr);
++static int __perf_event_set_bpf_prog(struct perf_event *event,
++				     struct bpf_prog *prog,
++				     u64 bpf_cookie);
+ 
+ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned long arg)
+ {
+@@ -6301,7 +6304,7 @@ static long _perf_ioctl(struct perf_event *event, unsigned int cmd, unsigned lon
+ 		if (IS_ERR(prog))
+ 			return PTR_ERR(prog);
+ 
+-		err = perf_event_set_bpf_prog(event, prog, 0);
++		err = __perf_event_set_bpf_prog(event, prog, 0);
+ 		if (err) {
+ 			bpf_prog_put(prog);
+ 			return err;
+@@ -10029,14 +10032,14 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
+ 		hwc->interrupts = 1;
+ 	} else {
+ 		hwc->interrupts++;
+-		if (unlikely(throttle &&
+-			     hwc->interrupts > max_samples_per_tick)) {
+-			__this_cpu_inc(perf_throttled_count);
+-			tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
+-			hwc->interrupts = MAX_INTERRUPTS;
+-			perf_log_throttle(event, 0);
+-			ret = 1;
+-		}
++	}
++
++	if (unlikely(throttle && hwc->interrupts >= max_samples_per_tick)) {
++		__this_cpu_inc(perf_throttled_count);
++		tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
++		hwc->interrupts = MAX_INTERRUPTS;
++		perf_log_throttle(event, 0);
++		ret = 1;
+ 	}
+ 
+ 	if (event->attr.freq) {
+@@ -11069,8 +11072,9 @@ static inline bool perf_event_is_tracing(struct perf_event *event)
+ 	return false;
+ }
+ 
+-int perf_event_set_bpf_prog(struct perf_event *event, struct bpf_prog *prog,
+-			    u64 bpf_cookie)
++static int __perf_event_set_bpf_prog(struct perf_event *event,
++				     struct bpf_prog *prog,
++				     u64 bpf_cookie)
+ {
+ 	bool is_kprobe, is_uprobe, is_tracepoint, is_syscall_tp;
+ 
+@@ -11108,6 +11112,20 @@ int perf_event_set_bpf_prog(struct perf_event *event, struct bpf_prog *prog,
+ 	return perf_event_attach_bpf_prog(event, prog, bpf_cookie);
+ }
+ 
++int perf_event_set_bpf_prog(struct perf_event *event,
++			    struct bpf_prog *prog,
++			    u64 bpf_cookie)
++{
++	struct perf_event_context *ctx;
++	int ret;
++
++	ctx = perf_event_ctx_lock(event);
++	ret = __perf_event_set_bpf_prog(event, prog, bpf_cookie);
++	perf_event_ctx_unlock(event, ctx);
++
++	return ret;
++}
++
+ void perf_event_free_bpf_prog(struct perf_event *event)
+ {
+ 	if (!event->prog)
+@@ -11130,7 +11148,15 @@ static void perf_event_free_filter(struct perf_event *event)
+ {
+ }
+ 
+-int perf_event_set_bpf_prog(struct perf_event *event, struct bpf_prog *prog,
++static int __perf_event_set_bpf_prog(struct perf_event *event,
++				     struct bpf_prog *prog,
++				     u64 bpf_cookie)
++{
++	return -ENOENT;
++}
++
++int perf_event_set_bpf_prog(struct perf_event *event,
++			    struct bpf_prog *prog,
+ 			    u64 bpf_cookie)
+ {
+ 	return -ENOENT;
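
This is the standard locked-wrapper/unlocked-core split: external callers
get perf_event_set_bpf_prog(), which takes the event context lock, while
the ioctl path, which already holds that lock, calls the double-underscore
core directly and avoids self-deadlock. The shape of the refactor as a
userspace model:

#include <pthread.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static int prog_state;

static int __set_prog(int prog)		/* caller must hold ctx_lock */
{
	prog_state = prog;
	return 0;
}

int set_prog(int prog)			/* external entry point */
{
	int ret;

	pthread_mutex_lock(&ctx_lock);
	ret = __set_prog(prog);
	pthread_mutex_unlock(&ctx_lock);
	return ret;
}

int ioctl_handler(int prog)		/* already holds ctx_lock */
{
	return __set_prog(prog);
}
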
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index d9b7e2b38c7a9f..41606247c27763 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -233,6 +233,10 @@ static int em_compute_costs(struct device *dev, struct em_perf_state *table,
+ 	unsigned long prev_cost = ULONG_MAX;
+ 	int i, ret;
+ 
++	/* This is needed only for CPUs; EAS skips other devices */
++	if (!_is_cpu_device(dev))
++		return 0;
++
+ 	/* Compute the cost of each performance state. */
+ 	for (i = nr_states - 1; i >= 0; i--) {
+ 		unsigned long power_res, cost;
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 23c0f4e6cb2ffe..5af9c7ee98cd4a 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -90,6 +90,11 @@ void hibernate_release(void)
+ 	atomic_inc(&hibernate_atomic);
+ }
+ 
++bool hibernation_in_progress(void)
++{
++	return !atomic_read(&hibernate_atomic);
++}
++
+ bool hibernation_available(void)
+ {
+ 	return nohibernate == 0 &&
+diff --git a/kernel/power/main.c b/kernel/power/main.c
+index 6254814d481714..0622e7dacf1720 100644
+--- a/kernel/power/main.c
++++ b/kernel/power/main.c
+@@ -613,7 +613,8 @@ bool pm_debug_messages_on __read_mostly;
+ 
+ bool pm_debug_messages_should_print(void)
+ {
+-	return pm_debug_messages_on && pm_suspend_target_state != PM_SUSPEND_ON;
++	return pm_debug_messages_on && (hibernation_in_progress() ||
++		pm_suspend_target_state != PM_SUSPEND_ON);
+ }
+ EXPORT_SYMBOL_GPL(pm_debug_messages_should_print);
+ 
+diff --git a/kernel/power/power.h b/kernel/power/power.h
+index c352dea2f67b56..f8496f40b54fa5 100644
+--- a/kernel/power/power.h
++++ b/kernel/power/power.h
+@@ -71,10 +71,14 @@ extern void enable_restore_image_protection(void);
+ static inline void enable_restore_image_protection(void) {}
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
+ 
++extern bool hibernation_in_progress(void);
++
+ #else /* !CONFIG_HIBERNATION */
+ 
+ static inline void hibernate_reserved_size_init(void) {}
+ static inline void hibernate_image_size_init(void) {}
++
++static inline bool hibernation_in_progress(void) { return false; }
+ #endif /* !CONFIG_HIBERNATION */
+ 
+ #define power_attr(_name) \
+diff --git a/kernel/power/wakelock.c b/kernel/power/wakelock.c
+index 52571dcad768b9..4e941999a53ba6 100644
+--- a/kernel/power/wakelock.c
++++ b/kernel/power/wakelock.c
+@@ -49,6 +49,9 @@ ssize_t pm_show_wakelocks(char *buf, bool show_active)
+ 			len += sysfs_emit_at(buf, len, "%s ", wl->name);
+ 	}
+ 
++	if (len > 0)
++		--len;
++
+ 	len += sysfs_emit_at(buf, len, "\n");
+ 
+ 	mutex_unlock(&wakelocks_lock);
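
The fix backs up over the trailing separator emitted by the loop before
appending the newline, so "foo bar \n" becomes "foo bar\n". The same logic
as a tiny userspace program:

#include <stdio.h>

int main(void)
{
	const char *names[] = { "foo", "bar" };
	char buf[64];
	int len = 0;

	for (unsigned int i = 0; i < 2; i++)
		len += snprintf(buf + len, sizeof(buf) - len, "%s ",
				names[i]);

	if (len > 0)
		--len;		/* drop the trailing space */

	len += snprintf(buf + len, sizeof(buf) - len, "\n");
	fputs(buf, stdout);	/* "foo bar\n" */
	return 0;
}
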
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 659f83e7104869..80b10893b5038d 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -801,6 +801,10 @@ static int rcu_watching_snap_save(struct rcu_data *rdp)
+ 	return 0;
+ }
+ 
++#ifndef arch_irq_stat_cpu
++#define arch_irq_stat_cpu(cpu) 0
++#endif
++
+ /*
+  * Returns positive if the specified CPU has passed through a quiescent state
+  * by virtue of being in or having passed through a dynticks idle state since
+@@ -936,9 +940,9 @@ static int rcu_watching_snap_recheck(struct rcu_data *rdp)
+ 			rsrp->cputime_irq     = kcpustat_field(kcsp, CPUTIME_IRQ, cpu);
+ 			rsrp->cputime_softirq = kcpustat_field(kcsp, CPUTIME_SOFTIRQ, cpu);
+ 			rsrp->cputime_system  = kcpustat_field(kcsp, CPUTIME_SYSTEM, cpu);
+-			rsrp->nr_hardirqs = kstat_cpu_irqs_sum(rdp->cpu);
+-			rsrp->nr_softirqs = kstat_cpu_softirqs_sum(rdp->cpu);
+-			rsrp->nr_csw = nr_context_switches_cpu(rdp->cpu);
++			rsrp->nr_hardirqs = kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu);
++			rsrp->nr_softirqs = kstat_cpu_softirqs_sum(cpu);
++			rsrp->nr_csw = nr_context_switches_cpu(cpu);
+ 			rsrp->jiffies = jiffies;
+ 			rsrp->gp_seq = rdp->gp_seq;
+ 		}
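
The #ifndef gives common code a compile-time zero fallback, so the call
sites in rcu_watching_snap_recheck() and the stall printer need no
per-architecture #ifdefs. The idiom in isolation:

#include <stdio.h>

/* An arch header that tracks extra per-CPU interrupt counts would
 * provide its own definition before this point. */
#ifndef arch_irq_stat_cpu
#define arch_irq_stat_cpu(cpu) 0
#endif

static unsigned long long kstat_cpu_irqs_sum(int cpu)
{
	return 1000;	/* stand-in for the real per-CPU sum */
}

int main(void)
{
	int cpu = 0;

	printf("%llu\n", kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu));
	return 0;
}
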
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index a9a811d9d7a372..1bba2225e7448b 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -168,7 +168,7 @@ struct rcu_snap_record {
+ 	u64		cputime_irq;	/* Accumulated cputime of hard irqs */
+ 	u64		cputime_softirq;/* Accumulated cputime of soft irqs */
+ 	u64		cputime_system; /* Accumulated cputime of kernel tasks */
+-	unsigned long	nr_hardirqs;	/* Accumulated number of hard irqs */
++	u64		nr_hardirqs;	/* Accumulated number of hard irqs */
+ 	unsigned int	nr_softirqs;	/* Accumulated number of soft irqs */
+ 	unsigned long long nr_csw;	/* Accumulated number of task switches */
+ 	unsigned long   jiffies;	/* Track jiffies value */
+diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
+index 925fcdad5dea22..56b21219442b65 100644
+--- a/kernel/rcu/tree_stall.h
++++ b/kernel/rcu/tree_stall.h
+@@ -435,8 +435,8 @@ static void print_cpu_stat_info(int cpu)
+ 	rsr.cputime_system  = kcpustat_field(kcsp, CPUTIME_SYSTEM, cpu);
+ 
+ 	pr_err("\t         hardirqs   softirqs   csw/system\n");
+-	pr_err("\t number: %8ld %10d %12lld\n",
+-		kstat_cpu_irqs_sum(cpu) - rsrp->nr_hardirqs,
++	pr_err("\t number: %8lld %10d %12lld\n",
++		kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu) - rsrp->nr_hardirqs,
+ 		kstat_cpu_softirqs_sum(cpu) - rsrp->nr_softirqs,
+ 		nr_context_switches_cpu(cpu) - rsrp->nr_csw);
+ 	pr_err("\tcputime: %8lld %10lld %12lld   ==> %d(ms)\n",
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c81cf642dba055..d593d6612ba07e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2283,6 +2283,12 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
+ 		 * just go back and repeat.
+ 		 */
+ 		rq = task_rq_lock(p, &rf);
++		/*
++		 * If task is sched_delayed, force dequeue it, to avoid always
++		 * If the task is sched_delayed, force dequeue it to avoid always
++		 */
++		if (p->se.sched_delayed)
++			dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
+ 		trace_sched_wait_task(p);
+ 		running = task_on_cpu(rq, p);
+ 		queued = task_on_rq_queued(p);
+@@ -6571,12 +6577,14 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+  * Otherwise marks the task's __state as RUNNING
+  */
+ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
+-			      unsigned long task_state)
++			      unsigned long *task_state_p)
+ {
++	unsigned long task_state = *task_state_p;
+ 	int flags = DEQUEUE_NOCLOCK;
+ 
+ 	if (signal_pending_state(task_state, p)) {
+ 		WRITE_ONCE(p->__state, TASK_RUNNING);
++		*task_state_p = TASK_RUNNING;
+ 		return false;
+ 	}
+ 
+@@ -6713,7 +6721,7 @@ static void __sched notrace __schedule(int sched_mode)
+ 			goto picked;
+ 		}
+ 	} else if (!preempt && prev_state) {
+-		try_to_block_task(rq, prev, prev_state);
++		try_to_block_task(rq, prev, &prev_state);
+ 		switch_count = &prev->nvcsw;
+ 	}
+ 
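
try_to_block_task() now receives the state through a pointer so that, when a
pending signal forces the task back to TASK_RUNNING, the caller's cached
prev_state is corrected as well and later code does not act on a stale value.
The general out-parameter shape, reduced to a standalone sketch (names are
illustrative, not the scheduler's):

    #include <stdbool.h>
    #include <stdio.h>

    enum { ST_RUNNING = 0, ST_SLEEPING = 1 };

    /* If blocking is refused, fix up the caller's cached copy of
     * the state as a side effect, mirroring the pointer change. */
    static bool try_block(unsigned long *state_p, bool signal_pending)
    {
        if (signal_pending) {
            *state_p = ST_RUNNING;
            return false;
        }
        return true;
    }

    int main(void)
    {
        unsigned long prev_state = ST_SLEEPING;

        if (!try_block(&prev_state, true))
            printf("not blocked, prev_state=%lu\n", prev_state);
        return 0;
    }
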
+diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
+index e67a19a071c114..50b9f3af810d93 100644
+--- a/kernel/sched/ext_idle.c
++++ b/kernel/sched/ext_idle.c
+@@ -131,6 +131,7 @@ static s32 pick_idle_cpu_in_node(const struct cpumask *cpus_allowed, int node, u
+ 		goto retry;
+ }
+ 
++#ifdef CONFIG_NUMA
+ /*
+  * Tracks nodes that have not yet been visited when searching for an idle
+  * CPU across all available nodes.
+@@ -179,6 +180,13 @@ static s32 pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, i
+ 
+ 	return cpu;
+ }
++#else
++static inline s32
++pick_idle_cpu_from_online_nodes(const struct cpumask *cpus_allowed, int node, u64 flags)
++{
++	return -EBUSY;
++}
++#endif
+ 
+ /*
+  * Find an idle CPU in the system, starting from @node.
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 0fb9bf995a4795..0c04ed41485259 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7196,6 +7196,11 @@ static bool dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 	return true;
+ }
+ 
++static inline unsigned int cfs_h_nr_delayed(struct rq *rq)
++{
++	return (rq->cfs.h_nr_queued - rq->cfs.h_nr_runnable);
++}
++
+ #ifdef CONFIG_SMP
+ 
+ /* Working cpumask for: sched_balance_rq(), sched_balance_newidle(). */
+@@ -7357,8 +7362,12 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
+ 	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
+ 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
+ 
+-	if (sync && cpu_rq(this_cpu)->nr_running == 1)
+-		return this_cpu;
++	if (sync) {
++		struct rq *rq = cpu_rq(this_cpu);
++
++		if ((rq->nr_running - cfs_h_nr_delayed(rq)) == 1)
++			return this_cpu;
++	}
+ 
+ 	if (available_idle_cpu(prev_cpu))
+ 		return prev_cpu;
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 50e8d04ab661f4..2e5b89d7d86605 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1405,6 +1405,15 @@ void run_posix_cpu_timers(void)
+ 
+ 	lockdep_assert_irqs_disabled();
+ 
++	/*
++	 * Ensure that release_task(tsk) can't happen while
++	 * handle_posix_cpu_timers() is running. Otherwise, a concurrent
++	 * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
++	 * miss timer->it.cpu.firing != 0.
++	 */
++	if (tsk->exit_state)
++		return;
++
+ 	/*
+ 	 * If the actual expiry is deferred to task work context and the
+ 	 * work is already scheduled there is no point to do anything here.
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 187dc37d61d4a3..090cdab38f0ccd 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1858,7 +1858,7 @@ static struct pt_regs *get_bpf_raw_tp_regs(void)
+ 	struct bpf_raw_tp_regs *tp_regs = this_cpu_ptr(&bpf_raw_tp_regs);
+ 	int nest_level = this_cpu_inc_return(bpf_raw_tp_nest_level);
+ 
+-	if (WARN_ON_ONCE(nest_level > ARRAY_SIZE(tp_regs->regs))) {
++	if (nest_level > ARRAY_SIZE(tp_regs->regs)) {
+ 		this_cpu_dec(bpf_raw_tp_nest_level);
+ 		return ERR_PTR(-EBUSY);
+ 	}
+@@ -2987,6 +2987,9 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	if (sizeof(u64) != sizeof(void *))
+ 		return -EOPNOTSUPP;
+ 
++	if (attr->link_create.flags)
++		return -EINVAL;
++
+ 	if (!is_kprobe_multi(prog))
+ 		return -EINVAL;
+ 
+@@ -3376,6 +3379,9 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	if (sizeof(u64) != sizeof(void *))
+ 		return -EOPNOTSUPP;
+ 
++	if (attr->link_create.flags)
++		return -EINVAL;
++
+ 	if (!is_uprobe_multi(prog))
+ 		return -EINVAL;
+ 
+@@ -3417,7 +3423,9 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 	}
+ 
+ 	if (pid) {
++		rcu_read_lock();
+ 		task = get_pid_task(find_vpid(pid), PIDTYPE_TGID);
++		rcu_read_unlock();
+ 		if (!task) {
+ 			err = -ESRCH;
+ 			goto error_path_put;
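
The two attach-path hunks add the usual uAPI rule: reject any
link_create.flags bits the kernel does not yet understand, so the field can
be given meaning later without silently changing behavior for existing
binaries. A generic sketch of that validation idiom, with hypothetical names
and flag values (the patch itself rejects all flags, since none are defined
yet for these links):

    #include <errno.h>
    #include <stdio.h>

    #define ATTACH_F_KNOWN  0x1     /* bits this version understands */

    struct attach_attr { unsigned int flags; };

    /* Refuse reserved bits up front; a later version can assign
     * them meaning without surprising old callers. */
    static int do_attach(const struct attach_attr *attr)
    {
        if (attr->flags & ~ATTACH_F_KNOWN)
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        struct attach_attr ok  = { .flags = 0x1 };
        struct attach_attr bad = { .flags = 0x8 };

        printf("%d %d\n", do_attach(&ok), do_attach(&bad));
        return 0;
    }
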
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 3f9bf562beea23..67707ff28fc519 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2849,6 +2849,12 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 	if (nr_pages < 2)
+ 		nr_pages = 2;
+ 
++	/*
++	 * Keep CPUs from coming online while resizing to synchronize
++	 * with new per CPU buffers being created.
++	 */
++	guard(cpus_read_lock)();
++
+ 	/* prevent another thread from changing buffer sizes */
+ 	mutex_lock(&buffer->mutex);
+ 	atomic_inc(&buffer->resizing);
+@@ -2893,7 +2899,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 			cond_resched();
+ 		}
+ 
+-		cpus_read_lock();
+ 		/*
+ 		 * Fire off all the required work handlers
+ 		 * We can't schedule on offline CPUs, but it's not necessary
+@@ -2933,7 +2938,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 			cpu_buffer->nr_pages_to_update = 0;
+ 		}
+ 
+-		cpus_read_unlock();
+ 	} else {
+ 		cpu_buffer = buffer->buffers[cpu_id];
+ 
+@@ -2961,8 +2965,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 			goto out_err;
+ 		}
+ 
+-		cpus_read_lock();
+-
+ 		/* Can't run something on an offline CPU. */
+ 		if (!cpu_online(cpu_id))
+ 			rb_update_pages(cpu_buffer);
+@@ -2981,7 +2983,6 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 		}
+ 
+ 		cpu_buffer->nr_pages_to_update = 0;
+-		cpus_read_unlock();
+ 	}
+ 
+  out:
+@@ -6764,7 +6765,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ 	old_size = buffer->subbuf_size;
+ 
+ 	/* prevent another thread from changing buffer sizes */
+-	mutex_lock(&buffer->mutex);
++	guard(mutex)(&buffer->mutex);
+ 	atomic_inc(&buffer->record_disabled);
+ 
+ 	/* Make sure all commits have finished */
+@@ -6869,7 +6870,6 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ 	}
+ 
+ 	atomic_dec(&buffer->record_disabled);
+-	mutex_unlock(&buffer->mutex);
+ 
+ 	return 0;
+ 
+@@ -6878,7 +6878,6 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ 	buffer->subbuf_size = old_size;
+ 
+ 	atomic_dec(&buffer->record_disabled);
+-	mutex_unlock(&buffer->mutex);
+ 
+ 	for_each_buffer_cpu(buffer, cpu) {
+ 		cpu_buffer = buffer->buffers[cpu];
+@@ -7284,8 +7283,8 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+ 	/* Check if any events were dropped */
+ 	missed_events = cpu_buffer->lost_events;
+ 
+-	if (cpu_buffer->reader_page != cpu_buffer->commit_page) {
+-		if (missed_events) {
++	if (missed_events) {
++		if (cpu_buffer->reader_page != cpu_buffer->commit_page) {
+ 			struct buffer_data_page *bpage = reader->page;
+ 			unsigned int commit;
+ 			/*
+@@ -7306,13 +7305,23 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+ 				local_add(RB_MISSED_STORED, &bpage->commit);
+ 			}
+ 			local_add(RB_MISSED_EVENTS, &bpage->commit);
++		} else if (!WARN_ONCE(cpu_buffer->reader_page == cpu_buffer->tail_page,
++				      "Reader on commit with %ld missed events",
++				      missed_events)) {
++			/*
++			 * There shouldn't be any missed events if the tail_page
++			 * is on the reader page. But if the tail page is not on the
++			 * reader page and the commit_page is, that would mean that
++			 * there's a commit_overrun (an interrupt preempted an
++			 * addition of an event and then filled the buffer
++			 * with new events). In this case it's not an
++			 * error, but it should still be reported.
++			 *
++			 * TODO: Add missed events to the page for user space to know.
++			 */
++			pr_info("Ring buffer [%d] commit overrun lost %ld events at timestamp:%lld\n",
++				cpu, missed_events, cpu_buffer->reader_page->page->time_stamp);
+ 		}
+-	} else {
+-		/*
+-		 * There really shouldn't be any missed events if the commit
+-		 * is on the reader page.
+-		 */
+-		WARN_ON_ONCE(missed_events);
+ 	}
+ 
+ 	cpu_buffer->lost_events = 0;
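
Both ring-buffer hunks lean on the kernel's scope-based cleanup guards from
<linux/cleanup.h>: guard(cpus_read_lock)() and guard(mutex)(&buffer->mutex)
release the lock automatically on every return path, which is why the
explicit unlock calls can simply be deleted. Outside the kernel the same
effect comes from the compiler's cleanup attribute; a rough pthread analogue,
where GUARD_MUTEX() is an invented helper, not a kernel API:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void unlock_cleanup(pthread_mutex_t **mp)
    {
        pthread_mutex_unlock(*mp);
    }

    /* Rough analogue of guard(mutex)(&m): the mutex is released
     * when the guard variable goes out of scope, on any path. */
    #define GUARD_MUTEX(lock)                                   \
        pthread_mutex_t *guard_var                              \
            __attribute__((cleanup(unlock_cleanup))) =          \
            (pthread_mutex_lock(lock), (lock))

    static int do_work(int fail)
    {
        GUARD_MUTEX(&m);

        if (fail)
            return -1;          /* unlocked automatically */
        puts("work done");
        return 0;               /* unlocked automatically too */
    }

    int main(void)
    {
        do_work(0);
        do_work(1);
        return 0;
    }
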
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 79be1995db44c4..10ee434a9b755f 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1772,6 +1772,9 @@ extern int event_enable_register_trigger(char *glob,
+ extern void event_enable_unregister_trigger(char *glob,
+ 					    struct event_trigger_data *test,
+ 					    struct trace_event_file *file);
++extern struct event_trigger_data *
++trigger_data_alloc(struct event_command *cmd_ops, char *cmd, char *param,
++		   void *private_data);
+ extern void trigger_data_free(struct event_trigger_data *data);
+ extern int event_trigger_init(struct event_trigger_data *data);
+ extern int trace_event_trigger_enable_disable(struct trace_event_file *file,
+@@ -1798,11 +1801,6 @@ extern bool event_trigger_check_remove(const char *glob);
+ extern bool event_trigger_empty_param(const char *param);
+ extern int event_trigger_separate_filter(char *param_and_filter, char **param,
+ 					 char **filter, bool param_required);
+-extern struct event_trigger_data *
+-event_trigger_alloc(struct event_command *cmd_ops,
+-		    char *cmd,
+-		    char *param,
+-		    void *private_data);
+ extern int event_trigger_parse_num(char *trigger,
+ 				   struct event_trigger_data *trigger_data);
+ extern int event_trigger_set_filter(struct event_command *cmd_ops,
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 1260c23cfa5fc4..86fd06812cdab4 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -5246,17 +5246,94 @@ hist_trigger_actions(struct hist_trigger_data *hist_data,
+ 	}
+ }
+ 
++/*
++ * The hist_pad structure is used to save information to create
++ * a histogram from the histogram trigger. It's too big to store
++ * on the stack, so when the histogram trigger is initialized
++ * a percpu array of 4 hist_pad structures is allocated.
++ * This will cover every context from normal, softirq, irq and NMI
++ * in the very unlikely event that a tigger happens at each of
++ * these contexts and interrupts a currently active trigger.
++ */
++struct hist_pad {
++	unsigned long		entries[HIST_STACKTRACE_DEPTH];
++	u64			var_ref_vals[TRACING_MAP_VARS_MAX];
++	char			compound_key[HIST_KEY_SIZE_MAX];
++};
++
++static struct hist_pad __percpu *hist_pads;
++static DEFINE_PER_CPU(int, hist_pad_cnt);
++static refcount_t hist_pad_ref;
++
++/* One hist_pad for every context (normal, softirq, irq, NMI) */
++#define MAX_HIST_CNT 4
++
++static int alloc_hist_pad(void)
++{
++	lockdep_assert_held(&event_mutex);
++
++	if (refcount_read(&hist_pad_ref)) {
++		refcount_inc(&hist_pad_ref);
++		return 0;
++	}
++
++	hist_pads = __alloc_percpu(sizeof(struct hist_pad) * MAX_HIST_CNT,
++				   __alignof__(struct hist_pad));
++	if (!hist_pads)
++		return -ENOMEM;
++
++	refcount_set(&hist_pad_ref, 1);
++	return 0;
++}
++
++static void free_hist_pad(void)
++{
++	lockdep_assert_held(&event_mutex);
++
++	if (!refcount_dec_and_test(&hist_pad_ref))
++		return;
++
++	free_percpu(hist_pads);
++	hist_pads = NULL;
++}
++
++static struct hist_pad *get_hist_pad(void)
++{
++	struct hist_pad *hist_pad;
++	int cnt;
++
++	if (WARN_ON_ONCE(!hist_pads))
++		return NULL;
++
++	preempt_disable();
++
++	hist_pad = per_cpu_ptr(hist_pads, smp_processor_id());
++
++	if (this_cpu_read(hist_pad_cnt) == MAX_HIST_CNT) {
++		preempt_enable();
++		return NULL;
++	}
++
++	cnt = this_cpu_inc_return(hist_pad_cnt) - 1;
++
++	return &hist_pad[cnt];
++}
++
++static void put_hist_pad(void)
++{
++	this_cpu_dec(hist_pad_cnt);
++	preempt_enable();
++}
++
+ static void event_hist_trigger(struct event_trigger_data *data,
+ 			       struct trace_buffer *buffer, void *rec,
+ 			       struct ring_buffer_event *rbe)
+ {
+ 	struct hist_trigger_data *hist_data = data->private_data;
+ 	bool use_compound_key = (hist_data->n_keys > 1);
+-	unsigned long entries[HIST_STACKTRACE_DEPTH];
+-	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
+-	char compound_key[HIST_KEY_SIZE_MAX];
+ 	struct tracing_map_elt *elt = NULL;
+ 	struct hist_field *key_field;
++	struct hist_pad *hist_pad;
+ 	u64 field_contents;
+ 	void *key = NULL;
+ 	unsigned int i;
+@@ -5264,12 +5341,18 @@ static void event_hist_trigger(struct event_trigger_data *data,
+ 	if (unlikely(!rbe))
+ 		return;
+ 
+-	memset(compound_key, 0, hist_data->key_size);
++	hist_pad = get_hist_pad();
++	if (!hist_pad)
++		return;
++
++	memset(hist_pad->compound_key, 0, hist_data->key_size);
+ 
+ 	for_each_hist_key_field(i, hist_data) {
+ 		key_field = hist_data->fields[i];
+ 
+ 		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
++			unsigned long *entries = hist_pad->entries;
++
+ 			memset(entries, 0, HIST_STACKTRACE_SIZE);
+ 			if (key_field->field) {
+ 				unsigned long *stack, n_entries;
+@@ -5293,26 +5376,31 @@ static void event_hist_trigger(struct event_trigger_data *data,
+ 		}
+ 
+ 		if (use_compound_key)
+-			add_to_key(compound_key, key, key_field, rec);
++			add_to_key(hist_pad->compound_key, key, key_field, rec);
+ 	}
+ 
+ 	if (use_compound_key)
+-		key = compound_key;
++		key = hist_pad->compound_key;
+ 
+ 	if (hist_data->n_var_refs &&
+-	    !resolve_var_refs(hist_data, key, var_ref_vals, false))
+-		return;
++	    !resolve_var_refs(hist_data, key, hist_pad->var_ref_vals, false))
++		goto out;
+ 
+ 	elt = tracing_map_insert(hist_data->map, key);
+ 	if (!elt)
+-		return;
++		goto out;
+ 
+-	hist_trigger_elt_update(hist_data, elt, buffer, rec, rbe, var_ref_vals);
++	hist_trigger_elt_update(hist_data, elt, buffer, rec, rbe, hist_pad->var_ref_vals);
+ 
+-	if (resolve_var_refs(hist_data, key, var_ref_vals, true))
+-		hist_trigger_actions(hist_data, elt, buffer, rec, rbe, key, var_ref_vals);
++	if (resolve_var_refs(hist_data, key, hist_pad->var_ref_vals, true)) {
++		hist_trigger_actions(hist_data, elt, buffer, rec, rbe,
++				     key, hist_pad->var_ref_vals);
++	}
+ 
+ 	hist_poll_wakeup();
++
++ out:
++	put_hist_pad();
+ }
+ 
+ static void hist_trigger_stacktrace_print(struct seq_file *m,
+@@ -6157,6 +6245,9 @@ static int event_hist_trigger_init(struct event_trigger_data *data)
+ {
+ 	struct hist_trigger_data *hist_data = data->private_data;
+ 
++	if (alloc_hist_pad() < 0)
++		return -ENOMEM;
++
+ 	if (!data->ref && hist_data->attrs->name)
+ 		save_named_trigger(hist_data->attrs->name, data);
+ 
+@@ -6201,6 +6292,7 @@ static void event_hist_trigger_free(struct event_trigger_data *data)
+ 
+ 		destroy_hist_data(hist_data);
+ 	}
++	free_hist_pad();
+ }
+ 
+ static const struct event_trigger_ops event_hist_trigger_ops = {
+@@ -6216,9 +6308,7 @@ static int event_hist_trigger_named_init(struct event_trigger_data *data)
+ 
+ 	save_named_trigger(data->named_data->name, data);
+ 
+-	event_hist_trigger_init(data->named_data);
+-
+-	return 0;
++	return event_hist_trigger_init(data->named_data);
+ }
+ 
+ static void event_hist_trigger_named_free(struct event_trigger_data *data)
+@@ -6705,7 +6795,7 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
+ 		return PTR_ERR(hist_data);
+ 	}
+ 
+-	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, hist_data);
++	trigger_data = trigger_data_alloc(cmd_ops, cmd, param, hist_data);
+ 	if (!trigger_data) {
+ 		ret = -ENOMEM;
+ 		goto out_free;
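
The hist_pad scheme above trades three large on-stack buffers for a
preallocated per-CPU array with four slots, one for each context that can
nest on a CPU (task, softirq, hardirq, NMI); a per-CPU counter claims a slot
with preemption disabled, and a fifth nesting level is simply refused. A
stripped-down, single-threaded sketch of the slot-claiming discipline (the
per-CPU and preemption machinery is elided; names are illustrative):

    #include <stdio.h>

    #define MAX_NEST 4          /* task, softirq, irq, NMI */

    struct pad { char scratch[256]; };

    static struct pad pads[MAX_NEST];
    static int pad_cnt;         /* per-CPU in the real code */

    /* Claim the next free slot, or NULL if every context on this
     * "CPU" already holds one. */
    static struct pad *get_pad(void)
    {
        if (pad_cnt == MAX_NEST)
            return NULL;
        return &pads[pad_cnt++];
    }

    static void put_pad(void)
    {
        --pad_cnt;
    }

    static void handler(int depth)
    {
        struct pad *p = get_pad();

        if (!p)
            return;             /* too deeply nested: drop the event */
        if (depth)
            handler(depth - 1); /* models an interrupting context */
        put_pad();
    }

    int main(void)
    {
        handler(5);             /* the fifth level is refused */
        printf("pad_cnt back to %d\n", pad_cnt);
        return 0;
    }
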
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 6e87ae2a1a66bf..c443ed7649a896 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -804,7 +804,7 @@ int event_trigger_separate_filter(char *param_and_filter, char **param,
+ }
+ 
+ /**
+- * event_trigger_alloc - allocate and init event_trigger_data for a trigger
++ * trigger_data_alloc - allocate and init event_trigger_data for a trigger
+  * @cmd_ops: The event_command operations for the trigger
+  * @cmd: The cmd string
+  * @param: The param string
+@@ -815,14 +815,14 @@ int event_trigger_separate_filter(char *param_and_filter, char **param,
+  * trigger_ops to assign to the event_trigger_data.  @private_data can
+  * also be passed in and associated with the event_trigger_data.
+  *
+- * Use event_trigger_free() to free an event_trigger_data object.
++ * Use trigger_data_free() to free an event_trigger_data object.
+  *
+  * Return: The trigger_data object on success, NULL otherwise
+  */
+-struct event_trigger_data *event_trigger_alloc(struct event_command *cmd_ops,
+-					       char *cmd,
+-					       char *param,
+-					       void *private_data)
++struct event_trigger_data *trigger_data_alloc(struct event_command *cmd_ops,
++					      char *cmd,
++					      char *param,
++					      void *private_data)
+ {
+ 	struct event_trigger_data *trigger_data;
+ 	const struct event_trigger_ops *trigger_ops;
+@@ -989,13 +989,13 @@ event_trigger_parse(struct event_command *cmd_ops,
+ 		return ret;
+ 
+ 	ret = -ENOMEM;
+-	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, file);
++	trigger_data = trigger_data_alloc(cmd_ops, cmd, param, file);
+ 	if (!trigger_data)
+ 		goto out;
+ 
+ 	if (remove) {
+ 		event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
+-		kfree(trigger_data);
++		trigger_data_free(trigger_data);
+ 		ret = 0;
+ 		goto out;
+ 	}
+@@ -1022,7 +1022,7 @@ event_trigger_parse(struct event_command *cmd_ops,
+ 
+  out_free:
+ 	event_trigger_reset_filter(cmd_ops, trigger_data);
+-	kfree(trigger_data);
++	trigger_data_free(trigger_data);
+ 	goto out;
+ }
+ 
+@@ -1793,7 +1793,7 @@ int event_enable_trigger_parse(struct event_command *cmd_ops,
+ 	enable_data->enable = enable;
+ 	enable_data->file = event_enable_file;
+ 
+-	trigger_data = event_trigger_alloc(cmd_ops, cmd, param, enable_data);
++	trigger_data = trigger_data_alloc(cmd_ops, cmd, param, enable_data);
+ 	if (!trigger_data) {
+ 		kfree(enable_data);
+ 		goto out;
+diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
+index f6ea0c5b5da393..96cd896684676d 100644
+--- a/lib/Kconfig.ubsan
++++ b/lib/Kconfig.ubsan
+@@ -118,6 +118,8 @@ config UBSAN_UNREACHABLE
+ 
+ config UBSAN_INTEGER_WRAP
+ 	bool "Perform checking for integer arithmetic wrap-around"
++	# This is very experimental so drop the next line if you really want it
++	depends on BROKEN
+ 	depends on !COMPILE_TEST
+ 	depends on $(cc-option,-fsanitize-undefined-ignore-overflow-pattern=all)
+ 	depends on $(cc-option,-fsanitize=signed-integer-overflow)
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index bc9391e55d57ea..9ce83ab71bacd8 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -820,7 +820,7 @@ static bool iov_iter_aligned_bvec(const struct iov_iter *i, unsigned addr_mask,
+ 	size_t size = i->count;
+ 
+ 	do {
+-		size_t len = bvec->bv_len;
++		size_t len = bvec->bv_len - skip;
+ 
+ 		if (len > size)
+ 			len = size;
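
The iov_iter fix deserves a spelled-out example: when iteration starts
partway into the first bvec, that segment only contributes bv_len minus the
skip offset, and using the full bv_len overstates the span being checked for
alignment. A reduced sketch of the off-by-skip, where struct seg is a
stand-in for struct bio_vec:

    #include <stdio.h>
    #include <stddef.h>

    struct seg { size_t len; };

    /* Bytes the first segment really contributes once 'skip'
     * bytes of it have already been consumed. */
    static size_t first_seg_remaining(const struct seg *s, size_t skip)
    {
        return s->len - skip;   /* the fix: account for the skip */
    }

    int main(void)
    {
        struct seg s = { .len = 4096 };
        size_t skip = 512;

        printf("buggy: %zu\n", s.len);                         /* 4096 */
        printf("fixed: %zu\n", first_seg_remaining(&s, skip)); /* 3584 */
        return 0;
    }
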
+diff --git a/lib/kunit/static_stub.c b/lib/kunit/static_stub.c
+index 92b2cccd5e7633..484fd85251b415 100644
+--- a/lib/kunit/static_stub.c
++++ b/lib/kunit/static_stub.c
+@@ -96,7 +96,7 @@ void __kunit_activate_static_stub(struct kunit *test,
+ 
+ 	/* If the replacement address is NULL, deactivate the stub. */
+ 	if (!replacement_addr) {
+-		kunit_deactivate_static_stub(test, replacement_addr);
++		kunit_deactivate_static_stub(test, real_fn_addr);
+ 		return;
+ 	}
+ 
+diff --git a/lib/tests/usercopy_kunit.c b/lib/tests/usercopy_kunit.c
+index 77fa00a13df775..80f8abe10968c1 100644
+--- a/lib/tests/usercopy_kunit.c
++++ b/lib/tests/usercopy_kunit.c
+@@ -27,6 +27,7 @@
+ 			    !defined(CONFIG_MICROBLAZE) &&	\
+ 			    !defined(CONFIG_NIOS2) &&		\
+ 			    !defined(CONFIG_PPC32) &&		\
++			    !defined(CONFIG_SPARC32) &&		\
+ 			    !defined(CONFIG_SUPERH))
+ # define TEST_U64
+ #endif
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 7b90cbeb4a1adf..6af6d8f2929ce4 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1589,6 +1589,16 @@ int folio_wait_private_2_killable(struct folio *folio)
+ }
+ EXPORT_SYMBOL(folio_wait_private_2_killable);
+ 
++static void filemap_end_dropbehind(struct folio *folio)
++{
++	struct address_space *mapping = folio->mapping;
++
++	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
++
++	if (mapping && !folio_test_writeback(folio) && !folio_test_dirty(folio))
++		folio_unmap_invalidate(mapping, folio, 0);
++}
++
+ /*
+  * If folio was marked as dropbehind, then pages should be dropped when writeback
+  * completes. Do that now. If we fail, it's likely because of a big folio -
+@@ -1604,8 +1614,7 @@ static void folio_end_dropbehind_write(struct folio *folio)
+ 	 * invalidation in that case.
+ 	 */
+ 	if (in_task() && folio_trylock(folio)) {
+-		if (folio->mapping)
+-			folio_unmap_invalidate(folio->mapping, folio, 0);
++		filemap_end_dropbehind(folio);
+ 		folio_unlock(folio);
+ 	}
+ }
+@@ -2635,8 +2644,7 @@ static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio)
+ 	return (pos1 >> shift == pos2 >> shift);
+ }
+ 
+-static void filemap_end_dropbehind_read(struct address_space *mapping,
+-					struct folio *folio)
++static void filemap_end_dropbehind_read(struct folio *folio)
+ {
+ 	if (!folio_test_dropbehind(folio))
+ 		return;
+@@ -2644,7 +2652,7 @@ static void filemap_end_dropbehind_read(struct address_space *mapping,
+ 		return;
+ 	if (folio_trylock(folio)) {
+ 		if (folio_test_clear_dropbehind(folio))
+-			folio_unmap_invalidate(mapping, folio, 0);
++			filemap_end_dropbehind(folio);
+ 		folio_unlock(folio);
+ 	}
+ }
+@@ -2765,7 +2773,7 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
+ 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+ 			struct folio *folio = fbatch.folios[i];
+ 
+-			filemap_end_dropbehind_read(mapping, folio);
++			filemap_end_dropbehind_read(folio);
+ 			folio_put(folio);
+ 		}
+ 		folio_batch_init(&fbatch);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 47fa713ccb4d89..4f29e393f6af1c 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -898,9 +898,7 @@ static inline bool page_expected_state(struct page *page,
+ #ifdef CONFIG_MEMCG
+ 			page->memcg_data |
+ #endif
+-#ifdef CONFIG_PAGE_POOL
+-			((page->pp_magic & ~0x3UL) == PP_SIGNATURE) |
+-#endif
++			page_pool_page_is_pp(page) |
+ 			(page->flags & check_flags)))
+ 		return false;
+ 
+@@ -927,10 +925,8 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
+ 	if (unlikely(page->memcg_data))
+ 		bad_reason = "page still charged to cgroup";
+ #endif
+-#ifdef CONFIG_PAGE_POOL
+-	if (unlikely((page->pp_magic & ~0x3UL) == PP_SIGNATURE))
++	if (unlikely(page_pool_page_is_pp(page)))
+ 		bad_reason = "page_pool leak";
+-#endif
+ 	return bad_reason;
+ }
+ 
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 61461b9fa13431..5c1ca57ccd2853 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1704,7 +1704,7 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
+ 				    start, len, &subreq->io_iter);
+ 	}
+ 	if (IS_ERR(req)) {
+-		netfs_write_subrequest_terminated(subreq, PTR_ERR(req), false);
++		netfs_write_subrequest_terminated(subreq, PTR_ERR(req));
+ 		return;
+ 	}
+ 
+@@ -1712,7 +1712,7 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
+ 	if (err) {
+ 		trace_9p_protocol_dump(clnt, &req->rc);
+ 		p9_req_put(clnt, req);
+-		netfs_write_subrequest_terminated(subreq, err, false);
++		netfs_write_subrequest_terminated(subreq, err);
+ 		return;
+ 	}
+ 
+@@ -1724,7 +1724,7 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
+ 	p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", len);
+ 
+ 	p9_req_put(clnt, req);
+-	netfs_write_subrequest_terminated(subreq, written, false);
++	netfs_write_subrequest_terminated(subreq, written);
+ }
+ EXPORT_SYMBOL(p9_client_write_subreq);
+ 
+diff --git a/net/bluetooth/eir.c b/net/bluetooth/eir.c
+index 1bc51e2b05a347..3f72111ba651f9 100644
+--- a/net/bluetooth/eir.c
++++ b/net/bluetooth/eir.c
+@@ -242,7 +242,7 @@ u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 	return ad_len;
+ }
+ 
+-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
++u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size)
+ {
+ 	struct adv_info *adv = NULL;
+ 	u8 ad_len = 0, flags = 0;
+@@ -286,7 +286,7 @@ u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 		/* If flags would still be empty, then there is no need to
+ 		 * include the "Flags" AD field".
+ 		 * include the "Flags" AD field.
+-		if (flags) {
++		if (flags && (ad_len + eir_precalc_len(1) <= size)) {
+ 			ptr[0] = 0x02;
+ 			ptr[1] = EIR_FLAGS;
+ 			ptr[2] = flags;
+@@ -316,7 +316,8 @@ u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 		}
+ 
+ 		/* Provide Tx Power only if we can provide a valid value for it */
+-		if (adv_tx_power != HCI_TX_POWER_INVALID) {
++		if (adv_tx_power != HCI_TX_POWER_INVALID &&
++		    (ad_len + eir_precalc_len(1) <= size)) {
+ 			ptr[0] = 0x02;
+ 			ptr[1] = EIR_TX_POWER;
+ 			ptr[2] = (u8)adv_tx_power;
+@@ -366,17 +367,19 @@ u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr)
+ 
+ void *eir_get_service_data(u8 *eir, size_t eir_len, u16 uuid, size_t *len)
+ {
+-	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, len))) {
++	size_t dlen;
++
++	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, &dlen))) {
+ 		u16 value = get_unaligned_le16(eir);
+ 
+ 		if (uuid == value) {
+ 			if (len)
+-				*len -= 2;
++				*len = dlen - 2;
+ 			return &eir[2];
+ 		}
+ 
+-		eir += *len;
+-		eir_len -= *len;
++		eir += dlen;
++		eir_len -= dlen;
+ 	}
+ 
+ 	return NULL;
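
In eir_get_service_data() the caller's len pointer (which may even be NULL)
had been doing double duty as the loop's advance amount and as the returned
payload length, so after the first record the walk advanced by a stale value;
the fix iterates with a local dlen and writes *len only on a match. The
failure mode generalizes to any output pointer reused as loop state; a
reduced sketch over a [length][tag][payload] record layout (the format here
is invented for illustration):

    #include <stdio.h>
    #include <stddef.h>

    /* Records: [length byte][tag byte][payload...], where the
     * length counts the tag plus the payload. */
    static const unsigned char recs[] = {
        3, 0x01, 'a', 'b',
        4, 0x02, 'x', 'y', 'z',
    };

    /* A local dlen drives the walk; *out_len is touched only on
     * a hit -- the shape of the eir.c fix above. */
    static const unsigned char *find_tag(const unsigned char *p,
                                         size_t total, unsigned char tag,
                                         size_t *out_len)
    {
        while (total >= 2) {
            size_t dlen = p[0];

            if (dlen + 1 > total)
                break;
            if (p[1] == tag) {
                *out_len = dlen - 1;
                return p + 2;
            }
            p += dlen + 1;
            total -= dlen + 1;
        }
        return NULL;
    }

    int main(void)
    {
        size_t len = 0;
        const unsigned char *d = find_tag(recs, sizeof(recs), 0x02, &len);

        if (d)
            printf("found %zu bytes: %.3s\n", len, (const char *)d);
        return 0;
    }
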
+diff --git a/net/bluetooth/eir.h b/net/bluetooth/eir.h
+index 5c89a05e8b2905..9372db83f912fa 100644
+--- a/net/bluetooth/eir.h
++++ b/net/bluetooth/eir.h
+@@ -9,7 +9,7 @@
+ 
+ void eir_create(struct hci_dev *hdev, u8 *data);
+ 
+-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
++u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size);
+ u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr);
+ u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
+ 
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 946d2ae551f86c..fccdb864af7264 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -785,7 +785,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c
+ 	d->sync_handle = conn->sync_handle;
+ 
+ 	if (test_and_clear_bit(HCI_CONN_PA_SYNC, &conn->flags)) {
+-		hci_conn_hash_list_flag(hdev, find_bis, ISO_LINK,
++		hci_conn_hash_list_flag(hdev, find_bis, BIS_LINK,
+ 					HCI_CONN_PA_SYNC, d);
+ 
+ 		if (!d->count)
+@@ -795,7 +795,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c
+ 	}
+ 
+ 	if (test_and_clear_bit(HCI_CONN_BIG_SYNC, &conn->flags)) {
+-		hci_conn_hash_list_flag(hdev, find_bis, ISO_LINK,
++		hci_conn_hash_list_flag(hdev, find_bis, BIS_LINK,
+ 					HCI_CONN_BIG_SYNC, d);
+ 
+ 		if (!d->count)
+@@ -885,9 +885,11 @@ static void cis_cleanup(struct hci_conn *conn)
+ 	/* Check if ISO connection is a CIS and remove CIG if there are
+ 	 * no other connections using it.
+ 	 */
+-	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_BOUND, &d);
+-	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECT, &d);
+-	hci_conn_hash_list_state(hdev, find_cis, ISO_LINK, BT_CONNECTED, &d);
++	hci_conn_hash_list_state(hdev, find_cis, CIS_LINK, BT_BOUND, &d);
++	hci_conn_hash_list_state(hdev, find_cis, CIS_LINK, BT_CONNECT,
++				 &d);
++	hci_conn_hash_list_state(hdev, find_cis, CIS_LINK, BT_CONNECTED,
++				 &d);
+ 	if (d.count)
+ 		return;
+ 
+@@ -910,7 +912,8 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 		if (!hdev->acl_mtu)
+ 			return ERR_PTR(-ECONNREFUSED);
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		if (hdev->iso_mtu)
+ 			/* Dedicated ISO Buffer exists */
+ 			break;
+@@ -974,7 +977,8 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ 		hci_copy_identity_address(hdev, &conn->src, &conn->src_type);
+ 		conn->mtu = hdev->le_mtu ? hdev->le_mtu : hdev->acl_mtu;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		/* conn->src should reflect the local identity address */
+ 		hci_copy_identity_address(hdev, &conn->src, &conn->src_type);
+ 
+@@ -1071,7 +1075,8 @@ static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason)
+ 		if (HCI_CONN_HANDLE_UNSET(conn->handle))
+ 			hci_conn_failed(conn, reason);
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		if ((conn->state != BT_CONNECTED &&
+ 		    !test_bit(HCI_CONN_CREATE_CIS, &conn->flags)) ||
+ 		    test_bit(HCI_CONN_BIG_CREATED, &conn->flags))
+@@ -1146,7 +1151,8 @@ void hci_conn_del(struct hci_conn *conn)
+ 			hdev->acl_cnt += conn->sent;
+ 	} else {
+ 		/* Unacked ISO frames */
+-		if (conn->type == ISO_LINK) {
++		if (conn->type == CIS_LINK ||
++		    conn->type == BIS_LINK) {
+ 			if (hdev->iso_pkts)
+ 				hdev->iso_cnt += conn->sent;
+ 			else if (hdev->le_pkts)
+@@ -1532,7 +1538,7 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		     memcmp(conn->le_per_adv_data, base, base_len)))
+ 		return ERR_PTR(-EADDRINUSE);
+ 
+-	conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
++	conn = hci_conn_add_unset(hdev, BIS_LINK, dst, HCI_ROLE_MASTER);
+ 	if (IS_ERR(conn))
+ 		return conn;
+ 
+@@ -1740,7 +1746,7 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 	data.count = 0;
+ 
+ 	/* Create a BIS for each bound connection */
+-	hci_conn_hash_list_state(hdev, bis_list, ISO_LINK,
++	hci_conn_hash_list_state(hdev, bis_list, BIS_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	cp.handle = qos->bcast.big;
+@@ -1829,12 +1835,12 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 		for (data.cig = 0x00; data.cig < 0xf0; data.cig++) {
+ 			data.count = 0;
+ 
+-			hci_conn_hash_list_state(hdev, find_cis, ISO_LINK,
++			hci_conn_hash_list_state(hdev, find_cis, CIS_LINK,
+ 						 BT_CONNECT, &data);
+ 			if (data.count)
+ 				continue;
+ 
+-			hci_conn_hash_list_state(hdev, find_cis, ISO_LINK,
++			hci_conn_hash_list_state(hdev, find_cis, CIS_LINK,
+ 						 BT_CONNECTED, &data);
+ 			if (!data.count)
+ 				break;
+@@ -1884,7 +1890,8 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type, qos->ucast.cig,
+ 				       qos->ucast.cis);
+ 	if (!cis) {
+-		cis = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
++		cis = hci_conn_add_unset(hdev, CIS_LINK, dst,
++					 HCI_ROLE_MASTER);
+ 		if (IS_ERR(cis))
+ 			return cis;
+ 		cis->cleanup = cis_cleanup;
+@@ -1976,7 +1983,7 @@ bool hci_iso_setup_path(struct hci_conn *conn)
+ 
+ int hci_conn_check_create_cis(struct hci_conn *conn)
+ {
+-	if (conn->type != ISO_LINK || !bacmp(&conn->dst, BDADDR_ANY))
++	if (conn->type != CIS_LINK)
+ 		return -EINVAL;
+ 
+ 	if (!conn->parent || conn->parent->state != BT_CONNECTED ||
+@@ -2070,7 +2077,9 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ {
+ 	struct hci_conn *conn;
+ 
+-	conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE);
++	bt_dev_dbg(hdev, "dst %pMR type %d sid %d", dst, dst_type, sid);
++
++	conn = hci_conn_add_unset(hdev, BIS_LINK, dst, HCI_ROLE_SLAVE);
+ 	if (IS_ERR(conn))
+ 		return conn;
+ 
+@@ -2219,7 +2228,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	 * the start periodic advertising and create BIG commands have
+ 	 * been queued
+ 	 */
+-	hci_conn_hash_list_state(hdev, bis_mark_per_adv, ISO_LINK,
++	hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
+ 				 BT_BOUND, &data);
+ 
+ 	/* Queue start periodic advertising and create BIG */
+@@ -2951,7 +2960,8 @@ void hci_conn_tx_queue(struct hci_conn *conn, struct sk_buff *skb)
+ 	 * TODO: SCO support without flowctl (needs to be done in drivers)
+ 	 */
+ 	switch (conn->type) {
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 	case ACL_LINK:
+ 	case LE_LINK:
+ 		break;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 5eb0600bbd03cc..af30a420bab75a 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1877,10 +1877,8 @@ void hci_free_adv_monitor(struct hci_dev *hdev, struct adv_monitor *monitor)
+ 	if (monitor->handle)
+ 		idr_remove(&hdev->adv_monitors_idr, monitor->handle);
+ 
+-	if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED) {
++	if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED)
+ 		hdev->adv_monitors_cnt--;
+-		mgmt_adv_monitor_removed(hdev, monitor->handle);
+-	}
+ 
+ 	kfree(monitor);
+ }
+@@ -2487,6 +2485,7 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
+ 
+ 	mutex_init(&hdev->lock);
+ 	mutex_init(&hdev->req_lock);
++	mutex_init(&hdev->mgmt_pending_lock);
+ 
+ 	ida_init(&hdev->unset_handle_ida);
+ 
+@@ -2898,12 +2897,13 @@ int hci_recv_frame(struct hci_dev *hdev, struct sk_buff *skb)
+ 		break;
+ 	case HCI_ACLDATA_PKT:
+ 		/* Detect if ISO packet has been sent as ACL */
+-		if (hci_conn_num(hdev, ISO_LINK)) {
++		if (hci_conn_num(hdev, CIS_LINK) ||
++		    hci_conn_num(hdev, BIS_LINK)) {
+ 			__u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
+ 			__u8 type;
+ 
+ 			type = hci_conn_lookup_type(hdev, hci_handle(handle));
+-			if (type == ISO_LINK)
++			if (type == CIS_LINK || type == BIS_LINK)
+ 				hci_skb_pkt_type(skb) = HCI_ISODATA_PKT;
+ 		}
+ 		break;
+@@ -3345,7 +3345,8 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ 	case LE_LINK:
+ 		cnt = hdev->le_mtu ? hdev->le_cnt : hdev->acl_cnt;
+ 		break;
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 		cnt = hdev->iso_mtu ? hdev->iso_cnt :
+ 			hdev->le_mtu ? hdev->le_cnt : hdev->acl_cnt;
+ 		break;
+@@ -3359,7 +3360,7 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ }
+ 
+ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+-				     int *quote)
++				     __u8 type2, int *quote)
+ {
+ 	struct hci_conn_hash *h = &hdev->conn_hash;
+ 	struct hci_conn *conn = NULL, *c;
+@@ -3371,7 +3372,8 @@ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+ 	rcu_read_lock();
+ 
+ 	list_for_each_entry_rcu(c, &h->list, list) {
+-		if (c->type != type || skb_queue_empty(&c->data_q))
++		if ((c->type != type && c->type != type2) ||
++		    skb_queue_empty(&c->data_q))
+ 			continue;
+ 
+ 		if (c->state != BT_CONNECTED && c->state != BT_CONFIG)
+@@ -3403,23 +3405,18 @@ static void hci_link_tx_to(struct hci_dev *hdev, __u8 type)
+ 
+ 	bt_dev_err(hdev, "link tx timeout");
+ 
+-	rcu_read_lock();
++	hci_dev_lock(hdev);
+ 
+ 	/* Kill stalled connections */
+-	list_for_each_entry_rcu(c, &h->list, list) {
++	list_for_each_entry(c, &h->list, list) {
+ 		if (c->type == type && c->sent) {
+ 			bt_dev_err(hdev, "killing stalled connection %pMR",
+ 				   &c->dst);
+-			/* hci_disconnect might sleep, so, we have to release
+-			 * the RCU read lock before calling it.
+-			 */
+-			rcu_read_unlock();
+ 			hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM);
+-			rcu_read_lock();
+ 		}
+ 	}
+ 
+-	rcu_read_unlock();
++	hci_dev_unlock(hdev);
+ }
+ 
+ static struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
+@@ -3579,7 +3576,7 @@ static void hci_sched_sco(struct hci_dev *hdev, __u8 type)
+ 	else
+ 		cnt = &hdev->sco_cnt;
+ 
+-	while (*cnt && (conn = hci_low_sent(hdev, type, &quote))) {
++	while (*cnt && (conn = hci_low_sent(hdev, type, type, &quote))) {
+ 		while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ 			BT_DBG("skb %p len %d", skb, skb->len);
+ 			hci_send_conn_frame(hdev, conn, skb);
+@@ -3707,12 +3704,14 @@ static void hci_sched_iso(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s", hdev->name);
+ 
+-	if (!hci_conn_num(hdev, ISO_LINK))
++	if (!hci_conn_num(hdev, CIS_LINK) &&
++	    !hci_conn_num(hdev, BIS_LINK))
+ 		return;
+ 
+ 	cnt = hdev->iso_pkts ? &hdev->iso_cnt :
+ 		hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
+-	while (*cnt && (conn = hci_low_sent(hdev, ISO_LINK, &quote))) {
++	while (*cnt && (conn = hci_low_sent(hdev, CIS_LINK, BIS_LINK,
++					    &quote))) {
+ 		while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ 			BT_DBG("skb %p len %d", skb, skb->len);
+ 			hci_send_conn_frame(hdev, conn, skb);
+@@ -4057,10 +4056,13 @@ static void hci_send_cmd_sync(struct hci_dev *hdev, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	err = hci_send_frame(hdev, skb);
+-	if (err < 0) {
+-		hci_cmd_sync_cancel_sync(hdev, -err);
+-		return;
++	if (hci_skb_opcode(skb) != HCI_OP_NOP) {
++		err = hci_send_frame(hdev, skb);
++		if (err < 0) {
++			hci_cmd_sync_cancel_sync(hdev, -err);
++			return;
++		}
++		atomic_dec(&hdev->cmd_cnt);
+ 	}
+ 
+ 	if (hdev->req_status == HCI_REQ_PEND &&
+@@ -4068,8 +4070,6 @@ static void hci_send_cmd_sync(struct hci_dev *hdev, struct sk_buff *skb)
+ 		kfree_skb(hdev->req_skb);
+ 		hdev->req_skb = skb_clone(hdev->sent_cmd, GFP_KERNEL);
+ 	}
+-
+-	atomic_dec(&hdev->cmd_cnt);
+ }
+ 
+ static void hci_cmd_work(struct work_struct *work)
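
The hci_link_tx_to() hunk is a classic correction: hci_disconnect() may
sleep, and sleeping is forbidden inside an RCU read-side critical section, so
the old code dropped and re-took rcu_read_lock() around the call, during
which the list could change underfoot. The fix walks the list under
hci_dev_lock(), a sleepable lock held for the whole traversal. A pthread
analogue of the resulting shape (the connection list and kill_connection()
are invented for illustration):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    struct conn { int id; int stalled; struct conn *next; };

    static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct conn c2 = { 3, 1, NULL };
    static struct conn c1 = { 2, 0, &c2 };
    static struct conn c0 = { 1, 1, &c1 };
    static struct conn *conns = &c0;

    /* Stands in for hci_disconnect(): it may block, so it must
     * not run under a non-sleepable lock like rcu_read_lock(). */
    static void kill_connection(struct conn *c)
    {
        usleep(1000);
        printf("killed %d\n", c->id);
    }

    int main(void)
    {
        /* One sleepable lock held across the whole walk; no
         * unlock/relock dance that would let the list mutate. */
        pthread_mutex_lock(&dev_lock);
        for (struct conn *c = conns; c; c = c->next)
            if (c->stalled)
                kill_connection(c);
        pthread_mutex_unlock(&dev_lock);
        return 0;
    }
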
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index c38ada69c3d7f2..66052d6aaa1d50 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3804,7 +3804,7 @@ static void hci_unbound_cis_failed(struct hci_dev *hdev, u8 cig, u8 status)
+ 	lockdep_assert_held(&hdev->lock);
+ 
+ 	list_for_each_entry_safe(conn, tmp, &hdev->conn_hash.list, list) {
+-		if (conn->type != ISO_LINK || !bacmp(&conn->dst, BDADDR_ANY) ||
++		if (conn->type != CIS_LINK ||
+ 		    conn->state == BT_OPEN || conn->iso_qos.ucast.cig != cig)
+ 			continue;
+ 
+@@ -4467,7 +4467,8 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ 			break;
+ 
+-		case ISO_LINK:
++		case CIS_LINK:
++		case BIS_LINK:
+ 			if (hdev->iso_pkts) {
+ 				hdev->iso_cnt += count;
+ 				if (hdev->iso_cnt > hdev->iso_pkts)
+@@ -6351,6 +6352,17 @@ static void hci_le_ext_adv_report_evt(struct hci_dev *hdev, void *data,
+ 			info->secondary_phy &= 0x1f;
+ 		}
+ 
++		/* If PA Sync is pending and the hci_conn SID has not
++		 * been set, update it.
++		 */
++		if (hci_dev_test_flag(hdev, HCI_PA_SYNC)) {
++			struct hci_conn *conn;
++
++			conn = hci_conn_hash_lookup_create_pa_sync(hdev);
++			if (conn && conn->sid == HCI_SID_INVALID)
++				conn->sid = info->sid;
++		}
++
+ 		if (legacy_evt_type != LE_ADV_INVALID) {
+ 			process_adv_report(hdev, legacy_evt_type, &info->bdaddr,
+ 					   info->bdaddr_type, NULL, 0,
+@@ -6402,7 +6414,8 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ 	conn->sync_handle = le16_to_cpu(ev->handle);
+ 	conn->sid = HCI_SID_INVALID;
+ 
+-	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ISO_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, BIS_LINK,
++				      &flags);
+ 	if (!(mask & HCI_LM_ACCEPT)) {
+ 		hci_le_pa_term_sync(hdev, ev->handle);
+ 		goto unlock;
+@@ -6412,7 +6425,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 
+ 	/* Add connection to indicate PA sync event */
+-	pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
++	pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
+ 				     HCI_ROLE_SLAVE);
+ 
+ 	if (IS_ERR(pa_sync))
+@@ -6443,7 +6456,7 @@ static void hci_le_per_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, BIS_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT))
+ 		goto unlock;
+ 
+@@ -6727,7 +6740,7 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 	}
+ 
+-	if (conn->type != ISO_LINK) {
++	if (conn->type != CIS_LINK) {
+ 		bt_dev_err(hdev,
+ 			   "Invalid connection link type handle 0x%4.4x",
+ 			   handle);
+@@ -6845,7 +6858,7 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
+ 	if (!acl)
+ 		goto unlock;
+ 
+-	mask = hci_proto_connect_ind(hdev, &acl->dst, ISO_LINK, &flags);
++	mask = hci_proto_connect_ind(hdev, &acl->dst, CIS_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT)) {
+ 		hci_le_reject_cis(hdev, ev->cis_handle);
+ 		goto unlock;
+@@ -6853,8 +6866,8 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
+ 
+ 	cis = hci_conn_hash_lookup_handle(hdev, cis_handle);
+ 	if (!cis) {
+-		cis = hci_conn_add(hdev, ISO_LINK, &acl->dst, HCI_ROLE_SLAVE,
+-				   cis_handle);
++		cis = hci_conn_add(hdev, CIS_LINK, &acl->dst,
++				   HCI_ROLE_SLAVE, cis_handle);
+ 		if (IS_ERR(cis)) {
+ 			hci_le_reject_cis(hdev, ev->cis_handle);
+ 			goto unlock;
+@@ -6969,7 +6982,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 				bt_dev_dbg(hdev, "ignore too large handle %u", handle);
+ 				continue;
+ 			}
+-			bis = hci_conn_add(hdev, ISO_LINK, BDADDR_ANY,
++			bis = hci_conn_add(hdev, BIS_LINK, BDADDR_ANY,
+ 					   HCI_ROLE_SLAVE, handle);
+ 			if (IS_ERR(bis))
+ 				continue;
+@@ -7025,7 +7038,7 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
++	mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, BIS_LINK, &flags);
+ 	if (!(mask & HCI_LM_ACCEPT))
+ 		goto unlock;
+ 
+@@ -7155,7 +7168,8 @@ static void hci_le_meta_evt(struct hci_dev *hdev, void *data,
+ 
+ 	/* Only match event if command OGF is for LE */
+ 	if (hdev->req_skb &&
+-	    hci_opcode_ogf(hci_skb_opcode(hdev->req_skb)) == 0x08 &&
++	   (hci_opcode_ogf(hci_skb_opcode(hdev->req_skb)) == 0x08 ||
++	    hci_skb_opcode(hdev->req_skb) == HCI_OP_NOP) &&
+ 	    hci_skb_event(hdev->req_skb) == ev->subevent) {
+ 		*opcode = hci_skb_opcode(hdev->req_skb);
+ 		hci_req_cmd_complete(hdev, *opcode, 0x00, req_complete,
+@@ -7511,8 +7525,10 @@ void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ 		goto done;
+ 	}
+ 
++	hci_dev_lock(hdev);
+ 	kfree_skb(hdev->recv_event);
+ 	hdev->recv_event = skb_clone(skb, GFP_KERNEL);
++	hci_dev_unlock(hdev);
+ 
+ 	event = hdr->evt;
+ 	if (!event) {
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index e56b1cbedab908..83de3847c8eaf7 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1559,7 +1559,8 @@ static int hci_enable_per_advertising_sync(struct hci_dev *hdev, u8 instance)
+ static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)
+ {
+ 	u8 bid[3];
+-	u8 ad[4 + 3];
++	u8 ad[HCI_MAX_EXT_AD_LENGTH];
++	u8 len;
+ 
+ 	/* Skip if NULL adv as instance 0x00 is used for general purpose
+ 	 * advertising so it cannot be used for the likes of Broadcast Announcement
+@@ -1585,8 +1586,10 @@ static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)
+ 
+ 	/* Generate Broadcast ID */
+ 	get_random_bytes(bid, sizeof(bid));
+-	eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
+-	hci_set_adv_instance_data(hdev, adv->instance, sizeof(ad), ad, 0, NULL);
++	len = eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
++	memcpy(ad + len, adv->adv_data, adv->adv_data_len);
++	hci_set_adv_instance_data(hdev, adv->instance, len + adv->adv_data_len,
++				  ad, 0, NULL);
+ 
+ 	return hci_update_adv_data_sync(hdev, adv->instance);
+ }
+@@ -1603,8 +1606,15 @@ int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
+ 
+ 	if (instance) {
+ 		adv = hci_find_adv_instance(hdev, instance);
+-		/* Create an instance if that could not be found */
+-		if (!adv) {
++		if (adv) {
++			/* Turn it into periodic advertising */
++			adv->periodic = true;
++			adv->per_adv_data_len = data_len;
++			if (data)
++				memcpy(adv->per_adv_data, data, data_len);
++			adv->flags = flags;
++		} else if (!adv) {
++			/* Create an instance if that could not be found */
+ 			adv = hci_add_per_instance(hdev, instance, flags,
+ 						   data_len, data,
+ 						   sync_interval,
+@@ -1812,7 +1822,8 @@ static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)
+ 			return 0;
+ 	}
+ 
+-	len = eir_create_adv_data(hdev, instance, pdu->data);
++	len = eir_create_adv_data(hdev, instance, pdu->data,
++				  HCI_MAX_EXT_AD_LENGTH);
+ 
+ 	pdu->length = len;
+ 	pdu->handle = adv ? adv->handle : instance;
+@@ -1843,7 +1854,7 @@ static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 
+-	len = eir_create_adv_data(hdev, instance, cp.data);
++	len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));
+ 
+ 	/* There's nothing to do if the data hasn't changed */
+ 	if (hdev->adv_data_len == len &&
+@@ -2860,7 +2871,7 @@ static int hci_le_set_ext_scan_param_sync(struct hci_dev *hdev, u8 type,
+ 		if (sent) {
+ 			struct hci_conn *conn;
+ 
+-			conn = hci_conn_hash_lookup_ba(hdev, ISO_LINK,
++			conn = hci_conn_hash_lookup_ba(hdev, BIS_LINK,
+ 						       &sent->bdaddr);
+ 			if (conn) {
+ 				struct bt_iso_qos *qos = &conn->iso_qos;
+@@ -5477,7 +5488,7 @@ static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 	if (conn->type == LE_LINK)
+ 		return hci_le_connect_cancel_sync(hdev, conn, reason);
+ 
+-	if (conn->type == ISO_LINK) {
++	if (conn->type == CIS_LINK) {
+ 		/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+ 		 * page 1857:
+ 		 *
+@@ -5490,9 +5501,10 @@ static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 			return hci_disconnect_sync(hdev, conn, reason);
+ 
+ 		/* CIS with no Create CIS sent have nothing to cancel */
+-		if (bacmp(&conn->dst, BDADDR_ANY))
+-			return HCI_ERROR_LOCAL_HOST_TERM;
++		return HCI_ERROR_LOCAL_HOST_TERM;
++	}
+ 
++	if (conn->type == BIS_LINK) {
+ 		/* There is no way to cancel a BIS without terminating the BIG
+ 		 * which is done later on connection cleanup.
+ 		 */
+@@ -5554,9 +5566,12 @@ static int hci_reject_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ {
+ 	struct hci_cp_reject_conn_req cp;
+ 
+-	if (conn->type == ISO_LINK)
++	if (conn->type == CIS_LINK)
+ 		return hci_le_reject_cis_sync(hdev, conn, reason);
+ 
++	if (conn->type == BIS_LINK)
++		return -EINVAL;
++
+ 	if (conn->type == SCO_LINK || conn->type == ESCO_LINK)
+ 		return hci_reject_sco_sync(hdev, conn, reason);
+ 
+@@ -6898,20 +6913,37 @@ int hci_le_conn_update_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 
+ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ {
++	struct hci_conn *conn = data;
++	struct hci_conn *pa_sync;
++
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	if (!err)
++	if (err == -ECANCELED)
+ 		return;
+ 
++	hci_dev_lock(hdev);
++
+ 	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+ 
+-	if (err == -ECANCELED)
+-		return;
++	if (!hci_conn_valid(hdev, conn))
++		clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
+ 
+-	hci_dev_lock(hdev);
++	if (!err)
++		goto unlock;
+ 
+-	hci_update_passive_scan_sync(hdev);
++	/* Add connection to indicate PA sync error */
++	pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
++				     HCI_ROLE_SLAVE);
++
++	if (IS_ERR(pa_sync))
++		goto unlock;
+ 
++	set_bit(HCI_CONN_PA_SYNC_FAILED, &pa_sync->flags);
++
++	/* Notify iso layer */
++	hci_connect_cfm(pa_sync, bt_status(err));
++
++unlock:
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -6925,9 +6957,23 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+ 	if (!hci_conn_valid(hdev, conn))
+ 		return -ECANCELED;
+ 
++	if (conn->sync_handle != HCI_SYNC_HANDLE_INVALID)
++		return -EINVAL;
++
+ 	if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
+ 		return -EBUSY;
+ 
++	/* Stop scanning if SID has not been set and active scanning is enabled
++	 * so we use passive scanning which will be scanning using the allow
++	 * list programmed to contain only the connection address.
++	 */
++	if (conn->sid == HCI_SID_INVALID &&
++	    hci_dev_test_flag(hdev, HCI_LE_SCAN)) {
++		hci_scan_disable_sync(hdev);
++		hci_dev_set_flag(hdev, HCI_LE_SCAN_INTERRUPTED);
++		hci_discovery_set_state(hdev, DISCOVERY_STOPPED);
++	}
++
+ 	/* Mark HCI_CONN_CREATE_PA_SYNC so hci_update_passive_scan_sync can
+ 	 * program the address in the allow list so PA advertisements can be
+ 	 * received.
+@@ -6936,6 +6982,14 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
+ 
+ 	hci_update_passive_scan_sync(hdev);
+ 
++	/* If the SID has not been set, listen for HCI_EV_LE_EXT_ADV_REPORT to
++	 * update it.
++	 */
++	if (conn->sid == HCI_SID_INVALID)
++		__hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,
++					 HCI_EV_LE_EXT_ADV_REPORT,
++					 conn->conn_timeout, NULL);
++
+ 	memset(&cp, 0, sizeof(cp));
+ 	cp.options = qos->bcast.options;
+ 	cp.sid = conn->sid;
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 2819cda616bce8..5389af86bdae4f 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -941,7 +941,7 @@ static int iso_sock_bind_bc(struct socket *sock, struct sockaddr *addr,
+ 
+ 	iso_pi(sk)->dst_type = sa->iso_bc->bc_bdaddr_type;
+ 
+-	if (sa->iso_bc->bc_sid > 0x0f)
++	if (sa->iso_bc->bc_sid > 0x0f && sa->iso_bc->bc_sid != HCI_SID_INVALID)
+ 		return -EINVAL;
+ 
+ 	iso_pi(sk)->bc_sid = sa->iso_bc->bc_sid;
+@@ -2029,6 +2029,9 @@ static bool iso_match_sid(struct sock *sk, void *data)
+ {
+ 	struct hci_ev_le_pa_sync_established *ev = data;
+ 
++	if (iso_pi(sk)->bc_sid == HCI_SID_INVALID)
++		return true;
++
+ 	return ev->sid == iso_pi(sk)->bc_sid;
+ }
+ 
+@@ -2075,8 +2078,10 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 	if (ev1) {
+ 		sk = iso_get_sock(&hdev->bdaddr, bdaddr, BT_LISTEN,
+ 				  iso_match_sid, ev1);
+-		if (sk && !ev1->status)
++		if (sk && !ev1->status) {
+ 			iso_pi(sk)->sync_handle = le16_to_cpu(ev1->handle);
++			iso_pi(sk)->bc_sid = ev1->sid;
++		}
+ 
+ 		goto done;
+ 	}
+@@ -2203,7 +2208,7 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 
+ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
+-	if (hcon->type != ISO_LINK) {
++	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK) {
+ 		if (hcon->type != LE_LINK)
+ 			return;
+ 
+@@ -2244,7 +2249,7 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ 
+ static void iso_disconn_cfm(struct hci_conn *hcon, __u8 reason)
+ {
+-	if (hcon->type != ISO_LINK)
++	if (hcon->type != CIS_LINK && hcon->type != BIS_LINK)
+ 		return;
+ 
+ 	BT_DBG("hcon %p reason %d", hcon, reason);
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 042d3ac3b4a38e..a5bde5db58efcb 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4870,7 +4870,8 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+ 
+ 	if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
+ 				     SMP_ALLOW_STK)) {
+-		result = L2CAP_CR_LE_AUTHENTICATION;
++		result = pchan->sec_level == BT_SECURITY_MEDIUM ?
++			L2CAP_CR_LE_ENCRYPTION : L2CAP_CR_LE_AUTHENTICATION;
+ 		chan = NULL;
+ 		goto response_unlock;
+ 	}
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 46b22708dfbd2d..d540f7b4f75fbf 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1447,22 +1447,17 @@ static void settings_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ 	send_settings_rsp(cmd->sk, cmd->opcode, match->hdev);
+ 
+-	list_del(&cmd->list);
+-
+ 	if (match->sk == NULL) {
+ 		match->sk = cmd->sk;
+ 		sock_hold(match->sk);
+ 	}
+-
+-	mgmt_pending_free(cmd);
+ }
+ 
+ static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ {
+ 	u8 *status = data;
+ 
+-	mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, *status);
+-	mgmt_pending_remove(cmd);
++	mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, *status);
+ }
+ 
+ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+@@ -1476,8 +1471,6 @@ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ 	if (cmd->cmd_complete) {
+ 		cmd->cmd_complete(cmd, match->mgmt_status);
+-		mgmt_pending_remove(cmd);
+-
+ 		return;
+ 	}
+ 
+@@ -1486,13 +1479,13 @@ static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 
+ static int generic_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
+ {
+-	return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
++	return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
+ 				 cmd->param, cmd->param_len);
+ }
+ 
+ static int addr_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
+ {
+-	return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
++	return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
+ 				 cmd->param, sizeof(struct mgmt_addr_info));
+ }
+ 
+@@ -1532,7 +1525,7 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ 
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		hci_dev_clear_flag(hdev, HCI_LIMITED_DISCOVERABLE);
+ 		goto done;
+ 	}
+@@ -1707,7 +1700,7 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ 
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		goto done;
+ 	}
+ 
+@@ -1943,8 +1936,8 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 			new_settings(hdev, NULL);
+ 		}
+ 
+-		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, cmd_status_rsp,
+-				     &mgmt_err);
++		mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
++				     cmd_status_rsp, &mgmt_err);
+ 		return;
+ 	}
+ 
+@@ -1954,7 +1947,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -2074,12 +2067,12 @@ static void set_le_complete(struct hci_dev *hdev, void *data, int err)
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, cmd_status_rsp,
+-							&status);
++		mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
++				     &status);
+ 		return;
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -2138,7 +2131,7 @@ static void set_mesh_complete(struct hci_dev *hdev, void *data, int err)
+ 	struct sock *sk = cmd->sk;
+ 
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev,
++		mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
+ 				     cmd_status_rsp, &status);
+ 		return;
+ 	}
+@@ -2566,7 +2559,8 @@ static int mgmt_hci_cmd_sync(struct sock *sk, struct hci_dev *hdev,
+ 	struct mgmt_pending_cmd *cmd;
+ 	int err;
+ 
+-	if (len < sizeof(*cp))
++	if (len != (offsetof(struct mgmt_cp_hci_cmd_sync, params) +
++		    le16_to_cpu(cp->params_len)))
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_HCI_CMD_SYNC,
+ 				       MGMT_STATUS_INVALID_PARAMS);
+ 
+@@ -2637,7 +2631,7 @@ static void mgmt_class_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 			  mgmt_status(err), hdev->dev_class, 3);
+ 
+ 	mgmt_pending_free(cmd);
+@@ -3221,7 +3215,8 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
+ static u8 link_to_bdaddr(u8 link_type, u8 addr_type)
+ {
+ 	switch (link_type) {
+-	case ISO_LINK:
++	case CIS_LINK:
++	case BIS_LINK:
+ 	case LE_LINK:
+ 		switch (addr_type) {
+ 		case ADDR_LE_DEV_PUBLIC:
+@@ -3425,7 +3420,7 @@ static int pairing_complete(struct mgmt_pending_cmd *cmd, u8 status)
+ 	bacpy(&rp.addr.bdaddr, &conn->dst);
+ 	rp.addr.type = link_to_bdaddr(conn->type, conn->dst_type);
+ 
+-	err = mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_PAIR_DEVICE,
++	err = mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_PAIR_DEVICE,
+ 				status, &rp, sizeof(rp));
+ 
+ 	/* So we don't get further callbacks for this connection */
+@@ -5106,24 +5101,14 @@ static void mgmt_adv_monitor_added(struct sock *sk, struct hci_dev *hdev,
+ 	mgmt_event(MGMT_EV_ADV_MONITOR_ADDED, hdev, &ev, sizeof(ev), sk);
+ }
+ 
+-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle)
++static void mgmt_adv_monitor_removed(struct sock *sk, struct hci_dev *hdev,
++				     __le16 handle)
+ {
+ 	struct mgmt_ev_adv_monitor_removed ev;
+-	struct mgmt_pending_cmd *cmd;
+-	struct sock *sk_skip = NULL;
+-	struct mgmt_cp_remove_adv_monitor *cp;
+-
+-	cmd = pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev);
+-	if (cmd) {
+-		cp = cmd->param;
+-
+-		if (cp->monitor_handle)
+-			sk_skip = cmd->sk;
+-	}
+ 
+-	ev.monitor_handle = cpu_to_le16(handle);
++	ev.monitor_handle = handle;
+ 
+-	mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk_skip);
++	mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk);
+ }
+ 
+ static int read_adv_mon_features(struct sock *sk, struct hci_dev *hdev,
+@@ -5194,7 +5179,7 @@ static void mgmt_add_adv_patterns_monitor_complete(struct hci_dev *hdev,
+ 		hci_update_passive_scan(hdev);
+ 	}
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 			  mgmt_status(status), &rp, sizeof(rp));
+ 	mgmt_pending_remove(cmd);
+ 
+@@ -5225,8 +5210,7 @@ static int __add_adv_patterns_monitor(struct sock *sk, struct hci_dev *hdev,
+ 
+ 	if (pending_find(MGMT_OP_SET_LE, hdev) ||
+ 	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
+-	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev) ||
+-	    pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) {
++	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
+ 		status = MGMT_STATUS_BUSY;
+ 		goto unlock;
+ 	}
+@@ -5396,8 +5380,7 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ 	struct mgmt_pending_cmd *cmd = data;
+ 	struct mgmt_cp_remove_adv_monitor *cp;
+ 
+-	if (status == -ECANCELED ||
+-	    cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
++	if (status == -ECANCELED)
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+@@ -5406,12 +5389,14 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ 
+ 	rp.monitor_handle = cp->monitor_handle;
+ 
+-	if (!status)
++	if (!status) {
++		mgmt_adv_monitor_removed(cmd->sk, hdev, cp->monitor_handle);
+ 		hci_update_passive_scan(hdev);
++	}
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 			  mgmt_status(status), &rp, sizeof(rp));
+-	mgmt_pending_remove(cmd);
++	mgmt_pending_free(cmd);
+ 
+ 	hci_dev_unlock(hdev);
+ 	bt_dev_dbg(hdev, "remove monitor %d complete, status %d",
+@@ -5421,10 +5406,6 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd = data;
+-
+-	if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
+-		return -ECANCELED;
+-
+ 	struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
+ 	u16 handle = __le16_to_cpu(cp->monitor_handle);
+ 
+@@ -5443,14 +5424,13 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	if (pending_find(MGMT_OP_SET_LE, hdev) ||
+-	    pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev) ||
+ 	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
+ 	    pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
+ 		status = MGMT_STATUS_BUSY;
+ 		goto unlock;
+ 	}
+ 
+-	cmd = mgmt_pending_add(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
++	cmd = mgmt_pending_new(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
+ 	if (!cmd) {
+ 		status = MGMT_STATUS_NO_RESOURCES;
+ 		goto unlock;
+@@ -5460,7 +5440,7 @@ static int remove_adv_monitor(struct sock *sk, struct hci_dev *hdev,
+ 				  mgmt_remove_adv_monitor_complete);
+ 
+ 	if (err) {
+-		mgmt_pending_remove(cmd);
++		mgmt_pending_free(cmd);
+ 
+ 		if (err == -ENOMEM)
+ 			status = MGMT_STATUS_NO_RESOURCES;
+@@ -5790,7 +5770,7 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 	    cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
+ 		return;
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+ 	mgmt_pending_remove(cmd);
+ 
+@@ -6011,7 +5991,7 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_dbg(hdev, "err %d", err);
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
+ 			  cmd->param, 1);
+ 	mgmt_pending_remove(cmd);
+ 
+@@ -6236,7 +6216,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	u8 status = mgmt_status(err);
+ 
+ 	if (status) {
+-		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev,
++		mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
+ 				     cmd_status_rsp, &status);
+ 		return;
+ 	}
+@@ -6246,7 +6226,7 @@ static void set_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	else
+ 		hci_dev_clear_flag(hdev, HCI_ADVERTISING);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, settings_rsp,
++	mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
+ 			     &match);
+ 
+ 	new_settings(hdev, match.sk);
+@@ -6590,7 +6570,7 @@ static void set_bredr_complete(struct hci_dev *hdev, void *data, int err)
+ 		 */
+ 		hci_dev_clear_flag(hdev, HCI_BREDR_ENABLED);
+ 
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 	} else {
+ 		send_settings_rsp(cmd->sk, MGMT_OP_SET_BREDR, hdev);
+ 		new_settings(hdev, cmd->sk);
+@@ -6727,7 +6707,7 @@ static void set_secure_conn_complete(struct hci_dev *hdev, void *data, int err)
+ 	if (err) {
+ 		u8 mgmt_err = mgmt_status(err);
+ 
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
+ 		goto done;
+ 	}
+ 
+@@ -7174,7 +7154,7 @@ static void get_conn_info_complete(struct hci_dev *hdev, void *data, int err)
+ 		rp.max_tx_power = HCI_TX_POWER_INVALID;
+ 	}
+ 
+-	mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_GET_CONN_INFO, status,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_GET_CONN_INFO, status,
+ 			  &rp, sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -7334,7 +7314,7 @@ static void get_clock_info_complete(struct hci_dev *hdev, void *data, int err)
+ 	}
+ 
+ complete:
+-	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, &rp,
++	mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, &rp,
+ 			  sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -8584,10 +8564,10 @@ static void add_advertising_complete(struct hci_dev *hdev, void *data, int err)
+ 	rp.instance = cp->instance;
+ 
+ 	if (err)
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	else
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  mgmt_status(err), &rp, sizeof(rp));
+ 
+ 	add_adv_complete(hdev, cmd->sk, cp->instance, err);
+@@ -8775,10 +8755,10 @@ static void add_ext_adv_params_complete(struct hci_dev *hdev, void *data,
+ 
+ 		hci_remove_adv_instance(hdev, cp->instance);
+ 
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	} else {
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  mgmt_status(err), &rp, sizeof(rp));
+ 	}
+ 
+@@ -8925,10 +8905,10 @@ static void add_ext_adv_data_complete(struct hci_dev *hdev, void *data, int err)
+ 	rp.instance = cp->instance;
+ 
+ 	if (err)
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	else
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  mgmt_status(err), &rp, sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -9087,10 +9067,10 @@ static void remove_advertising_complete(struct hci_dev *hdev, void *data,
+ 	rp.instance = cp->instance;
+ 
+ 	if (err)
+-		mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				mgmt_status(err));
+ 	else
+-		mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
++		mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
+ 				  MGMT_STATUS_SUCCESS, &rp, sizeof(rp));
+ 
+ 	mgmt_pending_free(cmd);
+@@ -9362,7 +9342,7 @@ void mgmt_index_removed(struct hci_dev *hdev)
+ 	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
+ 		return;
+ 
+-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
++	mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
+ 		mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
+@@ -9400,7 +9380,8 @@ void mgmt_power_on(struct hci_dev *hdev, int err)
+ 		hci_update_passive_scan(hdev);
+ 	}
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
++			     &match);
+ 
+ 	new_settings(hdev, match.sk);
+ 
+@@ -9415,7 +9396,8 @@ void __mgmt_power_off(struct hci_dev *hdev)
+ 	struct cmd_lookup match = { NULL, hdev };
+ 	u8 zero_cod[] = { 0, 0, 0 };
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
++			     &match);
+ 
+ 	/* If the power off is because of hdev unregistration let
+ 	 * use the appropriate INVALID_INDEX status. Otherwise use
+@@ -9429,7 +9411,7 @@ void __mgmt_power_off(struct hci_dev *hdev)
+ 	else
+ 		match.mgmt_status = MGMT_STATUS_NOT_POWERED;
+ 
+-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
++	mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
+ 
+ 	if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
+ 		mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
+@@ -9670,7 +9652,6 @@ static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data)
+ 	device_unpaired(hdev, &cp->addr.bdaddr, cp->addr.type, cmd->sk);
+ 
+ 	cmd->cmd_complete(cmd, 0);
+-	mgmt_pending_remove(cmd);
+ }
+ 
+ bool mgmt_powering_down(struct hci_dev *hdev)
+@@ -9726,8 +9707,8 @@ void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
+ 	struct mgmt_cp_disconnect *cp;
+ 	struct mgmt_pending_cmd *cmd;
+ 
+-	mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
+-			     hdev);
++	mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, true,
++			     unpair_device_rsp, hdev);
+ 
+ 	cmd = pending_find(MGMT_OP_DISCONNECT, hdev);
+ 	if (!cmd)
+@@ -9920,7 +9901,7 @@ void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)
+ 
+ 	if (status) {
+ 		u8 mgmt_err = mgmt_status(status);
+-		mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev,
++		mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
+ 				     cmd_status_rsp, &mgmt_err);
+ 		return;
+ 	}
+@@ -9930,8 +9911,8 @@ void mgmt_auth_enable_complete(struct hci_dev *hdev, u8 status)
+ 	else
+ 		changed = hci_dev_test_and_clear_flag(hdev, HCI_LINK_SECURITY);
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, settings_rsp,
+-			     &match);
++	mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
++			     settings_rsp, &match);
+ 
+ 	if (changed)
+ 		new_settings(hdev, match.sk);
+@@ -9955,9 +9936,12 @@ void mgmt_set_class_of_dev_complete(struct hci_dev *hdev, u8 *dev_class,
+ {
+ 	struct cmd_lookup match = { NULL, hdev, mgmt_status(status) };
+ 
+-	mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, sk_lookup, &match);
+-	mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, sk_lookup, &match);
+-	mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, sk_lookup, &match);
++	mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, false, sk_lookup,
++			     &match);
++	mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, false, sk_lookup,
++			     &match);
++	mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, false, sk_lookup,
++			     &match);
+ 
+ 	if (!status) {
+ 		mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev, dev_class,
+diff --git a/net/bluetooth/mgmt_util.c b/net/bluetooth/mgmt_util.c
+index e5ff65e424b5b4..a88a07da394734 100644
+--- a/net/bluetooth/mgmt_util.c
++++ b/net/bluetooth/mgmt_util.c
+@@ -217,30 +217,47 @@ int mgmt_cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status,
+ struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
+ 					   struct hci_dev *hdev)
+ {
+-	struct mgmt_pending_cmd *cmd;
++	struct mgmt_pending_cmd *cmd, *tmp;
++
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 
+-	list_for_each_entry(cmd, &hdev->mgmt_pending, list) {
++	list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
+ 		if (hci_sock_get_channel(cmd->sk) != channel)
+ 			continue;
+-		if (cmd->opcode == opcode)
++
++		if (cmd->opcode == opcode) {
++			mutex_unlock(&hdev->mgmt_pending_lock);
+ 			return cmd;
++		}
+ 	}
+ 
++	mutex_unlock(&hdev->mgmt_pending_lock);
++
+ 	return NULL;
+ }
+ 
+-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
++void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
+ 			  void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
+ 			  void *data)
+ {
+ 	struct mgmt_pending_cmd *cmd, *tmp;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
++
+ 	list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
+ 		if (opcode > 0 && cmd->opcode != opcode)
+ 			continue;
+ 
++		if (remove)
++			list_del(&cmd->list);
++
+ 		cb(cmd, data);
++
++		if (remove)
++			mgmt_pending_free(cmd);
+ 	}
++
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ }
+ 
+ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+@@ -254,7 +271,7 @@ struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
+ 		return NULL;
+ 
+ 	cmd->opcode = opcode;
+-	cmd->index = hdev->id;
++	cmd->hdev = hdev;
+ 
+ 	cmd->param = kmemdup(data, len, GFP_KERNEL);
+ 	if (!cmd->param) {
+@@ -280,7 +297,9 @@ struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
+ 	if (!cmd)
+ 		return NULL;
+ 
++	mutex_lock(&hdev->mgmt_pending_lock);
+ 	list_add_tail(&cmd->list, &hdev->mgmt_pending);
++	mutex_unlock(&hdev->mgmt_pending_lock);
+ 
+ 	return cmd;
+ }
+@@ -294,7 +313,10 @@ void mgmt_pending_free(struct mgmt_pending_cmd *cmd)
+ 
+ void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
+ {
++	mutex_lock(&cmd->hdev->mgmt_pending_lock);
+ 	list_del(&cmd->list);
++	mutex_unlock(&cmd->hdev->mgmt_pending_lock);
++
+ 	mgmt_pending_free(cmd);
+ }
+ 
+@@ -304,7 +326,7 @@ void mgmt_mesh_foreach(struct hci_dev *hdev,
+ {
+ 	struct mgmt_mesh_tx *mesh_tx, *tmp;
+ 
+-	list_for_each_entry_safe(mesh_tx, tmp, &hdev->mgmt_pending, list) {
++	list_for_each_entry_safe(mesh_tx, tmp, &hdev->mesh_pending, list) {
+ 		if (!sk || mesh_tx->sk == sk)
+ 			cb(mesh_tx, data);
+ 	}
+diff --git a/net/bluetooth/mgmt_util.h b/net/bluetooth/mgmt_util.h
+index f2ba994ab1d847..024e51dd693756 100644
+--- a/net/bluetooth/mgmt_util.h
++++ b/net/bluetooth/mgmt_util.h
+@@ -33,7 +33,7 @@ struct mgmt_mesh_tx {
+ struct mgmt_pending_cmd {
+ 	struct list_head list;
+ 	u16 opcode;
+-	int index;
++	struct hci_dev *hdev;
+ 	void *param;
+ 	size_t param_len;
+ 	struct sock *sk;
+@@ -54,7 +54,7 @@ int mgmt_cmd_complete(struct sock *sk, u16 index, u16 cmd, u8 status,
+ 
+ struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
+ 					   struct hci_dev *hdev);
+-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
++void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
+ 			  void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
+ 			  void *data);
+ struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
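
The mgmt_util.c hunks above move all pending-command list handling under
hdev->mgmt_pending_lock and teach mgmt_pending_foreach() to optionally
unlink and free each entry while walking the list. A minimal userspace
sketch of that remove-while-iterating pattern, using pthreads and
illustrative names rather than the kernel API:

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct cmd {
		struct cmd *next;
		int opcode;
	};

	static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;
	static struct cmd *pending;	/* list head */

	static void pending_foreach(int opcode, int del,
				    void (*cb)(struct cmd *, void *),
				    void *data)
	{
		struct cmd **pp = &pending, *c;

		pthread_mutex_lock(&pending_lock);
		while ((c = *pp)) {
			if (opcode && c->opcode != opcode) {
				pp = &c->next;
				continue;
			}
			if (del)
				*pp = c->next;	/* unlink before the callback */
			else
				pp = &c->next;
			cb(c, data);
			if (del)
				free(c);	/* safe: already off the list */
		}
		pthread_mutex_unlock(&pending_lock);
	}

	static void complete(struct cmd *c, void *data)
	{
		(void)data;
		printf("completing opcode %d\n", c->opcode);
	}

	int main(void)
	{
		for (int i = 1; i <= 3; i++) {
			struct cmd *c = malloc(sizeof(*c));

			c->opcode = i;
			c->next = pending;
			pending = c;
		}
		pending_foreach(0, 1, complete, NULL);	/* drain everything */
		return 0;
	}

The kernel walks with list_for_each_entry_safe() for the same reason:
unlinking or freeing the current node invalidates it, so the iterator
must already hold the next pointer before the callback runs.
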
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 816bb0fde718ed..6482de4d875092 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -60,19 +60,19 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ 		struct ip_fraglist_iter iter;
+ 		struct sk_buff *frag;
+ 
+-		if (first_len - hlen > mtu ||
+-		    skb_headroom(skb) < ll_rs)
++		if (first_len - hlen > mtu)
+ 			goto blackhole;
+ 
+-		if (skb_cloned(skb))
++		if (skb_cloned(skb) ||
++		    skb_headroom(skb) < ll_rs)
+ 			goto slow_path;
+ 
+ 		skb_walk_frags(skb, frag) {
+-			if (frag->len > mtu ||
+-			    skb_headroom(frag) < hlen + ll_rs)
++			if (frag->len > mtu)
+ 				goto blackhole;
+ 
+-			if (skb_shared(frag))
++			if (skb_shared(frag) ||
++			    skb_headroom(frag) < hlen + ll_rs)
+ 				goto slow_path;
+ 		}
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 0d891634c69270..2b20aadaf9268d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -10393,7 +10393,7 @@ static void dev_index_release(struct net *net, int ifindex)
+ static bool from_cleanup_net(void)
+ {
+ #ifdef CONFIG_NET_NS
+-	return current == cleanup_net_task;
++	return current == READ_ONCE(cleanup_net_task);
+ #else
+ 	return false;
+ #endif
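
The net/core/dev.c and net_namespace.c pair above wraps the lockless
cleanup_net_task accesses in READ_ONCE()/WRITE_ONCE() so the compiler
can neither tear the pointer nor re-load it. The C11 analogue is a
relaxed atomic load/store; a small sketch (illustrative, not kernel
code):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct task { int pid; };

	/* Written by the cleanup worker, read locklessly elsewhere. */
	static _Atomic(struct task *) cleanup_task;

	static void cleanup_work(struct task *me)
	{
		atomic_store_explicit(&cleanup_task, me,
				      memory_order_relaxed);
		/* ... tear the namespaces down ... */
		atomic_store_explicit(&cleanup_task, NULL,
				      memory_order_relaxed);
	}

	static bool from_cleanup(struct task *current)
	{
		/* single atomic read: no tearing, no repeated loads */
		return current == atomic_load_explicit(&cleanup_task,
						       memory_order_relaxed);
	}

	int main(void)
	{
		struct task t = { .pid = 1 };

		cleanup_work(&t);
		printf("%d\n", from_cleanup(&t));	/* prints 0 */
		return 0;
	}
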
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 577a4504e26fa0..357d26b76c22d9 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1968,10 +1968,11 @@ BPF_CALL_5(bpf_l4_csum_replace, struct sk_buff *, skb, u32, offset,
+ 	bool is_pseudo = flags & BPF_F_PSEUDO_HDR;
+ 	bool is_mmzero = flags & BPF_F_MARK_MANGLED_0;
+ 	bool do_mforce = flags & BPF_F_MARK_ENFORCE;
++	bool is_ipv6   = flags & BPF_F_IPV6;
+ 	__sum16 *ptr;
+ 
+ 	if (unlikely(flags & ~(BPF_F_MARK_MANGLED_0 | BPF_F_MARK_ENFORCE |
+-			       BPF_F_PSEUDO_HDR | BPF_F_HDR_FIELD_MASK)))
++			       BPF_F_PSEUDO_HDR | BPF_F_HDR_FIELD_MASK | BPF_F_IPV6)))
+ 		return -EINVAL;
+ 	if (unlikely(offset > 0xffff || offset & 1))
+ 		return -EFAULT;
+@@ -1987,7 +1988,7 @@ BPF_CALL_5(bpf_l4_csum_replace, struct sk_buff *, skb, u32, offset,
+ 		if (unlikely(from != 0))
+ 			return -EINVAL;
+ 
+-		inet_proto_csum_replace_by_diff(ptr, skb, to, is_pseudo);
++		inet_proto_csum_replace_by_diff(ptr, skb, to, is_pseudo, is_ipv6);
+ 		break;
+ 	case 2:
+ 		inet_proto_csum_replace2(ptr, skb, from, to, is_pseudo);
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index b0dfdf791ece5a..599f6a89ae581e 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -600,7 +600,7 @@ static void cleanup_net(struct work_struct *work)
+ 	LIST_HEAD(net_exit_list);
+ 	LIST_HEAD(dev_kill_list);
+ 
+-	cleanup_net_task = current;
++	WRITE_ONCE(cleanup_net_task, current);
+ 
+ 	/* Atomically snapshot the list of namespaces to cleanup */
+ 	net_kill_list = llist_del_all(&cleanup_list);
+@@ -676,7 +676,7 @@ static void cleanup_net(struct work_struct *work)
+ 		put_user_ns(net->user_ns);
+ 		net_passive_dec(net);
+ 	}
+-	cleanup_net_task = NULL;
++	WRITE_ONCE(cleanup_net_task, NULL);
+ }
+ 
+ /**
+diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h
+index 7eadb8393e002f..cd95394399b40c 100644
+--- a/net/core/netmem_priv.h
++++ b/net/core/netmem_priv.h
+@@ -5,7 +5,7 @@
+ 
+ static inline unsigned long netmem_get_pp_magic(netmem_ref netmem)
+ {
+-	return __netmem_clear_lsb(netmem)->pp_magic;
++	return __netmem_clear_lsb(netmem)->pp_magic & ~PP_DMA_INDEX_MASK;
+ }
+ 
+ static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
+@@ -15,9 +15,16 @@ static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
+ 
+ static inline void netmem_clear_pp_magic(netmem_ref netmem)
+ {
++	WARN_ON_ONCE(__netmem_clear_lsb(netmem)->pp_magic & PP_DMA_INDEX_MASK);
++
+ 	__netmem_clear_lsb(netmem)->pp_magic = 0;
+ }
+ 
++static inline bool netmem_is_pp(netmem_ref netmem)
++{
++	return (netmem_get_pp_magic(netmem) & PP_MAGIC_MASK) == PP_SIGNATURE;
++}
++
+ static inline void netmem_set_pp(netmem_ref netmem, struct page_pool *pool)
+ {
+ 	__netmem_clear_lsb(netmem)->pp = pool;
+@@ -28,4 +35,28 @@ static inline void netmem_set_dma_addr(netmem_ref netmem,
+ {
+ 	__netmem_clear_lsb(netmem)->dma_addr = dma_addr;
+ }
++
++static inline unsigned long netmem_get_dma_index(netmem_ref netmem)
++{
++	unsigned long magic;
++
++	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
++		return 0;
++
++	magic = __netmem_clear_lsb(netmem)->pp_magic;
++
++	return (magic & PP_DMA_INDEX_MASK) >> PP_DMA_INDEX_SHIFT;
++}
++
++static inline void netmem_set_dma_index(netmem_ref netmem,
++					unsigned long id)
++{
++	unsigned long magic;
++
++	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
++		return;
++
++	magic = netmem_get_pp_magic(netmem) | (id << PP_DMA_INDEX_SHIFT);
++	__netmem_clear_lsb(netmem)->pp_magic = magic;
++}
+ #endif
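
netmem_priv.h above starts stashing a small DMA-tracking index in the
upper bits of pp_magic, next to the page_pool signature, which is why
netmem_get_pp_magic() now masks those bits off. The mask/shift
arithmetic in miniature (the constants here are invented for
illustration; the real PP_DMA_INDEX_* values live in the page_pool
headers):

	#include <assert.h>
	#include <stdio.h>

	#define SIGNATURE	0x40UL	/* stands in for PP_SIGNATURE */
	#define IDX_SHIFT	8	/* illustrative placement */
	#define IDX_BITS	16
	#define IDX_MASK	(((1UL << IDX_BITS) - 1) << IDX_SHIFT)

	static unsigned long magic_get_index(unsigned long magic)
	{
		return (magic & IDX_MASK) >> IDX_SHIFT;
	}

	static unsigned long magic_set_index(unsigned long magic,
					     unsigned long id)
	{
		return (magic & ~IDX_MASK) | (id << IDX_SHIFT);
	}

	int main(void)
	{
		unsigned long magic = SIGNATURE;

		magic = magic_set_index(magic, 42);
		assert(magic_get_index(magic) == 42);
		/* signature readers must mask the index bits first */
		assert((magic & ~IDX_MASK) == SIGNATURE);
		printf("index %lu\n", magic_get_index(magic));
		return 0;
	}
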
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index 7745ad924ae2d8..2d9c51f480fb5f 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -153,9 +153,9 @@ u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
+ EXPORT_SYMBOL(page_pool_ethtool_stats_get);
+ 
+ #else
+-#define alloc_stat_inc(pool, __stat)
+-#define recycle_stat_inc(pool, __stat)
+-#define recycle_stat_add(pool, __stat, val)
++#define alloc_stat_inc(...)	do { } while (0)
++#define recycle_stat_inc(...)	do { } while (0)
++#define recycle_stat_add(...)	do { } while (0)
+ #endif
+ 
+ static bool page_pool_producer_lock(struct page_pool *pool)
+@@ -276,8 +276,7 @@ static int page_pool_init(struct page_pool *pool,
+ 	/* Driver calling page_pool_create() also call page_pool_destroy() */
+ 	refcount_set(&pool->user_cnt, 1);
+ 
+-	if (pool->dma_map)
+-		get_device(pool->p.dev);
++	xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC1);
+ 
+ 	if (pool->slow.flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) {
+ 		netdev_assert_locked(pool->slow.netdev);
+@@ -320,9 +319,7 @@ static int page_pool_init(struct page_pool *pool,
+ static void page_pool_uninit(struct page_pool *pool)
+ {
+ 	ptr_ring_cleanup(&pool->ring, NULL);
+-
+-	if (pool->dma_map)
+-		put_device(pool->p.dev);
++	xa_destroy(&pool->dma_mapped);
+ 
+ #ifdef CONFIG_PAGE_POOL_STATS
+ 	if (!pool->system)
+@@ -463,13 +460,21 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
+ 			      netmem_ref netmem,
+ 			      u32 dma_sync_size)
+ {
+-	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
+-		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
++	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev)) {
++		rcu_read_lock();
++		/* re-check under rcu_read_lock() to sync with page_pool_scrub() */
++		if (pool->dma_sync)
++			__page_pool_dma_sync_for_device(pool, netmem,
++							dma_sync_size);
++		rcu_read_unlock();
++	}
+ }
+ 
+-static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
++static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t gfp)
+ {
+ 	dma_addr_t dma;
++	int err;
++	u32 id;
+ 
+ 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
+ 	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
+@@ -483,15 +488,30 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+ 	if (dma_mapping_error(pool->p.dev, dma))
+ 		return false;
+ 
+-	if (page_pool_set_dma_addr_netmem(netmem, dma))
++	if (page_pool_set_dma_addr_netmem(netmem, dma)) {
++		WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
+ 		goto unmap_failed;
++	}
++
++	if (in_softirq())
++		err = xa_alloc(&pool->dma_mapped, &id, netmem_to_page(netmem),
++			       PP_DMA_INDEX_LIMIT, gfp);
++	else
++		err = xa_alloc_bh(&pool->dma_mapped, &id, netmem_to_page(netmem),
++				  PP_DMA_INDEX_LIMIT, gfp);
++	if (err) {
++		WARN_ONCE(err != -ENOMEM, "couldn't track DMA mapping, please report to netdev@");
++		goto unset_failed;
++	}
+ 
++	netmem_set_dma_index(netmem, id);
+ 	page_pool_dma_sync_for_device(pool, netmem, pool->p.max_len);
+ 
+ 	return true;
+ 
++unset_failed:
++	page_pool_set_dma_addr_netmem(netmem, 0);
+ unmap_failed:
+-	WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
+ 	dma_unmap_page_attrs(pool->p.dev, dma,
+ 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+@@ -508,7 +528,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
+ 	if (unlikely(!page))
+ 		return NULL;
+ 
+-	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page)))) {
++	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page), gfp))) {
+ 		put_page(page);
+ 		return NULL;
+ 	}
+@@ -554,7 +574,7 @@ static noinline netmem_ref __page_pool_alloc_pages_slow(struct page_pool *pool,
+ 	 */
+ 	for (i = 0; i < nr_pages; i++) {
+ 		netmem = pool->alloc.cache[i];
+-		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem))) {
++		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem, gfp))) {
+ 			put_page(netmem_to_page(netmem));
+ 			continue;
+ 		}
+@@ -656,6 +676,8 @@ void page_pool_clear_pp_info(netmem_ref netmem)
+ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ 							 netmem_ref netmem)
+ {
++	struct page *old, *page = netmem_to_page(netmem);
++	unsigned long id;
+ 	dma_addr_t dma;
+ 
+ 	if (!pool->dma_map)
+@@ -664,6 +686,17 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ 		 */
+ 		return;
+ 
++	id = netmem_get_dma_index(netmem);
++	if (!id)
++		return;
++
++	if (in_softirq())
++		old = xa_cmpxchg(&pool->dma_mapped, id, page, NULL, 0);
++	else
++		old = xa_cmpxchg_bh(&pool->dma_mapped, id, page, NULL, 0);
++	if (old != page)
++		return;
++
+ 	dma = page_pool_get_dma_addr_netmem(netmem);
+ 
+ 	/* When page is unmapped, it cannot be returned to our pool */
+@@ -671,6 +704,7 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ 	page_pool_set_dma_addr_netmem(netmem, 0);
++	netmem_set_dma_index(netmem, 0);
+ }
+ 
+ /* Disconnects a page (from a page_pool).  API users can have a need
+@@ -707,19 +741,16 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
+ 
+ static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
+ {
+-	int ret;
+-	/* BH protection not needed if current is softirq */
+-	if (in_softirq())
+-		ret = ptr_ring_produce(&pool->ring, (__force void *)netmem);
+-	else
+-		ret = ptr_ring_produce_bh(&pool->ring, (__force void *)netmem);
++	bool in_softirq, ret;
+ 
+-	if (!ret) {
++	/* BH protection not needed if current is softirq */
++	in_softirq = page_pool_producer_lock(pool);
++	ret = !__ptr_ring_produce(&pool->ring, (__force void *)netmem);
++	if (ret)
+ 		recycle_stat_inc(pool, ring);
+-		return true;
+-	}
++	page_pool_producer_unlock(pool, in_softirq);
+ 
+-	return false;
++	return ret;
+ }
+ 
+ /* Only allow direct recycling in special circumstances, into the
+@@ -1080,8 +1111,29 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
+ 
+ static void page_pool_scrub(struct page_pool *pool)
+ {
++	unsigned long id;
++	void *ptr;
++
+ 	page_pool_empty_alloc_cache_once(pool);
+-	pool->destroy_cnt++;
++	if (!pool->destroy_cnt++ && pool->dma_map) {
++		if (pool->dma_sync) {
++			/* Disable page_pool_dma_sync_for_device() */
++			pool->dma_sync = false;
++
++			/* Make sure all concurrent returns that may see the old
++			 * value of dma_sync (and thus perform a sync) have
++			 * finished before doing the unmapping below. Skip the
++			 * wait if the device doesn't actually need syncing, or
++			 * if there are no outstanding mapped pages.
++			 */
++			if (dma_dev_need_sync(pool->p.dev) &&
++			    !xa_empty(&pool->dma_mapped))
++				synchronize_net();
++		}
++
++		xa_for_each(&pool->dma_mapped, id, ptr)
++			__page_pool_release_page_dma(pool, page_to_netmem(ptr));
++	}
+ 
+ 	/* No more consumers should exist, but producers could still
+ 	 * be in-flight.
+@@ -1091,10 +1143,14 @@ static void page_pool_scrub(struct page_pool *pool)
+ 
+ static int page_pool_release(struct page_pool *pool)
+ {
++	bool in_softirq;
+ 	int inflight;
+ 
+ 	page_pool_scrub(pool);
+ 	inflight = page_pool_inflight(pool, true);
++	/* Acquire producer lock to make sure producers have exited. */
++	in_softirq = page_pool_producer_lock(pool);
++	page_pool_producer_unlock(pool, in_softirq);
+ 	if (!inflight)
+ 		__page_pool_destroy(pool);
+ 
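
Note what the page_pool_release() hunk above does after scrubbing: it
takes and immediately drops the producer lock. The empty critical
section is a quiescence barrier, guaranteeing any producer still inside
page_pool_recycle_in_ring() has left before the pool may be destroyed.
The idiom reduced to a pthreads sketch:

	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t producer_lock = PTHREAD_MUTEX_INITIALIZER;

	static void producer(void)
	{
		pthread_mutex_lock(&producer_lock);
		/* ... enqueue into the ring ... */
		pthread_mutex_unlock(&producer_lock);
	}

	static void release(void)
	{
		/* Empty critical section: once we own the lock, every
		 * producer that entered before us has already left.
		 */
		pthread_mutex_lock(&producer_lock);
		pthread_mutex_unlock(&producer_lock);
		puts("no producers in flight; safe to destroy");
	}

	int main(void)
	{
		producer();
		release();
		return 0;
	}

This only waits for producers that entered before the barrier, which is
enough here because a zero inflight count means no new ones can start.
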
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index c5a7f41982a575..fc6815ad78266f 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3681,7 +3681,7 @@ struct net_device *rtnl_create_link(struct net *net, const char *ifname,
+ 	if (tb[IFLA_LINKMODE])
+ 		dev->link_mode = nla_get_u8(tb[IFLA_LINKMODE]);
+ 	if (tb[IFLA_GROUP])
+-		dev_set_group(dev, nla_get_u32(tb[IFLA_GROUP]));
++		netif_set_group(dev, nla_get_u32(tb[IFLA_GROUP]));
+ 	if (tb[IFLA_GSO_MAX_SIZE])
+ 		netif_set_gso_max_size(dev, nla_get_u32(tb[IFLA_GSO_MAX_SIZE]));
+ 	if (tb[IFLA_GSO_MAX_SEGS])
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 6cbf77bc61fce7..74a2d886a35b51 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -893,11 +893,6 @@ static void skb_clone_fraglist(struct sk_buff *skb)
+ 		skb_get(list);
+ }
+ 
+-static bool is_pp_netmem(netmem_ref netmem)
+-{
+-	return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE;
+-}
+-
+ int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
+ 		    unsigned int headroom)
+ {
+@@ -995,14 +990,7 @@ bool napi_pp_put_page(netmem_ref netmem)
+ {
+ 	netmem = netmem_compound_head(netmem);
+ 
+-	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
+-	 * in order to preserve any existing bits, such as bit 0 for the
+-	 * head page of compound page and bit 1 for pfmemalloc page, so
+-	 * mask those bits for freeing side when doing below checking,
+-	 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
+-	 * to avoid recycling the pfmemalloc page.
+-	 */
+-	if (unlikely(!is_pp_netmem(netmem)))
++	if (unlikely(!netmem_is_pp(netmem)))
+ 		return false;
+ 
+ 	page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false);
+@@ -1042,7 +1030,7 @@ static int skb_pp_frag_ref(struct sk_buff *skb)
+ 
+ 	for (i = 0; i < shinfo->nr_frags; i++) {
+ 		head_netmem = netmem_compound_head(shinfo->frags[i].netmem);
+-		if (likely(is_pp_netmem(head_netmem)))
++		if (likely(netmem_is_pp(head_netmem)))
+ 			page_pool_ref_netmem(head_netmem);
+ 		else
+ 			page_ref_inc(netmem_to_page(head_netmem));
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 0ddc4c7188332a..6d689918c2b390 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -530,16 +530,22 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 					u32 off, u32 len,
+ 					struct sk_psock *psock,
+ 					struct sock *sk,
+-					struct sk_msg *msg)
++					struct sk_msg *msg,
++					bool take_ref)
+ {
+ 	int num_sge, copied;
+ 
++	/* skb_to_sgvec will fail when the total number of fragments in
++	 * frag_list and frags exceeds MAX_MSG_FRAGS. For example, the
++	 * caller may aggregate multiple skbs.
++	 */
+ 	num_sge = skb_to_sgvec(skb, msg->sg.data, off, len);
+ 	if (num_sge < 0) {
+ 		/* skb linearize may fail with ENOMEM, but lets simply try again
+ 		 * later if this happens. Under memory pressure we don't want to
+ 		 * drop the skb. We need to linearize the skb so that the mapping
+ 		 * in skb_to_sgvec can not error.
++		 * Note that skb_linearize requires the skb not to be shared.
+ 		 */
+ 		if (skb_linearize(skb))
+ 			return -EAGAIN;
+@@ -556,7 +562,7 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 	msg->sg.start = 0;
+ 	msg->sg.size = copied;
+ 	msg->sg.end = num_sge;
+-	msg->skb = skb;
++	msg->skb = take_ref ? skb_get(skb) : skb;
+ 
+ 	sk_psock_queue_msg(psock, msg);
+ 	sk_psock_data_ready(sk, psock);
+@@ -564,7 +570,7 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ }
+ 
+ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
+-				     u32 off, u32 len);
++				     u32 off, u32 len, bool take_ref);
+ 
+ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+ 				u32 off, u32 len)
+@@ -578,7 +584,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+ 	 * correctly.
+ 	 */
+ 	if (unlikely(skb->sk == sk))
+-		return sk_psock_skb_ingress_self(psock, skb, off, len);
++		return sk_psock_skb_ingress_self(psock, skb, off, len, true);
+ 	msg = sk_psock_create_ingress_msg(sk, skb);
+ 	if (!msg)
+ 		return -EAGAIN;
+@@ -590,7 +596,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+ 	 * into user buffers.
+ 	 */
+ 	skb_set_owner_r(skb, sk);
+-	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg, true);
+ 	if (err < 0)
+ 		kfree(msg);
+ 	return err;
+@@ -601,7 +607,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
+  * because the skb is already accounted for here.
+  */
+ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
+-				     u32 off, u32 len)
++				     u32 off, u32 len, bool take_ref)
+ {
+ 	struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
+ 	struct sock *sk = psock->sk;
+@@ -610,7 +616,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ 	if (unlikely(!msg))
+ 		return -EAGAIN;
+ 	skb_set_owner_r(skb, sk);
+-	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg, take_ref);
+ 	if (err < 0)
+ 		kfree(msg);
+ 	return err;
+@@ -619,18 +625,13 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
+ 			       u32 off, u32 len, bool ingress)
+ {
+-	int err = 0;
+-
+ 	if (!ingress) {
+ 		if (!sock_writeable(psock->sk))
+ 			return -EAGAIN;
+ 		return skb_send_sock(psock->sk, skb, off, len);
+ 	}
+-	skb_get(skb);
+-	err = sk_psock_skb_ingress(psock, skb, off, len);
+-	if (err < 0)
+-		kfree_skb(skb);
+-	return err;
++
++	return sk_psock_skb_ingress(psock, skb, off, len);
+ }
+ 
+ static void sk_psock_skb_state(struct sk_psock *psock,
+@@ -655,12 +656,14 @@ static void sk_psock_backlog(struct work_struct *work)
+ 	bool ingress;
+ 	int ret;
+ 
++	/* Increment the psock refcnt to synchronize with the close(fd) path
++	 * in sock_map_close(), ensuring we wait for backlog thread completion
++	 * before sk_socket is freed. If the refcnt increment fails, it means
++	 * sock_map_close() completed and sk_socket may already be freed.
++	 */
++	if (!sk_psock_get(psock->sk))
++		return;
+ 	mutex_lock(&psock->work_mutex);
+-	if (unlikely(state->len)) {
+-		len = state->len;
+-		off = state->off;
+-	}
+-
+ 	while ((skb = skb_peek(&psock->ingress_skb))) {
+ 		len = skb->len;
+ 		off = 0;
+@@ -670,6 +673,13 @@ static void sk_psock_backlog(struct work_struct *work)
+ 			off = stm->offset;
+ 			len = stm->full_len;
+ 		}
++
++		/* Resume processing from previous partial state */
++		if (unlikely(state->len)) {
++			len = state->len;
++			off = state->off;
++		}
++
+ 		ingress = skb_bpf_ingress(skb);
+ 		skb_bpf_redirect_clear(skb);
+ 		do {
+@@ -697,11 +707,14 @@ static void sk_psock_backlog(struct work_struct *work)
+ 			len -= ret;
+ 		} while (len);
+ 
++		/* The entire skb was sent, clear the saved state */
++		sk_psock_skb_state(psock, state, 0, 0);
+ 		skb = skb_dequeue(&psock->ingress_skb);
+ 		kfree_skb(skb);
+ 	}
+ end:
+ 	mutex_unlock(&psock->work_mutex);
++	sk_psock_put(psock->sk, psock);
+ }
+ 
+ struct sk_psock *sk_psock_init(struct sock *sk, int node)
+@@ -1014,7 +1027,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 				off = stm->offset;
+ 				len = stm->full_len;
+ 			}
+-			err = sk_psock_skb_ingress_self(psock, skb, off, len);
++			err = sk_psock_skb_ingress_self(psock, skb, off, len, false);
+ 		}
+ 		if (err < 0) {
+ 			spin_lock_bh(&psock->ingress_lock);
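
The skmsg.c backlog change takes a psock reference before touching the
socket, so a concurrent close() either waits for the worker or the
worker bails out once the count has already hit zero. sk_psock_get() is
built on the kernel's refcount_inc_not_zero(); a C11 sketch of that
get-only-if-still-live pattern:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int refs = 1;	/* held by the open socket */

	/* Succeed only if the object has not already started dying. */
	static bool get_not_zero(void)
	{
		int old = atomic_load(&refs);

		while (old != 0) {
			/* on failure, old is refreshed automatically */
			if (atomic_compare_exchange_weak(&refs, &old,
							 old + 1))
				return true;
		}
		return false;
	}

	static void put(void)
	{
		if (atomic_fetch_sub(&refs, 1) == 1)
			puts("last reference gone; free the object");
	}

	static void backlog_work(void)
	{
		if (!get_not_zero())
			return;	/* close() won; socket may be gone */
		/* ... process the queued skbs safely ... */
		put();
	}

	int main(void)
	{
		backlog_work();
		put();		/* the close() path drops the initial ref */
		return 0;
	}
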
+diff --git a/net/core/sock.c b/net/core/sock.c
+index e54449c9ab0bad..5034d0fbd4a427 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -3234,16 +3234,16 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
+ {
+ 	struct mem_cgroup *memcg = mem_cgroup_sockets_enabled ? sk->sk_memcg : NULL;
+ 	struct proto *prot = sk->sk_prot;
+-	bool charged = false;
++	bool charged = true;
+ 	long allocated;
+ 
+ 	sk_memory_allocated_add(sk, amt);
+ 	allocated = sk_memory_allocated(sk);
+ 
+ 	if (memcg) {
+-		if (!mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge()))
++		charged = mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge());
++		if (!charged)
+ 			goto suppress_allocation;
+-		charged = true;
+ 	}
+ 
+ 	/* Under limit. */
+@@ -3328,7 +3328,7 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
+ 
+ 	sk_memory_allocated_sub(sk, amt);
+ 
+-	if (charged)
++	if (memcg && charged)
+ 		mem_cgroup_uncharge_skmem(memcg, amt);
+ 
+ 	return 0;
+diff --git a/net/core/utils.c b/net/core/utils.c
+index 27f4cffaae05d9..b8c21a859e27b1 100644
+--- a/net/core/utils.c
++++ b/net/core/utils.c
+@@ -473,11 +473,11 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
+ EXPORT_SYMBOL(inet_proto_csum_replace16);
+ 
+ void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
+-				     __wsum diff, bool pseudohdr)
++				     __wsum diff, bool pseudohdr, bool ipv6)
+ {
+ 	if (skb->ip_summed != CHECKSUM_PARTIAL) {
+ 		csum_replace_by_diff(sum, diff);
+-		if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr)
++		if (skb->ip_summed == CHECKSUM_COMPLETE && pseudohdr && !ipv6)
+ 			skb->csum = ~csum_sub(diff, skb->csum);
+ 	} else if (pseudohdr) {
+ 		*sum = ~csum_fold(csum_add(diff, csum_unfold(*sum)));
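
inet_proto_csum_replace_by_diff() above patches a 16-bit Internet
checksum in place from a precomputed folded difference rather than
recomputing it over the packet; the new ipv6 flag only decides whether
skb->csum gets adjusted too. The arithmetic is the RFC 1624 incremental
update, shown standalone below with a brute-force cross-check:

	#include <stdint.h>
	#include <stdio.h>

	/* fold a 32-bit one's-complement accumulator to 16 bits */
	static uint16_t csum_fold(uint32_t sum)
	{
		while (sum >> 16)
			sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)sum;
	}

	/* RFC 1624 eqn 3: new checksum after one 16-bit word changes,
	 * without touching the rest of the data
	 */
	static uint16_t csum_update(uint16_t check, uint16_t old,
				    uint16_t new)
	{
		uint32_t sum = (uint16_t)~check;

		sum += (uint16_t)~old;
		sum += new;
		return ~csum_fold(sum);
	}

	/* naive full checksum, for verification only */
	static uint16_t csum_full(const uint16_t *w, int n)
	{
		uint32_t sum = 0;

		while (n--)
			sum += *w++;
		return ~csum_fold(sum);
	}

	int main(void)
	{
		uint16_t data[4] = { 0x1234, 0xabcd, 0x0001, 0xff00 };
		uint16_t check = csum_full(data, 4);

		check = csum_update(check, data[2], 0x0042);
		data[2] = 0x0042;
		puts(check == csum_full(data, 4) ? "match" : "MISMATCH");
		return 0;
	}
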
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index f86eedad586a77..0ba73943c6eed8 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -437,8 +437,8 @@ void __xdp_return(netmem_ref netmem, enum xdp_mem_type mem_type,
+ 		netmem = netmem_compound_head(netmem);
+ 		if (napi_direct && xdp_return_frame_no_direct())
+ 			napi_direct = false;
+-		/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
+-		 * as mem->type knows this a page_pool page
++		/* No need to check netmem_is_pp() as mem->type knows this is a
++		 * page_pool page
+ 		 */
+ 		page_pool_put_full_netmem(netmem_get_pp(netmem), netmem,
+ 					  napi_direct);
+diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
+index 8c3c068728e51c..fe75821623a4fc 100644
+--- a/net/dsa/tag_brcm.c
++++ b/net/dsa/tag_brcm.c
+@@ -257,7 +257,7 @@ static struct sk_buff *brcm_leg_tag_rcv(struct sk_buff *skb,
+ 	int source_port;
+ 	u8 *brcm_tag;
+ 
+-	if (unlikely(!pskb_may_pull(skb, BRCM_LEG_PORT_ID)))
++	if (unlikely(!pskb_may_pull(skb, BRCM_LEG_TAG_LEN + VLAN_HLEN)))
+ 		return NULL;
+ 
+ 	brcm_tag = dsa_etype_header_pos_rx(skb);
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 8262cc10f98db7..4b1badeebc741c 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -1001,7 +1001,8 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		    ethtool_get_flow_spec_ring(info.fs.ring_cookie))
+ 			return -EINVAL;
+ 
+-		if (!xa_load(&dev->ethtool->rss_ctx, info.rss_context))
++		if (info.rss_context &&
++		    !xa_load(&dev->ethtool->rss_ctx, info.rss_context))
+ 			return -EINVAL;
+ 	}
+ 
+diff --git a/net/ipv4/netfilter/nft_fib_ipv4.c b/net/ipv4/netfilter/nft_fib_ipv4.c
+index 9082ca17e845cb..7e7c49535e3f56 100644
+--- a/net/ipv4/netfilter/nft_fib_ipv4.c
++++ b/net/ipv4/netfilter/nft_fib_ipv4.c
+@@ -50,7 +50,12 @@ void nft_fib4_eval_type(const struct nft_expr *expr, struct nft_regs *regs,
+ 	else
+ 		addr = iph->saddr;
+ 
+-	*dst = inet_dev_addr_type(nft_net(pkt), dev, addr);
++	if (priv->flags & (NFTA_FIB_F_IIF | NFTA_FIB_F_OIF)) {
++		*dst = inet_dev_addr_type(nft_net(pkt), dev, addr);
++		return;
++	}
++
++	*dst = inet_addr_type_dev_table(nft_net(pkt), pkt->skb->dev, addr);
+ }
+ EXPORT_SYMBOL_GPL(nft_fib4_eval_type);
+ 
+@@ -65,8 +70,8 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	struct flowi4 fl4 = {
+ 		.flowi4_scope = RT_SCOPE_UNIVERSE,
+ 		.flowi4_iif = LOOPBACK_IFINDEX,
++		.flowi4_proto = pkt->tprot,
+ 		.flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
+-		.flowi4_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
+ 	};
+ 	const struct net_device *oif;
+ 	const struct net_device *found;
+@@ -90,6 +95,8 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	else
+ 		oif = NULL;
+ 
++	fl4.flowi4_l3mdev = nft_fib_l3mdev_master_ifindex_rcu(pkt, oif);
++
+ 	iph = skb_header_pointer(pkt->skb, noff, sizeof(_iph), &_iph);
+ 	if (!iph) {
+ 		regs->verdict.code = NFT_BREAK;
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 9a8142ccbabe44..9b295b2878befa 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -332,6 +332,7 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 	bool copy_dtor;
+ 	__sum16 check;
+ 	__be16 newlen;
++	int ret = 0;
+ 
+ 	mss = skb_shinfo(gso_skb)->gso_size;
+ 	if (gso_skb->len <= sizeof(*uh) + mss)
+@@ -360,6 +361,10 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ 		if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
+ 			return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+ 
++		ret = __skb_linearize(gso_skb);
++		if (ret)
++			return ERR_PTR(ret);
++
+ 		 /* Setup csum, as fraglist skips this in udp4_gro_receive. */
+ 		gso_skb->csum_start = skb_transport_header(gso_skb) - gso_skb->head;
+ 		gso_skb->csum_offset = offsetof(struct udphdr, check);
+diff --git a/net/ipv6/ila/ila_common.c b/net/ipv6/ila/ila_common.c
+index 95e9146918cc6f..b8d43ed4689db9 100644
+--- a/net/ipv6/ila/ila_common.c
++++ b/net/ipv6/ila/ila_common.c
+@@ -86,7 +86,7 @@ static void ila_csum_adjust_transport(struct sk_buff *skb,
+ 
+ 			diff = get_csum_diff(ip6h, p);
+ 			inet_proto_csum_replace_by_diff(&th->check, skb,
+-							diff, true);
++							diff, true, true);
+ 		}
+ 		break;
+ 	case NEXTHDR_UDP:
+@@ -97,7 +97,7 @@ static void ila_csum_adjust_transport(struct sk_buff *skb,
+ 			if (uh->check || skb->ip_summed == CHECKSUM_PARTIAL) {
+ 				diff = get_csum_diff(ip6h, p);
+ 				inet_proto_csum_replace_by_diff(&uh->check, skb,
+-								diff, true);
++								diff, true, true);
+ 				if (!uh->check)
+ 					uh->check = CSUM_MANGLED_0;
+ 			}
+@@ -111,7 +111,7 @@ static void ila_csum_adjust_transport(struct sk_buff *skb,
+ 
+ 			diff = get_csum_diff(ip6h, p);
+ 			inet_proto_csum_replace_by_diff(&ih->icmp6_cksum, skb,
+-							diff, true);
++							diff, true, true);
+ 		}
+ 		break;
+ 	}
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 581ce055bf520f..4541836ee3da20 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -164,20 +164,20 @@ int br_ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		struct ip6_fraglist_iter iter;
+ 		struct sk_buff *frag2;
+ 
+-		if (first_len - hlen > mtu ||
+-		    skb_headroom(skb) < (hroom + sizeof(struct frag_hdr)))
++		if (first_len - hlen > mtu)
+ 			goto blackhole;
+ 
+-		if (skb_cloned(skb))
++		if (skb_cloned(skb) ||
++		    skb_headroom(skb) < (hroom + sizeof(struct frag_hdr)))
+ 			goto slow_path;
+ 
+ 		skb_walk_frags(skb, frag2) {
+-			if (frag2->len > mtu ||
+-			    skb_headroom(frag2) < (hlen + hroom + sizeof(struct frag_hdr)))
++			if (frag2->len > mtu)
+ 				goto blackhole;
+ 
+ 			/* Partially cloned skb? */
+-			if (skb_shared(frag2))
++			if (skb_shared(frag2) ||
++			    skb_headroom(frag2) < (hlen + hroom + sizeof(struct frag_hdr)))
+ 				goto slow_path;
+ 		}
+ 
+diff --git a/net/ipv6/netfilter/nft_fib_ipv6.c b/net/ipv6/netfilter/nft_fib_ipv6.c
+index 7fd9d7b21cd42d..421036a3605b46 100644
+--- a/net/ipv6/netfilter/nft_fib_ipv6.c
++++ b/net/ipv6/netfilter/nft_fib_ipv6.c
+@@ -50,6 +50,7 @@ static int nft_fib6_flowi_init(struct flowi6 *fl6, const struct nft_fib *priv,
+ 		fl6->flowi6_mark = pkt->skb->mark;
+ 
+ 	fl6->flowlabel = (*(__be32 *)iph) & IPV6_FLOWINFO_MASK;
++	fl6->flowi6_l3mdev = nft_fib_l3mdev_master_ifindex_rcu(pkt, dev);
+ 
+ 	return lookup_flags;
+ }
+@@ -73,8 +74,6 @@ static u32 __nft_fib6_eval_type(const struct nft_fib *priv,
+ 	else if (priv->flags & NFTA_FIB_F_OIF)
+ 		dev = nft_out(pkt);
+ 
+-	fl6.flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev);
+-
+ 	nft_fib6_flowi_init(&fl6, priv, pkt, dev, iph);
+ 
+ 	if (dev && nf_ipv6_chk_addr(nft_net(pkt), &fl6.daddr, dev, true))
+@@ -158,6 +157,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ {
+ 	const struct nft_fib *priv = nft_expr_priv(expr);
+ 	int noff = skb_network_offset(pkt->skb);
++	const struct net_device *found = NULL;
+ 	const struct net_device *oif = NULL;
+ 	u32 *dest = &regs->data[priv->dreg];
+ 	struct ipv6hdr *iph, _iph;
+@@ -165,7 +165,6 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 		.flowi6_iif = LOOPBACK_IFINDEX,
+ 		.flowi6_proto = pkt->tprot,
+ 		.flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
+-		.flowi6_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
+ 	};
+ 	struct rt6_info *rt;
+ 	int lookup_flags;
+@@ -203,11 +202,15 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	if (rt->rt6i_flags & (RTF_REJECT | RTF_ANYCAST | RTF_LOCAL))
+ 		goto put_rt_err;
+ 
+-	if (oif && oif != rt->rt6i_idev->dev &&
+-	    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) != oif->ifindex)
+-		goto put_rt_err;
++	if (!oif) {
++		found = rt->rt6i_idev->dev;
++	} else {
++		if (oif == rt->rt6i_idev->dev ||
++		    l3mdev_master_ifindex_rcu(rt->rt6i_idev->dev) == oif->ifindex)
++			found = oif;
++	}
+ 
+-	nft_fib_store_result(dest, priv, rt->rt6i_idev->dev);
++	nft_fib_store_result(dest, priv, found);
+  put_rt_err:
+ 	ip6_rt_put(rt);
+ }
+diff --git a/net/ipv6/seg6_local.c b/net/ipv6/seg6_local.c
+index ac1dbd492c22dc..a11a02b4ba95b6 100644
+--- a/net/ipv6/seg6_local.c
++++ b/net/ipv6/seg6_local.c
+@@ -1644,10 +1644,8 @@ static const struct nla_policy seg6_local_policy[SEG6_LOCAL_MAX + 1] = {
+ 	[SEG6_LOCAL_SRH]	= { .type = NLA_BINARY },
+ 	[SEG6_LOCAL_TABLE]	= { .type = NLA_U32 },
+ 	[SEG6_LOCAL_VRFTABLE]	= { .type = NLA_U32 },
+-	[SEG6_LOCAL_NH4]	= { .type = NLA_BINARY,
+-				    .len = sizeof(struct in_addr) },
+-	[SEG6_LOCAL_NH6]	= { .type = NLA_BINARY,
+-				    .len = sizeof(struct in6_addr) },
++	[SEG6_LOCAL_NH4]	= NLA_POLICY_EXACT_LEN(sizeof(struct in_addr)),
++	[SEG6_LOCAL_NH6]	= NLA_POLICY_EXACT_LEN(sizeof(struct in6_addr)),
+ 	[SEG6_LOCAL_IIF]	= { .type = NLA_U32 },
+ 	[SEG6_LOCAL_OIF]	= { .type = NLA_U32 },
+ 	[SEG6_LOCAL_BPF]	= { .type = NLA_NESTED },
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 35eaf0812c5b24..53d5ffad87be87 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -7220,11 +7220,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
+ 	bssid = ieee80211_get_bssid(hdr, len, sdata->vif.type);
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+ 		struct ieee80211_ext *ext = (void *) mgmt;
+-
+-		if (ieee80211_is_s1g_short_beacon(ext->frame_control))
+-			variable = ext->u.s1g_short_beacon.variable;
+-		else
+-			variable = ext->u.s1g_beacon.variable;
++		variable = ext->u.s1g_beacon.variable +
++			   ieee80211_s1g_optional_len(ext->frame_control);
+ 	}
+ 
+ 	baselen = (u8 *) variable - (u8 *) mgmt;
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index cb707907188585..5a56487dab69cb 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -260,6 +260,7 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 	struct ieee80211_mgmt *mgmt = (void *)skb->data;
+ 	struct ieee80211_bss *bss;
+ 	struct ieee80211_channel *channel;
++	struct ieee80211_ext *ext;
+ 	size_t min_hdr_len = offsetof(struct ieee80211_mgmt,
+ 				      u.probe_resp.variable);
+ 
+@@ -269,12 +270,10 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 		return;
+ 
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+-		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_short_beacon.variable);
+-		else
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_beacon);
++		ext = (struct ieee80211_ext *)mgmt;
++		min_hdr_len =
++			offsetof(struct ieee80211_ext, u.s1g_beacon.variable) +
++			ieee80211_s1g_optional_len(ext->frame_control);
+ 	}
+ 
+ 	if (skb->len < min_hdr_len)
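
Both mac80211 hunks replace the short/long S1G beacon branches with
ieee80211_s1g_optional_len(), which derives how many optional header
bytes precede the variable part straight from frame-control bits. The
shape of such a helper, with invented flag values purely for
illustration (the real definitions live in the ieee80211 headers):

	#include <stdint.h>
	#include <stdio.h>

	/* illustrative frame-control bits, not the real ones */
	#define FC_NEXT_TBTT	0x0100	/* 3-byte Next TBTT present */
	#define FC_COMP_SSID	0x0200	/* 4-byte Compressed SSID present */
	#define FC_ANO		0x0400	/* 1-byte ANO field present */

	static unsigned int s1g_optional_len(uint16_t fc)
	{
		unsigned int len = 0;

		if (fc & FC_NEXT_TBTT)
			len += 3;
		if (fc & FC_COMP_SSID)
			len += 4;
		if (fc & FC_ANO)
			len += 1;
		return len;
	}

	int main(void)
	{
		/* the variable IEs start this far past the fixed header */
		printf("%u\n", s1g_optional_len(FC_NEXT_TBTT | FC_ANO));
		return 0;
	}
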
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index 4e0842df5234ea..2c260f33b55cc5 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -143,16 +143,15 @@ struct ncsi_channel_vlan_filter {
+ };
+ 
+ struct ncsi_channel_stats {
+-	u32 hnc_cnt_hi;		/* Counter cleared            */
+-	u32 hnc_cnt_lo;		/* Counter cleared            */
+-	u32 hnc_rx_bytes;	/* Rx bytes                   */
+-	u32 hnc_tx_bytes;	/* Tx bytes                   */
+-	u32 hnc_rx_uc_pkts;	/* Rx UC packets              */
+-	u32 hnc_rx_mc_pkts;     /* Rx MC packets              */
+-	u32 hnc_rx_bc_pkts;	/* Rx BC packets              */
+-	u32 hnc_tx_uc_pkts;	/* Tx UC packets              */
+-	u32 hnc_tx_mc_pkts;	/* Tx MC packets              */
+-	u32 hnc_tx_bc_pkts;	/* Tx BC packets              */
++	u64 hnc_cnt;		/* Counter cleared            */
++	u64 hnc_rx_bytes;	/* Rx bytes                   */
++	u64 hnc_tx_bytes;	/* Tx bytes                   */
++	u64 hnc_rx_uc_pkts;	/* Rx UC packets              */
++	u64 hnc_rx_mc_pkts;     /* Rx MC packets              */
++	u64 hnc_rx_bc_pkts;	/* Rx BC packets              */
++	u64 hnc_tx_uc_pkts;	/* Tx UC packets              */
++	u64 hnc_tx_mc_pkts;	/* Tx MC packets              */
++	u64 hnc_tx_bc_pkts;	/* Tx BC packets              */
+ 	u32 hnc_fcs_err;	/* FCS errors                 */
+ 	u32 hnc_align_err;	/* Alignment errors           */
+ 	u32 hnc_false_carrier;	/* False carrier detection    */
+@@ -181,7 +180,7 @@ struct ncsi_channel_stats {
+ 	u32 hnc_tx_1023_frames;	/* Tx 512-1023 bytes frames   */
+ 	u32 hnc_tx_1522_frames;	/* Tx 1024-1522 bytes frames  */
+ 	u32 hnc_tx_9022_frames;	/* Tx 1523-9022 bytes frames  */
+-	u32 hnc_rx_valid_bytes;	/* Rx valid bytes             */
++	u64 hnc_rx_valid_bytes;	/* Rx valid bytes             */
+ 	u32 hnc_rx_runt_pkts;	/* Rx error runt packets      */
+ 	u32 hnc_rx_jabber_pkts;	/* Rx error jabber packets    */
+ 	u32 ncsi_rx_cmds;	/* Rx NCSI commands           */
+diff --git a/net/ncsi/ncsi-pkt.h b/net/ncsi/ncsi-pkt.h
+index f2f3b5c1b94126..24edb273797240 100644
+--- a/net/ncsi/ncsi-pkt.h
++++ b/net/ncsi/ncsi-pkt.h
+@@ -252,16 +252,15 @@ struct ncsi_rsp_gp_pkt {
+ /* Get Controller Packet Statistics */
+ struct ncsi_rsp_gcps_pkt {
+ 	struct ncsi_rsp_pkt_hdr rsp;            /* Response header            */
+-	__be32                  cnt_hi;         /* Counter cleared            */
+-	__be32                  cnt_lo;         /* Counter cleared            */
+-	__be32                  rx_bytes;       /* Rx bytes                   */
+-	__be32                  tx_bytes;       /* Tx bytes                   */
+-	__be32                  rx_uc_pkts;     /* Rx UC packets              */
+-	__be32                  rx_mc_pkts;     /* Rx MC packets              */
+-	__be32                  rx_bc_pkts;     /* Rx BC packets              */
+-	__be32                  tx_uc_pkts;     /* Tx UC packets              */
+-	__be32                  tx_mc_pkts;     /* Tx MC packets              */
+-	__be32                  tx_bc_pkts;     /* Tx BC packets              */
++	__be64                  cnt;            /* Counter cleared            */
++	__be64                  rx_bytes;       /* Rx bytes                   */
++	__be64                  tx_bytes;       /* Tx bytes                   */
++	__be64                  rx_uc_pkts;     /* Rx UC packets              */
++	__be64                  rx_mc_pkts;     /* Rx MC packets              */
++	__be64                  rx_bc_pkts;     /* Rx BC packets              */
++	__be64                  tx_uc_pkts;     /* Tx UC packets              */
++	__be64                  tx_mc_pkts;     /* Tx MC packets              */
++	__be64                  tx_bc_pkts;     /* Tx BC packets              */
+ 	__be32                  fcs_err;        /* FCS errors                 */
+ 	__be32                  align_err;      /* Alignment errors           */
+ 	__be32                  false_carrier;  /* False carrier detection    */
+@@ -290,11 +289,11 @@ struct ncsi_rsp_gcps_pkt {
+ 	__be32                  tx_1023_frames; /* Tx 512-1023 bytes frames   */
+ 	__be32                  tx_1522_frames; /* Tx 1024-1522 bytes frames  */
+ 	__be32                  tx_9022_frames; /* Tx 1523-9022 bytes frames  */
+-	__be32                  rx_valid_bytes; /* Rx valid bytes             */
++	__be64                  rx_valid_bytes; /* Rx valid bytes             */
+ 	__be32                  rx_runt_pkts;   /* Rx error runt packets      */
+ 	__be32                  rx_jabber_pkts; /* Rx error jabber packets    */
+ 	__be32                  checksum;       /* Checksum                   */
+-};
++}  __packed __aligned(4);
+ 
+ /* Get NCSI Statistics */
+ struct ncsi_rsp_gns_pkt {
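
The ncsi-pkt.h/ncsi-rsp.c changes widen the controller statistics to
big-endian 64-bit counters on the wire, and __packed __aligned(4) keeps
the response struct from growing padding now that 8-byte fields land on
4-byte offsets. Reading such an unaligned big-endian field portably,
sketched for userspace (be64toh() is glibc/BSD; the layout below is
illustrative):

	#include <endian.h>	/* be64toh(); <sys/endian.h> on BSDs */
	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* a 64-bit counter at offset 4, i.e. not naturally aligned */
	struct rsp {
		uint32_t hdr;
		uint64_t rx_bytes;
	} __attribute__((packed));

	static uint64_t get_be64(const void *p)
	{
		uint64_t v;

		memcpy(&v, p, sizeof(v));	/* alignment-safe load */
		return be64toh(v);
	}

	int main(void)
	{
		unsigned char pkt[12] = { [10] = 0x12, [11] = 0x34 };

		printf("rx_bytes=%llu\n", (unsigned long long)
		       get_be64(pkt + offsetof(struct rsp, rx_bytes)));
		return 0;
	}
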
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 4a8ce2949faeac..8668888c5a2f99 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -926,16 +926,15 @@ static int ncsi_rsp_handler_gcps(struct ncsi_request *nr)
+ 
+ 	/* Update HNC's statistics */
+ 	ncs = &nc->stats;
+-	ncs->hnc_cnt_hi         = ntohl(rsp->cnt_hi);
+-	ncs->hnc_cnt_lo         = ntohl(rsp->cnt_lo);
+-	ncs->hnc_rx_bytes       = ntohl(rsp->rx_bytes);
+-	ncs->hnc_tx_bytes       = ntohl(rsp->tx_bytes);
+-	ncs->hnc_rx_uc_pkts     = ntohl(rsp->rx_uc_pkts);
+-	ncs->hnc_rx_mc_pkts     = ntohl(rsp->rx_mc_pkts);
+-	ncs->hnc_rx_bc_pkts     = ntohl(rsp->rx_bc_pkts);
+-	ncs->hnc_tx_uc_pkts     = ntohl(rsp->tx_uc_pkts);
+-	ncs->hnc_tx_mc_pkts     = ntohl(rsp->tx_mc_pkts);
+-	ncs->hnc_tx_bc_pkts     = ntohl(rsp->tx_bc_pkts);
++	ncs->hnc_cnt            = be64_to_cpu(rsp->cnt);
++	ncs->hnc_rx_bytes       = be64_to_cpu(rsp->rx_bytes);
++	ncs->hnc_tx_bytes       = be64_to_cpu(rsp->tx_bytes);
++	ncs->hnc_rx_uc_pkts     = be64_to_cpu(rsp->rx_uc_pkts);
++	ncs->hnc_rx_mc_pkts     = be64_to_cpu(rsp->rx_mc_pkts);
++	ncs->hnc_rx_bc_pkts     = be64_to_cpu(rsp->rx_bc_pkts);
++	ncs->hnc_tx_uc_pkts     = be64_to_cpu(rsp->tx_uc_pkts);
++	ncs->hnc_tx_mc_pkts     = be64_to_cpu(rsp->tx_mc_pkts);
++	ncs->hnc_tx_bc_pkts     = be64_to_cpu(rsp->tx_bc_pkts);
+ 	ncs->hnc_fcs_err        = ntohl(rsp->fcs_err);
+ 	ncs->hnc_align_err      = ntohl(rsp->align_err);
+ 	ncs->hnc_false_carrier  = ntohl(rsp->false_carrier);
+@@ -964,7 +963,7 @@ static int ncsi_rsp_handler_gcps(struct ncsi_request *nr)
+ 	ncs->hnc_tx_1023_frames = ntohl(rsp->tx_1023_frames);
+ 	ncs->hnc_tx_1522_frames = ntohl(rsp->tx_1522_frames);
+ 	ncs->hnc_tx_9022_frames = ntohl(rsp->tx_9022_frames);
+-	ncs->hnc_rx_valid_bytes = ntohl(rsp->rx_valid_bytes);
++	ncs->hnc_rx_valid_bytes = be64_to_cpu(rsp->rx_valid_bytes);
+ 	ncs->hnc_rx_runt_pkts   = ntohl(rsp->rx_runt_pkts);
+ 	ncs->hnc_rx_jabber_pkts = ntohl(rsp->rx_jabber_pkts);
+ 
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index aad84aabd7f1d1..f391cd267922b3 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -248,7 +248,7 @@ static noinline bool
+ nf_nat_used_tuple_new(const struct nf_conntrack_tuple *tuple,
+ 		      const struct nf_conn *ignored_ct)
+ {
+-	static const unsigned long uses_nat = IPS_NAT_MASK | IPS_SEQ_ADJUST_BIT;
++	static const unsigned long uses_nat = IPS_NAT_MASK | IPS_SEQ_ADJUST;
+ 	const struct nf_conntrack_tuple_hash *thash;
+ 	const struct nf_conntrack_zone *zone;
+ 	struct nf_conn *ct;
+@@ -287,8 +287,14 @@ nf_nat_used_tuple_new(const struct nf_conntrack_tuple *tuple,
+ 	zone = nf_ct_zone(ignored_ct);
+ 
+ 	thash = nf_conntrack_find_get(net, zone, tuple);
+-	if (unlikely(!thash)) /* clashing entry went away */
+-		return false;
++	if (unlikely(!thash)) {
++		struct nf_conntrack_tuple reply;
++
++		nf_ct_invert_tuple(&reply, tuple);
++		thash = nf_conntrack_find_get(net, zone, &reply);
++		if (!thash) /* clashing entry went away */
++			return false;
++	}
+ 
+ 	ct = nf_ct_tuplehash_to_ctrack(thash);
+ 
+diff --git a/net/netfilter/nft_quota.c b/net/netfilter/nft_quota.c
+index 9b2d7463d3d326..df0798da2329b9 100644
+--- a/net/netfilter/nft_quota.c
++++ b/net/netfilter/nft_quota.c
+@@ -19,10 +19,16 @@ struct nft_quota {
+ };
+ 
+ static inline bool nft_overquota(struct nft_quota *priv,
+-				 const struct sk_buff *skb)
++				 const struct sk_buff *skb,
++				 bool *report)
+ {
+-	return atomic64_add_return(skb->len, priv->consumed) >=
+-	       atomic64_read(&priv->quota);
++	u64 consumed = atomic64_add_return(skb->len, priv->consumed);
++	u64 quota = atomic64_read(&priv->quota);
++
++	if (report)
++		*report = consumed >= quota;
++
++	return consumed > quota;
+ }
+ 
+ static inline bool nft_quota_invert(struct nft_quota *priv)
+@@ -34,7 +40,7 @@ static inline void nft_quota_do_eval(struct nft_quota *priv,
+ 				     struct nft_regs *regs,
+ 				     const struct nft_pktinfo *pkt)
+ {
+-	if (nft_overquota(priv, pkt->skb) ^ nft_quota_invert(priv))
++	if (nft_overquota(priv, pkt->skb, NULL) ^ nft_quota_invert(priv))
+ 		regs->verdict.code = NFT_BREAK;
+ }
+ 
+@@ -51,13 +57,13 @@ static void nft_quota_obj_eval(struct nft_object *obj,
+ 			       const struct nft_pktinfo *pkt)
+ {
+ 	struct nft_quota *priv = nft_obj_data(obj);
+-	bool overquota;
++	bool overquota, report;
+ 
+-	overquota = nft_overquota(priv, pkt->skb);
++	overquota = nft_overquota(priv, pkt->skb, &report);
+ 	if (overquota ^ nft_quota_invert(priv))
+ 		regs->verdict.code = NFT_BREAK;
+ 
+-	if (overquota &&
++	if (report &&
+ 	    !test_and_set_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
+ 		nft_obj_notify(nft_net(pkt), obj->key.table, obj, 0, 0,
+ 			       NFT_MSG_NEWOBJ, 0, nft_pf(pkt), 0, GFP_ATOMIC);
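
A minimal userspace sketch of the two-threshold accounting above, using C11 atomics and invented names: depletion is reported once consumed >= quota, but the over-quota result only flips once consumed > quota, which is exactly what lets the depletion notification fire on the packet that lands on the limit.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static atomic_uint_least64_t consumed;
static uint64_t quota = 1500;

/* Returns true when the quota is exceeded (>), and separately flags
 * depletion (>=) so the caller can emit a one-shot notification. */
static bool over_quota(uint64_t len, bool *report)
{
	uint64_t now = atomic_fetch_add(&consumed, len) + len;

	if (report)
		*report = now >= quota;
	return now > quota;
}

int main(void)
{
	bool report;

	over_quota(1500, &report);       /* lands exactly on the limit */
	printf("report=%d\n", report);   /* 1: depleted, but not over */
	printf("over=%d\n", over_quota(1, &report)); /* 1: now over */
	return 0;
}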
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 7be342b495f5f7..0529e4ef752070 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -683,6 +683,30 @@ static int pipapo_realloc_mt(struct nft_pipapo_field *f,
+ 	return 0;
+ }
+ 
++
++/**
++ * lt_calculate_size() - Get storage size for lookup table with overflow check
++ * @groups:	Amount of bit groups
++ * @bb:		Number of bits grouped together in lookup table buckets
++ * @bsize:	Size of each bucket in lookup table, in longs
++ *
++ * Return: allocation size including alignment overhead, negative on overflow
++ */
++static ssize_t lt_calculate_size(unsigned int groups, unsigned int bb,
++				 unsigned int bsize)
++{
++	ssize_t ret = groups * NFT_PIPAPO_BUCKETS(bb) * sizeof(long);
++
++	if (check_mul_overflow(ret, bsize, &ret))
++		return -1;
++	if (check_add_overflow(ret, NFT_PIPAPO_ALIGN_HEADROOM, &ret))
++		return -1;
++	if (ret > INT_MAX)
++		return -1;
++
++	return ret;
++}
++
+ /**
+  * pipapo_resize() - Resize lookup or mapping table, or both
+  * @f:		Field containing lookup and mapping tables
+@@ -701,6 +725,7 @@ static int pipapo_resize(struct nft_pipapo_field *f,
+ 	long *new_lt = NULL, *new_p, *old_lt = f->lt, *old_p;
+ 	unsigned int new_bucket_size, copy;
+ 	int group, bucket, err;
++	ssize_t lt_size;
+ 
+ 	if (rules >= NFT_PIPAPO_RULE0_MAX)
+ 		return -ENOSPC;
+@@ -719,10 +744,11 @@ static int pipapo_resize(struct nft_pipapo_field *f,
+ 	else
+ 		copy = new_bucket_size;
+ 
+-	new_lt = kvzalloc(f->groups * NFT_PIPAPO_BUCKETS(f->bb) *
+-			  new_bucket_size * sizeof(*new_lt) +
+-			  NFT_PIPAPO_ALIGN_HEADROOM,
+-			  GFP_KERNEL);
++	lt_size = lt_calculate_size(f->groups, f->bb, new_bucket_size);
++	if (lt_size < 0)
++		return -ENOMEM;
++
++	new_lt = kvzalloc(lt_size, GFP_KERNEL_ACCOUNT);
+ 	if (!new_lt)
+ 		return -ENOMEM;
+ 
+@@ -907,7 +933,7 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ {
+ 	unsigned int groups, bb;
+ 	unsigned long *new_lt;
+-	size_t lt_size;
++	ssize_t lt_size;
+ 
+ 	lt_size = f->groups * NFT_PIPAPO_BUCKETS(f->bb) * f->bsize *
+ 		  sizeof(*f->lt);
+@@ -917,15 +943,17 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ 		groups = f->groups * 2;
+ 		bb = NFT_PIPAPO_GROUP_BITS_LARGE_SET;
+ 
+-		lt_size = groups * NFT_PIPAPO_BUCKETS(bb) * f->bsize *
+-			  sizeof(*f->lt);
++		lt_size = lt_calculate_size(groups, bb, f->bsize);
++		if (lt_size < 0)
++			return;
+ 	} else if (f->bb == NFT_PIPAPO_GROUP_BITS_LARGE_SET &&
+ 		   lt_size < NFT_PIPAPO_LT_SIZE_LOW) {
+ 		groups = f->groups / 2;
+ 		bb = NFT_PIPAPO_GROUP_BITS_SMALL_SET;
+ 
+-		lt_size = groups * NFT_PIPAPO_BUCKETS(bb) * f->bsize *
+-			  sizeof(*f->lt);
++		lt_size = lt_calculate_size(groups, bb, f->bsize);
++		if (lt_size < 0)
++			return;
+ 
+ 		/* Don't increase group width if the resulting lookup table size
+ 		 * would exceed the upper size threshold for a "small" set.
+@@ -936,7 +964,7 @@ static void pipapo_lt_bits_adjust(struct nft_pipapo_field *f)
+ 		return;
+ 	}
+ 
+-	new_lt = kvzalloc(lt_size + NFT_PIPAPO_ALIGN_HEADROOM, GFP_KERNEL_ACCOUNT);
++	new_lt = kvzalloc(lt_size, GFP_KERNEL_ACCOUNT);
+ 	if (!new_lt)
+ 		return;
+ 
+@@ -1451,13 +1479,15 @@ static struct nft_pipapo_match *pipapo_clone(struct nft_pipapo_match *old)
+ 
+ 	for (i = 0; i < old->field_count; i++) {
+ 		unsigned long *new_lt;
++		ssize_t lt_size;
+ 
+ 		memcpy(dst, src, offsetof(struct nft_pipapo_field, lt));
+ 
+-		new_lt = kvzalloc(src->groups * NFT_PIPAPO_BUCKETS(src->bb) *
+-				  src->bsize * sizeof(*dst->lt) +
+-				  NFT_PIPAPO_ALIGN_HEADROOM,
+-				  GFP_KERNEL_ACCOUNT);
++		lt_size = lt_calculate_size(src->groups, src->bb, src->bsize);
++		if (lt_size < 0)
++			goto out_lt;
++
++		new_lt = kvzalloc(lt_size, GFP_KERNEL_ACCOUNT);
+ 		if (!new_lt)
+ 			goto out_lt;
+ 
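
The overflow-checked sizing reads the same in portable C with the compiler builtins that back the kernel's check_mul_overflow()/check_add_overflow(); the bucket and headroom constants below are made-up placeholders for NFT_PIPAPO_BUCKETS() and NFT_PIPAPO_ALIGN_HEADROOM.

#include <limits.h>
#include <stdio.h>
#include <sys/types.h>

#define BUCKETS(bb)	(1U << (bb))	/* placeholder for NFT_PIPAPO_BUCKETS() */
#define ALIGN_HEADROOM	32		/* placeholder headroom, in bytes */

/* Size in bytes for a lookup table, or -1 if any step overflows. */
static ssize_t lt_calculate_size(unsigned int groups, unsigned int bb,
				 unsigned int bsize)
{
	ssize_t ret = (ssize_t)groups * BUCKETS(bb) * sizeof(long);

	if (__builtin_mul_overflow(ret, bsize, &ret))
		return -1;
	if (__builtin_add_overflow(ret, ALIGN_HEADROOM, &ret))
		return -1;
	if (ret > INT_MAX)	/* mirror the kernel's INT_MAX allocation cap */
		return -1;
	return ret;
}

int main(void)
{
	printf("%zd\n", lt_calculate_size(8, 4, 16));	/* small, fits */
	printf("%zd\n", lt_calculate_size(1u << 20, 8, 1u << 20)); /* -1: over the cap */
	return 0;
}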
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index c15db28c5ebc43..be7c16c79f711e 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1113,6 +1113,25 @@ bool nft_pipapo_avx2_estimate(const struct nft_set_desc *desc, u32 features,
+ 	return true;
+ }
+ 
++/**
++ * pipapo_resmap_init_avx2() - Initialise result map before first use
++ * @m:		Matching data, including mapping table
++ * @res_map:	Result map
++ *
++ * Like pipapo_resmap_init() but do not set start map bits covered by the first field.
++ */
++static inline void pipapo_resmap_init_avx2(const struct nft_pipapo_match *m, unsigned long *res_map)
++{
++	const struct nft_pipapo_field *f = m->f;
++	int i;
++
++	/* Starting map doesn't need to be set to all-ones for this implementation,
++	 * but we do need to zero the remaining bits, if any.
++	 */
++	for (i = f->bsize; i < m->bsize_max; i++)
++		res_map[i] = 0ul;
++}
++
+ /**
+  * nft_pipapo_avx2_lookup() - Lookup function for AVX2 implementation
+  * @net:	Network namespace
+@@ -1171,7 +1190,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ 	res  = scratch->map + (map_index ? m->bsize_max : 0);
+ 	fill = scratch->map + (map_index ? 0 : m->bsize_max);
+ 
+-	/* Starting map doesn't need to be set for this implementation */
++	pipapo_resmap_init_avx2(m, res);
+ 
+ 	nft_pipapo_avx2_prepare();
+ 
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 0c63d1367cf7a7..a12486ae089d6f 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -621,10 +621,10 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb,
+ 		struct geneve_opt *opt;
+ 		int offset = 0;
+ 
+-		inner = nla_nest_start_noflag(skb, NFTA_TUNNEL_KEY_OPTS_GENEVE);
+-		if (!inner)
+-			goto failure;
+ 		while (opts->len > offset) {
++			inner = nla_nest_start_noflag(skb, NFTA_TUNNEL_KEY_OPTS_GENEVE);
++			if (!inner)
++				goto failure;
+ 			opt = (struct geneve_opt *)(opts->u.data + offset);
+ 			if (nla_put_be16(skb, NFTA_TUNNEL_KEY_GENEVE_CLASS,
+ 					 opt->opt_class) ||
+@@ -634,8 +634,8 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb,
+ 				    opt->length * 4, opt->opt_data))
+ 				goto inner_failure;
+ 			offset += sizeof(*opt) + opt->length * 4;
++			nla_nest_end(skb, inner);
+ 		}
+-		nla_nest_end(skb, inner);
+ 	}
+ 	nla_nest_end(skb, nest);
+ 	return 0;
+diff --git a/net/netfilter/xt_TCPOPTSTRIP.c b/net/netfilter/xt_TCPOPTSTRIP.c
+index 30e99464171b7b..93f064306901c0 100644
+--- a/net/netfilter/xt_TCPOPTSTRIP.c
++++ b/net/netfilter/xt_TCPOPTSTRIP.c
+@@ -91,7 +91,7 @@ tcpoptstrip_tg4(struct sk_buff *skb, const struct xt_action_param *par)
+ 	return tcpoptstrip_mangle_packet(skb, par, ip_hdrlen(skb));
+ }
+ 
+-#if IS_ENABLED(CONFIG_IP6_NF_MANGLE)
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
+ static unsigned int
+ tcpoptstrip_tg6(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+@@ -119,7 +119,7 @@ static struct xt_target tcpoptstrip_tg_reg[] __read_mostly = {
+ 		.targetsize = sizeof(struct xt_tcpoptstrip_target_info),
+ 		.me         = THIS_MODULE,
+ 	},
+-#if IS_ENABLED(CONFIG_IP6_NF_MANGLE)
++#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
+ 	{
+ 		.name       = "TCPOPTSTRIP",
+ 		.family     = NFPROTO_IPV6,
+diff --git a/net/netfilter/xt_mark.c b/net/netfilter/xt_mark.c
+index 65b965ca40ea7e..59b9d04400cac2 100644
+--- a/net/netfilter/xt_mark.c
++++ b/net/netfilter/xt_mark.c
+@@ -48,7 +48,7 @@ static struct xt_target mark_tg_reg[] __read_mostly = {
+ 		.targetsize     = sizeof(struct xt_mark_tginfo2),
+ 		.me             = THIS_MODULE,
+ 	},
+-#if IS_ENABLED(CONFIG_IP_NF_ARPTABLES)
++#if IS_ENABLED(CONFIG_IP_NF_ARPTABLES) || IS_ENABLED(CONFIG_NFT_COMPAT_ARP)
+ 	{
+ 		.name           = "MARK",
+ 		.revision       = 2,
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index cd9160bbc91974..33b77084a4e5f3 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -1165,6 +1165,11 @@ int netlbl_conn_setattr(struct sock *sk,
+ 		break;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	case AF_INET6:
++		if (sk->sk_family != AF_INET6) {
++			ret_val = -EAFNOSUPPORT;
++			goto conn_setattr_return;
++		}
++
+ 		addr6 = (struct sockaddr_in6 *)addr;
+ 		entry = netlbl_domhsh_getentry_af6(secattr->domain,
+ 						   &addr6->sin6_addr);
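
A userspace reduction of the address-family guard added above: verify both the socket's family and the address's family before casting a generic struct sockaddr to sockaddr_in6. The function name is illustrative.

#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Reject a mismatched sockaddr before the cast, as the patch does for
 * AF_INET6 attributes applied to a non-IPv6 socket. */
static int set_v6_attr(int sk_family, const struct sockaddr *addr)
{
	const struct sockaddr_in6 *a6;

	if (addr->sa_family != AF_INET6 || sk_family != AF_INET6)
		return -EAFNOSUPPORT;

	a6 = (const struct sockaddr_in6 *)addr;
	/* ... safe to use a6->sin6_addr from here on ... */
	(void)a6;
	return 0;
}

int main(void)
{
	struct sockaddr_in6 sin6 = { .sin6_family = AF_INET6 };

	printf("%d\n", set_v6_attr(AF_INET, (struct sockaddr *)&sin6));  /* -97 */
	printf("%d\n", set_v6_attr(AF_INET6, (struct sockaddr *)&sin6)); /* 0 */
	return 0;
}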
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index 8a848ce72e2910..b80bd3a9077397 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -788,7 +788,7 @@ static int key_extract_l3l4(struct sk_buff *skb, struct sw_flow_key *key)
+ 			memset(&key->ipv4, 0, sizeof(key->ipv4));
+ 		}
+ 	} else if (eth_p_mpls(key->eth.type)) {
+-		u8 label_count = 1;
++		size_t label_count = 1;
+ 
+ 		memset(&key->mpls, 0, sizeof(key->mpls));
+ 		skb_set_inner_network_header(skb, skb->mac_len);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index d4dba06297c33e..20be2c47cf4191 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3713,15 +3713,15 @@ static int packet_dev_mc(struct net_device *dev, struct packet_mclist *i,
+ }
+ 
+ static void packet_dev_mclist_delete(struct net_device *dev,
+-				     struct packet_mclist **mlp)
++				     struct packet_mclist **mlp,
++				     struct list_head *list)
+ {
+ 	struct packet_mclist *ml;
+ 
+ 	while ((ml = *mlp) != NULL) {
+ 		if (ml->ifindex == dev->ifindex) {
+-			packet_dev_mc(dev, ml, -1);
++			list_add(&ml->remove_list, list);
+ 			*mlp = ml->next;
+-			kfree(ml);
+ 		} else
+ 			mlp = &ml->next;
+ 	}
+@@ -3769,6 +3769,7 @@ static int packet_mc_add(struct sock *sk, struct packet_mreq_max *mreq)
+ 	memcpy(i->addr, mreq->mr_address, i->alen);
+ 	memset(i->addr + i->alen, 0, sizeof(i->addr) - i->alen);
+ 	i->count = 1;
++	INIT_LIST_HEAD(&i->remove_list);
+ 	i->next = po->mclist;
+ 	po->mclist = i;
+ 	err = packet_dev_mc(dev, i, 1);
+@@ -4233,9 +4234,11 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ static int packet_notifier(struct notifier_block *this,
+ 			   unsigned long msg, void *ptr)
+ {
+-	struct sock *sk;
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 	struct net *net = dev_net(dev);
++	struct packet_mclist *ml, *tmp;
++	LIST_HEAD(mclist);
++	struct sock *sk;
+ 
+ 	rcu_read_lock();
+ 	sk_for_each_rcu(sk, &net->packet.sklist) {
+@@ -4244,7 +4247,8 @@ static int packet_notifier(struct notifier_block *this,
+ 		switch (msg) {
+ 		case NETDEV_UNREGISTER:
+ 			if (po->mclist)
+-				packet_dev_mclist_delete(dev, &po->mclist);
++				packet_dev_mclist_delete(dev, &po->mclist,
++							 &mclist);
+ 			fallthrough;
+ 
+ 		case NETDEV_DOWN:
+@@ -4277,6 +4281,13 @@ static int packet_notifier(struct notifier_block *this,
+ 		}
+ 	}
+ 	rcu_read_unlock();
++
++	/* packet_dev_mc might grab instance locks so can't run under rcu */
++	list_for_each_entry_safe(ml, tmp, &mclist, remove_list) {
++		packet_dev_mc(dev, ml, -1);
++		kfree(ml);
++	}
++
+ 	return NOTIFY_DONE;
+ }
+ 
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index d5d70712007ad3..1e743d0316fdda 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -11,6 +11,7 @@ struct packet_mclist {
+ 	unsigned short		type;
+ 	unsigned short		alen;
+ 	unsigned char		addr[MAX_ADDR_LEN];
++	struct list_head	remove_list;
+ };
+ 
+ /* kbdq - kernel block descriptor queue */
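
A sketch of the deferral pattern used above: unlink matching entries while holding a lock that forbids sleeping (RCU in the kernel), collect them on a private removal list, and tear them down only after the lock is dropped. The pthread mutex and node type are illustrative stand-ins.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int ifindex;
	struct node *next;		/* main list linkage */
	struct node *remove_next;	/* private removal-list linkage */
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Unlink matching nodes under the lock, but defer the (potentially
 * sleeping) teardown until after the lock is released. */
static void remove_ifindex(struct node **head, int ifindex)
{
	struct node *removed = NULL, *n;

	pthread_mutex_lock(&lock);
	while ((n = *head) != NULL) {
		if (n->ifindex == ifindex) {
			*head = n->next;
			n->remove_next = removed;
			removed = n;
		} else {
			head = &n->next;
		}
	}
	pthread_mutex_unlock(&lock);

	while ((n = removed) != NULL) {	/* teardown may sleep here */
		removed = n->remove_next;
		printf("releasing ifindex %d\n", n->ifindex);
		free(n);
	}
}

int main(void)
{
	struct node *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct node *n = calloc(1, sizeof(*n));
		n->ifindex = i % 2;	/* two entries for ifindex 0 */
		n->next = head;
		head = n;
	}
	remove_ifindex(&head, 0);	/* remaining node leaked in this toy */
	return 0;
}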
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index 2c069f0181c62b..037f764822b965 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -661,7 +661,7 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 	for (i = q->nbands; i < oldbands; i++) {
+ 		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+ 			list_del_init(&q->classes[i].alist);
+-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
++		qdisc_purge_queue(q->classes[i].qdisc);
+ 	}
+ 	WRITE_ONCE(q->nstrict, nstrict);
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c
+index cc30f7a32f1a78..9e2b9a490db23d 100644
+--- a/net/sched/sch_prio.c
++++ b/net/sched/sch_prio.c
+@@ -211,7 +211,7 @@ static int prio_tune(struct Qdisc *sch, struct nlattr *opt,
+ 	memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1);
+ 
+ 	for (i = q->bands; i < oldbands; i++)
+-		qdisc_tree_flush_backlog(q->queues[i]);
++		qdisc_purge_queue(q->queues[i]);
+ 
+ 	for (i = oldbands; i < q->bands; i++) {
+ 		q->queues[i] = queues[i];
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index 1ba3e0bba54f0c..4696c893cf553c 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -285,7 +285,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb,
+ 	q->userbits = userbits;
+ 	q->limit = ctl->limit;
+ 	if (child) {
+-		qdisc_tree_flush_backlog(q->qdisc);
++		qdisc_purge_queue(q->qdisc);
+ 		old_child = q->qdisc;
+ 		q->qdisc = child;
+ 	}
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index b912ad99aa15d9..77fa02f2bfcd56 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -310,7 +310,10 @@ static unsigned int sfq_drop(struct Qdisc *sch, struct sk_buff **to_free)
+ 		/* It is difficult to believe, but ALL THE SLOTS HAVE LENGTH 1. */
+ 		x = q->tail->next;
+ 		slot = &q->slots[x];
+-		q->tail->next = slot->next;
++		if (slot->next == x)
++			q->tail = NULL; /* no more active slots */
++		else
++			q->tail->next = slot->next;
+ 		q->ht[slot->hash] = SFQ_EMPTY_SLOT;
+ 		goto drop;
+ 	}
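
The sch_sfq fix covers removing the only remaining slot of a circular singly linked list, where q->tail->next is the head: the tail must be cleared instead of being left pointing into freed state. A generic sketch:

#include <stdio.h>

struct slot { int id; struct slot *next; };

/* tail->next is the head of a circular list; unlink and return it. */
static struct slot *remove_head(struct slot **tail)
{
	struct slot *head = (*tail)->next;

	if (head->next == head)
		*tail = NULL;		/* that was the only slot */
	else
		(*tail)->next = head->next;
	head->next = NULL;
	return head;
}

int main(void)
{
	struct slot a = { .id = 1 };
	struct slot *tail = &a;

	a.next = &a;			/* one-element circular list */
	printf("removed %d\n", remove_head(&tail)->id);
	printf("tail %s\n", tail ? "set" : "NULL");	/* NULL */
	return 0;
}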
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index dc26b22d53c734..4c977f049670a6 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -452,7 +452,7 @@ static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ 	sch_tree_lock(sch);
+ 	if (child) {
+-		qdisc_tree_flush_backlog(q->qdisc);
++		qdisc_purge_queue(q->qdisc);
+ 		old = q->qdisc;
+ 		q->qdisc = child;
+ 	}
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index aca8bdf65d729f..ca6172822b68ae 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -406,12 +406,12 @@ static void svc_rdma_xprt_done(struct rpcrdma_notification *rn)
+  */
+ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ {
++	unsigned int ctxts, rq_depth, maxpayload;
+ 	struct svcxprt_rdma *listen_rdma;
+ 	struct svcxprt_rdma *newxprt = NULL;
+ 	struct rdma_conn_param conn_param;
+ 	struct rpcrdma_connect_private pmsg;
+ 	struct ib_qp_init_attr qp_attr;
+-	unsigned int ctxts, rq_depth;
+ 	struct ib_device *dev;
+ 	int ret = 0;
+ 	RPC_IFDEBUG(struct sockaddr *sap);
+@@ -462,12 +462,14 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ 		newxprt->sc_max_bc_requests = 2;
+ 	}
+ 
+-	/* Arbitrarily estimate the number of rw_ctxs needed for
+-	 * this transport. This is enough rw_ctxs to make forward
+-	 * progress even if the client is using one rkey per page
+-	 * in each Read chunk.
++	/* Arbitrary estimate of the needed number of rdma_rw contexts.
+ 	 */
+-	ctxts = 3 * RPCSVC_MAXPAGES;
++	maxpayload = min(xprt->xpt_server->sv_max_payload,
++			 RPCSVC_MAXPAYLOAD_RDMA);
++	ctxts = newxprt->sc_max_requests * 3 *
++		rdma_rw_mr_factor(dev, newxprt->sc_port_num,
++				  maxpayload >> PAGE_SHIFT);
++
+ 	newxprt->sc_sq_depth = rq_depth + ctxts;
+ 	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
+ 		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 8584893b478510..79f91b6ca8c847 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -818,7 +818,11 @@ static int tipc_aead_encrypt(struct tipc_aead *aead, struct sk_buff *skb,
+ 	}
+ 
+ 	/* Get net to avoid freed tipc_crypto when delete namespace */
+-	get_net(aead->crypto->net);
++	if (!maybe_get_net(aead->crypto->net)) {
++		tipc_bearer_put(b);
++		rc = -ENODEV;
++		goto exit;
++	}
+ 
+ 	/* Now, do encrypt */
+ 	rc = crypto_aead_encrypt(req);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 914d4e1516a3cd..fc88e34b7f33fe 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -908,6 +908,13 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 					    &msg_redir, send, flags);
+ 		lock_sock(sk);
+ 		if (err < 0) {
++			/* Regardless of whether the data represented by
++			 * msg_redir is sent successfully, we have already
++			 * uncharged it via sk_msg_return_zero(). The
++			 * msg->sg.size represents the remaining unprocessed
++			 * data, which needs to be uncharged here.
++			 */
++			sk_mem_uncharge(sk, msg->sg.size);
+ 			*copied -= sk_msg_free_nocharge(sk, &msg_redir);
+ 			msg->sg.size = 0;
+ 		}
+@@ -1120,9 +1127,13 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
+ 					num_async++;
+ 				else if (ret == -ENOMEM)
+ 					goto wait_for_memory;
+-				else if (ctx->open_rec && ret == -ENOSPC)
++				else if (ctx->open_rec && ret == -ENOSPC) {
++					if (msg_pl->cork_bytes) {
++						ret = 0;
++						goto send_end;
++					}
+ 					goto rollback_iter;
+-				else if (ret != -EAGAIN)
++				} else if (ret != -EAGAIN)
+ 					goto send_end;
+ 			}
+ 			continue;
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 7f7de6d8809655..2c9b1011cdcc80 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -441,18 +441,20 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
+ static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
+ 					u32 len)
+ {
+-	if (vvs->rx_bytes + len > vvs->buf_alloc)
++	if (vvs->buf_used + len > vvs->buf_alloc)
+ 		return false;
+ 
+ 	vvs->rx_bytes += len;
++	vvs->buf_used += len;
+ 	return true;
+ }
+ 
+ static void virtio_transport_dec_rx_pkt(struct virtio_vsock_sock *vvs,
+-					u32 len)
++					u32 bytes_read, u32 bytes_dequeued)
+ {
+-	vvs->rx_bytes -= len;
+-	vvs->fwd_cnt += len;
++	vvs->rx_bytes -= bytes_read;
++	vvs->buf_used -= bytes_dequeued;
++	vvs->fwd_cnt += bytes_dequeued;
+ }
+ 
+ void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct sk_buff *skb)
+@@ -581,11 +583,11 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 				   size_t len)
+ {
+ 	struct virtio_vsock_sock *vvs = vsk->trans;
+-	size_t bytes, total = 0;
+ 	struct sk_buff *skb;
+ 	u32 fwd_cnt_delta;
+ 	bool low_rx_bytes;
+ 	int err = -EFAULT;
++	size_t total = 0;
+ 	u32 free_space;
+ 
+ 	spin_lock_bh(&vvs->rx_lock);
+@@ -597,6 +599,8 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 	}
+ 
+ 	while (total < len && !skb_queue_empty(&vvs->rx_queue)) {
++		size_t bytes, dequeued = 0;
++
+ 		skb = skb_peek(&vvs->rx_queue);
+ 
+ 		bytes = min_t(size_t, len - total,
+@@ -620,12 +624,12 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
+ 		VIRTIO_VSOCK_SKB_CB(skb)->offset += bytes;
+ 
+ 		if (skb->len == VIRTIO_VSOCK_SKB_CB(skb)->offset) {
+-			u32 pkt_len = le32_to_cpu(virtio_vsock_hdr(skb)->len);
+-
+-			virtio_transport_dec_rx_pkt(vvs, pkt_len);
++			dequeued = le32_to_cpu(virtio_vsock_hdr(skb)->len);
+ 			__skb_unlink(skb, &vvs->rx_queue);
+ 			consume_skb(skb);
+ 		}
++
++		virtio_transport_dec_rx_pkt(vvs, bytes, dequeued);
+ 	}
+ 
+ 	fwd_cnt_delta = vvs->fwd_cnt - vvs->last_fwd_cnt;
+@@ -781,7 +785,7 @@ static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
+ 				msg->msg_flags |= MSG_EOR;
+ 		}
+ 
+-		virtio_transport_dec_rx_pkt(vvs, pkt_len);
++		virtio_transport_dec_rx_pkt(vvs, pkt_len, pkt_len);
+ 		kfree_skb(skb);
+ 	}
+ 
+@@ -1735,6 +1739,7 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ 	struct sock *sk = sk_vsock(vsk);
+ 	struct virtio_vsock_hdr *hdr;
+ 	struct sk_buff *skb;
++	u32 pkt_len;
+ 	int off = 0;
+ 	int err;
+ 
+@@ -1752,7 +1757,8 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
+ 	if (le32_to_cpu(hdr->flags) & VIRTIO_VSOCK_SEQ_EOM)
+ 		vvs->msg_count--;
+ 
+-	virtio_transport_dec_rx_pkt(vvs, le32_to_cpu(hdr->len));
++	pkt_len = le32_to_cpu(hdr->len);
++	virtio_transport_dec_rx_pkt(vvs, pkt_len, pkt_len);
+ 	spin_unlock_bh(&vvs->rx_lock);
+ 
+ 	virtio_transport_send_credit_update(vsk);
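
A sketch of the dual accounting introduced above: rx_bytes shrinks as payload is copied to the reader, while buf_used and the fwd_cnt credit only move once a whole packet is dequeued. The field names follow the patch; everything else is illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vsock_buf {
	uint32_t rx_bytes;	/* unread payload bytes */
	uint32_t buf_used;	/* bytes occupying the receive buffer */
	uint32_t fwd_cnt;	/* credit to return to the transmitter */
	uint32_t buf_alloc;	/* advertised buffer size */
};

static bool inc_rx_pkt(struct vsock_buf *v, uint32_t len)
{
	if (v->buf_used + len > v->buf_alloc)
		return false;		/* no room: back-pressure */
	v->rx_bytes += len;
	v->buf_used += len;
	return true;
}

static void dec_rx_pkt(struct vsock_buf *v, uint32_t bytes_read,
		       uint32_t bytes_dequeued)
{
	v->rx_bytes -= bytes_read;
	v->buf_used -= bytes_dequeued;	/* 0 until the skb is consumed */
	v->fwd_cnt += bytes_dequeued;
}

int main(void)
{
	struct vsock_buf v = { .buf_alloc = 4096 };

	inc_rx_pkt(&v, 1000);
	dec_rx_pkt(&v, 400, 0);		/* partial read: packet still queued */
	dec_rx_pkt(&v, 600, 1000);	/* rest read: packet dequeued */
	printf("rx=%u used=%u fwd=%u\n", v.rx_bytes, v.buf_used, v.fwd_cnt);
	return 0;
}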
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index ddd3a97f6609d1..e8a4fe44ec2d80 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -3250,6 +3250,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 	const u8 *ie;
+ 	size_t ielen;
+ 	u64 tsf;
++	size_t s1g_optional_len;
+ 
+ 	if (WARN_ON(!mgmt))
+ 		return NULL;
+@@ -3264,12 +3265,11 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control)) {
+ 		ext = (void *) mgmt;
+-		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_short_beacon.variable);
+-		else
+-			min_hdr_len = offsetof(struct ieee80211_ext,
+-					       u.s1g_beacon.variable);
++		s1g_optional_len =
++			ieee80211_s1g_optional_len(ext->frame_control);
++		min_hdr_len =
++			offsetof(struct ieee80211_ext, u.s1g_beacon.variable) +
++			s1g_optional_len;
+ 	} else {
+ 		/* same for beacons */
+ 		min_hdr_len = offsetof(struct ieee80211_mgmt,
+@@ -3285,11 +3285,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 		const struct ieee80211_s1g_bcn_compat_ie *compat;
+ 		const struct element *elem;
+ 
+-		if (ieee80211_is_s1g_short_beacon(mgmt->frame_control))
+-			ie = ext->u.s1g_short_beacon.variable;
+-		else
+-			ie = ext->u.s1g_beacon.variable;
+-
++		ie = ext->u.s1g_beacon.variable + s1g_optional_len;
+ 		elem = cfg80211_find_elem(WLAN_EID_S1G_BCN_COMPAT, ie, ielen);
+ 		if (!elem)
+ 			return NULL;
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index d62f76161d83e2..f46a9e5764f014 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -314,7 +314,6 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 
+ 	xso->dev = dev;
+ 	netdev_tracker_alloc(dev, &xso->dev_tracker, GFP_ATOMIC);
+-	xso->real_dev = dev;
+ 
+ 	if (xuo->flags & XFRM_OFFLOAD_INBOUND)
+ 		xso->dir = XFRM_DEV_OFFLOAD_IN;
+@@ -326,11 +325,10 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 	else
+ 		xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+ 
+-	err = dev->xfrmdev_ops->xdo_dev_state_add(x, extack);
++	err = dev->xfrmdev_ops->xdo_dev_state_add(dev, x, extack);
+ 	if (err) {
+ 		xso->dev = NULL;
+ 		xso->dir = 0;
+-		xso->real_dev = NULL;
+ 		netdev_put(dev, &xso->dev_tracker);
+ 		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 
+@@ -378,7 +376,6 @@ int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+ 
+ 	xdo->dev = dev;
+ 	netdev_tracker_alloc(dev, &xdo->dev_tracker, GFP_ATOMIC);
+-	xdo->real_dev = dev;
+ 	xdo->type = XFRM_DEV_OFFLOAD_PACKET;
+ 	switch (dir) {
+ 	case XFRM_POLICY_IN:
+@@ -400,7 +397,6 @@ int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+ 	err = dev->xfrmdev_ops->xdo_dev_policy_add(xp, extack);
+ 	if (err) {
+ 		xdo->dev = NULL;
+-		xdo->real_dev = NULL;
+ 		xdo->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 		xdo->dir = 0;
+ 		netdev_put(dev, &xdo->dev_tracker);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 07fe8e5daa32b0..5ece039846e201 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -767,7 +767,7 @@ void xfrm_dev_state_delete(struct xfrm_state *x)
+ 	struct net_device *dev = READ_ONCE(xso->dev);
+ 
+ 	if (dev) {
+-		dev->xfrmdev_ops->xdo_dev_state_delete(x);
++		dev->xfrmdev_ops->xdo_dev_state_delete(dev, x);
+ 		spin_lock_bh(&xfrm_state_dev_gc_lock);
+ 		hlist_add_head(&x->dev_gclist, &xfrm_state_dev_gc_list);
+ 		spin_unlock_bh(&xfrm_state_dev_gc_lock);
+@@ -789,7 +789,7 @@ void xfrm_dev_state_free(struct xfrm_state *x)
+ 		spin_unlock_bh(&xfrm_state_dev_gc_lock);
+ 
+ 		if (dev->xfrmdev_ops->xdo_dev_state_free)
+-			dev->xfrmdev_ops->xdo_dev_state_free(x);
++			dev->xfrmdev_ops->xdo_dev_state_free(dev, x);
+ 		WRITE_ONCE(xso->dev, NULL);
+ 		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 		netdev_put(dev, &xso->dev_tracker);
+@@ -1548,19 +1548,19 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ 		if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
+ 			struct xfrm_dev_offload *xdo = &pol->xdo;
+ 			struct xfrm_dev_offload *xso = &x->xso;
++			struct net_device *dev = xdo->dev;
+ 
+ 			xso->type = XFRM_DEV_OFFLOAD_PACKET;
+ 			xso->dir = xdo->dir;
+-			xso->dev = xdo->dev;
+-			xso->real_dev = xdo->real_dev;
++			xso->dev = dev;
+ 			xso->flags = XFRM_DEV_OFFLOAD_FLAG_ACQ;
+-			netdev_hold(xso->dev, &xso->dev_tracker, GFP_ATOMIC);
+-			error = xso->dev->xfrmdev_ops->xdo_dev_state_add(x, NULL);
++			netdev_hold(dev, &xso->dev_tracker, GFP_ATOMIC);
++			error = dev->xfrmdev_ops->xdo_dev_state_add(dev, x,
++								    NULL);
+ 			if (error) {
+ 				xso->dir = 0;
+-				netdev_put(xso->dev, &xso->dev_tracker);
++				netdev_put(dev, &xso->dev_tracker);
+ 				xso->dev = NULL;
+-				xso->real_dev = NULL;
+ 				xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+ 				x->km.state = XFRM_STATE_DEAD;
+ 				to_put = x;
+diff --git a/rust/Makefile b/rust/Makefile
+index 3aca903a7d08cf..313a200112ce18 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -60,6 +60,8 @@ endif
+ core-cfgs = \
+     --cfg no_fp_fmt_parse
+ 
++core-edition := $(if $(call rustc-min-version,108700),2024,2021)
++
+ # `rustc` recognizes `--remap-path-prefix` since 1.26.0, but `rustdoc` only
+ # since Rust 1.81.0. Moreover, `rustdoc` ICEs on out-of-tree builds since Rust
+ # 1.82.0 (https://github.com/rust-lang/rust/issues/138520). Thus workaround both
+@@ -106,8 +108,8 @@ rustdoc-macros: $(src)/macros/lib.rs FORCE
+ 
+ # Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should
+ # not be needed -- see https://github.com/rust-lang/rust/pull/128307.
+-rustdoc-core: private skip_flags = -Wrustdoc::unescaped_backticks
+-rustdoc-core: private rustc_target_flags = $(core-cfgs)
++rustdoc-core: private skip_flags = --edition=2021 -Wrustdoc::unescaped_backticks
++rustdoc-core: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+ rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
+ 	+$(call if_changed,rustdoc)
+ 
+@@ -416,7 +418,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
+       cmd_rustc_library = \
+ 	OBJTREE=$(abspath $(objtree)) \
+ 	$(if $(skip_clippy),$(RUSTC),$(RUSTC_OR_CLIPPY)) \
+-		$(filter-out $(skip_flags),$(rust_flags) $(rustc_target_flags)) \
++		$(filter-out $(skip_flags),$(rust_flags)) $(rustc_target_flags) \
+ 		--emit=dep-info=$(depfile) --emit=obj=$@ \
+ 		--emit=metadata=$(dir $@)$(patsubst %.o,lib%.rmeta,$(notdir $@)) \
+ 		--crate-type rlib -L$(objtree)/$(obj) \
+@@ -427,7 +429,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
+ 
+ rust-analyzer:
+ 	$(Q)MAKEFLAGS= $(srctree)/scripts/generate_rust_analyzer.py \
+-		--cfgs='core=$(core-cfgs)' \
++		--cfgs='core=$(core-cfgs)' $(core-edition) \
+ 		$(realpath $(srctree)) $(realpath $(objtree)) \
+ 		$(rustc_sysroot) $(RUST_LIB_SRC) $(if $(KBUILD_EXTMOD),$(srcroot)) \
+ 		> rust-project.json
+@@ -483,9 +485,9 @@ $(obj)/helpers/helpers.o: $(src)/helpers/helpers.c $(recordmcount_source) FORCE
+ $(obj)/exports.o: private skip_gendwarfksyms = 1
+ 
+ $(obj)/core.o: private skip_clippy = 1
+-$(obj)/core.o: private skip_flags = -Wunreachable_pub
++$(obj)/core.o: private skip_flags = --edition=2021 -Wunreachable_pub
+ $(obj)/core.o: private rustc_objcopy = $(foreach sym,$(redirect-intrinsics),--redefine-sym $(sym)=__rust$(sym))
+-$(obj)/core.o: private rustc_target_flags = $(core-cfgs)
++$(obj)/core.o: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+ $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs \
+     $(wildcard $(objtree)/include/config/RUSTC_VERSION_TEXT) FORCE
+ 	+$(call if_changed_rule,rustc_library)
+diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
+index 87a71fd40c3cad..f62204fe563f58 100644
+--- a/rust/kernel/alloc/kvec.rs
++++ b/rust/kernel/alloc/kvec.rs
+@@ -196,6 +196,9 @@ pub fn len(&self) -> usize {
+     #[inline]
+     pub unsafe fn set_len(&mut self, new_len: usize) {
+         debug_assert!(new_len <= self.capacity());
++
++        // INVARIANT: By the safety requirements of this method `new_len` represents the exact
++        // number of elements stored within `self`.
+         self.len = new_len;
+     }
+ 
+diff --git a/rust/kernel/fs/file.rs b/rust/kernel/fs/file.rs
+index 13a0e44cd1aa81..138693bdeb3fdf 100644
+--- a/rust/kernel/fs/file.rs
++++ b/rust/kernel/fs/file.rs
+@@ -219,6 +219,7 @@ unsafe fn dec_ref(obj: ptr::NonNull<File>) {
+ ///   must be on the same thread as this file.
+ ///
+ /// [`assume_no_fdget_pos`]: LocalFile::assume_no_fdget_pos
++#[repr(transparent)]
+ pub struct LocalFile {
+     inner: Opaque<bindings::file>,
+ }
+diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
+index 13c50df37b89d1..a88a2dc65aa7cf 100644
+--- a/rust/kernel/list/arc.rs
++++ b/rust/kernel/list/arc.rs
+@@ -96,7 +96,7 @@ unsafe fn on_drop_list_arc(&self) {}
+     } $($rest:tt)*) => {
+         impl$(<$($generics)*>)? $crate::list::ListArcSafe<$num> for $t {
+             unsafe fn on_create_list_arc_from_unique(self: ::core::pin::Pin<&mut Self>) {
+-                $crate::assert_pinned!($t, $field, $fty, inline);
++                ::pin_init::assert_pinned!($t, $field, $fty, inline);
+ 
+                 // SAFETY: This field is structurally pinned as per the above assertion.
+                 let field = unsafe {
+diff --git a/rust/kernel/miscdevice.rs b/rust/kernel/miscdevice.rs
+index fa9ecc42602a47..15d10e5c1db7da 100644
+--- a/rust/kernel/miscdevice.rs
++++ b/rust/kernel/miscdevice.rs
+@@ -121,7 +121,7 @@ fn release(device: Self::Ptr, _file: &File) {
+ 
+     /// Handler for ioctls.
+     ///
+-    /// The `cmd` argument is usually manipulated using the utilties in [`kernel::ioctl`].
++    /// The `cmd` argument is usually manipulated using the utilities in [`kernel::ioctl`].
+     ///
+     /// [`kernel::ioctl`]: mod@crate::ioctl
+     fn ioctl(
+diff --git a/rust/kernel/pci.rs b/rust/kernel/pci.rs
+index c97d6d470b2822..bbc453c6d9ea88 100644
+--- a/rust/kernel/pci.rs
++++ b/rust/kernel/pci.rs
+@@ -118,7 +118,9 @@ macro_rules! module_pci_driver {
+ };
+ }
+ 
+-/// Abstraction for bindings::pci_device_id.
++/// Abstraction for the PCI device ID structure ([`struct pci_device_id`]).
++///
++/// [`struct pci_device_id`]: https://docs.kernel.org/PCI/pci.html#c.pci_device_id
+ #[repr(transparent)]
+ #[derive(Clone, Copy)]
+ pub struct DeviceId(bindings::pci_device_id);
+@@ -173,7 +175,7 @@ fn index(&self) -> usize {
+     }
+ }
+ 
+-/// IdTable type for PCI
++/// `IdTable` type for PCI.
+ pub type IdTable<T> = &'static dyn kernel::device_id::IdTable<DeviceId, T>;
+ 
+ /// Create a PCI `IdTable` with its alias for modpost.
+@@ -224,10 +226,11 @@ macro_rules! pci_device_table {
+ /// `Adapter` documentation for an example.
+ pub trait Driver: Send {
+     /// The type holding information about each device id supported by the driver.
+-    ///
+-    /// TODO: Use associated_type_defaults once stabilized:
+-    ///
+-    /// type IdInfo: 'static = ();
++    // TODO: Use `associated_type_defaults` once stabilized:
++    //
++    // ```
++    // type IdInfo: 'static = ();
++    // ```
+     type IdInfo: 'static;
+ 
+     /// The table of device ids supported by the driver.
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 3222c1070444fa..ef12c8f929eda3 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -123,6 +123,38 @@ static inline tree build_const_char_string(int len, const char *str)
+ 	return cstr;
+ }
+ 
++static inline void __add_type_attr(tree type, const char *attr, tree args)
++{
++	tree oldattr;
++
++	if (type == NULL_TREE)
++		return;
++	oldattr = lookup_attribute(attr, TYPE_ATTRIBUTES(type));
++	if (oldattr != NULL_TREE) {
++		gcc_assert(TREE_VALUE(oldattr) == args || TREE_VALUE(TREE_VALUE(oldattr)) == TREE_VALUE(args));
++		return;
++	}
++
++	TYPE_ATTRIBUTES(type) = copy_list(TYPE_ATTRIBUTES(type));
++	TYPE_ATTRIBUTES(type) = tree_cons(get_identifier(attr), args, TYPE_ATTRIBUTES(type));
++}
++
++static inline void add_type_attr(tree type, const char *attr, tree args)
++{
++	tree main_variant = TYPE_MAIN_VARIANT(type);
++
++	__add_type_attr(TYPE_CANONICAL(type), attr, args);
++	__add_type_attr(TYPE_CANONICAL(main_variant), attr, args);
++	__add_type_attr(main_variant, attr, args);
++
++	for (type = TYPE_NEXT_VARIANT(main_variant); type; type = TYPE_NEXT_VARIANT(type)) {
++		if (!lookup_attribute(attr, TYPE_ATTRIBUTES(type)))
++			TYPE_ATTRIBUTES(type) = TYPE_ATTRIBUTES(main_variant);
++
++		__add_type_attr(TYPE_CANONICAL(type), attr, args);
++	}
++}
++
+ #define PASS_INFO(NAME, REF, ID, POS)		\
+ struct register_pass_info NAME##_pass_info = {	\
+ 	.pass = make_##NAME##_pass(),		\
+diff --git a/scripts/gcc-plugins/randomize_layout_plugin.c b/scripts/gcc-plugins/randomize_layout_plugin.c
+index 5694df3da2e95b..ff65a4f87f240a 100644
+--- a/scripts/gcc-plugins/randomize_layout_plugin.c
++++ b/scripts/gcc-plugins/randomize_layout_plugin.c
+@@ -73,6 +73,9 @@ static tree handle_randomize_layout_attr(tree *node, tree name, tree args, int f
+ 
+ 	if (TYPE_P(*node)) {
+ 		type = *node;
++	} else if (TREE_CODE(*node) == FIELD_DECL) {
++		*no_add_attrs = false;
++		return NULL_TREE;
+ 	} else {
+ 		gcc_assert(TREE_CODE(*node) == TYPE_DECL);
+ 		type = TREE_TYPE(*node);
+@@ -344,35 +347,18 @@ static int relayout_struct(tree type)
+ 
+ 	shuffle(type, (tree *)newtree, shuffle_length);
+ 
+-	/*
+-	 * set up a bogus anonymous struct field designed to error out on unnamed struct initializers
+-	 * as gcc provides no other way to detect such code
+-	 */
+-	list = make_node(FIELD_DECL);
+-	TREE_CHAIN(list) = newtree[0];
+-	TREE_TYPE(list) = void_type_node;
+-	DECL_SIZE(list) = bitsize_zero_node;
+-	DECL_NONADDRESSABLE_P(list) = 1;
+-	DECL_FIELD_BIT_OFFSET(list) = bitsize_zero_node;
+-	DECL_SIZE_UNIT(list) = size_zero_node;
+-	DECL_FIELD_OFFSET(list) = size_zero_node;
+-	DECL_CONTEXT(list) = type;
+-	// to satisfy the constify plugin
+-	TREE_READONLY(list) = 1;
+-
+ 	for (i = 0; i < num_fields - 1; i++)
+ 		TREE_CHAIN(newtree[i]) = newtree[i+1];
+ 	TREE_CHAIN(newtree[num_fields - 1]) = NULL_TREE;
+ 
++	add_type_attr(type, "randomize_performed", NULL_TREE);
++	add_type_attr(type, "designated_init", NULL_TREE);
++	if (has_flexarray)
++		add_type_attr(type, "has_flexarray", NULL_TREE);
++
+ 	main_variant = TYPE_MAIN_VARIANT(type);
+-	for (variant = main_variant; variant; variant = TYPE_NEXT_VARIANT(variant)) {
+-		TYPE_FIELDS(variant) = list;
+-		TYPE_ATTRIBUTES(variant) = copy_list(TYPE_ATTRIBUTES(variant));
+-		TYPE_ATTRIBUTES(variant) = tree_cons(get_identifier("randomize_performed"), NULL_TREE, TYPE_ATTRIBUTES(variant));
+-		TYPE_ATTRIBUTES(variant) = tree_cons(get_identifier("designated_init"), NULL_TREE, TYPE_ATTRIBUTES(variant));
+-		if (has_flexarray)
+-			TYPE_ATTRIBUTES(type) = tree_cons(get_identifier("has_flexarray"), NULL_TREE, TYPE_ATTRIBUTES(type));
+-	}
++	for (variant = main_variant; variant; variant = TYPE_NEXT_VARIANT(variant))
++		TYPE_FIELDS(variant) = newtree[0];
+ 
+ 	/*
+ 	 * force a re-layout of the main variant
+@@ -440,10 +426,8 @@ static void randomize_type(tree type)
+ 	if (lookup_attribute("randomize_layout", TYPE_ATTRIBUTES(TYPE_MAIN_VARIANT(type))) || is_pure_ops_struct(type))
+ 		relayout_struct(type);
+ 
+-	for (variant = TYPE_MAIN_VARIANT(type); variant; variant = TYPE_NEXT_VARIANT(variant)) {
+-		TYPE_ATTRIBUTES(type) = copy_list(TYPE_ATTRIBUTES(type));
+-		TYPE_ATTRIBUTES(type) = tree_cons(get_identifier("randomize_considered"), NULL_TREE, TYPE_ATTRIBUTES(type));
+-	}
++	add_type_attr(type, "randomize_considered", NULL_TREE);
++
+ #ifdef __DEBUG_PLUGIN
+ 	fprintf(stderr, "Marking randomize_considered on struct %s\n", ORIG_TYPE_NAME(type));
+ #ifdef __DEBUG_VERBOSE
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index fe663dd0c43b04..7c3ea2b55041f8 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -19,7 +19,7 @@ def args_crates_cfgs(cfgs):
+ 
+     return crates_cfgs
+ 
+-def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
++def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs, core_edition):
+     # Generate the configuration list.
+     cfg = []
+     with open(objtree / "include" / "generated" / "rustc_cfg") as fd:
+@@ -35,7 +35,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+     crates_indexes = {}
+     crates_cfgs = args_crates_cfgs(cfgs)
+ 
+-    def append_crate(display_name, root_module, deps, cfg=[], is_workspace_member=True, is_proc_macro=False):
++    def append_crate(display_name, root_module, deps, cfg=[], is_workspace_member=True, is_proc_macro=False, edition="2021"):
+         crate = {
+             "display_name": display_name,
+             "root_module": str(root_module),
+@@ -43,7 +43,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+             "is_proc_macro": is_proc_macro,
+             "deps": [{"crate": crates_indexes[dep], "name": dep} for dep in deps],
+             "cfg": cfg,
+-            "edition": "2021",
++            "edition": edition,
+             "env": {
+                 "RUST_MODFILE": "This is only for rust-analyzer"
+             }
+@@ -61,6 +61,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+         display_name,
+         deps,
+         cfg=[],
++        edition="2021",
+     ):
+         append_crate(
+             display_name,
+@@ -68,12 +69,13 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+             deps,
+             cfg,
+             is_workspace_member=False,
++            edition=edition,
+         )
+ 
+     # NB: sysroot crates reexport items from one another so setting up our transitive dependencies
+     # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth
+     # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`.
+-    append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []))
++    append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []), edition=core_edition)
+     append_sysroot_crate("alloc", ["core"])
+     append_sysroot_crate("std", ["alloc", "core"])
+     append_sysroot_crate("proc_macro", ["core", "std"])
+@@ -177,6 +179,7 @@ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--verbose', '-v', action='store_true')
+     parser.add_argument('--cfgs', action='append', default=[])
++    parser.add_argument("core_edition")
+     parser.add_argument("srctree", type=pathlib.Path)
+     parser.add_argument("objtree", type=pathlib.Path)
+     parser.add_argument("sysroot", type=pathlib.Path)
+@@ -193,7 +196,7 @@ def main():
+     assert args.sysroot in args.sysroot_src.parents
+ 
+     rust_project = {
+-        "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs),
++        "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs, args.core_edition),
+         "sysroot": str(args.sysroot),
+     }
+ 
+diff --git a/scripts/genksyms/genksyms.c b/scripts/genksyms/genksyms.c
+index 8b0d7ac73dbb09..83e48670c2fcfb 100644
+--- a/scripts/genksyms/genksyms.c
++++ b/scripts/genksyms/genksyms.c
+@@ -181,13 +181,9 @@ static int is_unknown_symbol(struct symbol *sym)
+ 			strcmp(defn->string, "{") == 0);
+ }
+ 
+-static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+-			    struct string_list *defn, int is_extern,
+-			    int is_reference)
++static struct string_list *process_enum(const char *name, enum symbol_type type,
++					struct string_list *defn)
+ {
+-	unsigned long h;
+-	struct symbol *sym;
+-	enum symbol_status status = STATUS_UNCHANGED;
+ 	/* The parser adds symbols in the order their declaration completes,
+ 	 * so it is safe to store the value of the previous enum constant in
+ 	 * a static variable.
+@@ -216,7 +212,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ 				defn = mk_node(buf);
+ 			}
+ 		}
+-	} else if (type == SYM_ENUM) {
++	} else {
+ 		free_list(last_enum_expr, NULL);
+ 		last_enum_expr = NULL;
+ 		enum_counter = 0;
+@@ -225,6 +221,23 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ 			return NULL;
+ 	}
+ 
++	return defn;
++}
++
++static struct symbol *__add_symbol(const char *name, enum symbol_type type,
++			    struct string_list *defn, int is_extern,
++			    int is_reference)
++{
++	unsigned long h;
++	struct symbol *sym;
++	enum symbol_status status = STATUS_UNCHANGED;
++
++	if ((type == SYM_ENUM_CONST || type == SYM_ENUM) && !is_reference) {
++		defn = process_enum(name, type, defn);
++		if (defn == NULL)
++			return NULL;
++	}
++
+ 	h = crc32(name);
+ 	hash_for_each_possible(symbol_hashtable, sym, hnode, h) {
+ 		if (map_to_ns(sym->type) != map_to_ns(type) ||
+diff --git a/sound/core/seq_device.c b/sound/core/seq_device.c
+index 4492be5d2317c7..bac9f860373425 100644
+--- a/sound/core/seq_device.c
++++ b/sound/core/seq_device.c
+@@ -43,7 +43,7 @@ MODULE_LICENSE("GPL");
+ static int snd_seq_bus_match(struct device *dev, const struct device_driver *drv)
+ {
+ 	struct snd_seq_device *sdev = to_seq_dev(dev);
+-	struct snd_seq_driver *sdrv = to_seq_drv(drv);
++	const struct snd_seq_driver *sdrv = to_seq_drv(drv);
+ 
+ 	return strcmp(sdrv->id, sdev->id) == 0 &&
+ 		sdrv->argsize == sdev->argsize;
+diff --git a/sound/hda/ext/hdac_ext_controller.c b/sound/hda/ext/hdac_ext_controller.c
+index 6199bb60ccf00f..c84754434d1627 100644
+--- a/sound/hda/ext/hdac_ext_controller.c
++++ b/sound/hda/ext/hdac_ext_controller.c
+@@ -9,6 +9,7 @@
+  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <sound/hda_register.h>
+@@ -81,6 +82,7 @@ int snd_hdac_ext_bus_get_ml_capabilities(struct hdac_bus *bus)
+ 	int idx;
+ 	u32 link_count;
+ 	struct hdac_ext_link *hlink;
++	u32 leptr;
+ 
+ 	link_count = readl(bus->mlcap + AZX_REG_ML_MLCD) + 1;
+ 
+@@ -96,6 +98,12 @@ int snd_hdac_ext_bus_get_ml_capabilities(struct hdac_bus *bus)
+ 					(AZX_ML_INTERVAL * idx);
+ 		hlink->lcaps  = readl(hlink->ml_addr + AZX_REG_ML_LCAP);
+ 		hlink->lsdiid = readw(hlink->ml_addr + AZX_REG_ML_LSDIID);
++		hlink->slcount = FIELD_GET(AZX_ML_HDA_LCAP_SLCOUNT, hlink->lcaps) + 1;
++
++		if (hdac_ext_link_alt(hlink)) {
++			leptr = readl(hlink->ml_addr + AZX_REG_ML_LEPTR);
++			hlink->id = FIELD_GET(AZX_REG_ML_LEPTR_ID, leptr);
++		}
+ 
+ 		/* since link in On, update the ref */
+ 		hlink->ref_count = 1;
+@@ -125,6 +133,17 @@ void snd_hdac_ext_link_free_all(struct hdac_bus *bus)
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_ext_link_free_all);
+ 
++struct hdac_ext_link *snd_hdac_ext_bus_get_hlink_by_id(struct hdac_bus *bus, u32 id)
++{
++	struct hdac_ext_link *hlink;
++
++	list_for_each_entry(hlink, &bus->hlink_list, list)
++		if (hdac_ext_link_alt(hlink) && hlink->id == id)
++			return hlink;
++	return NULL;
++}
++EXPORT_SYMBOL_GPL(snd_hdac_ext_bus_get_hlink_by_id);
++
+ /**
+  * snd_hdac_ext_bus_get_hlink_by_addr - get hlink at specified address
+  * @bus: hlink's parent bus device
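
The new controller code extracts register bitfields with FIELD_GET(). A freestanding equivalent of that helper, with invented mask values standing in for AZX_ML_HDA_LCAP_SLCOUNT and AZX_REG_ML_LEPTR_ID:

#include <stdint.h>
#include <stdio.h>

/* Portable FIELD_GET(): shift the masked value down by the mask's
 * lowest set bit, as the kernel macro computes at compile time. */
#define FIELD_GET(mask, reg) (((reg) & (mask)) >> __builtin_ctz(mask))

#define LCAP_SLCOUNT	0x0000f000u	/* hypothetical sublink-count field */
#define LEPTR_ID	0xff000000u	/* hypothetical capability-ID field */

int main(void)
{
	uint32_t lcaps = 0x00003000, leptr = 0xc2000040;

	printf("slcount=%u\n", FIELD_GET(LCAP_SLCOUNT, lcaps) + 1);	/* 4 */
	printf("id=0x%x\n", FIELD_GET(LEPTR_ID, leptr));		/* 0xc2 */
	return 0;
}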
+diff --git a/sound/hda/hda_bus_type.c b/sound/hda/hda_bus_type.c
+index 7545ace7b0ee4b..eb72a7af2e56e8 100644
+--- a/sound/hda/hda_bus_type.c
++++ b/sound/hda/hda_bus_type.c
+@@ -21,7 +21,7 @@ MODULE_LICENSE("GPL");
+  * driver id_table and returns the matching device id entry.
+  */
+ const struct hda_device_id *
+-hdac_get_device_id(struct hdac_device *hdev, struct hdac_driver *drv)
++hdac_get_device_id(struct hdac_device *hdev, const struct hdac_driver *drv)
+ {
+ 	if (drv->id_table) {
+ 		const struct hda_device_id *id  = drv->id_table;
+@@ -38,7 +38,7 @@ hdac_get_device_id(struct hdac_device *hdev, struct hdac_driver *drv)
+ }
+ EXPORT_SYMBOL_GPL(hdac_get_device_id);
+ 
+-static int hdac_codec_match(struct hdac_device *dev, struct hdac_driver *drv)
++static int hdac_codec_match(struct hdac_device *dev, const struct hdac_driver *drv)
+ {
+ 	if (hdac_get_device_id(dev, drv))
+ 		return 1;
+@@ -49,7 +49,7 @@ static int hdac_codec_match(struct hdac_device *dev, struct hdac_driver *drv)
+ static int hda_bus_match(struct device *dev, const struct device_driver *drv)
+ {
+ 	struct hdac_device *hdev = dev_to_hdac_dev(dev);
+-	struct hdac_driver *hdrv = drv_to_hdac_driver(drv);
++	const struct hdac_driver *hdrv = drv_to_hdac_driver(drv);
+ 
+ 	if (hdev->type != hdrv->type)
+ 		return 0;
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 9521e5e0e6e6f8..1fef350d821ef0 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -18,10 +18,10 @@
+ /*
+  * find a matching codec id
+  */
+-static int hda_codec_match(struct hdac_device *dev, struct hdac_driver *drv)
++static int hda_codec_match(struct hdac_device *dev, const struct hdac_driver *drv)
+ {
+ 	struct hda_codec *codec = container_of(dev, struct hda_codec, core);
+-	struct hda_codec_driver *driver =
++	const struct hda_codec_driver *driver =
+ 		container_of(drv, struct hda_codec_driver, core);
+ 	const struct hda_device_id *list;
+ 	/* check probe_id instead of vendor_id if set */
+diff --git a/sound/soc/apple/mca.c b/sound/soc/apple/mca.c
+index b4f4696809dd23..5dd24ab90d0f05 100644
+--- a/sound/soc/apple/mca.c
++++ b/sound/soc/apple/mca.c
+@@ -464,6 +464,28 @@ static int mca_configure_serdes(struct mca_cluster *cl, int serdes_unit,
+ 	return -EINVAL;
+ }
+ 
++static int mca_fe_startup(struct snd_pcm_substream *substream,
++			  struct snd_soc_dai *dai)
++{
++	struct mca_cluster *cl = mca_dai_to_cluster(dai);
++	unsigned int mask, nchannels;
++
++	if (cl->tdm_slots) {
++		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++			mask = cl->tdm_tx_mask;
++		else
++			mask = cl->tdm_rx_mask;
++
++		nchannels = hweight32(mask);
++	} else {
++		nchannels = 2;
++	}
++
++	return snd_pcm_hw_constraint_minmax(substream->runtime,
++					    SNDRV_PCM_HW_PARAM_CHANNELS,
++					    1, nchannels);
++}
++
+ static int mca_fe_set_tdm_slot(struct snd_soc_dai *dai, unsigned int tx_mask,
+ 			       unsigned int rx_mask, int slots, int slot_width)
+ {
+@@ -680,6 +702,7 @@ static int mca_fe_hw_params(struct snd_pcm_substream *substream,
+ }
+ 
+ static const struct snd_soc_dai_ops mca_fe_ops = {
++	.startup = mca_fe_startup,
+ 	.set_fmt = mca_fe_set_fmt,
+ 	.set_bclk_ratio = mca_set_bclk_ratio,
+ 	.set_tdm_slot = mca_fe_set_tdm_slot,
+diff --git a/sound/soc/codecs/hda.c b/sound/soc/codecs/hda.c
+index ddc00927313cfe..dc7794c9ac44ce 100644
+--- a/sound/soc/codecs/hda.c
++++ b/sound/soc/codecs/hda.c
+@@ -152,7 +152,7 @@ int hda_codec_probe_complete(struct hda_codec *codec)
+ 	ret = snd_hda_codec_build_controls(codec);
+ 	if (ret < 0) {
+ 		dev_err(&hdev->dev, "unable to create controls %d\n", ret);
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	/* Bus suspended codecs as it does not manage their pm */
+@@ -160,7 +160,7 @@ int hda_codec_probe_complete(struct hda_codec *codec)
+ 	/* rpm was forbidden in snd_hda_codec_device_new() */
+ 	snd_hda_codec_set_power_save(codec, 2000);
+ 	snd_hda_codec_register(codec);
+-out:
++
+ 	/* Complement pm_runtime_get_sync(bus) in probe */
+ 	pm_runtime_mark_last_busy(bus->dev);
+ 	pm_runtime_put_autosuspend(bus->dev);
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index 08aa7ee3425689..fbfe4d032df7b2 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -546,6 +546,8 @@ static uint8_t sn012776_bop_presets[] = {
+ 	0x06, 0x3e, 0x37, 0x30, 0xff, 0xe6
+ };
+ 
++static const struct regmap_config tas2764_i2c_regmap;
++
+ static int tas2764_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+@@ -559,9 +561,10 @@ static int tas2764_codec_probe(struct snd_soc_component *component)
+ 	}
+ 
+ 	tas2764_reset(tas2764);
++	regmap_reinit_cache(tas2764->regmap, &tas2764_i2c_regmap);
+ 
+ 	if (tas2764->irq) {
+-		ret = snd_soc_component_write(tas2764->component, TAS2764_INT_MASK0, 0xff);
++		ret = snd_soc_component_write(tas2764->component, TAS2764_INT_MASK0, 0x00);
+ 		if (ret < 0)
+ 			return ret;
+ 
+diff --git a/sound/soc/intel/avs/avs.h b/sound/soc/intel/avs/avs.h
+index 585543f872fccc..ec5502f9d5cb1d 100644
+--- a/sound/soc/intel/avs/avs.h
++++ b/sound/soc/intel/avs/avs.h
+@@ -72,6 +72,8 @@ extern const struct avs_dsp_ops avs_tgl_dsp_ops;
+ 
+ #define AVS_PLATATTR_CLDMA		BIT_ULL(0)
+ #define AVS_PLATATTR_IMR		BIT_ULL(1)
++#define AVS_PLATATTR_ACE		BIT_ULL(2)
++#define AVS_PLATATTR_ALTHDA		BIT_ULL(3)
+ 
+ #define avs_platattr_test(adev, attr) \
+ 	((adev)->spec->attributes & AVS_PLATATTR_##attr)
+@@ -79,7 +81,6 @@ extern const struct avs_dsp_ops avs_tgl_dsp_ops;
+ struct avs_sram_spec {
+ 	const u32 base_offset;
+ 	const u32 window_size;
+-	const u32 rom_status_offset;
+ };
+ 
+ struct avs_hipc_spec {
+@@ -91,6 +92,7 @@ struct avs_hipc_spec {
+ 	const u32 rsp_offset;
+ 	const u32 rsp_busy_mask;
+ 	const u32 ctl_offset;
++	const u32 sts_offset;
+ };
+ 
+ /* Platform specific descriptor */
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 8fbf33e30dfc3e..cbbc656fcc3f86 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -54,14 +54,17 @@ void avs_hda_power_gating_enable(struct avs_dev *adev, bool enable)
+ {
+ 	u32 value = enable ? 0 : pgctl_mask;
+ 
+-	avs_hda_update_config_dword(&adev->base.core, AZX_PCIREG_PGCTL, pgctl_mask, value);
++	if (!avs_platattr_test(adev, ACE))
++		avs_hda_update_config_dword(&adev->base.core, AZX_PCIREG_PGCTL, pgctl_mask, value);
+ }
+ 
+ static void avs_hdac_clock_gating_enable(struct hdac_bus *bus, bool enable)
+ {
++	struct avs_dev *adev = hdac_to_avs(bus);
+ 	u32 value = enable ? cgctl_mask : 0;
+ 
+-	avs_hda_update_config_dword(bus, AZX_PCIREG_CGCTL, cgctl_mask, value);
++	if (!avs_platattr_test(adev, ACE))
++		avs_hda_update_config_dword(bus, AZX_PCIREG_CGCTL, cgctl_mask, value);
+ }
+ 
+ void avs_hda_clock_gating_enable(struct avs_dev *adev, bool enable)
+@@ -71,6 +74,8 @@ void avs_hda_clock_gating_enable(struct avs_dev *adev, bool enable)
+ 
+ void avs_hda_l1sen_enable(struct avs_dev *adev, bool enable)
+ {
++	if (avs_platattr_test(adev, ACE))
++		return;
+ 	if (enable) {
+ 		if (atomic_inc_and_test(&adev->l1sen_counter))
+ 			snd_hdac_chip_updatel(&adev->base.core, VS_EM2, AZX_VS_EM2_L1SEN,
+@@ -99,6 +104,7 @@ static int avs_hdac_bus_init_streams(struct hdac_bus *bus)
+ 
+ static bool avs_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ {
++	struct avs_dev *adev = hdac_to_avs(bus);
+ 	struct hdac_ext_link *hlink;
+ 	bool ret;
+ 
+@@ -114,7 +120,8 @@ static bool avs_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ 	/* Set DUM bit to address incorrect position reporting for capture
+ 	 * streams. In order to do so, CTRL needs to be out of reset state
+ 	 */
+-	snd_hdac_chip_updatel(bus, VS_EM2, AZX_VS_EM2_DUM, AZX_VS_EM2_DUM);
++	if (!avs_platattr_test(adev, ACE))
++		snd_hdac_chip_updatel(bus, VS_EM2, AZX_VS_EM2_DUM, AZX_VS_EM2_DUM);
+ 
+ 	return ret;
+ }
+@@ -748,13 +755,11 @@ static const struct dev_pm_ops avs_dev_pm = {
+ static const struct avs_sram_spec skl_sram_spec = {
+ 	.base_offset = SKL_ADSP_SRAM_BASE_OFFSET,
+ 	.window_size = SKL_ADSP_SRAM_WINDOW_SIZE,
+-	.rom_status_offset = SKL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_sram_spec apl_sram_spec = {
+ 	.base_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ 	.window_size = APL_ADSP_SRAM_WINDOW_SIZE,
+-	.rom_status_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_hipc_spec skl_hipc_spec = {
+@@ -766,6 +771,19 @@ static const struct avs_hipc_spec skl_hipc_spec = {
+ 	.rsp_offset = SKL_ADSP_REG_HIPCT,
+ 	.rsp_busy_mask = SKL_ADSP_HIPCT_BUSY,
+ 	.ctl_offset = SKL_ADSP_REG_HIPCCTL,
++	.sts_offset = SKL_ADSP_SRAM_BASE_OFFSET,
++};
++
++static const struct avs_hipc_spec apl_hipc_spec = {
++	.req_offset = SKL_ADSP_REG_HIPCI,
++	.req_ext_offset = SKL_ADSP_REG_HIPCIE,
++	.req_busy_mask = SKL_ADSP_HIPCI_BUSY,
++	.ack_offset = SKL_ADSP_REG_HIPCIE,
++	.ack_done_mask = SKL_ADSP_HIPCIE_DONE,
++	.rsp_offset = SKL_ADSP_REG_HIPCT,
++	.rsp_busy_mask = SKL_ADSP_HIPCT_BUSY,
++	.ctl_offset = SKL_ADSP_REG_HIPCCTL,
++	.sts_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_hipc_spec cnl_hipc_spec = {
+@@ -777,6 +795,7 @@ static const struct avs_hipc_spec cnl_hipc_spec = {
+ 	.rsp_offset = CNL_ADSP_REG_HIPCTDR,
+ 	.rsp_busy_mask = CNL_ADSP_HIPCTDR_BUSY,
+ 	.ctl_offset = CNL_ADSP_REG_HIPCCTL,
++	.sts_offset = APL_ADSP_SRAM_BASE_OFFSET,
+ };
+ 
+ static const struct avs_spec skl_desc = {
+@@ -796,7 +815,7 @@ static const struct avs_spec apl_desc = {
+ 	.core_init_mask = 3,
+ 	.attributes = AVS_PLATATTR_IMR,
+ 	.sram = &apl_sram_spec,
+-	.hipc = &skl_hipc_spec,
++	.hipc = &apl_hipc_spec,
+ };
+ 
+ static const struct avs_spec cnl_desc = {
+@@ -902,13 +921,13 @@ MODULE_AUTHOR("Cezary Rojewski <cezary.rojewski@intel.com>");
+ MODULE_AUTHOR("Amadeusz Slawinski <amadeuszx.slawinski@linux.intel.com>");
+ MODULE_DESCRIPTION("Intel cAVS sound driver");
+ MODULE_LICENSE("GPL");
+-MODULE_FIRMWARE("intel/skl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/apl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/cnl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/icl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/jsl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/lkf/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/tgl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/ehl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/adl/dsp_basefw.bin");
+-MODULE_FIRMWARE("intel/adl_n/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/skl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/apl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/cnl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/icl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/jsl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/lkf/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/tgl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/ehl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/adl/dsp_basefw.bin");
++MODULE_FIRMWARE("intel/avs/adl_n/dsp_basefw.bin");
+diff --git a/sound/soc/intel/avs/debugfs.c b/sound/soc/intel/avs/debugfs.c
+index 8c4edda97f757f..0e826ca20619ca 100644
+--- a/sound/soc/intel/avs/debugfs.c
++++ b/sound/soc/intel/avs/debugfs.c
+@@ -373,7 +373,10 @@ static ssize_t trace_control_write(struct file *file, const char __user *from, s
+ 		return ret;
+ 
+ 	num_elems = *array;
+-	resource_mask = array[1];
++	if (!num_elems) {
++		ret = -EINVAL;
++		goto free_array;
++	}
+ 
+ 	/*
+ 	 * Disable if just resource mask is provided - no log priority flags.
+@@ -381,6 +384,7 @@ static ssize_t trace_control_write(struct file *file, const char __user *from, s
+ 	 * Enable input format:   mask, prio1, .., prioN
+ 	 * Where 'N' equals number of bits set in the 'mask'.
+ 	 */
++	resource_mask = array[1];
+ 	if (num_elems == 1) {
+ 		ret = disable_logs(adev, resource_mask);
+ 	} else {
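
The trace_control_write() fix above moves the array[1] read behind a check that the caller actually supplied elements. A minimal userspace sketch of the same validate-before-index pattern; parse_trace_request() and its input layout are illustrative, not the driver's actual API:

#include <errno.h>
#include <stdio.h>

/* Input layout mirrors the debugfs write: element count first, then a
 * resource mask and optional priorities. array[1] is only meaningful
 * once the count has been checked.
 */
static int parse_trace_request(const unsigned int *array)
{
	unsigned int num_elems = array[0];
	unsigned int resource_mask;

	if (!num_elems)
		return -EINVAL;	/* nothing follows; array[1] may be garbage */

	resource_mask = array[1];
	printf("mask=0x%x across %u element(s)\n", resource_mask, num_elems);
	return 0;
}

int main(void)
{
	unsigned int ok[] = { 1, 0x3 };
	unsigned int bad[] = { 0, 0 };

	parse_trace_request(ok);
	if (parse_trace_request(bad) == -EINVAL)
		puts("rejected empty request");
	return 0;
}
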
+diff --git a/sound/soc/intel/avs/ipc.c b/sound/soc/intel/avs/ipc.c
+index 08ed9d96738a05..0314f9d4ea5f40 100644
+--- a/sound/soc/intel/avs/ipc.c
++++ b/sound/soc/intel/avs/ipc.c
+@@ -169,7 +169,9 @@ static void avs_dsp_exception_caught(struct avs_dev *adev, union avs_notify_msg
+ 
+ 	dev_crit(adev->dev, "communication severed, rebooting dsp..\n");
+ 
+-	cancel_delayed_work_sync(&ipc->d0ix_work);
++	/* Avoid deadlock as the exception may be the response to SET_D0IX. */
++	if (current_work() != &ipc->d0ix_work.work)
++		cancel_delayed_work_sync(&ipc->d0ix_work);
+ 	ipc->in_d0ix = false;
+ 	/* Re-enabled on recovery completion. */
+ 	pm_runtime_disable(adev->dev);
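
cancel_delayed_work_sync() blocks until the handler finishes, so invoking it from that very handler deadlocks; the hunk above uses current_work() to detect that case. A hedged kernel-style sketch of the guard, with my_dev standing in for the driver's device structure:

#include <linux/workqueue.h>

struct my_dev {
	struct delayed_work d0ix_work;
	bool in_d0ix;
};

/* May be reached both from ordinary IPC context and from within
 * d0ix_work itself (e.g. when the work's own request fails).
 */
static void my_dev_recover(struct my_dev *dev)
{
	/* Waiting on ourselves would never return; skip the sync
	 * cancel when we are the running work item.
	 */
	if (current_work() != &dev->d0ix_work.work)
		cancel_delayed_work_sync(&dev->d0ix_work);
	dev->in_d0ix = false;
}
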
+diff --git a/sound/soc/intel/avs/loader.c b/sound/soc/intel/avs/loader.c
+index 0b29941feb0ef0..138e4e9de5e309 100644
+--- a/sound/soc/intel/avs/loader.c
++++ b/sound/soc/intel/avs/loader.c
+@@ -310,7 +310,7 @@ avs_hda_init_rom(struct avs_dev *adev, unsigned int dma_id, bool purge)
+ 	}
+ 
+ 	/* await ROM init */
+-	ret = snd_hdac_adsp_readl_poll(adev, spec->sram->rom_status_offset, reg,
++	ret = snd_hdac_adsp_readl_poll(adev, spec->hipc->sts_offset, reg,
+ 				       (reg & 0xF) == AVS_ROM_INIT_DONE ||
+ 				       (reg & 0xF) == APL_ROM_FW_ENTERED,
+ 				       AVS_ROM_INIT_POLLING_US, APL_ROM_INIT_TIMEOUT_US);
+@@ -683,6 +683,7 @@ int avs_dsp_boot_firmware(struct avs_dev *adev, bool purge)
+ 
+ static int avs_dsp_alloc_resources(struct avs_dev *adev)
+ {
++	struct hdac_ext_link *link;
+ 	int ret, i;
+ 
+ 	ret = avs_ipc_get_hw_config(adev, &adev->hw_cfg);
+@@ -693,6 +694,14 @@ static int avs_dsp_alloc_resources(struct avs_dev *adev)
+ 	if (ret)
+ 		return AVS_IPC_RET(ret);
+ 
++	/* If hw allows, read capabilities directly from it. */
++	if (avs_platattr_test(adev, ALTHDA)) {
++		link = snd_hdac_ext_bus_get_hlink_by_id(&adev->base.core,
++							AZX_REG_ML_LEPTR_ID_INTEL_SSP);
++		if (link)
++			adev->hw_cfg.i2s_caps.ctrl_count = link->slcount;
++	}
++
+ 	adev->core_refs = devm_kcalloc(adev->dev, adev->hw_cfg.dsp_cores,
+ 				       sizeof(*adev->core_refs), GFP_KERNEL);
+ 	adev->lib_names = devm_kcalloc(adev->dev, adev->fw_cfg.max_libs_count,
+diff --git a/sound/soc/intel/avs/path.c b/sound/soc/intel/avs/path.c
+index cafb8c6198bedb..43b3d995391072 100644
+--- a/sound/soc/intel/avs/path.c
++++ b/sound/soc/intel/avs/path.c
+@@ -131,9 +131,11 @@ int avs_path_set_constraint(struct avs_dev *adev, struct avs_tplg_path_template
+ 	list_for_each_entry(path_template, &template->path_list, node)
+ 		i++;
+ 
+-	rlist = kcalloc(i, sizeof(rlist), GFP_KERNEL);
+-	clist = kcalloc(i, sizeof(clist), GFP_KERNEL);
+-	slist = kcalloc(i, sizeof(slist), GFP_KERNEL);
++	rlist = kcalloc(i, sizeof(*rlist), GFP_KERNEL);
++	clist = kcalloc(i, sizeof(*clist), GFP_KERNEL);
++	slist = kcalloc(i, sizeof(*slist), GFP_KERNEL);
++	if (!rlist || !clist || !slist)
++		return -ENOMEM;
+ 
+ 	i = 0;
+ 	list_for_each_entry(path_template, &template->path_list, node) {
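
kcalloc(n, sizeof(rlist)) sized the arrays above by the pointer rather than the element, which under-allocates whenever the element type outgrows a pointer; the hunk also adds the missing allocation-failure check. The same pitfall in plain, runnable C:

#include <stdio.h>
#include <stdlib.h>

struct rule {
	unsigned int min, max, step;
};

int main(void)
{
	struct rule *list;
	size_t n = 8;

	/* Pointer size vs element size: sizeof(list) is typically 8 on
	 * 64-bit while sizeof(*list) is 12 here, so the buggy form
	 * allocates less memory than the loop below will touch.
	 */
	printf("sizeof(list)=%zu sizeof(*list)=%zu\n",
	       sizeof(list), sizeof(*list));

	list = calloc(n, sizeof(*list));	/* the fixed form */
	if (!list)
		return 1;			/* mirrors the new -ENOMEM path */

	for (size_t i = 0; i < n; i++)
		list[i].max = 48000;
	free(list);
	return 0;
}
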
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index d83ef504643bbb..5a2330e4e4225d 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -36,6 +36,7 @@ struct avs_dma_data {
+ 	struct snd_pcm_hw_constraint_list sample_bits_list;
+ 
+ 	struct work_struct period_elapsed_work;
++	struct hdac_ext_link *link;
+ 	struct snd_pcm_substream *substream;
+ };
+ 
+@@ -81,10 +82,8 @@ void avs_period_elapsed(struct snd_pcm_substream *substream)
+ static int hw_rule_param_size(struct snd_pcm_hw_params *params, struct snd_pcm_hw_rule *rule);
+ static int avs_hw_constraints_init(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct snd_pcm_hw_constraint_list *r, *c, *s;
+-	struct avs_tplg_path_template *template;
+ 	struct avs_dma_data *data;
+ 	int ret;
+ 
+@@ -97,8 +96,7 @@ static int avs_hw_constraints_init(struct snd_pcm_substream *substream, struct s
+ 	c = &(data->channels_list);
+ 	s = &(data->sample_bits_list);
+ 
+-	template = avs_dai_find_path_template(dai, !rtd->dai_link->no_pcm, substream->stream);
+-	ret = avs_path_set_constraint(data->adev, template, r, c, s);
++	ret = avs_path_set_constraint(data->adev, data->template, r, c, s);
+ 	if (ret <= 0)
+ 		return ret;
+ 
+@@ -325,32 +323,75 @@ static const struct snd_soc_dai_ops avs_dai_nonhda_be_ops = {
+ 	.trigger = avs_dai_nonhda_be_trigger,
+ };
+ 
+-static int avs_dai_hda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++static int __avs_dai_hda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai,
++				    struct hdac_ext_link *link)
+ {
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ 	struct hdac_ext_stream *link_stream;
+ 	struct avs_dma_data *data;
+-	struct hda_codec *codec;
+ 	int ret;
+ 
+ 	ret = avs_dai_startup(substream, dai);
+ 	if (ret)
+ 		return ret;
+ 
+-	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
+-	link_stream = snd_hdac_ext_stream_assign(&codec->bus->core, substream,
++	data = snd_soc_dai_get_dma_data(dai, substream);
++	link_stream = snd_hdac_ext_stream_assign(&data->adev->base.core, substream,
+ 						 HDAC_EXT_STREAM_TYPE_LINK);
+ 	if (!link_stream) {
+ 		avs_dai_shutdown(substream, dai);
+ 		return -EBUSY;
+ 	}
+ 
+-	data = snd_soc_dai_get_dma_data(dai, substream);
+ 	data->link_stream = link_stream;
+-	substream->runtime->private_data = link_stream;
++	data->link = link;
+ 	return 0;
+ }
+ 
++static int avs_dai_hda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
++	struct hdac_ext_link *link;
++	struct avs_dma_data *data;
++	struct hda_codec *codec;
++	int ret;
++
++	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
++
++	link = snd_hdac_ext_bus_get_hlink_by_addr(&codec->bus->core, codec->core.addr);
++	if (!link)
++		return -EINVAL;
++
++	ret = __avs_dai_hda_be_startup(substream, dai, link);
++	if (!ret) {
++		data = snd_soc_dai_get_dma_data(dai, substream);
++		substream->runtime->private_data = data->link_stream;
++	}
++
++	return ret;
++}
++
++static int avs_dai_i2shda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct avs_dev *adev = to_avs_dev(dai->component->dev);
++	struct hdac_ext_link *link;
++
++	link = snd_hdac_ext_bus_get_hlink_by_id(&adev->base.core, AZX_REG_ML_LEPTR_ID_INTEL_SSP);
++	if (!link)
++		return -EINVAL;
++	return __avs_dai_hda_be_startup(substream, dai, link);
++}
++
++static int avs_dai_dmichda_be_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct avs_dev *adev = to_avs_dev(dai->component->dev);
++	struct hdac_ext_link *link;
++
++	link = snd_hdac_ext_bus_get_hlink_by_id(&adev->base.core, AZX_REG_ML_LEPTR_ID_INTEL_DMIC);
++	if (!link)
++		return -EINVAL;
++	return __avs_dai_hda_be_startup(substream, dai, link);
++}
++
+ static void avs_dai_hda_be_shutdown(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+ 	struct avs_dma_data *data = snd_soc_dai_get_dma_data(dai, substream);
+@@ -360,6 +401,14 @@ static void avs_dai_hda_be_shutdown(struct snd_pcm_substream *substream, struct
+ 	avs_dai_shutdown(substream, dai);
+ }
+ 
++static void avs_dai_althda_be_shutdown(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
++{
++	struct avs_dma_data *data = snd_soc_dai_get_dma_data(dai, substream);
++
++	snd_hdac_ext_stream_release(data->link_stream, HDAC_EXT_STREAM_TYPE_LINK);
++	avs_dai_shutdown(substream, dai);
++}
++
+ static int avs_dai_hda_be_hw_params(struct snd_pcm_substream *substream,
+ 				    struct snd_pcm_hw_params *hw_params, struct snd_soc_dai *dai)
+ {
+@@ -375,13 +424,8 @@ static int avs_dai_hda_be_hw_params(struct snd_pcm_substream *substream,
+ 
+ static int avs_dai_hda_be_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+-	struct avs_dma_data *data;
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+ 	struct hdac_ext_stream *link_stream;
+-	struct hdac_ext_link *link;
+-	struct hda_codec *codec;
+-
+-	dev_dbg(dai->dev, "%s: %s\n", __func__, dai->name);
++	struct avs_dma_data *data;
+ 
+ 	data = snd_soc_dai_get_dma_data(dai, substream);
+ 	if (!data->path)
+@@ -393,54 +437,43 @@ static int avs_dai_hda_be_hw_free(struct snd_pcm_substream *substream, struct sn
+ 	data->path = NULL;
+ 
+ 	/* clear link <-> stream mapping */
+-	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
+-	link = snd_hdac_ext_bus_get_hlink_by_addr(&codec->bus->core, codec->core.addr);
+-	if (!link)
+-		return -EINVAL;
+-
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		snd_hdac_ext_bus_link_clear_stream_id(link, hdac_stream(link_stream)->stream_tag);
++		snd_hdac_ext_bus_link_clear_stream_id(data->link,
++						      hdac_stream(link_stream)->stream_tag);
+ 
+ 	return 0;
+ }
+ 
+ static int avs_dai_hda_be_prepare(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+ {
+-	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+-	struct snd_pcm_runtime *runtime = substream->runtime;
++	struct snd_soc_pcm_runtime *be = snd_soc_substream_to_rtd(substream);
+ 	const struct snd_soc_pcm_stream *stream_info;
+ 	struct hdac_ext_stream *link_stream;
+-	struct hdac_ext_link *link;
++	const struct snd_pcm_hw_params *p;
+ 	struct avs_dma_data *data;
+-	struct hda_codec *codec;
+-	struct hdac_bus *bus;
+ 	unsigned int format_val;
+ 	unsigned int bits;
+ 	int ret;
+ 
+ 	data = snd_soc_dai_get_dma_data(dai, substream);
+ 	link_stream = data->link_stream;
++	p = &be->dpcm[substream->stream].hw_params;
+ 
+ 	if (link_stream->link_prepared)
+ 		return 0;
+ 
+-	codec = dev_to_hda_codec(snd_soc_rtd_to_codec(rtd, 0)->dev);
+-	bus = &codec->bus->core;
+ 	stream_info = snd_soc_dai_get_pcm_stream(dai, substream->stream);
+-	bits = snd_hdac_stream_format_bits(runtime->format, runtime->subformat,
++	bits = snd_hdac_stream_format_bits(params_format(p), params_subformat(p),
+ 					   stream_info->sig_bits);
+-	format_val = snd_hdac_stream_format(runtime->channels, bits, runtime->rate);
++	format_val = snd_hdac_stream_format(params_channels(p), bits, params_rate(p));
+ 
+-	snd_hdac_ext_stream_decouple(bus, link_stream, true);
++	snd_hdac_ext_stream_decouple(&data->adev->base.core, link_stream, true);
+ 	snd_hdac_ext_stream_reset(link_stream);
+ 	snd_hdac_ext_stream_setup(link_stream, format_val);
+ 
+-	link = snd_hdac_ext_bus_get_hlink_by_addr(bus, codec->core.addr);
+-	if (!link)
+-		return -EINVAL;
+-
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		snd_hdac_ext_bus_link_set_stream_id(link, hdac_stream(link_stream)->stream_tag);
++		snd_hdac_ext_bus_link_set_stream_id(data->link,
++						    hdac_stream(link_stream)->stream_tag);
+ 
+ 	ret = avs_dai_prepare(substream, dai);
+ 	if (ret)
+@@ -515,6 +548,26 @@ static const struct snd_soc_dai_ops avs_dai_hda_be_ops = {
+ 	.trigger = avs_dai_hda_be_trigger,
+ };
+ 
++__maybe_unused
++static const struct snd_soc_dai_ops avs_dai_i2shda_be_ops = {
++	.startup = avs_dai_i2shda_be_startup,
++	.shutdown = avs_dai_althda_be_shutdown,
++	.hw_params = avs_dai_hda_be_hw_params,
++	.hw_free = avs_dai_hda_be_hw_free,
++	.prepare = avs_dai_hda_be_prepare,
++	.trigger = avs_dai_hda_be_trigger,
++};
++
++__maybe_unused
++static const struct snd_soc_dai_ops avs_dai_dmichda_be_ops = {
++	.startup = avs_dai_dmichda_be_startup,
++	.shutdown = avs_dai_althda_be_shutdown,
++	.hw_params = avs_dai_hda_be_hw_params,
++	.hw_free = avs_dai_hda_be_hw_free,
++	.prepare = avs_dai_hda_be_prepare,
++	.trigger = avs_dai_hda_be_trigger,
++};
++
+ static int hw_rule_param_size(struct snd_pcm_hw_params *params, struct snd_pcm_hw_rule *rule)
+ {
+ 	struct snd_interval *interval = hw_param_interval(params, rule->var);
+diff --git a/sound/soc/intel/avs/registers.h b/sound/soc/intel/avs/registers.h
+index 368ede05f2cdaa..4db0cdf68ffc7a 100644
+--- a/sound/soc/intel/avs/registers.h
++++ b/sound/soc/intel/avs/registers.h
+@@ -74,7 +74,7 @@
+ #define APL_ADSP_SRAM_WINDOW_SIZE	0x20000
+ 
+ /* Constants used when accessing SRAM, space shared with firmware */
+-#define AVS_FW_REG_BASE(adev)		((adev)->spec->sram->base_offset)
++#define AVS_FW_REG_BASE(adev)		((adev)->spec->hipc->sts_offset)
+ #define AVS_FW_REG_STATUS(adev)		(AVS_FW_REG_BASE(adev) + 0x0)
+ #define AVS_FW_REG_ERROR(adev)		(AVS_FW_REG_BASE(adev) + 0x4)
+ 
+diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359.c b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+index df29a9fa5aee5b..1fa664b56f30fa 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-mt6359.c
++++ b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+@@ -822,12 +822,12 @@ SND_SOC_DAILINK_DEFS(ETDM1_IN_BE,
+ 
+ SND_SOC_DAILINK_DEFS(ETDM2_IN_BE,
+ 		     DAILINK_COMP_ARRAY(COMP_CPU("ETDM2_IN")),
+-		     DAILINK_COMP_ARRAY(COMP_EMPTY()),
++		     DAILINK_COMP_ARRAY(COMP_DUMMY()),
+ 		     DAILINK_COMP_ARRAY(COMP_EMPTY()));
+ 
+ SND_SOC_DAILINK_DEFS(ETDM1_OUT_BE,
+ 		     DAILINK_COMP_ARRAY(COMP_CPU("ETDM1_OUT")),
+-		     DAILINK_COMP_ARRAY(COMP_EMPTY()),
++		     DAILINK_COMP_ARRAY(COMP_DUMMY()),
+ 		     DAILINK_COMP_ARRAY(COMP_EMPTY()));
+ 
+ SND_SOC_DAILINK_DEFS(ETDM2_OUT_BE,
+diff --git a/sound/soc/sof/amd/pci-acp70.c b/sound/soc/sof/amd/pci-acp70.c
+index 8fa1170a2161e9..9108f1139ff2dc 100644
+--- a/sound/soc/sof/amd/pci-acp70.c
++++ b/sound/soc/sof/amd/pci-acp70.c
+@@ -33,6 +33,7 @@ static const struct sof_amd_acp_desc acp70_chip_info = {
+ 	.ext_intr_cntl = ACP70_EXTERNAL_INTR_CNTL,
+ 	.ext_intr_stat	= ACP70_EXT_INTR_STAT,
+ 	.ext_intr_stat1	= ACP70_EXT_INTR_STAT1,
++	.acp_error_stat = ACP70_ERROR_STATUS,
+ 	.dsp_intr_base	= ACP70_DSP_SW_INTR_BASE,
+ 	.acp_sw0_i2s_err_reason = ACP7X_SW0_I2S_ERROR_REASON,
+ 	.sram_pte_offset = ACP70_SRAM_PTE_OFFSET,
+diff --git a/sound/soc/sof/ipc4-pcm.c b/sound/soc/sof/ipc4-pcm.c
+index c09b424ab863d7..8eee3e1aadf932 100644
+--- a/sound/soc/sof/ipc4-pcm.c
++++ b/sound/soc/sof/ipc4-pcm.c
+@@ -784,7 +784,8 @@ static int sof_ipc4_pcm_setup(struct snd_sof_dev *sdev, struct snd_sof_pcm *spcm
+ 
+ 		/* allocate memory for max number of pipeline IDs */
+ 		pipeline_list->pipelines = kcalloc(ipc4_data->max_num_pipelines,
+-						   sizeof(struct snd_sof_widget *), GFP_KERNEL);
++						   sizeof(*pipeline_list->pipelines),
++						   GFP_KERNEL);
+ 		if (!pipeline_list->pipelines) {
+ 			sof_ipc4_pcm_free(sdev, spcm);
+ 			return -ENOMEM;
+diff --git a/sound/soc/ti/omap-hdmi.c b/sound/soc/ti/omap-hdmi.c
+index cf43ac19c4a6d0..55e7cb96858fca 100644
+--- a/sound/soc/ti/omap-hdmi.c
++++ b/sound/soc/ti/omap-hdmi.c
+@@ -361,17 +361,20 @@ static int omap_hdmi_audio_probe(struct platform_device *pdev)
+ 	if (!card->dai_link)
+ 		return -ENOMEM;
+ 
+-	compnent = devm_kzalloc(dev, sizeof(*compnent), GFP_KERNEL);
++	compnent = devm_kzalloc(dev, 2 * sizeof(*compnent), GFP_KERNEL);
+ 	if (!compnent)
+ 		return -ENOMEM;
+-	card->dai_link->cpus		= compnent;
++	card->dai_link->cpus		= &compnent[0];
+ 	card->dai_link->num_cpus	= 1;
+ 	card->dai_link->codecs		= &snd_soc_dummy_dlc;
+ 	card->dai_link->num_codecs	= 1;
++	card->dai_link->platforms	= &compnent[1];
++	card->dai_link->num_platforms	= 1;
+ 
+ 	card->dai_link->name = card->name;
+ 	card->dai_link->stream_name = card->name;
+ 	card->dai_link->cpus->dai_name = dev_name(ad->dssdev);
++	card->dai_link->platforms->name = dev_name(ad->dssdev);
+ 	card->num_links = 1;
+ 	card->dev = dev;
+ 
+diff --git a/sound/usb/implicit.c b/sound/usb/implicit.c
+index 4727043fd74580..77f06da93151e8 100644
+--- a/sound/usb/implicit.c
++++ b/sound/usb/implicit.c
+@@ -57,6 +57,7 @@ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = {
+ 	IMPLICIT_FB_FIXED_DEV(0x31e9, 0x0002, 0x81, 2), /* Solid State Logic SSL2+ */
+ 	IMPLICIT_FB_FIXED_DEV(0x0499, 0x172f, 0x81, 2), /* Steinberg UR22C */
+ 	IMPLICIT_FB_FIXED_DEV(0x0d9a, 0x00df, 0x81, 2), /* RTX6001 */
++	IMPLICIT_FB_FIXED_DEV(0x19f7, 0x000a, 0x84, 3), /* RODE AI-1 */
+ 	IMPLICIT_FB_FIXED_DEV(0x22f0, 0x0006, 0x81, 3), /* Allen&Heath Qu-16 */
+ 	IMPLICIT_FB_FIXED_DEV(0x1686, 0xf029, 0x82, 2), /* Zoom UAC-2 */
+ 	IMPLICIT_FB_FIXED_DEV(0x2466, 0x8003, 0x86, 2), /* Fractal Audio Axe-Fx II */
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index cfed000f243ab9..c3de2b13743500 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1530,6 +1530,7 @@ static void snd_usbmidi_free(struct snd_usb_midi *umidi)
+ 			snd_usbmidi_in_endpoint_delete(ep->in);
+ 	}
+ 	mutex_destroy(&umidi->mutex);
++	timer_shutdown_sync(&umidi->error_timer);
+ 	kfree(umidi);
+ }
+ 
+@@ -1553,7 +1554,7 @@ void snd_usbmidi_disconnect(struct list_head *p)
+ 	spin_unlock_irq(&umidi->disc_lock);
+ 	up_write(&umidi->disc_rwsem);
+ 
+-	timer_delete_sync(&umidi->error_timer);
++	timer_shutdown_sync(&umidi->error_timer);
+ 
+ 	for (i = 0; i < MIDI_MAX_ENDPOINTS; ++i) {
+ 		struct snd_usb_midi_endpoint *ep = &umidi->endpoints[i];
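
Both hunks switch to timer_shutdown_sync(), which not only waits for a running callback like timer_delete_sync() does, but also marks the timer dead so the callback cannot re-arm it afterwards; only then is freeing the containing object safe. A simplified kernel-style sketch, not the actual sound/usb structures:

#include <linux/slab.h>
#include <linux/timer.h>

struct midi_dev {
	struct timer_list error_timer;
};

static void midi_dev_free(struct midi_dev *umidi)
{
	/* timer_delete_sync() would leave a window in which the
	 * callback re-arms the timer after the wait; shutdown makes
	 * any later add/mod a no-op, so kfree() cannot race with it.
	 */
	timer_shutdown_sync(&umidi->error_timer);
	kfree(umidi);
}
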
+diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
+index 1b25c0a95d3f9a..40a9e59c2fd568 100644
+--- a/tools/arch/x86/kcpuid/kcpuid.c
++++ b/tools/arch/x86/kcpuid/kcpuid.c
+@@ -1,11 +1,12 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #define _GNU_SOURCE
+ 
+-#include <stdio.h>
++#include <err.h>
++#include <getopt.h>
+ #include <stdbool.h>
++#include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <getopt.h>
+ 
+ #define ARRAY_SIZE(x)	(sizeof(x) / sizeof((x)[0]))
+ #define min(a, b)	(((a) < (b)) ? (a) : (b))
+@@ -145,14 +146,14 @@ static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
+ 	if (!func->leafs) {
+ 		func->leafs = malloc(sizeof(struct subleaf));
+ 		if (!func->leafs)
+-			perror("malloc func leaf");
++			err(EXIT_FAILURE, NULL);
+ 
+ 		func->nr = 1;
+ 	} else {
+ 		s = func->nr;
+ 		func->leafs = realloc(func->leafs, (s + 1) * sizeof(*leaf));
+ 		if (!func->leafs)
+-			perror("realloc f->leafs");
++			err(EXIT_FAILURE, NULL);
+ 
+ 		func->nr++;
+ 	}
+@@ -211,7 +212,7 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ 
+ 	range = malloc(sizeof(struct cpuid_range));
+ 	if (!range)
+-		perror("malloc range");
++		err(EXIT_FAILURE, NULL);
+ 
+ 	if (input_eax & 0x80000000)
+ 		range->is_ext = true;
+@@ -220,7 +221,7 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
+ 
+ 	range->funcs = malloc(sizeof(struct cpuid_func) * idx_func);
+ 	if (!range->funcs)
+-		perror("malloc range->funcs");
++		err(EXIT_FAILURE, NULL);
+ 
+ 	range->nr = idx_func;
+ 	memset(range->funcs, 0, sizeof(struct cpuid_func) * idx_func);
+@@ -395,8 +396,8 @@ static int parse_line(char *line)
+ 	return 0;
+ 
+ err_exit:
+-	printf("Warning: wrong line format:\n");
+-	printf("\tline[%d]: %s\n", flines, line);
++	warnx("Wrong line format:\n"
++	      "\tline[%d]: %s", flines, line);
+ 	return -1;
+ }
+ 
+@@ -418,10 +419,8 @@ static void parse_text(void)
+ 		file = fopen("./cpuid.csv", "r");
+ 	}
+ 
+-	if (!file) {
+-		printf("Fail to open '%s'\n", filename);
+-		return;
+-	}
++	if (!file)
++		err(EXIT_FAILURE, "%s", filename);
+ 
+ 	while (1) {
+ 		ret = getline(&line, &len, file);
+@@ -530,7 +529,7 @@ static inline struct cpuid_func *index_to_func(u32 index)
+ 	func_idx = index & 0xffff;
+ 
+ 	if ((func_idx + 1) > (u32)range->nr) {
+-		printf("ERR: invalid input index (0x%x)\n", index);
++		warnx("Invalid input index (0x%x)", index);
+ 		return NULL;
+ 	}
+ 	return &range->funcs[func_idx];
+@@ -562,7 +561,7 @@ static void show_info(void)
+ 				return;
+ 			}
+ 
+-			printf("ERR: invalid input subleaf (0x%x)\n", user_sub);
++			warnx("Invalid input subleaf (0x%x)", user_sub);
+ 		}
+ 
+ 		show_func(func);
+@@ -593,15 +592,15 @@ static void setup_platform_cpuid(void)
+ 
+ static void usage(void)
+ {
+-	printf("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
+-		"\t-a|--all             Show both bit flags and complex bit fields info\n"
+-		"\t-b|--bitflags        Show boolean flags only\n"
+-		"\t-d|--detail          Show details of the flag/fields (default)\n"
+-		"\t-f|--flags           Specify the cpuid csv file\n"
+-		"\t-h|--help            Show usage info\n"
+-		"\t-l|--leaf=index      Specify the leaf you want to check\n"
+-		"\t-r|--raw             Show raw cpuid data\n"
+-		"\t-s|--subleaf=sub     Specify the subleaf you want to check\n"
++	warnx("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
++	      "\t-a|--all             Show both bit flags and complex bit fields info\n"
++	      "\t-b|--bitflags        Show boolean flags only\n"
++	      "\t-d|--detail          Show details of the flag/fields (default)\n"
++	      "\t-f|--flags           Specify the CPUID CSV file\n"
++	      "\t-h|--help            Show usage info\n"
++	      "\t-l|--leaf=index      Specify the leaf you want to check\n"
++	      "\t-r|--raw             Show raw CPUID data\n"
++	      "\t-s|--subleaf=sub     Specify the subleaf you want to check"
+ 	);
+ }
+ 
+@@ -652,7 +651,7 @@ static int parse_options(int argc, char *argv[])
+ 			user_sub = strtoul(optarg, NULL, 0);
+ 			break;
+ 		default:
+-			printf("%s: Invalid option '%c'\n", argv[0], optopt);
++			warnx("Invalid option '%c'", optopt);
+ 			return -1;
+ 	}
+ 
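
The kcpuid conversion replaces hand-rolled printf()/perror() diagnostics with err(3)/warnx(3) from <err.h>: err() appends strerror(errno) and terminates, warnx() prints to stderr without errno and returns. A minimal usage sketch:

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *f = fopen("/nonexistent/cpuid.csv", "r");

	if (!f) {
		/* Non-fatal diagnostic, no errno text, keeps running. */
		warnx("no CSV found, using built-in defaults");

		/* Fatal variant, as in the patched parse_text(): prints
		 * "prog: <path>: No such file or directory" and exits.
		 *
		 * err(EXIT_FAILURE, "%s", "/nonexistent/cpuid.csv");
		 */
	}
	return 0;
}
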
+diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
+index f5dd84eb55dcda..cd3fd5155f6ece 100644
+--- a/tools/arch/x86/lib/x86-opcode-map.txt
++++ b/tools/arch/x86/lib/x86-opcode-map.txt
+@@ -35,7 +35,7 @@
+ #  - (!F3) : the last prefix is not 0xF3 (including non-last prefix case)
+ #  - (66&F2): Both 0x66 and 0xF2 prefixes are specified.
+ #
+-# REX2 Prefix
++# REX2 Prefix Superscripts
+ #  - (!REX2): REX2 is not allowed
+ #  - (REX2): REX2 variant e.g. JMPABS
+ 
+@@ -286,10 +286,10 @@ df: ESC
+ # Note: "forced64" is Intel CPU behavior: they ignore 0x66 prefix
+ # in 64-bit mode. AMD CPUs accept 0x66 prefix, it causes RIP truncation
+ # to 16 bits. In 32-bit mode, 0x66 is accepted by both Intel and AMD.
+-e0: LOOPNE/LOOPNZ Jb (f64) (!REX2)
+-e1: LOOPE/LOOPZ Jb (f64) (!REX2)
+-e2: LOOP Jb (f64) (!REX2)
+-e3: JrCXZ Jb (f64) (!REX2)
++e0: LOOPNE/LOOPNZ Jb (f64),(!REX2)
++e1: LOOPE/LOOPZ Jb (f64),(!REX2)
++e2: LOOP Jb (f64),(!REX2)
++e3: JrCXZ Jb (f64),(!REX2)
+ e4: IN AL,Ib (!REX2)
+ e5: IN eAX,Ib (!REX2)
+ e6: OUT Ib,AL (!REX2)
+@@ -298,10 +298,10 @@ e7: OUT Ib,eAX (!REX2)
+ # in "near" jumps and calls is 16-bit. For CALL,
+ # push of return address is 16-bit wide, RSP is decremented by 2
+ # but is not truncated to 16 bits, unlike RIP.
+-e8: CALL Jz (f64) (!REX2)
+-e9: JMP-near Jz (f64) (!REX2)
+-ea: JMP-far Ap (i64) (!REX2)
+-eb: JMP-short Jb (f64) (!REX2)
++e8: CALL Jz (f64),(!REX2)
++e9: JMP-near Jz (f64),(!REX2)
++ea: JMP-far Ap (i64),(!REX2)
++eb: JMP-short Jb (f64),(!REX2)
+ ec: IN AL,DX (!REX2)
+ ed: IN eAX,DX (!REX2)
+ ee: OUT DX,AL (!REX2)
+@@ -478,22 +478,22 @@ AVXcode: 1
+ 7f: movq Qq,Pq | vmovdqa Wx,Vx (66) | vmovdqa32/64 Wx,Vx (66),(evo) | vmovdqu Wx,Vx (F3) | vmovdqu32/64 Wx,Vx (F3),(evo) | vmovdqu8/16 Wx,Vx (F2),(ev)
+ # 0x0f 0x80-0x8f
+ # Note: "forced64" is Intel CPU behavior (see comment about CALL insn).
+-80: JO Jz (f64) (!REX2)
+-81: JNO Jz (f64) (!REX2)
+-82: JB/JC/JNAE Jz (f64) (!REX2)
+-83: JAE/JNB/JNC Jz (f64) (!REX2)
+-84: JE/JZ Jz (f64) (!REX2)
+-85: JNE/JNZ Jz (f64) (!REX2)
+-86: JBE/JNA Jz (f64) (!REX2)
+-87: JA/JNBE Jz (f64) (!REX2)
+-88: JS Jz (f64) (!REX2)
+-89: JNS Jz (f64) (!REX2)
+-8a: JP/JPE Jz (f64) (!REX2)
+-8b: JNP/JPO Jz (f64) (!REX2)
+-8c: JL/JNGE Jz (f64) (!REX2)
+-8d: JNL/JGE Jz (f64) (!REX2)
+-8e: JLE/JNG Jz (f64) (!REX2)
+-8f: JNLE/JG Jz (f64) (!REX2)
++80: JO Jz (f64),(!REX2)
++81: JNO Jz (f64),(!REX2)
++82: JB/JC/JNAE Jz (f64),(!REX2)
++83: JAE/JNB/JNC Jz (f64),(!REX2)
++84: JE/JZ Jz (f64),(!REX2)
++85: JNE/JNZ Jz (f64),(!REX2)
++86: JBE/JNA Jz (f64),(!REX2)
++87: JA/JNBE Jz (f64),(!REX2)
++88: JS Jz (f64),(!REX2)
++89: JNS Jz (f64),(!REX2)
++8a: JP/JPE Jz (f64),(!REX2)
++8b: JNP/JPO Jz (f64),(!REX2)
++8c: JL/JNGE Jz (f64),(!REX2)
++8d: JNL/JGE Jz (f64),(!REX2)
++8e: JLE/JNG Jz (f64),(!REX2)
++8f: JNLE/JG Jz (f64),(!REX2)
+ # 0x0f 0x90-0x9f
+ 90: SETO Eb | kmovw/q Vk,Wk | kmovb/d Vk,Wk (66)
+ 91: SETNO Eb | kmovw/q Mv,Vk | kmovb/d Mv,Vk (66)
+diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
+index 93b139bfb9880a..3f1d6be512151d 100644
+--- a/tools/bpf/bpftool/cgroup.c
++++ b/tools/bpf/bpftool/cgroup.c
+@@ -221,7 +221,7 @@ static int cgroup_has_attached_progs(int cgroup_fd)
+ 	for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++) {
+ 		int count = count_attached_bpf_progs(cgroup_fd, cgroup_attach_types[i]);
+ 
+-		if (count < 0)
++		if (count < 0 && errno != EINVAL)
+ 			return -1;
+ 
+ 		if (count > 0) {
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index afbddea3a39c64..ce1b556dfa90f1 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -17,7 +17,7 @@ endif
+ 
+ # Overrides for the prepare step libraries.
+ HOST_OVERRIDES := AR="$(HOSTAR)" CC="$(HOSTCC)" LD="$(HOSTLD)" ARCH="$(HOSTARCH)" \
+-		  CROSS_COMPILE="" EXTRA_CFLAGS="$(HOSTCFLAGS)"
++		  CROSS_COMPILE="" CLANG_CROSS_FLAGS="" EXTRA_CFLAGS="$(HOSTCFLAGS)"
+ 
+ RM      ?= rm
+ HOSTCC  ?= gcc
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 1f44ca677ad3d6..57bd995ce6afa3 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -87,7 +87,6 @@ FEATURE_TESTS_BASIC :=                  \
+         libtracefs                      \
+         libcpupower                     \
+         libcrypto                       \
+-        libunwind                       \
+         pthread-attr-setaffinity-np     \
+         pthread-barrier     		\
+         reallocarray                    \
+@@ -148,15 +147,12 @@ endif
+ FEATURE_DISPLAY ?=              \
+          libdw                  \
+          glibc                  \
+-         libbfd                 \
+-         libbfd-buildid		\
+          libelf                 \
+          libnuma                \
+          numa_num_possible_cpus \
+          libperl                \
+          libpython              \
+          libcrypto              \
+-         libunwind              \
+          libcapstone            \
+          llvm-perf              \
+          zlib                   \
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index fd404729b1154b..fe5df2a9fe8ee6 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -2051,7 +2051,8 @@ union bpf_attr {
+  * 		untouched (unless **BPF_F_MARK_ENFORCE** is added as well), and
+  * 		for updates resulting in a null checksum the value is set to
+  * 		**CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
+- * 		the checksum is to be computed against a pseudo-header.
++ * 		that the modified header field is part of the pseudo-header.
++ * 		Flag **BPF_F_IPV6** should be set for IPv6 packets.
+  *
+  * 		This helper works in combination with **bpf_csum_diff**\ (),
+  * 		which does not update the checksum in-place, but offers more
+@@ -6068,6 +6069,7 @@ enum {
+ 	BPF_F_PSEUDO_HDR		= (1ULL << 4),
+ 	BPF_F_MARK_MANGLED_0		= (1ULL << 5),
+ 	BPF_F_MARK_ENFORCE		= (1ULL << 6),
++	BPF_F_IPV6			= (1ULL << 7),
+ };
+ 
+ /* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index c0e13cdf966077..b997c68bd94536 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -388,7 +388,13 @@ extern void *bpf_rdonly_cast(const void *obj, __u32 btf_id) __ksym __weak;
+ #define ___arrow10(a, b, c, d, e, f, g, h, i, j) a->b->c->d->e->f->g->h->i->j
+ #define ___arrow(...) ___apply(___arrow, ___narg(__VA_ARGS__))(__VA_ARGS__)
+ 
++#if defined(__clang__) && (__clang_major__ >= 19)
++#define ___type(...) __typeof_unqual__(___arrow(__VA_ARGS__))
++#elif defined(__GNUC__) && (__GNUC__ >= 14)
++#define ___type(...) __typeof_unqual__(___arrow(__VA_ARGS__))
++#else
+ #define ___type(...) typeof(___arrow(__VA_ARGS__))
++#endif
+ 
+ #define ___read(read_fn, dst, src_type, src, accessor)			    \
+ 	read_fn((void *)(dst), sizeof(*(dst)), &((src_type)(src))->accessor)
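
typeof() preserves const/volatile, so deriving destination types from qualified struct members can make locals unwritable; __typeof_unqual__ (GCC 14+, Clang 19+, standardized as typeof_unqual in C23) strips the qualifiers, which is what the version-gated ___type() above selects. A compile-time sketch whose guard mirrors the patch's version checks:

struct ctx {
	const volatile int counter;
};

void demo(struct ctx *c)
{
	/* typeof() keeps the qualifiers, so a local declared with it is
	 * itself 'const volatile int' and cannot be written:
	 *
	 *	typeof(c->counter) a = 0;
	 *	a = 1;	// error: assignment of read-only variable
	 */
#if (defined(__clang__) && __clang_major__ >= 19) || \
    (!defined(__clang__) && defined(__GNUC__) && __GNUC__ >= 14)
	__typeof_unqual__(c->counter) b = c->counter;	/* plain int */

	b++;		/* local stays writable */
	(void)b;
#endif
	(void)c;
}
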
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 6b85060f07b3b4..147964bb64c8f4 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -60,6 +60,8 @@
+ #define BPF_FS_MAGIC		0xcafe4a11
+ #endif
+ 
++#define MAX_EVENT_NAME_LEN	64
++
+ #define BPF_FS_DEFAULT_PATH "/sys/fs/bpf"
+ 
+ #define BPF_INSN_SZ (sizeof(struct bpf_insn))
+@@ -284,7 +286,7 @@ void libbpf_print(enum libbpf_print_level level, const char *format, ...)
+ 	old_errno = errno;
+ 
+ 	va_start(args, format);
+-	__libbpf_pr(level, format, args);
++	print_fn(level, format, args);
+ 	va_end(args);
+ 
+ 	errno = old_errno;
+@@ -896,7 +898,7 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
+ 			return -LIBBPF_ERRNO__FORMAT;
+ 		}
+ 
+-		if (sec_off + prog_sz > sec_sz) {
++		if (sec_off + prog_sz > sec_sz || sec_off + prog_sz < sec_off) {
+ 			pr_warn("sec '%s': program at offset %zu crosses section boundary\n",
+ 				sec_name, sec_off);
+ 			return -LIBBPF_ERRNO__FORMAT;
+@@ -1725,15 +1727,6 @@ static Elf64_Sym *find_elf_var_sym(const struct bpf_object *obj, const char *nam
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-/* Some versions of Android don't provide memfd_create() in their libc
+- * implementation, so avoid complications and just go straight to Linux
+- * syscall.
+- */
+-static int sys_memfd_create(const char *name, unsigned flags)
+-{
+-	return syscall(__NR_memfd_create, name, flags);
+-}
+-
+ #ifndef MFD_CLOEXEC
+ #define MFD_CLOEXEC 0x0001U
+ #endif
+@@ -11121,16 +11114,16 @@ static const char *tracefs_available_filter_functions_addrs(void)
+ 			     : TRACEFS"/available_filter_functions_addrs";
+ }
+ 
+-static void gen_kprobe_legacy_event_name(char *buf, size_t buf_sz,
+-					 const char *kfunc_name, size_t offset)
++static void gen_probe_legacy_event_name(char *buf, size_t buf_sz,
++					const char *name, size_t offset)
+ {
+ 	static int index = 0;
+ 	int i;
+ 
+-	snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx_%d", getpid(), kfunc_name, offset,
+-		 __sync_fetch_and_add(&index, 1));
++	snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(),
++		 __sync_fetch_and_add(&index, 1), name, offset);
+ 
+-	/* sanitize binary_path in the probe name */
++	/* sanitize name in the probe name */
+ 	for (i = 0; buf[i]; i++) {
+ 		if (!isalnum(buf[i]))
+ 			buf[i] = '_';
+@@ -11255,9 +11248,9 @@ int probe_kern_syscall_wrapper(int token_fd)
+ 
+ 		return pfd >= 0 ? 1 : 0;
+ 	} else { /* legacy mode */
+-		char probe_name[128];
++		char probe_name[MAX_EVENT_NAME_LEN];
+ 
+-		gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name), syscall_name, 0);
++		gen_probe_legacy_event_name(probe_name, sizeof(probe_name), syscall_name, 0);
+ 		if (add_kprobe_event_legacy(probe_name, false, syscall_name, 0) < 0)
+ 			return 0;
+ 
+@@ -11313,10 +11306,10 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
+ 					    func_name, offset,
+ 					    -1 /* pid */, 0 /* ref_ctr_off */);
+ 	} else {
+-		char probe_name[256];
++		char probe_name[MAX_EVENT_NAME_LEN];
+ 
+-		gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name),
+-					     func_name, offset);
++		gen_probe_legacy_event_name(probe_name, sizeof(probe_name),
++					    func_name, offset);
+ 
+ 		legacy_probe = strdup(probe_name);
+ 		if (!legacy_probe)
+@@ -11860,20 +11853,6 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
+ 	return ret;
+ }
+ 
+-static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz,
+-					 const char *binary_path, uint64_t offset)
+-{
+-	int i;
+-
+-	snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx", getpid(), binary_path, (size_t)offset);
+-
+-	/* sanitize binary_path in the probe name */
+-	for (i = 0; buf[i]; i++) {
+-		if (!isalnum(buf[i]))
+-			buf[i] = '_';
+-	}
+-}
+-
+ static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
+ 					  const char *binary_path, size_t offset)
+ {
+@@ -12297,13 +12276,14 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
+ 		pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
+ 					    func_offset, pid, ref_ctr_off);
+ 	} else {
+-		char probe_name[PATH_MAX + 64];
++		char probe_name[MAX_EVENT_NAME_LEN];
+ 
+ 		if (ref_ctr_off)
+ 			return libbpf_err_ptr(-EINVAL);
+ 
+-		gen_uprobe_legacy_event_name(probe_name, sizeof(probe_name),
+-					     binary_path, func_offset);
++		gen_probe_legacy_event_name(probe_name, sizeof(probe_name),
++					    strrchr(binary_path, '/') ? : binary_path,
++					    func_offset);
+ 
+ 		legacy_probe = strdup(probe_name);
+ 		if (!legacy_probe)
+@@ -13371,7 +13351,6 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
+ 	attr.config = PERF_COUNT_SW_BPF_OUTPUT;
+ 	attr.type = PERF_TYPE_SOFTWARE;
+ 	attr.sample_type = PERF_SAMPLE_RAW;
+-	attr.sample_period = sample_period;
+ 	attr.wakeup_events = sample_period;
+ 
+ 	p.attr = &attr;
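
The libbpf.c changes above cap legacy probe event names at MAX_EVENT_NAME_LEN (64 bytes) since tracefs rejects longer names, move the unique counter ahead of the potentially truncated function or binary name so truncation can no longer make two probes collide, and feed only the basename of the binary path into uprobe names. A userspace re-creation of the shared helper; single-threaded, so a plain counter stands in for __sync_fetch_and_add():

#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_EVENT_NAME_LEN 64

static void gen_probe_legacy_event_name(char *buf, size_t buf_sz,
					const char *name, size_t offset)
{
	static int index;
	int i;

	/* pid and counter come first; a long name may be cut off but
	 * the prefix stays unique.
	 */
	snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(), index++,
		 name, offset);

	/* tracefs only accepts [A-Za-z0-9_] in event names */
	for (i = 0; buf[i]; i++)
		if (!isalnum((unsigned char)buf[i]))
			buf[i] = '_';
}

int main(void)
{
	const char *path = "/usr/lib/x86_64-linux-gnu/libc.so.6";
	char name[MAX_EVENT_NAME_LEN];

	/* basename only, as the patched uprobe caller now passes */
	gen_probe_legacy_event_name(name, sizeof(name),
				    strrchr(path, '/') ? : path, 0x9d7c0);
	puts(name);	/* e.g. libbpf_1234_0__libc_so_6_0x9d7c0 */
	return 0;
}
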
+diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
+index 76669c73dcd162..477a3b3389a091 100644
+--- a/tools/lib/bpf/libbpf_internal.h
++++ b/tools/lib/bpf/libbpf_internal.h
+@@ -667,6 +667,15 @@ static inline int sys_dup3(int oldfd, int newfd, int flags)
+ 	return syscall(__NR_dup3, oldfd, newfd, flags);
+ }
+ 
++/* Some versions of Android don't provide memfd_create() in their libc
++ * implementation, so avoid complications and just go straight to Linux
++ * syscall.
++ */
++static inline int sys_memfd_create(const char *name, unsigned flags)
++{
++	return syscall(__NR_memfd_create, name, flags);
++}
++
+ /* Point *fixed_fd* to the same file that *tmp_fd* points to.
+  * Regardless of success, *tmp_fd* is closed.
+  * Whatever *fixed_fd* pointed to is closed silently.
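
Moving sys_memfd_create() into the shared internal header lets linker.c (next hunk) use the raw syscall too; some Android libc builds ship the kernel support but not the memfd_create() wrapper. The same trick as a standalone Linux program:

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Bypass libc entirely: invoke the system call by number. */
static int sys_memfd_create(const char *name, unsigned int flags)
{
	return syscall(__NR_memfd_create, name, flags);
}

int main(void)
{
	int fd = sys_memfd_create("demo", 0);

	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}
	printf("anonymous memfd created: fd=%d\n", fd);
	close(fd);
	return 0;
}
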
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 800e0ef09c3787..a469e5d4fee70e 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -573,7 +573,7 @@ int bpf_linker__add_buf(struct bpf_linker *linker, void *buf, size_t buf_sz,
+ 
+ 	snprintf(filename, sizeof(filename), "mem:%p+%zu", buf, buf_sz);
+ 
+-	fd = memfd_create(filename, 0);
++	fd = sys_memfd_create(filename, 0);
+ 	if (fd < 0) {
+ 		ret = -errno;
+ 		pr_warn("failed to create memfd '%s': %s\n", filename, errstr(ret));
+@@ -1376,7 +1376,7 @@ static int linker_append_sec_data(struct bpf_linker *linker, struct src_obj *obj
+ 		} else {
+ 			if (!secs_match(dst_sec, src_sec)) {
+ 				pr_warn("ELF sections %s are incompatible\n", src_sec->sec_name);
+-				return -1;
++				return -EINVAL;
+ 			}
+ 
+ 			/* "license" and "version" sections are deduped */
+@@ -2223,7 +2223,7 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
+ 			}
+ 		} else if (!secs_match(dst_sec, src_sec)) {
+ 			pr_warn("sections %s are not compatible\n", src_sec->sec_name);
+-			return -1;
++			return -EINVAL;
+ 		}
+ 
+ 		/* shdr->sh_link points to SYMTAB */
+diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
+index 975e265eab3bfe..06663f9ea581f9 100644
+--- a/tools/lib/bpf/nlattr.c
++++ b/tools/lib/bpf/nlattr.c
+@@ -63,16 +63,16 @@ static int validate_nla(struct nlattr *nla, int maxtype,
+ 		minlen = nla_attr_minlen[pt->type];
+ 
+ 	if (libbpf_nla_len(nla) < minlen)
+-		return -1;
++		return -EINVAL;
+ 
+ 	if (pt->maxlen && libbpf_nla_len(nla) > pt->maxlen)
+-		return -1;
++		return -EINVAL;
+ 
+ 	if (pt->type == LIBBPF_NLA_STRING) {
+ 		char *data = libbpf_nla_data(nla);
+ 
+ 		if (data[libbpf_nla_len(nla) - 1] != '\0')
+-			return -1;
++			return -EINVAL;
+ 	}
+ 
+ 	return 0;
+@@ -118,19 +118,18 @@ int libbpf_nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head,
+ 		if (policy) {
+ 			err = validate_nla(nla, maxtype, policy);
+ 			if (err < 0)
+-				goto errout;
++				return err;
+ 		}
+ 
+-		if (tb[type])
++		if (tb[type]) {
+ 			pr_warn("Attribute of type %#x found multiple times in message, "
+ 				"previous attribute is being ignored.\n", type);
++		}
+ 
+ 		tb[type] = nla;
+ 	}
+ 
+-	err = 0;
+-errout:
+-	return err;
++	return 0;
+ }
+ 
+ /**
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index b21b12ec88d960..f23bdda737aaa5 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -230,7 +230,8 @@ static bool is_rust_noreturn(const struct symbol *func)
+ 	       str_ends_with(func->name, "_7___rustc17rust_begin_unwind")				||
+ 	       strstr(func->name, "_4core9panicking13assert_failed")					||
+ 	       strstr(func->name, "_4core9panicking11panic_const24panic_const_")			||
+-	       (strstr(func->name, "_4core5slice5index24slice_") &&
++	       (strstr(func->name, "_4core5slice5index") &&
++		strstr(func->name, "slice_") &&
+ 		str_ends_with(func->name, "_fail"));
+ }
+ 
+diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
+index 364b55b00b4841..34af57b8ec2a9c 100644
+--- a/tools/perf/MANIFEST
++++ b/tools/perf/MANIFEST
+@@ -1,8 +1,10 @@
+ COPYING
+ LICENSES/preferred/GPL-2.0
+ arch/arm64/tools/gen-sysreg.awk
++arch/arm64/tools/syscall_64.tbl
+ arch/arm64/tools/sysreg
+ arch/*/include/uapi/asm/bpf_perf_event.h
++include/uapi/asm-generic/Kbuild
+ tools/perf
+ tools/arch
+ tools/scripts
+@@ -25,6 +27,10 @@ tools/lib/str_error_r.c
+ tools/lib/vsprintf.c
+ tools/lib/zalloc.c
+ scripts/bpf_doc.py
++scripts/Kbuild.include
++scripts/Makefile.asm-headers
++scripts/syscall.tbl
++scripts/syscallhdr.sh
+ tools/bpf/bpftool
+ kernel/bpf/disasm.c
+ kernel/bpf/disasm.h
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index b7769a22fe1afa..d1ea7bf449647e 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -560,6 +560,8 @@ ifndef NO_LIBELF
+     ifeq ($(feature-libdebuginfod), 1)
+       CFLAGS += -DHAVE_DEBUGINFOD_SUPPORT
+       EXTLIBS += -ldebuginfod
++    else
++      $(warning No elfutils/debuginfod.h found, no debuginfo server support, please install libdebuginfod-dev/elfutils-debuginfod-client-devel or equivalent)
+     endif
+   endif
+ 
+@@ -625,6 +627,8 @@ endif
+ ifndef NO_LIBUNWIND
+   have_libunwind :=
+ 
++  $(call feature_check,libunwind)
++
+   $(call feature_check,libunwind-x86)
+   ifeq ($(feature-libunwind-x86), 1)
+     $(call detected,CONFIG_LIBUNWIND_X86)
+@@ -649,7 +653,7 @@ ifndef NO_LIBUNWIND
+   endif
+ 
+   ifneq ($(feature-libunwind), 1)
+-    $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR)
++    $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR and set LIBUNWIND=1 in the make command line as it is opt-in now)
+     NO_LOCAL_LIBUNWIND := 1
+   else
+     have_libunwind := 1
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 979d4691221a07..a7ae5637dadeeb 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -1147,7 +1147,8 @@ install-tests: all install-gtk
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
+ 		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+-		$(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
++		$(INSTALL) tests/shell/base_report/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
++		$(INSTALL) tests/shell/base_report/*.txt '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+ 		$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight' ; \
+ 		$(INSTALL) tests/shell/coresight/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight'
+ 	$(Q)$(MAKE) -C tests/shell/coresight install-tests
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index ba20bf7c011d77..d56273a0e241c7 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -3480,7 +3480,7 @@ static struct option __record_options[] = {
+ 		    "sample selected machine registers on interrupt,"
+ 		    " use '-I?' to list register names", parse_intr_regs),
+ 	OPT_CALLBACK_OPTARG(0, "user-regs", &record.opts.sample_user_regs, NULL, "any register",
+-		    "sample selected machine registers on interrupt,"
++		    "sample selected machine registers in user space,"
+ 		    " use '--user-regs=?' to list register names", parse_user_regs),
+ 	OPT_BOOLEAN(0, "running-time", &record.opts.running_time,
+ 		    "Record running/enabled time of read (:S) events"),
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index 6ac51925ea4249..33cce59bdfbdb5 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -1352,7 +1352,7 @@ static const struct syscall_fmt syscall_fmts[] = {
+ 	  .arg = { [0] = { .scnprintf = SCA_FDAT, /* olddirfd */ },
+ 		   [2] = { .scnprintf = SCA_FDAT, /* newdirfd */ },
+ 		   [4] = { .scnprintf = SCA_RENAMEAT2_FLAGS, /* flags */ }, }, },
+-	{ .name	    = "rseq",	    .errpid = true,
++	{ .name	    = "rseq",
+ 	  .arg = { [0] = { .from_user = true /* rseq */, }, }, },
+ 	{ .name	    = "rt_sigaction",
+ 	  .arg = { [0] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, },
+@@ -1376,7 +1376,7 @@ static const struct syscall_fmt syscall_fmts[] = {
+ 	{ .name	    = "sendto",
+ 	  .arg = { [3] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ },
+ 		   [4] = SCA_SOCKADDR_FROM_USER(addr), }, },
+-	{ .name	    = "set_robust_list",	    .errpid = true,
++	{ .name	    = "set_robust_list",
+ 	  .arg = { [0] = { .from_user = true /* head */, }, }, },
+ 	{ .name	    = "set_tid_address", .errpid = true, },
+ 	{ .name	    = "setitimer",
+@@ -2842,7 +2842,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+ 	e_machine = thread__e_machine(thread, trace->host);
+ 	sc = trace__syscall_info(trace, evsel, e_machine, id);
+ 	if (sc == NULL)
+-		return -1;
++		goto out_put;
+ 	ttrace = thread__trace(thread, trace);
+ 	/*
+ 	 * We need to get ttrace just to make sure it is there when syscall__scnprintf_args()
+@@ -3005,8 +3005,8 @@ errno_print: {
+ 	else if (sc->fmt->errpid) {
+ 		struct thread *child = machine__find_thread(trace->host, ret, ret);
+ 
++		fprintf(trace->output, "%ld", ret);
+ 		if (child != NULL) {
+-			fprintf(trace->output, "%ld", ret);
+ 			if (thread__comm_set(child))
+ 				fprintf(trace->output, " (%s)", thread__comm_str(child));
+ 			thread__put(child);
+@@ -4128,10 +4128,13 @@ static int trace__set_filter_loop_pids(struct trace *trace)
+ 		if (!strcmp(thread__comm_str(parent), "sshd") ||
+ 		    strstarts(thread__comm_str(parent), "gnome-terminal")) {
+ 			pids[nr++] = thread__tid(parent);
++			thread__put(parent);
+ 			break;
+ 		}
++		thread__put(thread);
+ 		thread = parent;
+ 	}
++	thread__put(thread);
+ 
+ 	err = evlist__append_tp_filter_pids(trace->evlist, nr, pids);
+ 	if (!err && trace->filter_pids.map)
+diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
+index 121cf61ba1b345..e0b2e7268ef68c 100755
+--- a/tools/perf/scripts/python/exported-sql-viewer.py
++++ b/tools/perf/scripts/python/exported-sql-viewer.py
+@@ -680,7 +680,10 @@ class CallGraphModelBase(TreeModel):
+ 				s = value.replace("%", "\\%")
+ 				s = s.replace("_", "\\_")
+ 				# Translate * and ? into SQL LIKE pattern characters % and _
+-				trans = string.maketrans("*?", "%_")
++				if sys.version_info[0] == 3:
++					trans = str.maketrans("*?", "%_")
++				else:
++					trans = string.maketrans("*?", "%_")
+ 				match = " LIKE '" + str(s).translate(trans) + "'"
+ 			else:
+ 				match = " GLOB '" + str(value) + "'"
+diff --git a/tools/perf/tests/shell/lib/stat_output.sh b/tools/perf/tests/shell/lib/stat_output.sh
+index 4d4aac547f0109..c2ec7881ec1de4 100644
+--- a/tools/perf/tests/shell/lib/stat_output.sh
++++ b/tools/perf/tests/shell/lib/stat_output.sh
+@@ -151,6 +151,11 @@ check_per_socket()
+ check_metric_only()
+ {
+ 	echo -n "Checking $1 output: metric only "
++	if [ "$(uname -m)" = "s390x" ] && ! grep '^facilities' /proc/cpuinfo  | grep -qw 67
++	then
++		echo "[Skip] CPU-measurement counter facility not installed"
++		return
++	fi
+ 	perf stat --metric-only $2 -e instructions,cycles true
+ 	commachecker --metric-only
+ 	echo "[Success]"
+diff --git a/tools/perf/tests/shell/stat+json_output.sh b/tools/perf/tests/shell/stat+json_output.sh
+index a4f257ea839e13..98fb65274ac4f7 100755
+--- a/tools/perf/tests/shell/stat+json_output.sh
++++ b/tools/perf/tests/shell/stat+json_output.sh
+@@ -176,6 +176,11 @@ check_per_socket()
+ check_metric_only()
+ {
+ 	echo -n "Checking json output: metric only "
++	if [ "$(uname -m)" = "s390x" ] && ! grep '^facilities' /proc/cpuinfo  | grep -qw 67
++	then
++		echo "[Skip] CPU-measurement counter facility not installed"
++		return
++	fi
+ 	perf stat -j --metric-only -e instructions,cycles -o "${stat_output}" true
+ 	$PYTHON $pythonchecker --metric-only --file "${stat_output}"
+ 	echo "[Success]"
+diff --git a/tools/perf/tests/switch-tracking.c b/tools/perf/tests/switch-tracking.c
+index 8df3f9d9ffd2b2..6b3aac283c371c 100644
+--- a/tools/perf/tests/switch-tracking.c
++++ b/tools/perf/tests/switch-tracking.c
+@@ -264,7 +264,7 @@ static int compar(const void *a, const void *b)
+ 	const struct event_node *nodeb = b;
+ 	s64 cmp = nodea->event_time - nodeb->event_time;
+ 
+-	return cmp;
++	return cmp < 0 ? -1 : (cmp > 0 ? 1 : 0);
+ }
+ 
+ static int process_events(struct evlist *evlist,
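
The compar() fix above addresses a classic qsort comparator bug: the function must return int, so handing back a 64-bit difference truncates to the low 32 bits, and timestamps exactly 2^32 apart compare as equal (values near the truncation edge can even flip sign). A self-contained demonstration:

#include <stdint.h>
#include <stdio.h>

static int bad_compar(int64_t a, int64_t b)
{
	int64_t cmp = a - b;

	return cmp;	/* implicitly truncated to 32-bit int */
}

static int good_compar(int64_t a, int64_t b)
{
	int64_t cmp = a - b;

	return cmp < 0 ? -1 : (cmp > 0 ? 1 : 0);	/* the patched form */
}

int main(void)
{
	int64_t t1 = 0x200000000LL, t2 = 0x100000000LL;	/* 2^32 apart */

	printf("bad:  %d\n", bad_compar(t1, t2));	/* 0: wrongly "equal" */
	printf("good: %d\n", good_compar(t1, t2));	/* 1: t1 sorts after t2 */
	return 0;
}
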
+diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
+index 35c10509b797f3..992d1c822a97a8 100644
+--- a/tools/perf/ui/browsers/hists.c
++++ b/tools/perf/ui/browsers/hists.c
+@@ -3274,10 +3274,10 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
+ 				/*
+ 				 * No need to set actions->dso here since
+ 				 * it's just to remove the current filter.
+-				 * Ditto for thread below.
+ 				 */
+ 				do_zoom_dso(browser, actions);
+ 			} else if (top == &browser->hists->thread_filter) {
++				actions->thread = thread;
+ 				do_zoom_thread(browser, actions);
+ 			} else if (top == &browser->hists->socket_filter) {
+ 				do_zoom_socket(browser, actions);
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 4e8a9b172fbcc7..9b1011fe482671 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -127,6 +127,7 @@ struct intel_pt {
+ 
+ 	bool single_pebs;
+ 	bool sample_pebs;
++	int pebs_data_src_fmt;
+ 	struct evsel *pebs_evsel;
+ 
+ 	u64 evt_sample_type;
+@@ -175,6 +176,7 @@ enum switch_state {
+ struct intel_pt_pebs_event {
+ 	struct evsel *evsel;
+ 	u64 id;
++	int data_src_fmt;
+ };
+ 
+ struct intel_pt_queue {
+@@ -2272,7 +2274,146 @@ static void intel_pt_add_lbrs(struct branch_stack *br_stack,
+ 	}
+ }
+ 
+-static int intel_pt_do_synth_pebs_sample(struct intel_pt_queue *ptq, struct evsel *evsel, u64 id)
++#define P(a, b) PERF_MEM_S(a, b)
++#define OP_LH (P(OP, LOAD) | P(LVL, HIT))
++#define LEVEL(x) P(LVLNUM, x)
++#define REM P(REMOTE, REMOTE)
++#define SNOOP_NONE_MISS (P(SNOOP, NONE) | P(SNOOP, MISS))
++
++#define PERF_PEBS_DATA_SOURCE_GRT_MAX	0x10
++#define PERF_PEBS_DATA_SOURCE_GRT_MASK	(PERF_PEBS_DATA_SOURCE_GRT_MAX - 1)
++
++/* Based on kernel __intel_pmu_pebs_data_source_grt() and pebs_data_source */
++static const u64 pebs_data_source_grt[PERF_PEBS_DATA_SOURCE_GRT_MAX] = {
++	P(OP, LOAD) | P(LVL, MISS) | LEVEL(L3) | P(SNOOP, NA),         /* L3 miss|SNP N/A */
++	OP_LH | P(LVL, L1)  | LEVEL(L1)  | P(SNOOP, NONE),             /* L1 hit|SNP None */
++	OP_LH | P(LVL, LFB) | LEVEL(LFB) | P(SNOOP, NONE),             /* LFB/MAB hit|SNP None */
++	OP_LH | P(LVL, L2)  | LEVEL(L2)  | P(SNOOP, NONE),             /* L2 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, NONE),             /* L3 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HIT),              /* L3 hit|SNP Hit */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HITM),             /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HITM),             /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOPX, FWD),             /* L3 hit|SNP Fwd */
++	OP_LH | P(LVL, REM_CCE1) | REM | LEVEL(L3) | P(SNOOP, HITM),   /* Remote L3 hit|SNP HitM */
++	OP_LH | P(LVL, LOC_RAM)  | LEVEL(RAM) | P(SNOOP, HIT),         /* RAM hit|SNP Hit */
++	OP_LH | P(LVL, REM_RAM1) | REM | LEVEL(L3) | P(SNOOP, HIT),    /* Remote L3 hit|SNP Hit */
++	OP_LH | P(LVL, LOC_RAM)  | LEVEL(RAM) | SNOOP_NONE_MISS,       /* RAM hit|SNP None or Miss */
++	OP_LH | P(LVL, REM_RAM1) | LEVEL(RAM) | REM | SNOOP_NONE_MISS, /* Remote RAM hit|SNP None or Miss */
++	OP_LH | P(LVL, IO)  | LEVEL(NA) | P(SNOOP, NONE),              /* I/O hit|SNP None */
++	OP_LH | P(LVL, UNC) | LEVEL(NA) | P(SNOOP, NONE),              /* Uncached hit|SNP None */
++};
++
++/* Based on kernel __intel_pmu_pebs_data_source_cmt() and pebs_data_source */
++static const u64 pebs_data_source_cmt[PERF_PEBS_DATA_SOURCE_GRT_MAX] = {
++	P(OP, LOAD) | P(LVL, MISS) | LEVEL(L3) | P(SNOOP, NA),       /* L3 miss|SNP N/A */
++	OP_LH | P(LVL, L1)  | LEVEL(L1)  | P(SNOOP, NONE),           /* L1 hit|SNP None */
++	OP_LH | P(LVL, LFB) | LEVEL(LFB) | P(SNOOP, NONE),           /* LFB/MAB hit|SNP None */
++	OP_LH | P(LVL, L2)  | LEVEL(L2)  | P(SNOOP, NONE),           /* L2 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, NONE),           /* L3 hit|SNP None */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, MISS),           /* L3 hit|SNP Hit */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HIT),            /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOPX, FWD),           /* L3 hit|SNP HitM */
++	OP_LH | P(LVL, L3)  | LEVEL(L3)  | P(SNOOP, HITM),           /* L3 hit|SNP Fwd */
++	OP_LH | P(LVL, REM_CCE1) | REM | LEVEL(L3) | P(SNOOP, HITM), /* Remote L3 hit|SNP HitM */
++	OP_LH | P(LVL, LOC_RAM)  | LEVEL(RAM) | P(SNOOP, NONE),      /* RAM hit|SNP Hit */
++	OP_LH | LEVEL(RAM) | REM | P(SNOOP, NONE),                   /* Remote L3 hit|SNP Hit */
++	OP_LH | LEVEL(RAM) | REM | P(SNOOPX, FWD),                   /* RAM hit|SNP None or Miss */
++	OP_LH | LEVEL(RAM) | REM | P(SNOOP, HITM),                   /* Remote RAM hit|SNP None or Miss */
++	OP_LH | P(LVL, IO)  | LEVEL(NA) | P(SNOOP, NONE),            /* I/O hit|SNP None */
++	OP_LH | P(LVL, UNC) | LEVEL(NA) | P(SNOOP, NONE),            /* Uncached hit|SNP None */
++};
++
++/* Based on kernel pebs_set_tlb_lock() */
++static inline void pebs_set_tlb_lock(u64 *val, bool tlb, bool lock)
++{
++	/*
++	 * TLB access
++	 * 0 = did not miss 2nd level TLB
++	 * 1 = missed 2nd level TLB
++	 */
++	if (tlb)
++		*val |= P(TLB, MISS) | P(TLB, L2);
++	else
++		*val |= P(TLB, HIT) | P(TLB, L1) | P(TLB, L2);
++
++	/* locked prefix */
++	if (lock)
++		*val |= P(LOCK, LOCKED);
++}
++
++/* Based on kernel __grt_latency_data() */
++static u64 intel_pt_grt_latency_data(u8 dse, bool tlb, bool lock, bool blk,
++				     const u64 *pebs_data_source)
++{
++	u64 val;
++
++	dse &= PERF_PEBS_DATA_SOURCE_GRT_MASK;
++	val = pebs_data_source[dse];
++
++	pebs_set_tlb_lock(&val, tlb, lock);
++
++	if (blk)
++		val |= P(BLK, DATA);
++	else
++		val |= P(BLK, NA);
++
++	return val;
++}
++
++/* Default value for data source */
++#define PERF_MEM_NA (PERF_MEM_S(OP, NA)    |\
++		     PERF_MEM_S(LVL, NA)   |\
++		     PERF_MEM_S(SNOOP, NA) |\
++		     PERF_MEM_S(LOCK, NA)  |\
++		     PERF_MEM_S(TLB, NA)   |\
++		     PERF_MEM_S(LVLNUM, NA))
++
++enum DATA_SRC_FORMAT {
++	DATA_SRC_FORMAT_ERR  = -1,
++	DATA_SRC_FORMAT_NA   =  0,
++	DATA_SRC_FORMAT_GRT  =  1,
++	DATA_SRC_FORMAT_CMT  =  2,
++};
++
++/* Based on kernel grt_latency_data() and cmt_latency_data */
++static u64 intel_pt_get_data_src(u64 mem_aux_info, int data_src_fmt)
++{
++	switch (data_src_fmt) {
++	case DATA_SRC_FORMAT_GRT: {
++		union {
++			u64 val;
++			struct {
++				unsigned int dse:4;
++				unsigned int locked:1;
++				unsigned int stlb_miss:1;
++				unsigned int fwd_blk:1;
++				unsigned int reserved:25;
++			};
++		} x = {.val = mem_aux_info};
++		return intel_pt_grt_latency_data(x.dse, x.stlb_miss, x.locked, x.fwd_blk,
++						 pebs_data_source_grt);
++	}
++	case DATA_SRC_FORMAT_CMT: {
++		union {
++			u64 val;
++			struct {
++				unsigned int dse:5;
++				unsigned int locked:1;
++				unsigned int stlb_miss:1;
++				unsigned int fwd_blk:1;
++				unsigned int reserved:24;
++			};
++		} x = {.val = mem_aux_info};
++		return intel_pt_grt_latency_data(x.dse, x.stlb_miss, x.locked, x.fwd_blk,
++						 pebs_data_source_cmt);
++	}
++	default:
++		return PERF_MEM_NA;
++	}
++}
++
++static int intel_pt_do_synth_pebs_sample(struct intel_pt_queue *ptq, struct evsel *evsel,
++					 u64 id, int data_src_fmt)
+ {
+ 	const struct intel_pt_blk_items *items = &ptq->state->items;
+ 	struct perf_sample sample;
+@@ -2393,6 +2534,18 @@ static int intel_pt_do_synth_pebs_sample(struct intel_pt_queue *ptq, struct evse
+ 		}
+ 	}
+ 
++	if (sample_type & PERF_SAMPLE_DATA_SRC) {
++		if (items->has_mem_aux_info && data_src_fmt) {
++			if (data_src_fmt < 0) {
++				pr_err("Intel PT missing data_src info\n");
++				return -1;
++			}
++			sample.data_src = intel_pt_get_data_src(items->mem_aux_info, data_src_fmt);
++		} else {
++			sample.data_src = PERF_MEM_NA;
++		}
++	}
++
+ 	if (sample_type & PERF_SAMPLE_TRANSACTION && items->has_tsx_aux_info) {
+ 		u64 ax = items->has_rax ? items->rax : 0;
+ 		/* Refer kernel's intel_hsw_transaction() */
+@@ -2413,9 +2566,10 @@ static int intel_pt_synth_single_pebs_sample(struct intel_pt_queue *ptq)
+ {
+ 	struct intel_pt *pt = ptq->pt;
+ 	struct evsel *evsel = pt->pebs_evsel;
++	int data_src_fmt = pt->pebs_data_src_fmt;
+ 	u64 id = evsel->core.id[0];
+ 
+-	return intel_pt_do_synth_pebs_sample(ptq, evsel, id);
++	return intel_pt_do_synth_pebs_sample(ptq, evsel, id, data_src_fmt);
+ }
+ 
+ static int intel_pt_synth_pebs_sample(struct intel_pt_queue *ptq)
+@@ -2440,7 +2594,7 @@ static int intel_pt_synth_pebs_sample(struct intel_pt_queue *ptq)
+ 				       hw_id);
+ 			return intel_pt_synth_single_pebs_sample(ptq);
+ 		}
+-		err = intel_pt_do_synth_pebs_sample(ptq, pe->evsel, pe->id);
++		err = intel_pt_do_synth_pebs_sample(ptq, pe->evsel, pe->id, pe->data_src_fmt);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -3407,6 +3561,49 @@ static int intel_pt_process_itrace_start(struct intel_pt *pt,
+ 					event->itrace_start.tid);
+ }
+ 
++/*
++ * Events with data_src are identified by L1_Hit_Indication
++ * refer https://github.com/intel/perfmon
++ */
++static int intel_pt_data_src_fmt(struct intel_pt *pt, struct evsel *evsel)
++{
++	struct perf_env *env = pt->machine->env;
++	int fmt = DATA_SRC_FORMAT_NA;
++
++	if (!env->cpuid)
++		return DATA_SRC_FORMAT_ERR;
++
++	/*
++	 * PEBS-via-PT is only supported on E-core non-hybrid. Of those only
++	 * Gracemont and Crestmont have data_src. Check for:
++	 *	Alderlake N   (Gracemont)
++	 *	Sierra Forest (Crestmont)
++	 *	Grand Ridge   (Crestmont)
++	 */
++
++	if (!strncmp(env->cpuid, "GenuineIntel,6,190,", 19))
++		fmt = DATA_SRC_FORMAT_GRT;
++
++	if (!strncmp(env->cpuid, "GenuineIntel,6,175,", 19) ||
++	    !strncmp(env->cpuid, "GenuineIntel,6,182,", 19))
++		fmt = DATA_SRC_FORMAT_CMT;
++
++	if (fmt == DATA_SRC_FORMAT_NA)
++		return fmt;
++
++	/*
++	 * Only data_src events are:
++	 *	mem-loads	event=0xd0,umask=0x5
++	 *	mem-stores	event=0xd0,umask=0x6
++	 */
++	if (evsel->core.attr.type == PERF_TYPE_RAW &&
++	    ((evsel->core.attr.config & 0xffff) == 0x5d0 ||
++	     (evsel->core.attr.config & 0xffff) == 0x6d0))
++		return fmt;
++
++	return DATA_SRC_FORMAT_NA;
++}
++
+ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
+ 					     union perf_event *event,
+ 					     struct perf_sample *sample)
+@@ -3427,6 +3624,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
+ 
+ 	ptq->pebs[hw_id].evsel = evsel;
+ 	ptq->pebs[hw_id].id = sample->id;
++	ptq->pebs[hw_id].data_src_fmt = intel_pt_data_src_fmt(pt, evsel);
+ 
+ 	return 0;
+ }
+@@ -3976,6 +4174,7 @@ static void intel_pt_setup_pebs_events(struct intel_pt *pt)
+ 			}
+ 			pt->single_pebs = true;
+ 			pt->sample_pebs = true;
++			pt->pebs_data_src_fmt = intel_pt_data_src_fmt(pt, evsel);
+ 			pt->pebs_evsel = evsel;
+ 		}
+ 	}
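
intel_pt_get_data_src() above decodes the raw PEBS mem_aux_info word by overlaying named bitfields on a u64 through an anonymous-struct union; the dse field then indexes the per-CPU data-source table. A standalone decode of the GRT layout, decode only, without the latency-data translation:

#include <stdint.h>
#include <stdio.h>

union grt_mem_aux {
	uint64_t val;
	struct {
		unsigned int dse:4;	 /* index into the data-source table */
		unsigned int locked:1;	 /* locked prefix seen */
		unsigned int stlb_miss:1;/* missed 2nd-level TLB */
		unsigned int fwd_blk:1;	 /* forward-blocked */
		unsigned int reserved:25;
	};
};

int main(void)
{
	/* 0x23 = dse 3 (L2 hit in the GRT table) with stlb_miss set */
	union grt_mem_aux x = { .val = 0x23 };

	printf("dse=%u locked=%u stlb_miss=%u fwd_blk=%u\n",
	       x.dse, x.locked, x.stlb_miss, x.fwd_blk);
	return 0;
}
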
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 2531b373f2cf7c..b048165b10c141 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1976,7 +1976,7 @@ static void ip__resolve_ams(struct thread *thread,
+ 	 * Thus, we have to try consecutively until we find a match
+ 	 * or else, the symbol is unknown
+ 	 */
+-	thread__find_cpumode_addr_location(thread, ip, &al);
++	thread__find_cpumode_addr_location(thread, ip, /*symbols=*/true, &al);
+ 
+ 	ams->addr = ip;
+ 	ams->al_addr = al.addr;
+@@ -2078,7 +2078,7 @@ static int add_callchain_ip(struct thread *thread,
+ 	al.sym = NULL;
+ 	al.srcline = NULL;
+ 	if (!cpumode) {
+-		thread__find_cpumode_addr_location(thread, ip, &al);
++		thread__find_cpumode_addr_location(thread, ip, symbols, &al);
+ 	} else {
+ 		if (ip >= PERF_CONTEXT_MAX) {
+ 			switch (ip) {
+@@ -2106,6 +2106,8 @@ static int add_callchain_ip(struct thread *thread,
+ 		}
+ 		if (symbols)
+ 			thread__find_symbol(thread, *cpumode, ip, &al);
++		else
++			thread__find_map(thread, *cpumode, ip, &al);
+ 	}
+ 
+ 	if (al.sym != NULL) {
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index b7ebac5ab1d112..e2e3969e12d360 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -2052,6 +2052,9 @@ static bool perf_pmu___name_match(const struct perf_pmu *pmu, const char *to_mat
+ 	for (size_t i = 0; i < ARRAY_SIZE(names); i++) {
+ 		const char *name = names[i];
+ 
++		if (!name)
++			continue;
++
+ 		if (wildcard && perf_pmu__match_wildcard_uncore(name, to_match))
+ 			return true;
+ 		if (!wildcard && perf_pmu__match_ignoring_suffix_uncore(name, to_match))
+diff --git a/tools/perf/util/symbol-minimal.c b/tools/perf/util/symbol-minimal.c
+index c6f369b5d893f3..36c1d3090689fc 100644
+--- a/tools/perf/util/symbol-minimal.c
++++ b/tools/perf/util/symbol-minimal.c
+@@ -90,11 +90,23 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ {
+ 	FILE *fp;
+ 	int ret = -1;
+-	bool need_swap = false;
++	bool need_swap = false, elf32;
+ 	u8 e_ident[EI_NIDENT];
+-	size_t buf_size;
+-	void *buf;
+ 	int i;
++	union {
++		struct {
++			Elf32_Ehdr ehdr32;
++			Elf32_Phdr *phdr32;
++		};
++		struct {
++			Elf64_Ehdr ehdr64;
++			Elf64_Phdr *phdr64;
++		};
++	} hdrs;
++	void *phdr;
++	size_t phdr_size;
++	void *buf = NULL;
++	size_t buf_size = 0;
+ 
+ 	fp = fopen(filename, "r");
+ 	if (fp == NULL)
+@@ -108,117 +120,79 @@ int filename__read_build_id(const char *filename, struct build_id *bid)
+ 		goto out;
+ 
+ 	need_swap = check_need_swap(e_ident[EI_DATA]);
++	elf32 = e_ident[EI_CLASS] == ELFCLASS32;
+ 
+-	/* for simplicity */
+-	fseek(fp, 0, SEEK_SET);
+-
+-	if (e_ident[EI_CLASS] == ELFCLASS32) {
+-		Elf32_Ehdr ehdr;
+-		Elf32_Phdr *phdr;
+-
+-		if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1)
+-			goto out;
++	if (fread(elf32 ? (void *)&hdrs.ehdr32 : (void *)&hdrs.ehdr64,
++		  elf32 ? sizeof(hdrs.ehdr32) : sizeof(hdrs.ehdr64),
++		  1, fp) != 1)
++		goto out;
+ 
+-		if (need_swap) {
+-			ehdr.e_phoff = bswap_32(ehdr.e_phoff);
+-			ehdr.e_phentsize = bswap_16(ehdr.e_phentsize);
+-			ehdr.e_phnum = bswap_16(ehdr.e_phnum);
++	if (need_swap) {
++		if (elf32) {
++			hdrs.ehdr32.e_phoff = bswap_32(hdrs.ehdr32.e_phoff);
++			hdrs.ehdr32.e_phentsize = bswap_16(hdrs.ehdr32.e_phentsize);
++			hdrs.ehdr32.e_phnum = bswap_16(hdrs.ehdr32.e_phnum);
++		} else {
++			hdrs.ehdr64.e_phoff = bswap_64(hdrs.ehdr64.e_phoff);
++			hdrs.ehdr64.e_phentsize = bswap_16(hdrs.ehdr64.e_phentsize);
++			hdrs.ehdr64.e_phnum = bswap_16(hdrs.ehdr64.e_phnum);
+ 		}
++	}
++	phdr_size = elf32 ? hdrs.ehdr32.e_phentsize * hdrs.ehdr32.e_phnum
++			  : hdrs.ehdr64.e_phentsize * hdrs.ehdr64.e_phnum;
++	phdr = malloc(phdr_size);
++	if (phdr == NULL)
++		goto out;
+ 
+-		buf_size = ehdr.e_phentsize * ehdr.e_phnum;
+-		buf = malloc(buf_size);
+-		if (buf == NULL)
+-			goto out;
+-
+-		fseek(fp, ehdr.e_phoff, SEEK_SET);
+-		if (fread(buf, buf_size, 1, fp) != 1)
+-			goto out_free;
+-
+-		for (i = 0, phdr = buf; i < ehdr.e_phnum; i++, phdr++) {
+-			void *tmp;
+-			long offset;
+-
+-			if (need_swap) {
+-				phdr->p_type = bswap_32(phdr->p_type);
+-				phdr->p_offset = bswap_32(phdr->p_offset);
+-				phdr->p_filesz = bswap_32(phdr->p_filesz);
+-			}
+-
+-			if (phdr->p_type != PT_NOTE)
+-				continue;
+-
+-			buf_size = phdr->p_filesz;
+-			offset = phdr->p_offset;
+-			tmp = realloc(buf, buf_size);
+-			if (tmp == NULL)
+-				goto out_free;
+-
+-			buf = tmp;
+-			fseek(fp, offset, SEEK_SET);
+-			if (fread(buf, buf_size, 1, fp) != 1)
+-				goto out_free;
++	fseek(fp, elf32 ? hdrs.ehdr32.e_phoff : hdrs.ehdr64.e_phoff, SEEK_SET);
++	if (fread(phdr, phdr_size, 1, fp) != 1)
++		goto out_free;
+ 
+-			ret = read_build_id(buf, buf_size, bid, need_swap);
+-			if (ret == 0) {
+-				ret = bid->size;
+-				break;
+-			}
+-		}
+-	} else {
+-		Elf64_Ehdr ehdr;
+-		Elf64_Phdr *phdr;
++	if (elf32)
++		hdrs.phdr32 = phdr;
++	else
++		hdrs.phdr64 = phdr;
+ 
+-		if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1)
+-			goto out;
++	for (i = 0; i < (elf32 ? hdrs.ehdr32.e_phnum : hdrs.ehdr64.e_phnum); i++) {
++		size_t p_filesz;
+ 
+ 		if (need_swap) {
+-			ehdr.e_phoff = bswap_64(ehdr.e_phoff);
+-			ehdr.e_phentsize = bswap_16(ehdr.e_phentsize);
+-			ehdr.e_phnum = bswap_16(ehdr.e_phnum);
++			if (elf32) {
++				hdrs.phdr32[i].p_type = bswap_32(hdrs.phdr32[i].p_type);
++				hdrs.phdr32[i].p_offset = bswap_32(hdrs.phdr32[i].p_offset);
++				hdrs.phdr32[i].p_filesz = bswap_32(hdrs.phdr32[i].p_filesz);
++			} else {
++				hdrs.phdr64[i].p_type = bswap_32(hdrs.phdr64[i].p_type);
++				hdrs.phdr64[i].p_offset = bswap_64(hdrs.phdr64[i].p_offset);
++				hdrs.phdr64[i].p_filesz = bswap_64(hdrs.phdr64[i].p_filesz);
++			}
+ 		}
++		if ((elf32 ? hdrs.phdr32[i].p_type : hdrs.phdr64[i].p_type) != PT_NOTE)
++			continue;
+ 
+-		buf_size = ehdr.e_phentsize * ehdr.e_phnum;
+-		buf = malloc(buf_size);
+-		if (buf == NULL)
+-			goto out;
+-
+-		fseek(fp, ehdr.e_phoff, SEEK_SET);
+-		if (fread(buf, buf_size, 1, fp) != 1)
+-			goto out_free;
+-
+-		for (i = 0, phdr = buf; i < ehdr.e_phnum; i++, phdr++) {
++		p_filesz = elf32 ? hdrs.phdr32[i].p_filesz : hdrs.phdr64[i].p_filesz;
++		if (p_filesz > buf_size) {
+ 			void *tmp;
+-			long offset;
+-
+-			if (need_swap) {
+-				phdr->p_type = bswap_32(phdr->p_type);
+-				phdr->p_offset = bswap_64(phdr->p_offset);
+-				phdr->p_filesz = bswap_64(phdr->p_filesz);
+-			}
+ 
+-			if (phdr->p_type != PT_NOTE)
+-				continue;
+-
+-			buf_size = phdr->p_filesz;
+-			offset = phdr->p_offset;
++			buf_size = p_filesz;
+ 			tmp = realloc(buf, buf_size);
+ 			if (tmp == NULL)
+ 				goto out_free;
+-
+ 			buf = tmp;
+-			fseek(fp, offset, SEEK_SET);
+-			if (fread(buf, buf_size, 1, fp) != 1)
+-				goto out_free;
++		}
++		fseek(fp, elf32 ? hdrs.phdr32[i].p_offset : hdrs.phdr64[i].p_offset, SEEK_SET);
++		if (fread(buf, p_filesz, 1, fp) != 1)
++			goto out_free;
+ 
+-			ret = read_build_id(buf, buf_size, bid, need_swap);
+-			if (ret == 0) {
+-				ret = bid->size;
+-				break;
+-			}
++		ret = read_build_id(buf, p_filesz, bid, need_swap);
++		if (ret == 0) {
++			ret = bid->size;
++			break;
+ 		}
+ 	}
+ out_free:
+ 	free(buf);
++	free(phdr);
+ out:
+ 	fclose(fp);
+ 	return ret;
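
The refactor above folds the ELFCLASS32 and ELFCLASS64 paths into one loop by
deciding class and byte order once, up front, from e_ident. A minimal
standalone illustration of that first step (independent of perf's helpers;
error handling trimmed):

    #include <elf.h>
    #include <endian.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch: read e_ident and report the file's class and byte order.
     * Returns 0 on success; *elf32 is set for ELFCLASS32 objects and
     * *swap when the file's byte order differs from the host's. */
    static int elf_ident(FILE *fp, int *elf32, int *swap)
    {
            unsigned char e_ident[EI_NIDENT];

            if (fread(e_ident, sizeof(e_ident), 1, fp) != 1)
                    return -1;
            if (memcmp(e_ident, ELFMAG, SELFMAG) != 0)
                    return -1;

            *elf32 = e_ident[EI_CLASS] == ELFCLASS32;
    #if __BYTE_ORDER == __LITTLE_ENDIAN
            *swap = e_ident[EI_DATA] != ELFDATA2LSB;
    #else
            *swap = e_ident[EI_DATA] != ELFDATA2MSB;
    #endif
            return 0;
    }
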
+diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
+index 89585f53c1d5cc..10a01f8fbd4000 100644
+--- a/tools/perf/util/thread.c
++++ b/tools/perf/util/thread.c
+@@ -410,7 +410,7 @@ int thread__fork(struct thread *thread, struct thread *parent, u64 timestamp, bo
+ }
+ 
+ void thread__find_cpumode_addr_location(struct thread *thread, u64 addr,
+-					struct addr_location *al)
++					bool symbols, struct addr_location *al)
+ {
+ 	size_t i;
+ 	const u8 cpumodes[] = {
+@@ -421,7 +421,11 @@ void thread__find_cpumode_addr_location(struct thread *thread, u64 addr,
+ 	};
+ 
+ 	for (i = 0; i < ARRAY_SIZE(cpumodes); i++) {
+-		thread__find_symbol(thread, cpumodes[i], addr, al);
++		if (symbols)
++			thread__find_symbol(thread, cpumodes[i], addr, al);
++		else
++			thread__find_map(thread, cpumodes[i], addr, al);
++
+ 		if (al->map)
+ 			break;
+ 	}
+diff --git a/tools/perf/util/thread.h b/tools/perf/util/thread.h
+index cd574a896418ac..2b90bbed7a6121 100644
+--- a/tools/perf/util/thread.h
++++ b/tools/perf/util/thread.h
+@@ -126,7 +126,7 @@ struct symbol *thread__find_symbol_fb(struct thread *thread, u8 cpumode,
+ 				      u64 addr, struct addr_location *al);
+ 
+ void thread__find_cpumode_addr_location(struct thread *thread, u64 addr,
+-					struct addr_location *al);
++					bool symbols, struct addr_location *al);
+ 
+ int thread__memcpy(struct thread *thread, struct machine *machine,
+ 		   void *buf, u64 ip, int len, bool *is64bit);
+diff --git a/tools/perf/util/tool_pmu.c b/tools/perf/util/tool_pmu.c
+index 97b327d1ce4a01..727a10e3f99001 100644
+--- a/tools/perf/util/tool_pmu.c
++++ b/tools/perf/util/tool_pmu.c
+@@ -486,8 +486,14 @@ int evsel__tool_pmu_read(struct evsel *evsel, int cpu_map_idx, int thread)
+ 		delta_start *= 1000000000 / ticks_per_sec;
+ 	}
+ 	count->val    = delta_start;
+-	count->ena    = count->run = delta_start;
+ 	count->lost   = 0;
++	/*
++	 * The enabled and running times must form a 100% ratio. The exact
++	 * values don't matter as long as they are non-zero, to avoid
++	 * issues with evsel__count_has_error().
++	 */
++	count->ena++;
++	count->run++;
+ 	return 0;
+ }
+ 
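Context for the ena/run comment above: perf extrapolates multiplexed counters
by the ratio of enabled to running time, and a zero in either field either
zeroes the scaled value or trips the error check. Roughly, in the shape of
libperf's scaling (simplified sketch, not the exact implementation):

    #include <stdint.h>

    /* Sketch: scale a raw count by enabled/running time.  With
     * ena == run (a 100% ratio) the value passes through unchanged;
     * a counter that never ran has nothing meaningful to report. */
    static uint64_t scale_count(uint64_t val, uint64_t ena, uint64_t run)
    {
            if (run == 0)
                    return 0;
            return (uint64_t)((double)val * ena / run);
    }
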
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 0170d3cc68194c..ab79854cb296e4 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4766,6 +4766,38 @@ unsigned long pmt_read_counter(struct pmt_counter *ppmt, unsigned int domain_id)
+ 	return (value & value_mask) >> value_shift;
+ }
+ 
++
++/* RAPL domain enumeration helpers */
++static inline int get_rapl_num_domains(void)
++{
++	int num_packages = topo.max_package_id + 1;
++	int num_cores_per_package;
++	int num_cores;
++
++	if (!platform->has_per_core_rapl)
++		return num_packages;
++
++	num_cores_per_package = topo.max_core_id + 1;
++	num_cores = num_cores_per_package * num_packages;
++
++	return num_cores;
++}
++
++static inline int get_rapl_domain_id(int cpu)
++{
++	int nr_cores_per_package = topo.max_core_id + 1;
++	int rapl_core_id;
++
++	if (!platform->has_per_core_rapl)
++		return cpus[cpu].physical_package_id;
++
++	/* Compute the system-wide unique core-id for @cpu */
++	rapl_core_id = cpus[cpu].physical_core_id;
++	rapl_core_id += cpus[cpu].physical_package_id * nr_cores_per_package;
++
++	return rapl_core_id;
++}
++
+ /*
+  * get_counters(...)
+  * migrate to cpu
+@@ -4821,7 +4853,7 @@ int get_counters(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+ 		goto done;
+ 
+ 	if (platform->has_per_core_rapl) {
+-		status = get_rapl_counters(cpu, c->core_id, c, p);
++		status = get_rapl_counters(cpu, get_rapl_domain_id(cpu), c, p);
+ 		if (status != 0)
+ 			return status;
+ 	}
+@@ -4887,7 +4919,7 @@ int get_counters(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+ 		p->sys_lpi = cpuidle_cur_sys_lpi_us;
+ 
+ 	if (!platform->has_per_core_rapl) {
+-		status = get_rapl_counters(cpu, p->package_id, c, p);
++		status = get_rapl_counters(cpu, get_rapl_domain_id(cpu), c, p);
+ 		if (status != 0)
+ 			return status;
+ 	}
+@@ -7863,7 +7895,7 @@ void linux_perf_init(void)
+ 
+ void rapl_perf_init(void)
+ {
+-	const unsigned int num_domains = (platform->has_per_core_rapl ? topo.max_core_id : topo.max_package_id) + 1;
++	const unsigned int num_domains = get_rapl_num_domains();
+ 	bool *domain_visited = calloc(num_domains, sizeof(bool));
+ 
+ 	rapl_counter_info_perdomain = calloc(num_domains, sizeof(*rapl_counter_info_perdomain));
+@@ -7904,8 +7936,7 @@ void rapl_perf_init(void)
+ 				continue;
+ 
+ 			/* Skip already seen and handled RAPL domains */
+-			next_domain =
+-			    platform->has_per_core_rapl ? cpus[cpu].physical_core_id : cpus[cpu].physical_package_id;
++			next_domain = get_rapl_domain_id(cpu);
+ 
+ 			assert(next_domain < num_domains);
+ 
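The per-core RAPL domain id above is plain row-major flattening: core id plus
package id times cores-per-package. With topo.max_core_id = 7 (eight cores
per package), a CPU on package 1, core 3 lands in domain 1 * 8 + 3 = 11. The
same arithmetic in isolation (hypothetical helper, for illustration only):

    /* Sketch: system-wide unique id for a per-core RAPL domain.
     * E.g. 8 cores per package, package 1, core 3 -> 1 * 8 + 3 = 11. */
    static int flat_core_id(int package_id, int core_id, int cores_per_package)
    {
            return package_id * cores_per_package + core_id;
    }
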
+diff --git a/tools/testing/kunit/qemu_configs/sparc.py b/tools/testing/kunit/qemu_configs/sparc.py
+index 256d9573b44646..2019550a1b692e 100644
+--- a/tools/testing/kunit/qemu_configs/sparc.py
++++ b/tools/testing/kunit/qemu_configs/sparc.py
+@@ -2,6 +2,8 @@ from ..qemu_config import QemuArchParams
+ 
+ QEMU_ARCH = QemuArchParams(linux_arch='sparc',
+ 			   kconfig='''
++CONFIG_KUNIT_FAULT_TEST=n
++CONFIG_SPARC32=y
+ CONFIG_SERIAL_SUNZILOG=y
+ CONFIG_SERIAL_SUNZILOG_CONSOLE=y
+ ''',
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 80fb84fa3cfcbd..9c477321a5b474 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -202,7 +202,7 @@ export KHDR_INCLUDES
+ 
+ all:
+ 	@ret=1;							\
+-	for TARGET in $(TARGETS); do				\
++	for TARGET in $(TARGETS) $(INSTALL_DEP_TARGETS); do	\
+ 		BUILD_TARGET=$$BUILD/$$TARGET;			\
+ 		mkdir $$BUILD_TARGET  -p;			\
+ 		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET	\
+diff --git a/tools/testing/selftests/arm64/fp/fp-ptrace.c b/tools/testing/selftests/arm64/fp/fp-ptrace.c
+index 4930e03a7b9903..762048eb354ffe 100644
+--- a/tools/testing/selftests/arm64/fp/fp-ptrace.c
++++ b/tools/testing/selftests/arm64/fp/fp-ptrace.c
+@@ -891,18 +891,11 @@ static void set_initial_values(struct test_config *config)
+ {
+ 	int vq = __sve_vq_from_vl(vl_in(config));
+ 	int sme_vq = __sve_vq_from_vl(config->sme_vl_in);
+-	bool sm_change;
+ 
+ 	svcr_in = config->svcr_in;
+ 	svcr_expected = config->svcr_expected;
+ 	svcr_out = 0;
+ 
+-	if (sme_supported() &&
+-	    (svcr_in & SVCR_SM) != (svcr_expected & SVCR_SM))
+-		sm_change = true;
+-	else
+-		sm_change = false;
+-
+ 	fill_random(&v_in, sizeof(v_in));
+ 	memcpy(v_expected, v_in, sizeof(v_in));
+ 	memset(v_out, 0, sizeof(v_out));
+@@ -953,12 +946,7 @@ static void set_initial_values(struct test_config *config)
+ 	if (fpmr_supported()) {
+ 		fill_random(&fpmr_in, sizeof(fpmr_in));
+ 		fpmr_in &= FPMR_SAFE_BITS;
+-
+-		/* Entering or exiting streaming mode clears FPMR */
+-		if (sm_change)
+-			fpmr_expected = 0;
+-		else
+-			fpmr_expected = fpmr_in;
++		fpmr_expected = fpmr_in;
+ 	} else {
+ 		fpmr_in = 0;
+ 		fpmr_expected = 0;
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+index dbd13f8e42a7aa..dd6512fa652be0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+@@ -63,6 +63,12 @@ static void test_bpf_nf_ct(int mode)
+ 		.repeat = 1,
+ 	);
+ 
++	if (SYS_NOFAIL("iptables-legacy --version")) {
++		fprintf(stdout, "Missing required iptables-legacy tool\n");
++		test__skip();
++		return;
++	}
++
+ 	skel = test_bpf_nf__open_and_load();
+ 	if (!ASSERT_OK_PTR(skel, "test_bpf_nf__open_and_load"))
+ 		return;
+diff --git a/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c b/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
+index 8e13a3416a21d2..1de14b111931aa 100644
+--- a/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c
+@@ -104,7 +104,7 @@ void test_kmem_cache_iter(void)
+ 		goto destroy;
+ 
+ 	memset(buf, 0, sizeof(buf));
+-	while (read(iter_fd, buf, sizeof(buf) > 0)) {
++	while (read(iter_fd, buf, sizeof(buf)) > 0) {
+ 		/* Read out all contents */
+ 		printf("%s", buf);
+ 	}
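
The kmem_cache_iter change is a one-parenthesis fix: the broken form passed
`sizeof(buf) > 0` (which evaluates to 1) as the read length, so the loop
crawled one byte at a time, and a -1 error return would have kept the loop
spinning since it is truthy. The intended idiom, standalone:

    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: drain a file descriptor, printing each chunk.  The
     * comparison belongs outside the call: read(...) > 0 stops on
     * both end-of-file (0) and error (-1). */
    static void drain_fd(int fd)
    {
            char buf[4096];
            ssize_t n;

            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    printf("%.*s", (int)n, buf);
    }
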
+diff --git a/tools/testing/selftests/bpf/progs/verifier_load_acquire.c b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c
+index 77698d5a19e446..a696ab84bfd662 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_load_acquire.c
++++ b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c
+@@ -10,65 +10,81 @@
+ 
+ SEC("socket")
+ __description("load-acquire, 8-bit")
+-__success __success_unpriv __retval(0x12)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_8(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12;"
+ 	"*(u8 *)(r10 - 1) = w1;"
+-	".8byte %[load_acquire_insn];" // w0 = load_acquire((u8 *)(r10 - 1));
++	".8byte %[load_acquire_insn];" // w2 = load_acquire((u8 *)(r10 - 1));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -1))
++		     BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -1))
+ 	: __clobber_all);
+ }
+ 
+ SEC("socket")
+ __description("load-acquire, 16-bit")
+-__success __success_unpriv __retval(0x1234)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_16(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x1234;"
+ 	"*(u16 *)(r10 - 2) = w1;"
+-	".8byte %[load_acquire_insn];" // w0 = load_acquire((u16 *)(r10 - 2));
++	".8byte %[load_acquire_insn];" // w2 = load_acquire((u16 *)(r10 - 2));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -2))
++		     BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -2))
+ 	: __clobber_all);
+ }
+ 
+ SEC("socket")
+ __description("load-acquire, 32-bit")
+-__success __success_unpriv __retval(0x12345678)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_32(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12345678;"
+ 	"*(u32 *)(r10 - 4) = w1;"
+-	".8byte %[load_acquire_insn];" // w0 = load_acquire((u32 *)(r10 - 4));
++	".8byte %[load_acquire_insn];" // w2 = load_acquire((u32 *)(r10 - 4));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -4))
++		     BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -4))
+ 	: __clobber_all);
+ }
+ 
+ SEC("socket")
+ __description("load-acquire, 64-bit")
+-__success __success_unpriv __retval(0x1234567890abcdef)
++__success __success_unpriv __retval(0)
+ __naked void load_acquire_64(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"r1 = 0x1234567890abcdef ll;"
+ 	"*(u64 *)(r10 - 8) = r1;"
+-	".8byte %[load_acquire_insn];" // r0 = load_acquire((u64 *)(r10 - 8));
++	".8byte %[load_acquire_insn];" // r2 = load_acquire((u64 *)(r10 - 8));
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(load_acquire_insn,
+-		     BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -8))
++		     BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -8))
+ 	: __clobber_all);
+ }
+ 
+diff --git a/tools/testing/selftests/bpf/progs/verifier_store_release.c b/tools/testing/selftests/bpf/progs/verifier_store_release.c
+index c0442d5bb049d8..022d03d9835957 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_store_release.c
++++ b/tools/testing/selftests/bpf/progs/verifier_store_release.c
+@@ -11,13 +11,17 @@
+ 
+ SEC("socket")
+ __description("store-release, 8-bit")
+-__success __success_unpriv __retval(0x12)
++__success __success_unpriv __retval(0)
+ __naked void store_release_8(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12;"
+ 	".8byte %[store_release_insn];" // store_release((u8 *)(r10 - 1), w1);
+-	"w0 = *(u8 *)(r10 - 1);"
++	"w2 = *(u8 *)(r10 - 1);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+@@ -27,13 +31,17 @@ __naked void store_release_8(void)
+ 
+ SEC("socket")
+ __description("store-release, 16-bit")
+-__success __success_unpriv __retval(0x1234)
++__success __success_unpriv __retval(0)
+ __naked void store_release_16(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x1234;"
+ 	".8byte %[store_release_insn];" // store_release((u16 *)(r10 - 2), w1);
+-	"w0 = *(u16 *)(r10 - 2);"
++	"w2 = *(u16 *)(r10 - 2);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+@@ -43,13 +51,17 @@ __naked void store_release_16(void)
+ 
+ SEC("socket")
+ __description("store-release, 32-bit")
+-__success __success_unpriv __retval(0x12345678)
++__success __success_unpriv __retval(0)
+ __naked void store_release_32(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"w1 = 0x12345678;"
+ 	".8byte %[store_release_insn];" // store_release((u32 *)(r10 - 4), w1);
+-	"w0 = *(u32 *)(r10 - 4);"
++	"w2 = *(u32 *)(r10 - 4);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
+@@ -59,13 +71,17 @@ __naked void store_release_32(void)
+ 
+ SEC("socket")
+ __description("store-release, 64-bit")
+-__success __success_unpriv __retval(0x1234567890abcdef)
++__success __success_unpriv __retval(0)
+ __naked void store_release_64(void)
+ {
+ 	asm volatile (
++	"r0 = 0;"
+ 	"r1 = 0x1234567890abcdef ll;"
+ 	".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r1);
+-	"r0 = *(u64 *)(r10 - 8);"
++	"r2 = *(u64 *)(r10 - 8);"
++	"if r2 == r1 goto 1f;"
++	"r0 = 1;"
++"1:"
+ 	"exit;"
+ 	:
+ 	: __imm_insn(store_release_insn,
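
Both verifier selftest files above switch from returning the stored constant
to comparing in-program and returning 0 or 1, so the expected __retval no
longer depends on exact register widths or sign extension of the test value.
In C terms the new pattern is (equivalent sketch only; the real tests use raw
BPF load-acquire/store-release instructions):

    #include <stdint.h>

    /* Sketch: store a value, read it back, and report 0 on a match
     * instead of returning the value itself. */
    static int check_roundtrip(volatile uint64_t *slot, uint64_t v)
    {
            *slot = v;
            return *slot == v ? 0 : 1;
    }
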
+diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
+index 49f2fc61061f5d..9551d8d5f8f9f8 100644
+--- a/tools/testing/selftests/bpf/test_loader.c
++++ b/tools/testing/selftests/bpf/test_loader.c
+@@ -1042,6 +1042,14 @@ void run_subtest(struct test_loader *tester,
+ 	emit_verifier_log(tester->log_buf, false /*force*/);
+ 	validate_msgs(tester->log_buf, &subspec->expect_msgs, emit_verifier_log);
+ 
++	/* Restore capabilities because the kernel will silently ignore requests
++	 * for program info (such as xlated program text) if we are not
++	 * bpf-capable. Also, for some reason test_verifier executes programs
++	 * with all capabilities restored. Do the same here.
++	 */
++	if (restore_capabilities(&caps))
++		goto tobj_cleanup;
++
+ 	if (subspec->expect_xlated.cnt) {
+ 		err = get_xlated_program_text(bpf_program__fd(tprog),
+ 					      tester->log_buf, tester->log_buf_sz);
+@@ -1067,12 +1075,6 @@ void run_subtest(struct test_loader *tester,
+ 	}
+ 
+ 	if (should_do_test_run(spec, subspec)) {
+-		/* For some reason test_verifier executes programs
+-		 * with all capabilities restored. Do the same here.
+-		 */
+-		if (restore_capabilities(&caps))
+-			goto tobj_cleanup;
+-
+ 		/* Do bpf_map__attach_struct_ops() for each struct_ops map.
+ 		 * This should trigger bpf_struct_ops->reg callback on kernel side.
+ 		 */
+diff --git a/tools/testing/selftests/coredump/stackdump_test.c b/tools/testing/selftests/coredump/stackdump_test.c
+index 137b2364a08207..fe3c728cd6be5a 100644
+--- a/tools/testing/selftests/coredump/stackdump_test.c
++++ b/tools/testing/selftests/coredump/stackdump_test.c
+@@ -89,14 +89,14 @@ FIXTURE_TEARDOWN(coredump)
+ 	fprintf(stderr, "Failed to cleanup stackdump test: %s\n", reason);
+ }
+ 
+-TEST_F(coredump, stackdump)
++TEST_F_TIMEOUT(coredump, stackdump, 120)
+ {
+ 	struct sigaction action = {};
+ 	unsigned long long stack;
+ 	char *test_dir, *line;
+ 	size_t line_length;
+ 	char buf[PATH_MAX];
+-	int ret, i;
++	int ret, i, status;
+ 	FILE *file;
+ 	pid_t pid;
+ 
+@@ -129,6 +129,10 @@ TEST_F(coredump, stackdump)
+ 	/*
+ 	 * Step 3: Wait for the stackdump script to write the stack pointers to the stackdump file
+ 	 */
++	waitpid(pid, &status, 0);
++	ASSERT_TRUE(WIFSIGNALED(status));
++	ASSERT_TRUE(WCOREDUMP(status));
++
+ 	for (i = 0; i < 10; ++i) {
+ 		file = fopen(STACKDUMP_FILE, "r");
+ 		if (file)
+@@ -138,10 +142,12 @@ TEST_F(coredump, stackdump)
+ 	ASSERT_NE(file, NULL);
+ 
+ 	/* Step 4: Make sure all stack pointer values are non-zero */
++	line = NULL;
+ 	for (i = 0; -1 != getline(&line, &line_length, file); ++i) {
+ 		stack = strtoull(line, NULL, 10);
+ 		ASSERT_NE(stack, 0);
+ 	}
++	free(line);
+ 
+ 	ASSERT_EQ(i, 1 + NUM_THREAD_SPAWN);
+ 
+diff --git a/tools/testing/selftests/cpufreq/cpufreq.sh b/tools/testing/selftests/cpufreq/cpufreq.sh
+index e350c521b46750..3aad9db921b533 100755
+--- a/tools/testing/selftests/cpufreq/cpufreq.sh
++++ b/tools/testing/selftests/cpufreq/cpufreq.sh
+@@ -244,9 +244,10 @@ do_suspend()
+ 					printf "Failed to suspend using RTC wake alarm\n"
+ 					return 1
+ 				fi
++			else
++				echo $filename > $SYSFS/power/state
+ 			fi
+ 
+-			echo $filename > $SYSFS/power/state
+ 			printf "Came out of $1\n"
+ 
+ 			printf "Do basic tests after finishing $1 to verify cpufreq state\n\n"
+diff --git a/tools/testing/selftests/drivers/net/hw/tso.py b/tools/testing/selftests/drivers/net/hw/tso.py
+index e1ecb92f79d9b4..3370827409aa02 100755
+--- a/tools/testing/selftests/drivers/net/hw/tso.py
++++ b/tools/testing/selftests/drivers/net/hw/tso.py
+@@ -39,7 +39,7 @@ def run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso):
+     port = rand_port()
+     listen_cmd = f"socat -{ipver} -t 2 -u TCP-LISTEN:{port},reuseport /dev/null,ignoreeof"
+ 
+-    with bkg(listen_cmd, host=cfg.remote) as nc:
++    with bkg(listen_cmd, host=cfg.remote, exit_wait=True) as nc:
+         wait_port_listen(port, host=cfg.remote)
+ 
+         if ipver == "4":
+@@ -216,7 +216,7 @@ def main() -> None:
+             ("",            "6", "tx-tcp6-segmentation",          None),
+             ("vxlan",        "", "tx-udp_tnl-segmentation",       ("vxlan",  True,  "id 100 dstport 4789 noudpcsum")),
+             ("vxlan_csum",   "", "tx-udp_tnl-csum-segmentation",  ("vxlan",  False, "id 100 dstport 4789 udpcsum")),
+-            ("gre",         "4", "tx-gre-segmentation",           ("ipgre",  False,  "")),
++            ("gre",         "4", "tx-gre-segmentation",           ("gre",    False,  "")),
+             ("gre",         "6", "tx-gre-segmentation",           ("ip6gre", False,  "")),
+         )
+ 
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index b2f76a52215ad2..61acbd45ffaaf8 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -1629,14 +1629,8 @@ void teardown_trace_fixture(struct __test_metadata *_metadata,
+ {
+ 	if (tracer) {
+ 		int status;
+-		/*
+-		 * Extract the exit code from the other process and
+-		 * adopt it for ourselves in case its asserts failed.
+-		 */
+ 		ASSERT_EQ(0, kill(tracer, SIGUSR1));
+ 		ASSERT_EQ(tracer, waitpid(tracer, &status, 0));
+-		if (WEXITSTATUS(status))
+-			_metadata->exit_code = KSFT_FAIL;
+ 	}
+ }
+ 
+@@ -3166,12 +3160,15 @@ TEST(syscall_restart)
+ 	ret = get_syscall(_metadata, child_pid);
+ #if defined(__arm__)
+ 	/*
+-	 * FIXME:
+ 	 * - native ARM registers do NOT expose true syscall.
+ 	 * - compat ARM registers on ARM64 DO expose true syscall.
++	 * - values of utsbuf.machine include 'armv8l' or 'armv8b'
++	 *   for ARM64 running in compat mode.
+ 	 */
+ 	ASSERT_EQ(0, uname(&utsbuf));
+-	if (strncmp(utsbuf.machine, "arm", 3) == 0) {
++	if ((strncmp(utsbuf.machine, "arm", 3) == 0) &&
++	    (strncmp(utsbuf.machine, "armv8l", 6) != 0) &&
++	    (strncmp(utsbuf.machine, "armv8b", 6) != 0)) {
+ 		EXPECT_EQ(__NR_nanosleep, ret);
+ 	} else
+ #endif
+diff --git a/tools/tracing/rtla/src/timerlat_bpf.c b/tools/tracing/rtla/src/timerlat_bpf.c
+index 5abee884037aeb..0bc44ce5d69bd9 100644
+--- a/tools/tracing/rtla/src/timerlat_bpf.c
++++ b/tools/tracing/rtla/src/timerlat_bpf.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #ifdef HAVE_BPF_SKEL
++#define _GNU_SOURCE
+ #include "timerlat.h"
+ #include "timerlat_bpf.h"
+ #include "timerlat.skel.h"


Thread overview: 19+ messages
2025-06-19 14:21 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2025-08-21  4:00 [gentoo-commits] proj/linux-patches:6.15 commit in: / Arisu Tachibana
2025-08-21  1:10 Arisu Tachibana
2025-08-16  3:10 Arisu Tachibana
2025-08-04  5:58 Arisu Tachibana
2025-08-01 10:30 Arisu Tachibana
2025-07-24  9:17 Arisu Tachibana
2025-07-18 12:05 Arisu Tachibana
2025-07-11  2:26 Arisu Tachibana
2025-07-06 13:42 Arisu Tachibana
2025-06-27 11:17 Mike Pagano
2025-06-24 17:42 Mike Pagano
2025-06-10 12:24 Mike Pagano
2025-06-10 12:14 Mike Pagano
2025-06-05 19:13 Mike Pagano
2025-06-04 18:07 Mike Pagano
2025-06-01 21:41 Mike Pagano
2025-05-27 19:29 Mike Pagano
2025-05-26 10:21 Mike Pagano
