public inbox for gentoo-commits@lists.gentoo.org
From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
Date: Tue, 24 Jan 2023 07:19:46 +0000 (UTC)
Message-ID: <1674544753.a7e1722d1d75744592599e63e49a84ad72e66cfc.alicef@gentoo>

commit:     a7e1722d1d75744592599e63e49a84ad72e66cfc
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 24 07:18:40 2023 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Jan 24 07:19:13 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a7e1722d

Linux patch 6.1.8

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README            |    4 +
 1007_linux-6.1.8.patch | 6638 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6642 insertions(+)

diff --git a/0000_README b/0000_README
index b3e83a42..396dd2ee 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-6.1.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 6.1.7
 
+Patch:  1007_linux-6.1.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 6.1.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-6.1.8.patch b/1007_linux-6.1.8.patch
new file mode 100644
index 00000000..bdd0484e
--- /dev/null
+++ b/1007_linux-6.1.8.patch
@@ -0,0 +1,6638 @@
+diff --git a/Documentation/ABI/testing/sysfs-kernel-oops_count b/Documentation/ABI/testing/sysfs-kernel-oops_count
+new file mode 100644
+index 0000000000000..156cca9dbc960
+--- /dev/null
++++ b/Documentation/ABI/testing/sysfs-kernel-oops_count
+@@ -0,0 +1,6 @@
++What:		/sys/kernel/oops_count
++Date:		November 2022
++KernelVersion:	6.2.0
++Contact:	Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
++Description:
++		Shows how many times the system has Oopsed since last boot.
+diff --git a/Documentation/ABI/testing/sysfs-kernel-warn_count b/Documentation/ABI/testing/sysfs-kernel-warn_count
+new file mode 100644
+index 0000000000000..90a029813717d
+--- /dev/null
++++ b/Documentation/ABI/testing/sysfs-kernel-warn_count
+@@ -0,0 +1,6 @@
++What:		/sys/kernel/warn_count
++Date:		November 2022
++KernelVersion:	6.2.0
++Contact:	Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
++Description:
++		Shows how many times the system has Warned since last boot.
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index c2c64c1b706ff..b3588fff1ec0a 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -667,6 +667,15 @@ This is the default behavior.
+ an oops event is detected.
+ 
+ 
++oops_limit
++==========
++
++Number of kernel oopses after which the kernel should panic when
++``panic_on_oops`` is not set. Setting this to 0 disables checking
++the count. Setting this to  1 has the same effect as setting
++``panic_on_oops=1``. The default value is 10000.
++
++
+ osrelease, ostype & version
+ ===========================
+ 
+@@ -1523,6 +1532,16 @@ entry will default to 2 instead of 0.
+ 2 Unprivileged calls to ``bpf()`` are disabled
+ = =============================================================
+ 
++
++warn_limit
++==========
++
++Number of kernel warnings after which the kernel should panic when
++``panic_on_warn`` is not set. Setting this to 0 disables checking
++the warning count. Setting this to 1 has the same effect as setting
++``panic_on_warn=1``. The default value is 0.
++
++
+ watchdog
+ ========
+ 
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
+new file mode 100644
+index 0000000000000..bb01c6b34dabc
+--- /dev/null
++++ b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
+@@ -0,0 +1,78 @@
++# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
++# Copyright 2019 BayLibre, SAS
++%YAML 1.2
++---
++$id: "http://devicetree.org/schemas/phy/amlogic,g12a-usb2-phy.yaml#"
++$schema: "http://devicetree.org/meta-schemas/core.yaml#"
++
++title: Amlogic G12A USB2 PHY
++
++maintainers:
++  - Neil Armstrong <neil.armstrong@linaro.org>
++
++properties:
++  compatible:
++    enum:
++      - amlogic,g12a-usb2-phy
++      - amlogic,a1-usb2-phy
++
++  reg:
++    maxItems: 1
++
++  clocks:
++    maxItems: 1
++
++  clock-names:
++    items:
++      - const: xtal
++
++  resets:
++    maxItems: 1
++
++  reset-names:
++    items:
++      - const: phy
++
++  "#phy-cells":
++    const: 0
++
++  phy-supply:
++    description:
++      Phandle to a regulator that provides power to the PHY. This
++      regulator will be managed during the PHY power on/off sequence.
++
++required:
++  - compatible
++  - reg
++  - clocks
++  - clock-names
++  - resets
++  - reset-names
++  - "#phy-cells"
++
++if:
++  properties:
++    compatible:
++      enum:
++        - amlogic,meson-a1-usb-ctrl
++
++then:
++  properties:
++    power-domains:
++      maxItems: 1
++  required:
++    - power-domains
++
++additionalProperties: false
++
++examples:
++  - |
++    phy@36000 {
++          compatible = "amlogic,g12a-usb2-phy";
++          reg = <0x36000 0x2000>;
++          clocks = <&xtal>;
++          clock-names = "xtal";
++          resets = <&phy_reset>;
++          reset-names = "phy";
++          #phy-cells = <0>;
++    };
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
+new file mode 100644
+index 0000000000000..129d26e99776b
+--- /dev/null
++++ b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
+@@ -0,0 +1,59 @@
++# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
++# Copyright 2019 BayLibre, SAS
++%YAML 1.2
++---
++$id: "http://devicetree.org/schemas/phy/amlogic,g12a-usb3-pcie-phy.yaml#"
++$schema: "http://devicetree.org/meta-schemas/core.yaml#"
++
++title: Amlogic G12A USB3 + PCIE Combo PHY
++
++maintainers:
++  - Neil Armstrong <neil.armstrong@linaro.org>
++
++properties:
++  compatible:
++    enum:
++      - amlogic,g12a-usb3-pcie-phy
++
++  reg:
++    maxItems: 1
++
++  clocks:
++    maxItems: 1
++
++  clock-names:
++    items:
++      - const: ref_clk
++
++  resets:
++    maxItems: 1
++
++  reset-names:
++    items:
++      - const: phy
++
++  "#phy-cells":
++    const: 1
++
++required:
++  - compatible
++  - reg
++  - clocks
++  - clock-names
++  - resets
++  - reset-names
++  - "#phy-cells"
++
++additionalProperties: false
++
++examples:
++  - |
++    phy@46000 {
++          compatible = "amlogic,g12a-usb3-pcie-phy";
++          reg = <0x46000 0x2000>;
++          clocks = <&ref_clk>;
++          clock-names = "ref_clk";
++          resets = <&phy_reset>;
++          reset-names = "phy";
++          #phy-cells = <1>;
++    };
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml
+deleted file mode 100644
+index f3a5fbabbbb59..0000000000000
+--- a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml
++++ /dev/null
+@@ -1,78 +0,0 @@
+-# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+-# Copyright 2019 BayLibre, SAS
+-%YAML 1.2
+----
+-$id: "http://devicetree.org/schemas/phy/amlogic,meson-g12a-usb2-phy.yaml#"
+-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+-
+-title: Amlogic G12A USB2 PHY
+-
+-maintainers:
+-  - Neil Armstrong <neil.armstrong@linaro.org>
+-
+-properties:
+-  compatible:
+-    enum:
+-      - amlogic,meson-g12a-usb2-phy
+-      - amlogic,meson-a1-usb2-phy
+-
+-  reg:
+-    maxItems: 1
+-
+-  clocks:
+-    maxItems: 1
+-
+-  clock-names:
+-    items:
+-      - const: xtal
+-
+-  resets:
+-    maxItems: 1
+-
+-  reset-names:
+-    items:
+-      - const: phy
+-
+-  "#phy-cells":
+-    const: 0
+-
+-  phy-supply:
+-    description:
+-      Phandle to a regulator that provides power to the PHY. This
+-      regulator will be managed during the PHY power on/off sequence.
+-
+-required:
+-  - compatible
+-  - reg
+-  - clocks
+-  - clock-names
+-  - resets
+-  - reset-names
+-  - "#phy-cells"
+-
+-if:
+-  properties:
+-    compatible:
+-      enum:
+-        - amlogic,meson-a1-usb-ctrl
+-
+-then:
+-  properties:
+-    power-domains:
+-      maxItems: 1
+-  required:
+-    - power-domains
+-
+-additionalProperties: false
+-
+-examples:
+-  - |
+-    phy@36000 {
+-          compatible = "amlogic,meson-g12a-usb2-phy";
+-          reg = <0x36000 0x2000>;
+-          clocks = <&xtal>;
+-          clock-names = "xtal";
+-          resets = <&phy_reset>;
+-          reset-names = "phy";
+-          #phy-cells = <0>;
+-    };
+diff --git a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml
+deleted file mode 100644
+index 868b4e6fde71f..0000000000000
+--- a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml
++++ /dev/null
+@@ -1,59 +0,0 @@
+-# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+-# Copyright 2019 BayLibre, SAS
+-%YAML 1.2
+----
+-$id: "http://devicetree.org/schemas/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml#"
+-$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+-
+-title: Amlogic G12A USB3 + PCIE Combo PHY
+-
+-maintainers:
+-  - Neil Armstrong <neil.armstrong@linaro.org>
+-
+-properties:
+-  compatible:
+-    enum:
+-      - amlogic,meson-g12a-usb3-pcie-phy
+-
+-  reg:
+-    maxItems: 1
+-
+-  clocks:
+-    maxItems: 1
+-
+-  clock-names:
+-    items:
+-      - const: ref_clk
+-
+-  resets:
+-    maxItems: 1
+-
+-  reset-names:
+-    items:
+-      - const: phy
+-
+-  "#phy-cells":
+-    const: 1
+-
+-required:
+-  - compatible
+-  - reg
+-  - clocks
+-  - clock-names
+-  - resets
+-  - reset-names
+-  - "#phy-cells"
+-
+-additionalProperties: false
+-
+-examples:
+-  - |
+-    phy@46000 {
+-          compatible = "amlogic,meson-g12a-usb3-pcie-phy";
+-          reg = <0x46000 0x2000>;
+-          clocks = <&ref_clk>;
+-          clock-names = "ref_clk";
+-          resets = <&phy_reset>;
+-          reset-names = "phy";
+-          #phy-cells = <1>;
+-    };
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 886d3f69ee644..d4822ae39e396 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -11112,6 +11112,8 @@ M:	Kees Cook <keescook@chromium.org>
+ L:	linux-hardening@vger.kernel.org
+ S:	Supported
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening
++F:	Documentation/ABI/testing/sysfs-kernel-oops_count
++F:	Documentation/ABI/testing/sysfs-kernel-warn_count
+ F:	include/linux/overflow.h
+ F:	include/linux/randomize_kstack.h
+ F:	mm/usercopy.c
+diff --git a/Makefile b/Makefile
+index 7eb6793ecfbfd..49261450039a1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 1
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/qcom-apq8084-ifc6540.dts b/arch/arm/boot/dts/qcom-apq8084-ifc6540.dts
+index 44cd72f1b1be4..116e59a3b76d0 100644
+--- a/arch/arm/boot/dts/qcom-apq8084-ifc6540.dts
++++ b/arch/arm/boot/dts/qcom-apq8084-ifc6540.dts
+@@ -19,16 +19,16 @@
+ 		serial@f995e000 {
+ 			status = "okay";
+ 		};
++	};
++};
+ 
+-		sdhci@f9824900 {
+-			bus-width = <8>;
+-			non-removable;
+-			status = "okay";
+-		};
++&sdhc_1 {
++	bus-width = <8>;
++	non-removable;
++	status = "okay";
++};
+ 
+-		sdhci@f98a4900 {
+-			cd-gpios = <&tlmm 122 GPIO_ACTIVE_LOW>;
+-			bus-width = <4>;
+-		};
+-	};
++&sdhc_2 {
++	cd-gpios = <&tlmm 122 GPIO_ACTIVE_LOW>;
++	bus-width = <4>;
+ };
+diff --git a/arch/arm/boot/dts/qcom-apq8084.dtsi b/arch/arm/boot/dts/qcom-apq8084.dtsi
+index f2fb7c975af84..69da87577ad0d 100644
+--- a/arch/arm/boot/dts/qcom-apq8084.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8084.dtsi
+@@ -419,7 +419,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		mmc@f9824900 {
++		sdhc_1: mmc@f9824900 {
+ 			compatible = "qcom,apq8084-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0xf9824900 0x11c>, <0xf9824000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -432,7 +432,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		mmc@f98a4900 {
++		sdhc_2: mmc@f98a4900 {
+ 			compatible = "qcom,apq8084-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0xf98a4900 0x11c>, <0xf98a4000 0x800>;
+ 			reg-names = "hc", "core";
+diff --git a/arch/arm/mach-omap1/Kconfig b/arch/arm/mach-omap1/Kconfig
+index 538a960257cc7..7ec7ada287e05 100644
+--- a/arch/arm/mach-omap1/Kconfig
++++ b/arch/arm/mach-omap1/Kconfig
+@@ -4,6 +4,7 @@ menuconfig ARCH_OMAP1
+ 	depends on ARCH_MULTI_V4T || ARCH_MULTI_V5
+ 	depends on CPU_LITTLE_ENDIAN
+ 	depends on ATAGS
++	select ARCH_OMAP
+ 	select ARCH_HAS_HOLES_MEMORYMODEL
+ 	select ARCH_OMAP
+ 	select CLKSRC_MMIO
+@@ -45,10 +46,6 @@ config ARCH_OMAP16XX
+ 	select CPU_ARM926T
+ 	select OMAP_DM_TIMER
+ 
+-config ARCH_OMAP1_ANY
+-	select ARCH_OMAP
+-	def_bool ARCH_OMAP730 || ARCH_OMAP850 || ARCH_OMAP15XX || ARCH_OMAP16XX
+-
+ config ARCH_OMAP
+ 	bool
+ 
+diff --git a/arch/arm/mach-omap1/Makefile b/arch/arm/mach-omap1/Makefile
+index 506074b86333f..0615cb0ba580b 100644
+--- a/arch/arm/mach-omap1/Makefile
++++ b/arch/arm/mach-omap1/Makefile
+@@ -3,8 +3,6 @@
+ # Makefile for the linux kernel.
+ #
+ 
+-ifdef CONFIG_ARCH_OMAP1_ANY
+-
+ # Common support
+ obj-y := io.o id.o sram-init.o sram.o time.o irq.o mux.o flash.o \
+ 	 serial.o devices.o dma.o omap-dma.o fb.o
+@@ -59,5 +57,3 @@ obj-$(CONFIG_ARCH_OMAP730)		+= gpio7xx.o
+ obj-$(CONFIG_ARCH_OMAP850)		+= gpio7xx.o
+ obj-$(CONFIG_ARCH_OMAP15XX)		+= gpio15xx.o
+ obj-$(CONFIG_ARCH_OMAP16XX)		+= gpio16xx.o
+-
+-endif
+diff --git a/arch/arm/mach-omap1/io.c b/arch/arm/mach-omap1/io.c
+index d2db9b8aed3fb..0074b011a05a4 100644
+--- a/arch/arm/mach-omap1/io.c
++++ b/arch/arm/mach-omap1/io.c
+@@ -22,17 +22,14 @@
+  * The machine specific code may provide the extra mapping besides the
+  * default mapping provided here.
+  */
+-static struct map_desc omap_io_desc[] __initdata = {
++#if defined (CONFIG_ARCH_OMAP730) || defined (CONFIG_ARCH_OMAP850)
++static struct map_desc omap7xx_io_desc[] __initdata = {
+ 	{
+ 		.virtual	= OMAP1_IO_VIRT,
+ 		.pfn		= __phys_to_pfn(OMAP1_IO_PHYS),
+ 		.length		= OMAP1_IO_SIZE,
+ 		.type		= MT_DEVICE
+-	}
+-};
+-
+-#if defined (CONFIG_ARCH_OMAP730) || defined (CONFIG_ARCH_OMAP850)
+-static struct map_desc omap7xx_io_desc[] __initdata = {
++	},
+ 	{
+ 		.virtual	= OMAP7XX_DSP_BASE,
+ 		.pfn		= __phys_to_pfn(OMAP7XX_DSP_START),
+@@ -49,6 +46,12 @@ static struct map_desc omap7xx_io_desc[] __initdata = {
+ 
+ #ifdef CONFIG_ARCH_OMAP15XX
+ static struct map_desc omap1510_io_desc[] __initdata = {
++	{
++		.virtual	= OMAP1_IO_VIRT,
++		.pfn		= __phys_to_pfn(OMAP1_IO_PHYS),
++		.length		= OMAP1_IO_SIZE,
++		.type		= MT_DEVICE
++	},
+ 	{
+ 		.virtual	= OMAP1510_DSP_BASE,
+ 		.pfn		= __phys_to_pfn(OMAP1510_DSP_START),
+@@ -65,6 +68,12 @@ static struct map_desc omap1510_io_desc[] __initdata = {
+ 
+ #if defined(CONFIG_ARCH_OMAP16XX)
+ static struct map_desc omap16xx_io_desc[] __initdata = {
++	{
++		.virtual	= OMAP1_IO_VIRT,
++		.pfn		= __phys_to_pfn(OMAP1_IO_PHYS),
++		.length		= OMAP1_IO_SIZE,
++		.type		= MT_DEVICE
++	},
+ 	{
+ 		.virtual	= OMAP16XX_DSP_BASE,
+ 		.pfn		= __phys_to_pfn(OMAP16XX_DSP_START),
+@@ -79,18 +88,9 @@ static struct map_desc omap16xx_io_desc[] __initdata = {
+ };
+ #endif
+ 
+-/*
+- * Maps common IO regions for omap1
+- */
+-static void __init omap1_map_common_io(void)
+-{
+-	iotable_init(omap_io_desc, ARRAY_SIZE(omap_io_desc));
+-}
+-
+ #if defined (CONFIG_ARCH_OMAP730) || defined (CONFIG_ARCH_OMAP850)
+ void __init omap7xx_map_io(void)
+ {
+-	omap1_map_common_io();
+ 	iotable_init(omap7xx_io_desc, ARRAY_SIZE(omap7xx_io_desc));
+ }
+ #endif
+@@ -98,7 +98,6 @@ void __init omap7xx_map_io(void)
+ #ifdef CONFIG_ARCH_OMAP15XX
+ void __init omap15xx_map_io(void)
+ {
+-	omap1_map_common_io();
+ 	iotable_init(omap1510_io_desc, ARRAY_SIZE(omap1510_io_desc));
+ }
+ #endif
+@@ -106,7 +105,6 @@ void __init omap15xx_map_io(void)
+ #if defined(CONFIG_ARCH_OMAP16XX)
+ void __init omap16xx_map_io(void)
+ {
+-	omap1_map_common_io();
+ 	iotable_init(omap16xx_io_desc, ARRAY_SIZE(omap16xx_io_desc));
+ }
+ #endif
+diff --git a/arch/arm/mach-omap1/mcbsp.c b/arch/arm/mach-omap1/mcbsp.c
+index 05c25c432449f..b1632cbe37e6f 100644
+--- a/arch/arm/mach-omap1/mcbsp.c
++++ b/arch/arm/mach-omap1/mcbsp.c
+@@ -89,7 +89,6 @@ static struct omap_mcbsp_ops omap1_mcbsp_ops = {
+ #define OMAP1610_MCBSP2_BASE	0xfffb1000
+ #define OMAP1610_MCBSP3_BASE	0xe1017000
+ 
+-#if defined(CONFIG_ARCH_OMAP730) || defined(CONFIG_ARCH_OMAP850)
+ struct resource omap7xx_mcbsp_res[][6] = {
+ 	{
+ 		{
+@@ -159,14 +158,7 @@ static struct omap_mcbsp_platform_data omap7xx_mcbsp_pdata[] = {
+ };
+ #define OMAP7XX_MCBSP_RES_SZ		ARRAY_SIZE(omap7xx_mcbsp_res[1])
+ #define OMAP7XX_MCBSP_COUNT		ARRAY_SIZE(omap7xx_mcbsp_res)
+-#else
+-#define omap7xx_mcbsp_res_0		NULL
+-#define omap7xx_mcbsp_pdata		NULL
+-#define OMAP7XX_MCBSP_RES_SZ		0
+-#define OMAP7XX_MCBSP_COUNT		0
+-#endif
+ 
+-#ifdef CONFIG_ARCH_OMAP15XX
+ struct resource omap15xx_mcbsp_res[][6] = {
+ 	{
+ 		{
+@@ -266,14 +258,7 @@ static struct omap_mcbsp_platform_data omap15xx_mcbsp_pdata[] = {
+ };
+ #define OMAP15XX_MCBSP_RES_SZ		ARRAY_SIZE(omap15xx_mcbsp_res[1])
+ #define OMAP15XX_MCBSP_COUNT		ARRAY_SIZE(omap15xx_mcbsp_res)
+-#else
+-#define omap15xx_mcbsp_res_0		NULL
+-#define omap15xx_mcbsp_pdata		NULL
+-#define OMAP15XX_MCBSP_RES_SZ		0
+-#define OMAP15XX_MCBSP_COUNT		0
+-#endif
+ 
+-#ifdef CONFIG_ARCH_OMAP16XX
+ struct resource omap16xx_mcbsp_res[][6] = {
+ 	{
+ 		{
+@@ -373,12 +358,6 @@ static struct omap_mcbsp_platform_data omap16xx_mcbsp_pdata[] = {
+ };
+ #define OMAP16XX_MCBSP_RES_SZ		ARRAY_SIZE(omap16xx_mcbsp_res[1])
+ #define OMAP16XX_MCBSP_COUNT		ARRAY_SIZE(omap16xx_mcbsp_res)
+-#else
+-#define omap16xx_mcbsp_res_0		NULL
+-#define omap16xx_mcbsp_pdata		NULL
+-#define OMAP16XX_MCBSP_RES_SZ		0
+-#define OMAP16XX_MCBSP_COUNT		0
+-#endif
+ 
+ static void omap_mcbsp_register_board_cfg(struct resource *res, int res_count,
+ 			struct omap_mcbsp_platform_data *config, int size)
+diff --git a/arch/arm/mach-omap1/pm.h b/arch/arm/mach-omap1/pm.h
+index d9165709c5323..0d1f092821ff8 100644
+--- a/arch/arm/mach-omap1/pm.h
++++ b/arch/arm/mach-omap1/pm.h
+@@ -106,13 +106,6 @@
+ #define OMAP7XX_IDLECT3		0xfffece24
+ #define OMAP7XX_IDLE_LOOP_REQUEST	0x0C00
+ 
+-#if     !defined(CONFIG_ARCH_OMAP730) && \
+-	!defined(CONFIG_ARCH_OMAP850) && \
+-	!defined(CONFIG_ARCH_OMAP15XX) && \
+-	!defined(CONFIG_ARCH_OMAP16XX)
+-#warning "Power management for this processor not implemented yet"
+-#endif
+-
+ #ifndef __ASSEMBLER__
+ 
+ #include <linux/clk.h>
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index bb916a0948a8f..d944ecca1b3c2 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -1279,7 +1279,7 @@
+ 			reg = <0x32f10100 0x8>,
+ 			      <0x381f0000 0x20>;
+ 			clocks = <&clk IMX8MP_CLK_HSIO_ROOT>,
+-				 <&clk IMX8MP_CLK_USB_ROOT>;
++				 <&clk IMX8MP_CLK_USB_SUSP>;
+ 			clock-names = "hsio", "suspend";
+ 			interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+ 			power-domains = <&hsio_blk_ctrl IMX8MP_HSIOBLK_PD_USB>;
+@@ -1292,9 +1292,9 @@
+ 			usb_dwc3_0: usb@38100000 {
+ 				compatible = "snps,dwc3";
+ 				reg = <0x38100000 0x10000>;
+-				clocks = <&clk IMX8MP_CLK_HSIO_AXI>,
++				clocks = <&clk IMX8MP_CLK_USB_ROOT>,
+ 					 <&clk IMX8MP_CLK_USB_CORE_REF>,
+-					 <&clk IMX8MP_CLK_USB_ROOT>;
++					 <&clk IMX8MP_CLK_USB_SUSP>;
+ 				clock-names = "bus_early", "ref", "suspend";
+ 				interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+ 				phys = <&usb3_phy0>, <&usb3_phy0>;
+@@ -1321,7 +1321,7 @@
+ 			reg = <0x32f10108 0x8>,
+ 			      <0x382f0000 0x20>;
+ 			clocks = <&clk IMX8MP_CLK_HSIO_ROOT>,
+-				 <&clk IMX8MP_CLK_USB_ROOT>;
++				 <&clk IMX8MP_CLK_USB_SUSP>;
+ 			clock-names = "hsio", "suspend";
+ 			interrupts = <GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>;
+ 			power-domains = <&hsio_blk_ctrl IMX8MP_HSIOBLK_PD_USB>;
+@@ -1334,9 +1334,9 @@
+ 			usb_dwc3_1: usb@38200000 {
+ 				compatible = "snps,dwc3";
+ 				reg = <0x38200000 0x10000>;
+-				clocks = <&clk IMX8MP_CLK_HSIO_AXI>,
++				clocks = <&clk IMX8MP_CLK_USB_ROOT>,
+ 					 <&clk IMX8MP_CLK_USB_CORE_REF>,
+-					 <&clk IMX8MP_CLK_USB_ROOT>;
++					 <&clk IMX8MP_CLK_USB_SUSP>;
+ 				clock-names = "bus_early", "ref", "suspend";
+ 				interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+ 				phys = <&usb3_phy1>, <&usb3_phy1>;
+diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
+index 439e2bc5d5d8b..b9f3165075c9d 100644
+--- a/arch/arm64/include/asm/efi.h
++++ b/arch/arm64/include/asm/efi.h
+@@ -25,6 +25,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ ({									\
+ 	efi_virtmap_load();						\
+ 	__efi_fpsimd_begin();						\
++	spin_lock(&efi_rt_lock);					\
+ })
+ 
+ #undef arch_efi_call_virt
+@@ -33,10 +34,12 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ 
+ #define arch_efi_call_virt_teardown()					\
+ ({									\
++	spin_unlock(&efi_rt_lock);					\
+ 	__efi_fpsimd_end();						\
+ 	efi_virtmap_unload();						\
+ })
+ 
++extern spinlock_t efi_rt_lock;
+ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
+ 
+ #define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S
+index 75691a2641c1c..2d3c4b02393e4 100644
+--- a/arch/arm64/kernel/efi-rt-wrapper.S
++++ b/arch/arm64/kernel/efi-rt-wrapper.S
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <linux/linkage.h>
++#include <asm/assembler.h>
+ 
+ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	stp	x29, x30, [sp, #-32]!
+@@ -16,6 +17,12 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	 */
+ 	stp	x1, x18, [sp, #16]
+ 
++	ldr_l	x16, efi_rt_stack_top
++	mov	sp, x16
++#ifdef CONFIG_SHADOW_CALL_STACK
++	str	x18, [sp, #-16]!
++#endif
++
+ 	/*
+ 	 * We are lucky enough that no EFI runtime services take more than
+ 	 * 5 arguments, so all are passed in registers rather than via the
+@@ -29,6 +36,7 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	mov	x4, x6
+ 	blr	x8
+ 
++	mov	sp, x29
+ 	ldp	x1, x2, [sp, #16]
+ 	cmp	x2, x18
+ 	ldp	x29, x30, [sp], #32
+@@ -42,6 +50,10 @@ SYM_FUNC_START(__efi_rt_asm_wrapper)
+ 	 * called with preemption disabled and a separate shadow stack is used
+ 	 * for interrupts.
+ 	 */
+-	mov	x18, x2
++#ifdef CONFIG_SHADOW_CALL_STACK
++	ldr_l	x18, efi_rt_stack_top
++	ldr	x18, [x18, #-16]
++#endif
++
+ 	b	efi_handle_corrupted_x18	// tail call
+ SYM_FUNC_END(__efi_rt_asm_wrapper)
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index a908a37f03678..386bd81ca12bb 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -144,3 +144,30 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
+ 	pr_err_ratelimited(FW_BUG "register x18 corrupted by EFI %s\n", f);
+ 	return s;
+ }
++
++DEFINE_SPINLOCK(efi_rt_lock);
++
++asmlinkage u64 *efi_rt_stack_top __ro_after_init;
++
++/* EFI requires 8 KiB of stack space for runtime services */
++static_assert(THREAD_SIZE >= SZ_8K);
++
++static int __init arm64_efi_rt_init(void)
++{
++	void *p;
++
++	if (!efi_enabled(EFI_RUNTIME_SERVICES))
++		return 0;
++
++	p = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, GFP_KERNEL,
++			   NUMA_NO_NODE, &&l);
++l:	if (!p) {
++		pr_warn("Failed to allocate EFI runtime stack\n");
++		clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
++		return -ENOMEM;
++	}
++
++	efi_rt_stack_top = p + THREAD_SIZE;
++	return 0;
++}
++core_initcall(arm64_efi_rt_init);
+diff --git a/arch/loongarch/kernel/cpu-probe.c b/arch/loongarch/kernel/cpu-probe.c
+index 255a09876ef28..3a3fce2d78461 100644
+--- a/arch/loongarch/kernel/cpu-probe.c
++++ b/arch/loongarch/kernel/cpu-probe.c
+@@ -94,7 +94,7 @@ static void cpu_probe_common(struct cpuinfo_loongarch *c)
+ 	c->options = LOONGARCH_CPU_CPUCFG | LOONGARCH_CPU_CSR |
+ 		     LOONGARCH_CPU_TLB | LOONGARCH_CPU_VINT | LOONGARCH_CPU_WATCH;
+ 
+-	elf_hwcap |= HWCAP_LOONGARCH_CRC32;
++	elf_hwcap = HWCAP_LOONGARCH_CPUCFG | HWCAP_LOONGARCH_CRC32;
+ 
+ 	config = read_cpucfg(LOONGARCH_CPUCFG1);
+ 	if (config & CPUCFG1_UAL) {
+diff --git a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+index 43bed6c0a84fe..5235fd1c9cb67 100644
+--- a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
++++ b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+@@ -328,7 +328,7 @@
+ 			bus-range = <0x0 0xff>;
+ 			ranges = <0x81000000  0x0 0x60080000  0x0 0x60080000 0x0 0x10000>,      /* I/O */
+ 				 <0x82000000  0x0 0x60090000  0x0 0x60090000 0x0 0xff70000>,    /* mem */
+-				 <0x82000000  0x0 0x70000000  0x0 0x70000000 0x0 0x1000000>,    /* mem */
++				 <0x82000000  0x0 0x70000000  0x0 0x70000000 0x0 0x10000000>,    /* mem */
+ 				 <0xc3000000 0x20 0x00000000 0x20 0x00000000 0x20 0x00000000>;  /* mem prefetchable */
+ 			num-lanes = <0x8>;
+ 			interrupts = <56>, <57>, <58>, <59>, <60>, <61>, <62>, <63>, <64>;
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index a829492bca4c1..52e6e7ed4f78a 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -800,13 +800,18 @@ static const struct x86_cpu_id rapl_model_match[] __initconst = {
+ 	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X,		&model_hsx),
+ 	X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L,		&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE,		&model_skl),
++	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L,		&model_skl),
++	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE,		&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE,		&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L,		&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N,		&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X,	&model_spr),
++	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X,	&model_spr),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE,		&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P,	&model_skl),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S,	&model_skl),
++	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE,		&model_skl),
++	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L,	&model_skl),
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(x86cpu, rapl_model_match);
+diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
+index 8946f89761cc3..851eb13edc014 100644
+--- a/arch/x86/kernel/fpu/init.c
++++ b/arch/x86/kernel/fpu/init.c
+@@ -133,9 +133,6 @@ static void __init fpu__init_system_generic(void)
+ 	fpu__init_system_mxcsr();
+ }
+ 
+-/* Get alignment of the TYPE. */
+-#define TYPE_ALIGN(TYPE) offsetof(struct { char x; TYPE test; }, test)
+-
+ /*
+  * Enforce that 'MEMBER' is the last field of 'TYPE'.
+  *
+@@ -143,8 +140,8 @@ static void __init fpu__init_system_generic(void)
+  * because that's how C aligns structs.
+  */
+ #define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \
+-	BUILD_BUG_ON(sizeof(TYPE) != ALIGN(offsetofend(TYPE, MEMBER), \
+-					   TYPE_ALIGN(TYPE)))
++	BUILD_BUG_ON(sizeof(TYPE) !=         \
++		     ALIGN(offsetofend(TYPE, MEMBER), _Alignof(TYPE)))
+ 
+ /*
+  * We append the 'struct fpu' to the task_struct:
+diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S
+index a1f9416bf67a5..6ff2f56cb0f71 100644
+--- a/arch/x86/lib/iomap_copy_64.S
++++ b/arch/x86/lib/iomap_copy_64.S
+@@ -10,6 +10,6 @@
+  */
+ SYM_FUNC_START(__iowrite32_copy)
+ 	movl %edx,%ecx
+-	rep movsd
++	rep movsl
+ 	RET
+ SYM_FUNC_END(__iowrite32_copy)
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 6672f1bce3795..f10c2a0d18d41 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -294,7 +294,7 @@ static inline int deadline_check_fifo(struct dd_per_prio *per_prio,
+ /*
+  * Check if rq has a sequential request preceding it.
+  */
+-static bool deadline_is_seq_writes(struct deadline_data *dd, struct request *rq)
++static bool deadline_is_seq_write(struct deadline_data *dd, struct request *rq)
+ {
+ 	struct request *prev = deadline_earlier_request(rq);
+ 
+@@ -353,7 +353,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
+ 	list_for_each_entry(rq, &per_prio->fifo_list[DD_WRITE], queuelist) {
+ 		if (blk_req_can_dispatch_to_zone(rq) &&
+ 		    (blk_queue_nonrot(rq->q) ||
+-		     !deadline_is_seq_writes(dd, rq)))
++		     !deadline_is_seq_write(dd, rq)))
+ 			goto out;
+ 	}
+ 	rq = NULL;
+diff --git a/drivers/accessibility/speakup/spk_ttyio.c b/drivers/accessibility/speakup/spk_ttyio.c
+index 08cf8a17754bb..07373b3debd1e 100644
+--- a/drivers/accessibility/speakup/spk_ttyio.c
++++ b/drivers/accessibility/speakup/spk_ttyio.c
+@@ -354,6 +354,9 @@ void spk_ttyio_release(struct spk_synth *in_synth)
+ {
+ 	struct tty_struct *tty = in_synth->dev;
+ 
++	if (tty == NULL)
++		return;
++
+ 	tty_lock(tty);
+ 
+ 	if (tty->ops->close)
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index 998101cf16e47..3d4c4620f9f95 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -236,6 +236,11 @@ static acpi_status acpi_platformrt_space_handler(u32 function,
+ 	efi_status_t status;
+ 	struct prm_context_buffer context;
+ 
++	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
++		pr_err_ratelimited("PRM: EFI runtime services no longer available\n");
++		return AE_NO_HANDLER;
++	}
++
+ 	/*
+ 	 * The returned acpi_status will always be AE_OK. Error values will be
+ 	 * saved in the first byte of the PRM message buffer to be used by ASL.
+@@ -325,6 +330,11 @@ void __init init_prmt(void)
+ 
+ 	pr_info("PRM: found %u modules\n", mc);
+ 
++	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
++		pr_err("PRM: EFI runtime services unavailable\n");
++		return;
++	}
++
+ 	status = acpi_install_address_space_handler(ACPI_ROOT_OBJECT,
+ 						    ACPI_ADR_SPACE_PLATFORM_RT,
+ 						    &acpi_platformrt_space_handler,
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index 4cea3b08087ed..2f1a92509271c 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -2400,6 +2400,8 @@ static void pkt_submit_bio(struct bio *bio)
+ 	struct bio *split;
+ 
+ 	bio = bio_split_to_limits(bio);
++	if (!bio)
++		return;
+ 
+ 	pkt_dbg(2, pd, "start = %6llx stop = %6llx\n",
+ 		(unsigned long long)bio->bi_iter.bi_sector,
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index bae9b2a408d95..e4398590b0edc 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2157,10 +2157,17 @@ static void qca_serdev_shutdown(struct device *dev)
+ 	int timeout = msecs_to_jiffies(CMD_TRANS_TIMEOUT_MS);
+ 	struct serdev_device *serdev = to_serdev_device(dev);
+ 	struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
++	struct hci_uart *hu = &qcadev->serdev_hu;
++	struct hci_dev *hdev = hu->hdev;
++	struct qca_data *qca = hu->priv;
+ 	const u8 ibs_wake_cmd[] = { 0xFD };
+ 	const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 };
+ 
+ 	if (qcadev->btsoc_type == QCA_QCA6390) {
++		if (test_bit(QCA_BT_OFF, &qca->flags) ||
++		    !test_bit(HCI_RUNNING, &hdev->flags))
++			return;
++
+ 		serdev_device_write_flush(serdev);
+ 		ret = serdev_device_write_buf(serdev, ibs_wake_cmd,
+ 					      sizeof(ibs_wake_cmd));
+diff --git a/drivers/comedi/drivers/adv_pci1760.c b/drivers/comedi/drivers/adv_pci1760.c
+index fcfc2e299110e..27f3890f471df 100644
+--- a/drivers/comedi/drivers/adv_pci1760.c
++++ b/drivers/comedi/drivers/adv_pci1760.c
+@@ -58,7 +58,7 @@
+ #define PCI1760_CMD_CLR_IMB2		0x00	/* Clears IMB2 */
+ #define PCI1760_CMD_SET_DO		0x01	/* Set output state */
+ #define PCI1760_CMD_GET_DO		0x02	/* Read output status */
+-#define PCI1760_CMD_GET_STATUS		0x03	/* Read current status */
++#define PCI1760_CMD_GET_STATUS		0x07	/* Read current status */
+ #define PCI1760_CMD_GET_FW_VER		0x0e	/* Read firmware version */
+ #define PCI1760_CMD_GET_HW_VER		0x0f	/* Read hardware version */
+ #define PCI1760_CMD_SET_PWM_HI(x)	(0x10 + (x) * 2) /* Set "hi" period */
+diff --git a/drivers/dma-buf/dma-buf-sysfs-stats.c b/drivers/dma-buf/dma-buf-sysfs-stats.c
+index 2bba0babcb62b..4b680e10c15a3 100644
+--- a/drivers/dma-buf/dma-buf-sysfs-stats.c
++++ b/drivers/dma-buf/dma-buf-sysfs-stats.c
+@@ -168,14 +168,11 @@ void dma_buf_uninit_sysfs_statistics(void)
+ 	kset_unregister(dma_buf_stats_kset);
+ }
+ 
+-int dma_buf_stats_setup(struct dma_buf *dmabuf)
++int dma_buf_stats_setup(struct dma_buf *dmabuf, struct file *file)
+ {
+ 	struct dma_buf_sysfs_entry *sysfs_entry;
+ 	int ret;
+ 
+-	if (!dmabuf || !dmabuf->file)
+-		return -EINVAL;
+-
+ 	if (!dmabuf->exp_name) {
+ 		pr_err("exporter name must not be empty if stats needed\n");
+ 		return -EINVAL;
+@@ -192,7 +189,7 @@ int dma_buf_stats_setup(struct dma_buf *dmabuf)
+ 
+ 	/* create the directory for buffer stats */
+ 	ret = kobject_init_and_add(&sysfs_entry->kobj, &dma_buf_ktype, NULL,
+-				   "%lu", file_inode(dmabuf->file)->i_ino);
++				   "%lu", file_inode(file)->i_ino);
+ 	if (ret)
+ 		goto err_sysfs_dmabuf;
+ 
+diff --git a/drivers/dma-buf/dma-buf-sysfs-stats.h b/drivers/dma-buf/dma-buf-sysfs-stats.h
+index a49c6e2650ccc..7a8a995b75bae 100644
+--- a/drivers/dma-buf/dma-buf-sysfs-stats.h
++++ b/drivers/dma-buf/dma-buf-sysfs-stats.h
+@@ -13,7 +13,7 @@
+ int dma_buf_init_sysfs_statistics(void);
+ void dma_buf_uninit_sysfs_statistics(void);
+ 
+-int dma_buf_stats_setup(struct dma_buf *dmabuf);
++int dma_buf_stats_setup(struct dma_buf *dmabuf, struct file *file);
+ 
+ void dma_buf_stats_teardown(struct dma_buf *dmabuf);
+ #else
+@@ -25,7 +25,7 @@ static inline int dma_buf_init_sysfs_statistics(void)
+ 
+ static inline void dma_buf_uninit_sysfs_statistics(void) {}
+ 
+-static inline int dma_buf_stats_setup(struct dma_buf *dmabuf)
++static inline int dma_buf_stats_setup(struct dma_buf *dmabuf, struct file *file)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index e6f36c014c4cd..eb6b59363c4f5 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -95,10 +95,11 @@ static int dma_buf_file_release(struct inode *inode, struct file *file)
+ 		return -EINVAL;
+ 
+ 	dmabuf = file->private_data;
+-
+-	mutex_lock(&db_list.lock);
+-	list_del(&dmabuf->list_node);
+-	mutex_unlock(&db_list.lock);
++	if (dmabuf) {
++		mutex_lock(&db_list.lock);
++		list_del(&dmabuf->list_node);
++		mutex_unlock(&db_list.lock);
++	}
+ 
+ 	return 0;
+ }
+@@ -523,17 +524,17 @@ static inline int is_dma_buf_file(struct file *file)
+ 	return file->f_op == &dma_buf_fops;
+ }
+ 
+-static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
++static struct file *dma_buf_getfile(size_t size, int flags)
+ {
+ 	static atomic64_t dmabuf_inode = ATOMIC64_INIT(0);
+-	struct file *file;
+ 	struct inode *inode = alloc_anon_inode(dma_buf_mnt->mnt_sb);
++	struct file *file;
+ 
+ 	if (IS_ERR(inode))
+ 		return ERR_CAST(inode);
+ 
+-	inode->i_size = dmabuf->size;
+-	inode_set_bytes(inode, dmabuf->size);
++	inode->i_size = size;
++	inode_set_bytes(inode, size);
+ 
+ 	/*
+ 	 * The ->i_ino acquired from get_next_ino() is not unique thus
+@@ -547,8 +548,6 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
+ 				 flags, &dma_buf_fops);
+ 	if (IS_ERR(file))
+ 		goto err_alloc_file;
+-	file->private_data = dmabuf;
+-	file->f_path.dentry->d_fsdata = dmabuf;
+ 
+ 	return file;
+ 
+@@ -614,19 +613,11 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
+ 	size_t alloc_size = sizeof(struct dma_buf);
+ 	int ret;
+ 
+-	if (!exp_info->resv)
+-		alloc_size += sizeof(struct dma_resv);
+-	else
+-		/* prevent &dma_buf[1] == dma_buf->resv */
+-		alloc_size += 1;
+-
+-	if (WARN_ON(!exp_info->priv
+-			  || !exp_info->ops
+-			  || !exp_info->ops->map_dma_buf
+-			  || !exp_info->ops->unmap_dma_buf
+-			  || !exp_info->ops->release)) {
++	if (WARN_ON(!exp_info->priv || !exp_info->ops
++		    || !exp_info->ops->map_dma_buf
++		    || !exp_info->ops->unmap_dma_buf
++		    || !exp_info->ops->release))
+ 		return ERR_PTR(-EINVAL);
+-	}
+ 
+ 	if (WARN_ON(exp_info->ops->cache_sgt_mapping &&
+ 		    (exp_info->ops->pin || exp_info->ops->unpin)))
+@@ -638,10 +629,21 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
+ 	if (!try_module_get(exp_info->owner))
+ 		return ERR_PTR(-ENOENT);
+ 
++	file = dma_buf_getfile(exp_info->size, exp_info->flags);
++	if (IS_ERR(file)) {
++		ret = PTR_ERR(file);
++		goto err_module;
++	}
++
++	if (!exp_info->resv)
++		alloc_size += sizeof(struct dma_resv);
++	else
++		/* prevent &dma_buf[1] == dma_buf->resv */
++		alloc_size += 1;
+ 	dmabuf = kzalloc(alloc_size, GFP_KERNEL);
+ 	if (!dmabuf) {
+ 		ret = -ENOMEM;
+-		goto err_module;
++		goto err_file;
+ 	}
+ 
+ 	dmabuf->priv = exp_info->priv;
+@@ -653,44 +655,36 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
+ 	init_waitqueue_head(&dmabuf->poll);
+ 	dmabuf->cb_in.poll = dmabuf->cb_out.poll = &dmabuf->poll;
+ 	dmabuf->cb_in.active = dmabuf->cb_out.active = 0;
++	mutex_init(&dmabuf->lock);
++	INIT_LIST_HEAD(&dmabuf->attachments);
+ 
+ 	if (!resv) {
+-		resv = (struct dma_resv *)&dmabuf[1];
+-		dma_resv_init(resv);
++		dmabuf->resv = (struct dma_resv *)&dmabuf[1];
++		dma_resv_init(dmabuf->resv);
++	} else {
++		dmabuf->resv = resv;
+ 	}
+-	dmabuf->resv = resv;
+ 
+-	file = dma_buf_getfile(dmabuf, exp_info->flags);
+-	if (IS_ERR(file)) {
+-		ret = PTR_ERR(file);
++	ret = dma_buf_stats_setup(dmabuf, file);
++	if (ret)
+ 		goto err_dmabuf;
+-	}
+ 
++	file->private_data = dmabuf;
++	file->f_path.dentry->d_fsdata = dmabuf;
+ 	dmabuf->file = file;
+ 
+-	mutex_init(&dmabuf->lock);
+-	INIT_LIST_HEAD(&dmabuf->attachments);
+-
+ 	mutex_lock(&db_list.lock);
+ 	list_add(&dmabuf->list_node, &db_list.head);
+ 	mutex_unlock(&db_list.lock);
+ 
+-	ret = dma_buf_stats_setup(dmabuf);
+-	if (ret)
+-		goto err_sysfs;
+-
+ 	return dmabuf;
+ 
+-err_sysfs:
+-	/*
+-	 * Set file->f_path.dentry->d_fsdata to NULL so that when
+-	 * dma_buf_release() gets invoked by dentry_ops, it exits
+-	 * early before calling the release() dma_buf op.
+-	 */
+-	file->f_path.dentry->d_fsdata = NULL;
+-	fput(file);
+ err_dmabuf:
++	if (!resv)
++		dma_resv_fini(dmabuf->resv);
+ 	kfree(dmabuf);
++err_file:
++	fput(file);
+ err_module:
+ 	module_put(exp_info->owner);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+index a183d93bd7e29..bf85aa0979ecb 100644
+--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
++++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+@@ -1018,6 +1018,11 @@ static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+ 
+ 	/* The bad descriptor currently is in the head of vc list */
+ 	vd = vchan_next_desc(&chan->vc);
++	if (!vd) {
++		dev_err(chan2dev(chan), "BUG: %s, IRQ with no descriptors\n",
++			axi_chan_name(chan));
++		goto out;
++	}
+ 	/* Remove the completed descriptor from issued list */
+ 	list_del(&vd->node);
+ 
+@@ -1032,6 +1037,7 @@ static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+ 	/* Try to restart the controller */
+ 	axi_chan_start_first_queued(chan);
+ 
++out:
+ 	spin_unlock_irqrestore(&chan->vc.lock, flags);
+ }
+ 
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 6f44fa8f78a5d..6d8ff664fdfb2 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -1173,8 +1173,19 @@ static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
+ 	spin_unlock(&ie->list_lock);
+ 
+ 	list_for_each_entry_safe(desc, itr, &flist, list) {
++		struct dma_async_tx_descriptor *tx;
++
+ 		list_del(&desc->list);
+ 		ctype = desc->completion->status ? IDXD_COMPLETE_NORMAL : IDXD_COMPLETE_ABORT;
++		/*
++		 * wq is being disabled. Any remaining descriptors are
++		 * likely to be stuck and can be dropped. callback could
++		 * point to code that is no longer accessible, for example
++		 * if dmatest module has been unloaded.
++		 */
++		tx = &desc->txd;
++		tx->callback = NULL;
++		tx->callback_result = NULL;
+ 		idxd_dma_complete_txd(desc, ctype, true);
+ 	}
+ }
+@@ -1391,8 +1402,7 @@ err_res_alloc:
+ err_irq:
+ 	idxd_wq_unmap_portal(wq);
+ err_map_portal:
+-	rc = idxd_wq_disable(wq, false);
+-	if (rc < 0)
++	if (idxd_wq_disable(wq, false))
+ 		dev_dbg(dev, "wq %s disable failed\n", dev_name(wq_confdev(wq)));
+ err:
+ 	return rc;
+@@ -1409,11 +1419,11 @@ void drv_disable_wq(struct idxd_wq *wq)
+ 		dev_warn(dev, "Clients has claim on wq %d: %d\n",
+ 			 wq->id, idxd_wq_refcount(wq));
+ 
+-	idxd_wq_free_resources(wq);
+ 	idxd_wq_unmap_portal(wq);
+ 	idxd_wq_drain(wq);
+ 	idxd_wq_free_irq(wq);
+ 	idxd_wq_reset(wq);
++	idxd_wq_free_resources(wq);
+ 	percpu_ref_exit(&wq->wq_active);
+ 	wq->type = IDXD_WQT_NONE;
+ 	wq->client_count = 0;
+diff --git a/drivers/dma/lgm/lgm-dma.c b/drivers/dma/lgm/lgm-dma.c
+index 9b9184f964be3..1709d159af7e0 100644
+--- a/drivers/dma/lgm/lgm-dma.c
++++ b/drivers/dma/lgm/lgm-dma.c
+@@ -914,7 +914,7 @@ static void ldma_dev_init(struct ldma_dev *d)
+ 	}
+ }
+ 
+-static int ldma_cfg_init(struct ldma_dev *d)
++static int ldma_parse_dt(struct ldma_dev *d)
+ {
+ 	struct fwnode_handle *fwnode = dev_fwnode(d->dev);
+ 	struct ldma_port *p;
+@@ -1661,10 +1661,6 @@ static int intel_ldma_probe(struct platform_device *pdev)
+ 		p->ldev = d;
+ 	}
+ 
+-	ret = ldma_cfg_init(d);
+-	if (ret)
+-		return ret;
+-
+ 	dma_dev->dev = &pdev->dev;
+ 
+ 	ch_mask = (unsigned long)d->channels_mask;
+@@ -1675,6 +1671,10 @@ static int intel_ldma_probe(struct platform_device *pdev)
+ 			ldma_dma_init_v3X(j, d);
+ 	}
+ 
++	ret = ldma_parse_dt(d);
++	if (ret)
++		return ret;
++
+ 	dma_dev->device_alloc_chan_resources = ldma_alloc_chan_resources;
+ 	dma_dev->device_free_chan_resources = ldma_free_chan_resources;
+ 	dma_dev->device_terminate_all = ldma_terminate_all;
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index ae39b52012b2f..79da93cc77b64 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -221,7 +221,7 @@ static int tegra_adma_init(struct tegra_adma *tdma)
+ 	int ret;
+ 
+ 	/* Clear any interrupts */
+-	tdma_write(tdma, tdma->cdata->global_int_clear, 0x1);
++	tdma_write(tdma, tdma->cdata->ch_base_offset + tdma->cdata->global_int_clear, 0x1);
+ 
+ 	/* Assert soft reset */
+ 	tdma_write(tdma, ADMA_GLOBAL_SOFT_RESET, 0x1);
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index 4e2575dfeb908..871bedf533a80 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -361,9 +361,10 @@ static efi_status_t gsmi_get_variable(efi_char16_t *name,
+ 		memcpy(data, gsmi_dev.data_buf->start, *data_size);
+ 
+ 		/* All variables are have the following attributes */
+-		*attr = EFI_VARIABLE_NON_VOLATILE |
+-			EFI_VARIABLE_BOOTSERVICE_ACCESS |
+-			EFI_VARIABLE_RUNTIME_ACCESS;
++		if (attr)
++			*attr = EFI_VARIABLE_NON_VOLATILE |
++				EFI_VARIABLE_BOOTSERVICE_ACCESS |
++				EFI_VARIABLE_RUNTIME_ACCESS;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&gsmi_dev.lock, flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 3993e61349141..d8441e273cb5d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1507,6 +1507,7 @@ static int amdgpu_discovery_set_common_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
++	case IP_VERSION(11, 0, 4):
+ 		amdgpu_device_ip_block_add(adev, &soc21_common_ip_block);
+ 		break;
+ 	default:
+@@ -1551,6 +1552,7 @@ static int amdgpu_discovery_set_gmc_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
++	case IP_VERSION(11, 0, 4):
+ 		amdgpu_device_ip_block_add(adev, &gmc_v11_0_ip_block);
+ 		break;
+ 	default:
+@@ -1636,6 +1638,7 @@ static int amdgpu_discovery_set_psp_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(13, 0, 7):
+ 	case IP_VERSION(13, 0, 8):
+ 	case IP_VERSION(13, 0, 10):
++	case IP_VERSION(13, 0, 11):
+ 		amdgpu_device_ip_block_add(adev, &psp_v13_0_ip_block);
+ 		break;
+ 	case IP_VERSION(13, 0, 4):
+@@ -1686,6 +1689,7 @@ static int amdgpu_discovery_set_smu_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(13, 0, 7):
+ 	case IP_VERSION(13, 0, 8):
+ 	case IP_VERSION(13, 0, 10):
++	case IP_VERSION(13, 0, 11):
+ 		amdgpu_device_ip_block_add(adev, &smu_v13_0_ip_block);
+ 		break;
+ 	default:
+@@ -1785,6 +1789,7 @@ static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
++	case IP_VERSION(11, 0, 4):
+ 		amdgpu_device_ip_block_add(adev, &gfx_v11_0_ip_block);
+ 		break;
+ 	default:
+@@ -1948,6 +1953,7 @@ static int amdgpu_discovery_set_mes_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
++	case IP_VERSION(11, 0, 4):
+ 		amdgpu_device_ip_block_add(adev, &mes_v11_0_ip_block);
+ 		adev->enable_mes = true;
+ 		adev->enable_mes_kiq = true;
+@@ -2177,6 +2183,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->family = AMDGPU_FAMILY_GC_11_0_0;
+ 		break;
+ 	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
+ 		adev->family = AMDGPU_FAMILY_GC_11_0_1;
+ 		break;
+ 	default:
+@@ -2194,6 +2201,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 	case IP_VERSION(10, 3, 6):
+ 	case IP_VERSION(10, 3, 7):
+ 	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
+ 		adev->flags |= AMD_IS_APU;
+ 		break;
+ 	default:
+@@ -2250,6 +2258,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
+ 		adev->nbio.hdp_flush_reg = &nbio_v4_3_hdp_flush_reg;
+ 		break;
+ 	case IP_VERSION(7, 7, 0):
++	case IP_VERSION(7, 7, 1):
+ 		adev->nbio.funcs = &nbio_v7_7_funcs;
+ 		adev->nbio.hdp_flush_reg = &nbio_v7_7_hdp_flush_reg;
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 9546adc8a76f6..99f5e38c4835e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -156,6 +156,9 @@ static bool amdgpu_gfx_is_compute_multipipe_capable(struct amdgpu_device *adev)
+ 		return amdgpu_compute_multipipe == 1;
+ 	}
+ 
++	if (adev->ip_versions[GC_HWIP][0] > IP_VERSION(9, 0, 0))
++		return true;
++
+ 	/* FIXME: spreading the queues across pipes causes perf regressions
+ 	 * on POLARIS11 compute workloads */
+ 	if (adev->asic_type == CHIP_POLARIS11)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index 28612e56d0d45..02a4c93673ce2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -548,6 +548,8 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
+ 	case IP_VERSION(10, 3, 1):
+ 	/* YELLOW_CARP*/
+ 	case IP_VERSION(10, 3, 3):
++	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
+ 		/* Don't enable it by default yet.
+ 		 */
+ 		if (amdgpu_tmz < 1) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index adac650cf544a..3bf0e893c07df 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -154,8 +154,14 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
+ 	struct dma_fence *f;
+ 	unsigned i;
+ 
+-	/* use sched fence if available */
+-	f = job->base.s_fence ? &job->base.s_fence->finished :  &job->hw_fence;
++	/* Check if any fences where initialized */
++	if (job->base.s_fence && job->base.s_fence->finished.ops)
++		f = &job->base.s_fence->finished;
++	else if (job->hw_fence.ops)
++		f = &job->hw_fence;
++	else
++		f = NULL;
++
+ 	for (i = 0; i < job->num_ibs; ++i)
+ 		amdgpu_ib_free(ring->adev, &job->ibs[i], f);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 7978307e1d6d2..712dd72f3ccf2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -139,6 +139,7 @@ static int psp_early_init(void *handle)
+ 	case IP_VERSION(13, 0, 5):
+ 	case IP_VERSION(13, 0, 8):
+ 	case IP_VERSION(13, 0, 10):
++	case IP_VERSION(13, 0, 11):
+ 		psp_v13_0_set_psp_funcs(psp);
+ 		psp->autoload_supported = true;
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 0fecc5bf45bc5..3bf4f2edc1089 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -77,6 +77,10 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_3_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_3_me.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_11_0_3_rlc.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_4_pfp.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_4_me.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_4_mec.bin");
++MODULE_FIRMWARE("amdgpu/gc_11_0_4_rlc.bin");
+ 
+ static const struct soc15_reg_golden golden_settings_gc_11_0_1[] =
+ {
+@@ -262,6 +266,7 @@ static void gfx_v11_0_init_golden_registers(struct amdgpu_device *adev)
+ {
+ 	switch (adev->ip_versions[GC_HWIP][0]) {
+ 	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
+ 		soc15_program_register_sequence(adev,
+ 						golden_settings_gc_11_0_1,
+ 						(const u32)ARRAY_SIZE(golden_settings_gc_11_0_1));
+@@ -856,6 +861,7 @@ static int gfx_v11_0_gpu_early_init(struct amdgpu_device *adev)
+ 		adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+ 		break;
+ 	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
+ 		adev->gfx.config.max_hw_contexts = 8;
+ 		adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+ 		adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+@@ -1282,7 +1288,6 @@ static int gfx_v11_0_sw_init(void *handle)
+ 
+ 	switch (adev->ip_versions[GC_HWIP][0]) {
+ 	case IP_VERSION(11, 0, 0):
+-	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
+ 		adev->gfx.me.num_me = 1;
+@@ -1292,6 +1297,15 @@ static int gfx_v11_0_sw_init(void *handle)
+ 		adev->gfx.mec.num_pipe_per_mec = 4;
+ 		adev->gfx.mec.num_queue_per_pipe = 4;
+ 		break;
++	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
++		adev->gfx.me.num_me = 1;
++		adev->gfx.me.num_pipe_per_me = 1;
++		adev->gfx.me.num_queue_per_pipe = 1;
++		adev->gfx.mec.num_mec = 1;
++		adev->gfx.mec.num_pipe_per_mec = 4;
++		adev->gfx.mec.num_queue_per_pipe = 4;
++		break;
+ 	default:
+ 		adev->gfx.me.num_me = 1;
+ 		adev->gfx.me.num_pipe_per_me = 1;
+@@ -2486,7 +2500,8 @@ static int gfx_v11_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev)
+ 	for (i = 0; i < adev->usec_timeout; i++) {
+ 		cp_status = RREG32_SOC15(GC, 0, regCP_STAT);
+ 
+-		if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 1))
++		if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 1) ||
++				adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 4))
+ 			bootload_status = RREG32_SOC15(GC, 0,
+ 					regRLC_RLCS_BOOTLOAD_STATUS_gc_11_0_1);
+ 		else
+@@ -5022,6 +5037,7 @@ static void gfx_v11_cntl_power_gating(struct amdgpu_device *adev, bool enable)
+ 	if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) {
+ 		switch (adev->ip_versions[GC_HWIP][0]) {
+ 		case IP_VERSION(11, 0, 1):
++		case IP_VERSION(11, 0, 4):
+ 			WREG32_SOC15(GC, 0, regRLC_PG_DELAY_3, RLC_PG_DELAY_3_DEFAULT_GC_11_0_1);
+ 			break;
+ 		default:
+@@ -5055,6 +5071,7 @@ static int gfx_v11_0_set_powergating_state(void *handle,
+ 		amdgpu_gfx_off_ctrl(adev, enable);
+ 		break;
+ 	case IP_VERSION(11, 0, 1):
++	case IP_VERSION(11, 0, 4):
+ 		gfx_v11_cntl_pg(adev, enable);
+ 		amdgpu_gfx_off_ctrl(adev, enable);
+ 		break;
+@@ -5078,6 +5095,7 @@ static int gfx_v11_0_set_clockgating_state(void *handle,
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
++	case IP_VERSION(11, 0, 4):
+ 	        gfx_v11_0_update_gfx_clock_gating(adev,
+ 	                        state ==  AMD_CG_STATE_GATE);
+ 	        break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+index 66dfb574cc7d1..96e0bb5bee78e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+@@ -749,6 +749,7 @@ static int gmc_v11_0_sw_init(void *handle)
+ 	case IP_VERSION(11, 0, 1):
+ 	case IP_VERSION(11, 0, 2):
+ 	case IP_VERSION(11, 0, 3):
++	case IP_VERSION(11, 0, 4):
+ 		adev->num_vmhubs = 2;
+ 		/*
+ 		 * To fulfill 4-level page support,
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index 88f9b327183ab..8c5fa4b7b68a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -46,6 +46,8 @@ MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin");
+ MODULE_FIRMWARE("amdgpu/psp_13_0_10_ta.bin");
++MODULE_FIRMWARE("amdgpu/psp_13_0_11_toc.bin");
++MODULE_FIRMWARE("amdgpu/psp_13_0_11_ta.bin");
+ 
+ /* For large FW files the time to complete can be very long */
+ #define USBC_PD_POLLING_LIMIT_S 240
+@@ -102,6 +104,7 @@ static int psp_v13_0_init_microcode(struct psp_context *psp)
+ 	case IP_VERSION(13, 0, 3):
+ 	case IP_VERSION(13, 0, 5):
+ 	case IP_VERSION(13, 0, 8):
++	case IP_VERSION(13, 0, 11):
+ 		err = psp_init_toc_microcode(psp, chip_name);
+ 		if (err)
+ 			return err;
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index 909cf9f220c19..9bc9852b9cda9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -325,6 +325,7 @@ soc21_asic_reset_method(struct amdgpu_device *adev)
+ 	case IP_VERSION(13, 0, 10):
+ 		return AMD_RESET_METHOD_MODE1;
+ 	case IP_VERSION(13, 0, 4):
++	case IP_VERSION(13, 0, 11):
+ 		return AMD_RESET_METHOD_MODE2;
+ 	default:
+ 		if (amdgpu_dpm_is_baco_supported(adev))
+@@ -654,7 +655,23 @@ static int soc21_common_early_init(void *handle)
+ 		adev->external_rev_id = adev->rev_id + 0x20;
+ 		break;
+ 	case IP_VERSION(11, 0, 4):
+-		adev->cg_flags = AMD_CG_SUPPORT_VCN_MGCG |
++		adev->cg_flags =
++			AMD_CG_SUPPORT_GFX_CGCG |
++			AMD_CG_SUPPORT_GFX_CGLS |
++			AMD_CG_SUPPORT_GFX_MGCG |
++			AMD_CG_SUPPORT_GFX_FGCG |
++			AMD_CG_SUPPORT_REPEATER_FGCG |
++			AMD_CG_SUPPORT_GFX_PERF_CLK |
++			AMD_CG_SUPPORT_MC_MGCG |
++			AMD_CG_SUPPORT_MC_LS |
++			AMD_CG_SUPPORT_HDP_MGCG |
++			AMD_CG_SUPPORT_HDP_LS |
++			AMD_CG_SUPPORT_ATHUB_MGCG |
++			AMD_CG_SUPPORT_ATHUB_LS |
++			AMD_CG_SUPPORT_IH_CG |
++			AMD_CG_SUPPORT_BIF_MGCG |
++			AMD_CG_SUPPORT_BIF_LS |
++			AMD_CG_SUPPORT_VCN_MGCG |
+ 			AMD_CG_SUPPORT_JPEG_MGCG;
+ 		adev->pg_flags = AMD_PG_SUPPORT_VCN |
+ 			AMD_PG_SUPPORT_VCN_DPG |
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index dacad8b85963c..e10f1f15c9c43 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1512,8 +1512,6 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 		case IP_VERSION(3, 0, 1):
+ 		case IP_VERSION(3, 1, 2):
+ 		case IP_VERSION(3, 1, 3):
+-		case IP_VERSION(3, 1, 4):
+-		case IP_VERSION(3, 1, 5):
+ 		case IP_VERSION(3, 1, 6):
+ 			init_data.flags.gpu_vm_support = true;
+ 			break;
+@@ -5283,8 +5281,6 @@ static void fill_stream_properties_from_drm_display_mode(
+ 
+ 	timing_out->aspect_ratio = get_aspect_ratio(mode_in);
+ 
+-	stream->output_color_space = get_output_color_space(timing_out);
+-
+ 	stream->out_transfer_func->type = TF_TYPE_PREDEFINED;
+ 	stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB;
+ 	if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
+@@ -5295,6 +5291,8 @@ static void fill_stream_properties_from_drm_display_mode(
+ 			adjust_colour_depth_from_display_info(timing_out, info);
+ 		}
+ 	}
++
++	stream->output_color_space = get_output_color_space(timing_out);
+ }
+ 
+ static void fill_audio_info(struct audio_info *audio_info,
+@@ -9433,8 +9431,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			goto fail;
+ 		}
+ 
+-		if (dm_old_con_state->abm_level !=
+-		    dm_new_con_state->abm_level)
++		if (dm_old_con_state->abm_level != dm_new_con_state->abm_level ||
++		    dm_old_con_state->scaling != dm_new_con_state->scaling)
+ 			new_crtc_state->connectors_changed = true;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 7c2e3b8dc26ad..95562efad6515 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -90,8 +90,8 @@ static const struct out_csc_color_matrix_type output_csc_matrix[] = {
+ 		{ 0xE00, 0xF349, 0xFEB7, 0x1000, 0x6CE, 0x16E3,
+ 				0x24F, 0x200, 0xFCCB, 0xF535, 0xE00, 0x1000} },
+ 	{ COLOR_SPACE_YCBCR2020_TYPE,
+-		{ 0x1000, 0xF149, 0xFEB7, 0x0000, 0x0868, 0x15B2,
+-				0x01E6, 0x0000, 0xFB88, 0xF478, 0x1000, 0x0000} },
++		{ 0x1000, 0xF149, 0xFEB7, 0x1004, 0x0868, 0x15B2,
++				0x01E6, 0x201, 0xFB88, 0xF478, 0x1000, 0x1004} },
+ 	{ COLOR_SPACE_YCBCR709_BLACK_TYPE,
+ 		{ 0x0000, 0x0000, 0x0000, 0x1000, 0x0000, 0x0000,
+ 				0x0000, 0x0200, 0x0000, 0x0000, 0x0000, 0x1000} },
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index b880f4d7d67e6..2875f6bc3a6a2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -585,6 +585,7 @@ static int smu_set_funcs(struct amdgpu_device *adev)
+ 		yellow_carp_set_ppt_funcs(smu);
+ 		break;
+ 	case IP_VERSION(13, 0, 4):
++	case IP_VERSION(13, 0, 11):
+ 		smu_v13_0_4_set_ppt_funcs(smu);
+ 		break;
+ 	case IP_VERSION(13, 0, 5):
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index 85e22210963fc..5cdc07165480b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -1171,6 +1171,7 @@ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ 	int ret = 0;
+ 	uint32_t apu_percent = 0;
+ 	uint32_t dgpu_percent = 0;
++	struct amdgpu_device *adev = smu->adev;
+ 
+ 
+ 	ret = smu_cmn_get_metrics_table(smu,
+@@ -1196,7 +1197,11 @@ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ 		*value = metrics->AverageUvdActivity / 100;
+ 		break;
+ 	case METRICS_AVERAGE_SOCKETPOWER:
+-		*value = (metrics->CurrentSocketPower << 8) / 1000;
++		if (((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 1)) && (adev->pm.fw_version >= 0x40000f)) ||
++		((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(12, 0, 0)) && (adev->pm.fw_version >= 0x373200)))
++			*value = metrics->CurrentSocketPower << 8;
++		else
++			*value = (metrics->CurrentSocketPower << 8) / 1000;
+ 		break;
+ 	case METRICS_TEMPERATURE_EDGE:
+ 		*value = (metrics->GfxTemperature / 100) *
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index 9f9f64c5cdd88..479cbf05c3310 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -250,6 +250,7 @@ int smu_v13_0_check_fw_status(struct smu_context *smu)
+ 
+ 	switch (adev->ip_versions[MP1_HWIP][0]) {
+ 	case IP_VERSION(13, 0, 4):
++	case IP_VERSION(13, 0, 11):
+ 		mp1_fw_flags = RREG32_PCIE(MP1_Public |
+ 					   (smnMP1_V13_0_4_FIRMWARE_FLAGS & 0xffffffff));
+ 		break;
+@@ -303,6 +304,7 @@ int smu_v13_0_check_fw_version(struct smu_context *smu)
+ 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_YELLOW_CARP;
+ 		break;
+ 	case IP_VERSION(13, 0, 4):
++	case IP_VERSION(13, 0, 11):
+ 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_4;
+ 		break;
+ 	case IP_VERSION(13, 0, 5):
+@@ -843,6 +845,7 @@ int smu_v13_0_gfx_off_control(struct smu_context *smu, bool enable)
+ 	case IP_VERSION(13, 0, 7):
+ 	case IP_VERSION(13, 0, 8):
+ 	case IP_VERSION(13, 0, 10):
++	case IP_VERSION(13, 0, 11):
+ 		if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
+ 			return 0;
+ 		if (enable)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+index 97e1d55dcaad5..8fa9a36c38b64 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
+@@ -1026,6 +1026,15 @@ static const struct pptable_funcs smu_v13_0_4_ppt_funcs = {
+ 	.set_gfx_power_up_by_imu = smu_v13_0_set_gfx_power_up_by_imu,
+ };
+ 
++static void smu_v13_0_4_set_smu_mailbox_registers(struct smu_context *smu)
++{
++	struct amdgpu_device *adev = smu->adev;
++
++	smu->param_reg = SOC15_REG_OFFSET(MP1, 0, mmMP1_SMN_C2PMSG_82);
++	smu->msg_reg = SOC15_REG_OFFSET(MP1, 0, mmMP1_SMN_C2PMSG_66);
++	smu->resp_reg = SOC15_REG_OFFSET(MP1, 0, mmMP1_SMN_C2PMSG_90);
++}
++
+ void smu_v13_0_4_set_ppt_funcs(struct smu_context *smu)
+ {
+ 	struct amdgpu_device *adev = smu->adev;
+@@ -1035,7 +1044,9 @@ void smu_v13_0_4_set_ppt_funcs(struct smu_context *smu)
+ 	smu->feature_map = smu_v13_0_4_feature_mask_map;
+ 	smu->table_map = smu_v13_0_4_table_map;
+ 	smu->is_apu = true;
+-	smu->param_reg = SOC15_REG_OFFSET(MP1, 0, mmMP1_SMN_C2PMSG_82);
+-	smu->msg_reg = SOC15_REG_OFFSET(MP1, 0, mmMP1_SMN_C2PMSG_66);
+-	smu->resp_reg = SOC15_REG_OFFSET(MP1, 0, mmMP1_SMN_C2PMSG_90);
++
++	if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 4))
++		smu_v13_0_4_set_smu_mailbox_registers(smu);
++	else
++		smu_v13_0_set_smu_mailbox_registers(smu);
+ }
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index 7cb7130434089..bc523a3d1d42f 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -1620,7 +1620,7 @@ static int skl_check_main_surface(struct intel_plane_state *plane_state)
+ 	u32 offset;
+ 	int ret;
+ 
+-	if (w > max_width || w < min_width || h > max_height) {
++	if (w > max_width || w < min_width || h > max_height || h < 1) {
+ 		drm_dbg_kms(&dev_priv->drm,
+ 			    "requested Y/RGB source size %dx%d outside limits (min: %dx1 max: %dx%d)\n",
+ 			    w, h, min_width, max_width, max_height);
+diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
+index 2ce30cff461a0..35bc2a3fa811c 100644
+--- a/drivers/gpu/drm/i915/i915_driver.c
++++ b/drivers/gpu/drm/i915/i915_driver.c
+@@ -1070,12 +1070,9 @@ static int i915_driver_open(struct drm_device *dev, struct drm_file *file)
+  */
+ static void i915_driver_lastclose(struct drm_device *dev)
+ {
+-	struct drm_i915_private *i915 = to_i915(dev);
+-
+ 	intel_fbdev_restore_mode(dev);
+ 
+-	if (HAS_DISPLAY(i915))
+-		vga_switcheroo_process_delayed_switch();
++	vga_switcheroo_process_delayed_switch();
+ }
+ 
+ static void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index cd4487a1d3be0..34f2d9da201e2 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -423,7 +423,8 @@ static const struct intel_device_info ilk_m_info = {
+ 	.has_coherent_ggtt = true, \
+ 	.has_llc = 1, \
+ 	.has_rc6 = 1, \
+-	.has_rc6p = 1, \
++	/* snb does support rc6p, but enabling it causes various issues */ \
++	.has_rc6p = 0, \
+ 	.has_rps = true, \
+ 	.dma_mask_size = 40, \
+ 	.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING, \
+diff --git a/drivers/gpu/drm/i915/i915_switcheroo.c b/drivers/gpu/drm/i915/i915_switcheroo.c
+index 23777d500cdf9..f45bd6b6cede4 100644
+--- a/drivers/gpu/drm/i915/i915_switcheroo.c
++++ b/drivers/gpu/drm/i915/i915_switcheroo.c
+@@ -19,6 +19,10 @@ static void i915_switcheroo_set_state(struct pci_dev *pdev,
+ 		dev_err(&pdev->dev, "DRM not initialized, aborting switch.\n");
+ 		return;
+ 	}
++	if (!HAS_DISPLAY(i915)) {
++		dev_err(&pdev->dev, "Device state not initialized, aborting switch.\n");
++		return;
++	}
+ 
+ 	if (state == VGA_SWITCHEROO_ON) {
+ 		drm_info(&i915->drm, "switched on\n");
+@@ -44,7 +48,7 @@ static bool i915_switcheroo_can_switch(struct pci_dev *pdev)
+ 	 * locking inversion with the driver load path. And the access here is
+ 	 * completely racy anyway. So don't bother with locking for now.
+ 	 */
+-	return i915 && atomic_read(&i915->drm.open_count) == 0;
++	return i915 && HAS_DISPLAY(i915) && atomic_read(&i915->drm.open_count) == 0;
+ }
+ 
+ static const struct vga_switcheroo_client_ops i915_switcheroo_ops = {
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
+index 00b0068fda208..5d94db453df32 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.h
++++ b/drivers/infiniband/ulp/srp/ib_srp.h
+@@ -62,9 +62,6 @@ enum {
+ 	SRP_DEFAULT_CMD_SQ_SIZE = SRP_DEFAULT_QUEUE_SIZE - SRP_RSP_SQ_SIZE -
+ 				  SRP_TSK_MGMT_SQ_SIZE,
+ 
+-	SRP_TAG_NO_REQ		= ~0U,
+-	SRP_TAG_TSK_MGMT	= 1U << 31,
+-
+ 	SRP_MAX_PAGES_PER_MR	= 512,
+ 
+ 	SRP_MAX_ADD_CDB_LEN	= 16,
+@@ -79,6 +76,11 @@ enum {
+ 				  sizeof(struct srp_imm_buf),
+ };
+ 
++enum {
++	SRP_TAG_NO_REQ		= ~0U,
++	SRP_TAG_TSK_MGMT	= BIT(31),
++};
++
+ enum srp_target_state {
+ 	SRP_TARGET_SCANNING,
+ 	SRP_TARGET_LIVE,
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 7ff0b63c25e37..80811e852d8fd 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -316,6 +316,13 @@ static void fastrpc_free_map(struct kref *ref)
+ 		dma_buf_put(map->buf);
+ 	}
+ 
++	if (map->fl) {
++		spin_lock(&map->fl->lock);
++		list_del(&map->node);
++		spin_unlock(&map->fl->lock);
++		map->fl = NULL;
++	}
++
+ 	kfree(map);
+ }
+ 
+@@ -325,38 +332,41 @@ static void fastrpc_map_put(struct fastrpc_map *map)
+ 		kref_put(&map->refcount, fastrpc_free_map);
+ }
+ 
+-static void fastrpc_map_get(struct fastrpc_map *map)
++static int fastrpc_map_get(struct fastrpc_map *map)
+ {
+-	if (map)
+-		kref_get(&map->refcount);
++	if (!map)
++		return -ENOENT;
++
++	return kref_get_unless_zero(&map->refcount) ? 0 : -ENOENT;
+ }
+ 
+ 
+ static int fastrpc_map_lookup(struct fastrpc_user *fl, int fd,
+-			    struct fastrpc_map **ppmap)
++			    struct fastrpc_map **ppmap, bool take_ref)
+ {
++	struct fastrpc_session_ctx *sess = fl->sctx;
+ 	struct fastrpc_map *map = NULL;
++	int ret = -ENOENT;
+ 
+-	mutex_lock(&fl->mutex);
++	spin_lock(&fl->lock);
+ 	list_for_each_entry(map, &fl->maps, node) {
+-		if (map->fd == fd) {
+-			*ppmap = map;
+-			mutex_unlock(&fl->mutex);
+-			return 0;
+-		}
+-	}
+-	mutex_unlock(&fl->mutex);
+-
+-	return -ENOENT;
+-}
++		if (map->fd != fd)
++			continue;
+ 
+-static int fastrpc_map_find(struct fastrpc_user *fl, int fd,
+-			    struct fastrpc_map **ppmap)
+-{
+-	int ret = fastrpc_map_lookup(fl, fd, ppmap);
++		if (take_ref) {
++			ret = fastrpc_map_get(map);
++			if (ret) {
++				dev_dbg(sess->dev, "%s: Failed to get map fd=%d ret=%d\n",
++					__func__, fd, ret);
++				break;
++			}
++		}
+ 
+-	if (!ret)
+-		fastrpc_map_get(*ppmap);
++		*ppmap = map;
++		ret = 0;
++		break;
++	}
++	spin_unlock(&fl->lock);
+ 
+ 	return ret;
+ }
+@@ -703,7 +713,7 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
+ 	struct fastrpc_map *map = NULL;
+ 	int err = 0;
+ 
+-	if (!fastrpc_map_find(fl, fd, ppmap))
++	if (!fastrpc_map_lookup(fl, fd, ppmap, true))
+ 		return 0;
+ 
+ 	map = kzalloc(sizeof(*map), GFP_KERNEL);
+@@ -1026,7 +1036,7 @@ static int fastrpc_put_args(struct fastrpc_invoke_ctx *ctx,
+ 	for (i = 0; i < FASTRPC_MAX_FDLIST; i++) {
+ 		if (!fdlist[i])
+ 			break;
+-		if (!fastrpc_map_lookup(fl, (int)fdlist[i], &mmap))
++		if (!fastrpc_map_lookup(fl, (int)fdlist[i], &mmap, false))
+ 			fastrpc_map_put(mmap);
+ 	}
+ 
+@@ -1265,12 +1275,7 @@ err_invoke:
+ 	fl->init_mem = NULL;
+ 	fastrpc_buf_free(imem);
+ err_alloc:
+-	if (map) {
+-		spin_lock(&fl->lock);
+-		list_del(&map->node);
+-		spin_unlock(&fl->lock);
+-		fastrpc_map_put(map);
+-	}
++	fastrpc_map_put(map);
+ err:
+ 	kfree(args);
+ 
+@@ -1346,10 +1351,8 @@ static int fastrpc_device_release(struct inode *inode, struct file *file)
+ 		fastrpc_context_put(ctx);
+ 	}
+ 
+-	list_for_each_entry_safe(map, m, &fl->maps, node) {
+-		list_del(&map->node);
++	list_for_each_entry_safe(map, m, &fl->maps, node)
+ 		fastrpc_map_put(map);
+-	}
+ 
+ 	list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
+ 		list_del(&buf->node);
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 46aa3554e97b0..7b7f4190cd023 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -665,13 +665,15 @@ void *mei_cldev_dma_map(struct mei_cl_device *cldev, u8 buffer_id, size_t size)
+ 	if (cl->state == MEI_FILE_UNINITIALIZED) {
+ 		ret = mei_cl_link(cl);
+ 		if (ret)
+-			goto out;
++			goto notlinked;
+ 		/* update pointers */
+ 		cl->cldev = cldev;
+ 	}
+ 
+ 	ret = mei_cl_dma_alloc_and_map(cl, NULL, buffer_id, size);
+-out:
++	if (ret)
++		mei_cl_unlink(cl);
++notlinked:
+ 	mutex_unlock(&bus->device_lock);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+@@ -721,7 +723,7 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 	if (cl->state == MEI_FILE_UNINITIALIZED) {
+ 		ret = mei_cl_link(cl);
+ 		if (ret)
+-			goto out;
++			goto notlinked;
+ 		/* update pointers */
+ 		cl->cldev = cldev;
+ 	}
+@@ -748,6 +750,9 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 	}
+ 
+ out:
++	if (ret)
++		mei_cl_unlink(cl);
++notlinked:
+ 	mutex_unlock(&bus->device_lock);
+ 
+ 	return ret;
+@@ -1115,7 +1120,6 @@ static void mei_cl_bus_dev_release(struct device *dev)
+ 	mei_cl_flush_queues(cldev->cl, NULL);
+ 	mei_me_cl_put(cldev->me_cl);
+ 	mei_dev_bus_put(cldev->bus);
+-	mei_cl_unlink(cldev->cl);
+ 	kfree(cldev->cl);
+ 	kfree(cldev);
+ }
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 99966cd3e7d89..bdc65d50b945f 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -111,6 +111,8 @@
+ 
+ #define MEI_DEV_ID_RPL_S      0x7A68  /* Raptor Lake Point S */
+ 
++#define MEI_DEV_ID_MTL_M      0x7E70  /* Meteor Lake Point M */
++
+ /*
+  * MEI HW Section
+  */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 704cd0caa172c..5bf0d50d55a00 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -118,6 +118,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/misc/vmw_vmci/vmci_guest.c b/drivers/misc/vmw_vmci/vmci_guest.c
+index aa7b05de97dd5..4f8d962bb5b2a 100644
+--- a/drivers/misc/vmw_vmci/vmci_guest.c
++++ b/drivers/misc/vmw_vmci/vmci_guest.c
+@@ -56,8 +56,6 @@ struct vmci_guest_device {
+ 
+ 	bool exclusive_vectors;
+ 
+-	struct tasklet_struct datagram_tasklet;
+-	struct tasklet_struct bm_tasklet;
+ 	struct wait_queue_head inout_wq;
+ 
+ 	void *data_buffer;
+@@ -304,9 +302,8 @@ static int vmci_check_host_caps(struct pci_dev *pdev)
+  * This function assumes that it has exclusive access to the data
+  * in register(s) for the duration of the call.
+  */
+-static void vmci_dispatch_dgs(unsigned long data)
++static void vmci_dispatch_dgs(struct vmci_guest_device *vmci_dev)
+ {
+-	struct vmci_guest_device *vmci_dev = (struct vmci_guest_device *)data;
+ 	u8 *dg_in_buffer = vmci_dev->data_buffer;
+ 	struct vmci_datagram *dg;
+ 	size_t dg_in_buffer_size = VMCI_MAX_DG_SIZE;
+@@ -465,10 +462,8 @@ static void vmci_dispatch_dgs(unsigned long data)
+  * Scans the notification bitmap for raised flags, clears them
+  * and handles the notifications.
+  */
+-static void vmci_process_bitmap(unsigned long data)
++static void vmci_process_bitmap(struct vmci_guest_device *dev)
+ {
+-	struct vmci_guest_device *dev = (struct vmci_guest_device *)data;
+-
+ 	if (!dev->notification_bitmap) {
+ 		dev_dbg(dev->dev, "No bitmap present in %s\n", __func__);
+ 		return;
+@@ -486,13 +481,13 @@ static irqreturn_t vmci_interrupt(int irq, void *_dev)
+ 	struct vmci_guest_device *dev = _dev;
+ 
+ 	/*
+-	 * If we are using MSI-X with exclusive vectors then we simply schedule
+-	 * the datagram tasklet, since we know the interrupt was meant for us.
++	 * If we are using MSI-X with exclusive vectors then we simply call
++	 * vmci_dispatch_dgs(), since we know the interrupt was meant for us.
+ 	 * Otherwise we must read the ICR to determine what to do.
+ 	 */
+ 
+ 	if (dev->exclusive_vectors) {
+-		tasklet_schedule(&dev->datagram_tasklet);
++		vmci_dispatch_dgs(dev);
+ 	} else {
+ 		unsigned int icr;
+ 
+@@ -502,12 +497,12 @@ static irqreturn_t vmci_interrupt(int irq, void *_dev)
+ 			return IRQ_NONE;
+ 
+ 		if (icr & VMCI_ICR_DATAGRAM) {
+-			tasklet_schedule(&dev->datagram_tasklet);
++			vmci_dispatch_dgs(dev);
+ 			icr &= ~VMCI_ICR_DATAGRAM;
+ 		}
+ 
+ 		if (icr & VMCI_ICR_NOTIFICATION) {
+-			tasklet_schedule(&dev->bm_tasklet);
++			vmci_process_bitmap(dev);
+ 			icr &= ~VMCI_ICR_NOTIFICATION;
+ 		}
+ 
+@@ -536,7 +531,7 @@ static irqreturn_t vmci_interrupt_bm(int irq, void *_dev)
+ 	struct vmci_guest_device *dev = _dev;
+ 
+ 	/* For MSI-X we can just assume it was meant for us. */
+-	tasklet_schedule(&dev->bm_tasklet);
++	vmci_process_bitmap(dev);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -638,10 +633,6 @@ static int vmci_guest_probe_device(struct pci_dev *pdev,
+ 	vmci_dev->iobase = iobase;
+ 	vmci_dev->mmio_base = mmio_base;
+ 
+-	tasklet_init(&vmci_dev->datagram_tasklet,
+-		     vmci_dispatch_dgs, (unsigned long)vmci_dev);
+-	tasklet_init(&vmci_dev->bm_tasklet,
+-		     vmci_process_bitmap, (unsigned long)vmci_dev);
+ 	init_waitqueue_head(&vmci_dev->inout_wq);
+ 
+ 	if (mmio_base != NULL) {
+@@ -808,8 +799,9 @@ static int vmci_guest_probe_device(struct pci_dev *pdev,
+ 	 * Request IRQ for legacy or MSI interrupts, or for first
+ 	 * MSI-X vector.
+ 	 */
+-	error = request_irq(pci_irq_vector(pdev, 0), vmci_interrupt,
+-			    IRQF_SHARED, KBUILD_MODNAME, vmci_dev);
++	error = request_threaded_irq(pci_irq_vector(pdev, 0), NULL,
++				     vmci_interrupt, IRQF_SHARED,
++				     KBUILD_MODNAME, vmci_dev);
+ 	if (error) {
+ 		dev_err(&pdev->dev, "Irq %u in use: %d\n",
+ 			pci_irq_vector(pdev, 0), error);
+@@ -823,9 +815,9 @@ static int vmci_guest_probe_device(struct pci_dev *pdev,
+ 	 * between the vectors.
+ 	 */
+ 	if (vmci_dev->exclusive_vectors) {
+-		error = request_irq(pci_irq_vector(pdev, 1),
+-				    vmci_interrupt_bm, 0, KBUILD_MODNAME,
+-				    vmci_dev);
++		error = request_threaded_irq(pci_irq_vector(pdev, 1), NULL,
++					     vmci_interrupt_bm, 0,
++					     KBUILD_MODNAME, vmci_dev);
+ 		if (error) {
+ 			dev_err(&pdev->dev,
+ 				"Failed to allocate irq %u: %d\n",
+@@ -833,9 +825,11 @@ static int vmci_guest_probe_device(struct pci_dev *pdev,
+ 			goto err_free_irq;
+ 		}
+ 		if (caps_in_use & VMCI_CAPS_DMA_DATAGRAM) {
+-			error = request_irq(pci_irq_vector(pdev, 2),
+-					    vmci_interrupt_dma_datagram,
+-					    0, KBUILD_MODNAME, vmci_dev);
++			error = request_threaded_irq(pci_irq_vector(pdev, 2),
++						     NULL,
++						    vmci_interrupt_dma_datagram,
++						     0, KBUILD_MODNAME,
++						     vmci_dev);
+ 			if (error) {
+ 				dev_err(&pdev->dev,
+ 					"Failed to allocate irq %u: %d\n",
+@@ -871,8 +865,6 @@ err_free_bm_irq:
+ 
+ err_free_irq:
+ 	free_irq(pci_irq_vector(pdev, 0), vmci_dev);
+-	tasklet_kill(&vmci_dev->datagram_tasklet);
+-	tasklet_kill(&vmci_dev->bm_tasklet);
+ 
+ err_disable_msi:
+ 	pci_free_irq_vectors(pdev);
+@@ -943,9 +935,6 @@ static void vmci_guest_remove_device(struct pci_dev *pdev)
+ 	free_irq(pci_irq_vector(pdev, 0), vmci_dev);
+ 	pci_free_irq_vectors(pdev);
+ 
+-	tasklet_kill(&vmci_dev->datagram_tasklet);
+-	tasklet_kill(&vmci_dev->bm_tasklet);
+-
+ 	if (vmci_dev->notification_bitmap) {
+ 		/*
+ 		 * The device reset above cleared the bitmap state of the
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index ffeb5759830ff..8c62c3fba75e8 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -107,6 +107,7 @@
+ #define ESDHC_TUNING_START_TAP_DEFAULT	0x1
+ #define ESDHC_TUNING_START_TAP_MASK	0x7f
+ #define ESDHC_TUNING_CMD_CRC_CHECK_DISABLE	(1 << 7)
++#define ESDHC_TUNING_STEP_DEFAULT	0x1
+ #define ESDHC_TUNING_STEP_MASK		0x00070000
+ #define ESDHC_TUNING_STEP_SHIFT		16
+ 
+@@ -1361,7 +1362,7 @@ static void sdhci_esdhc_imx_hwinit(struct sdhci_host *host)
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
+ 	struct cqhci_host *cq_host = host->mmc->cqe_private;
+-	int tmp;
++	u32 tmp;
+ 
+ 	if (esdhc_is_usdhc(imx_data)) {
+ 		/*
+@@ -1416,17 +1417,24 @@ static void sdhci_esdhc_imx_hwinit(struct sdhci_host *host)
+ 
+ 		if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) {
+ 			tmp = readl(host->ioaddr + ESDHC_TUNING_CTRL);
+-			tmp |= ESDHC_STD_TUNING_EN |
+-				ESDHC_TUNING_START_TAP_DEFAULT;
+-			if (imx_data->boarddata.tuning_start_tap) {
+-				tmp &= ~ESDHC_TUNING_START_TAP_MASK;
++			tmp |= ESDHC_STD_TUNING_EN;
++
++			/*
++			 * ROM code or bootloader may config the start tap
++			 * and step, unmask them first.
++			 */
++			tmp &= ~(ESDHC_TUNING_START_TAP_MASK | ESDHC_TUNING_STEP_MASK);
++			if (imx_data->boarddata.tuning_start_tap)
+ 				tmp |= imx_data->boarddata.tuning_start_tap;
+-			}
++			else
++				tmp |= ESDHC_TUNING_START_TAP_DEFAULT;
+ 
+ 			if (imx_data->boarddata.tuning_step) {
+-				tmp &= ~ESDHC_TUNING_STEP_MASK;
+ 				tmp |= imx_data->boarddata.tuning_step
+ 					<< ESDHC_TUNING_STEP_SHIFT;
++			} else {
++				tmp |= ESDHC_TUNING_STEP_DEFAULT
++					<< ESDHC_TUNING_STEP_SHIFT;
+ 			}
+ 
+ 			/* Disable the CMD CRC check for tuning, if not, need to
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index b16e12e62e722..3db9f32d6a7b9 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -1492,9 +1492,11 @@ static int sunxi_mmc_remove(struct platform_device *pdev)
+ 	struct sunxi_mmc_host *host = mmc_priv(mmc);
+ 
+ 	mmc_remove_host(mmc);
+-	pm_runtime_force_suspend(&pdev->dev);
+-	disable_irq(host->irq);
+-	sunxi_mmc_disable(host);
++	pm_runtime_disable(&pdev->dev);
++	if (!pm_runtime_status_suspended(&pdev->dev)) {
++		disable_irq(host->irq);
++		sunxi_mmc_disable(host);
++	}
+ 	dma_free_coherent(&pdev->dev, PAGE_SIZE, host->sg_cpu, host->sg_dma);
+ 	mmc_free_host(mmc);
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index fa8029a940689..eb25e458266ca 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -589,7 +589,7 @@ int rvu_mbox_handler_mcs_free_resources(struct rvu *rvu,
+ 	u16 pcifunc = req->hdr.pcifunc;
+ 	struct mcs_rsrc_map *map;
+ 	struct mcs *mcs;
+-	int rc;
++	int rc = 0;
+ 
+ 	if (req->mcs_id >= rvu->mcs_blk_cnt)
+ 		return MCS_AF_ERR_INVALID_MCSID;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 88f8772a61cd5..8a41ad8ca04f1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1012,7 +1012,6 @@ static void otx2_pool_refill_task(struct work_struct *work)
+ 	rbpool = cq->rbpool;
+ 	free_ptrs = cq->pool_ptrs;
+ 
+-	get_cpu();
+ 	while (cq->pool_ptrs) {
+ 		if (otx2_alloc_rbuf(pfvf, rbpool, &bufptr)) {
+ 			/* Schedule a WQ if we fails to free atleast half of the
+@@ -1032,7 +1031,6 @@ static void otx2_pool_refill_task(struct work_struct *work)
+ 		pfvf->hw_ops->aura_freeptr(pfvf, qidx, bufptr + OTX2_HEAD_ROOM);
+ 		cq->pool_ptrs--;
+ 	}
+-	put_cpu();
+ 	cq->refill_task_sched = false;
+ }
+ 
+@@ -1370,7 +1368,6 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
+ 	if (err)
+ 		goto fail;
+ 
+-	get_cpu();
+ 	/* Allocate pointers and free them to aura/pool */
+ 	for (qidx = 0; qidx < hw->tot_tx_queues; qidx++) {
+ 		pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
+@@ -1394,7 +1391,6 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
+ 	}
+ 
+ err_mem:
+-	put_cpu();
+ 	return err ? -ENOMEM : 0;
+ 
+ fail:
+@@ -1435,21 +1431,18 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf)
+ 	if (err)
+ 		goto fail;
+ 
+-	get_cpu();
+ 	/* Allocate pointers and free them to aura/pool */
+ 	for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) {
+ 		pool = &pfvf->qset.pool[pool_id];
+ 		for (ptr = 0; ptr < num_ptrs; ptr++) {
+ 			err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
+ 			if (err)
+-				goto err_mem;
++				return -ENOMEM;
+ 			pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
+ 						   bufptr + OTX2_HEAD_ROOM);
+ 		}
+ 	}
+-err_mem:
+-	put_cpu();
+-	return err ? -ENOMEM : 0;
++	return 0;
+ fail:
+ 	otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+ 	otx2_aura_pool_free(pfvf);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 67aa02bb2b85c..712715a49d201 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -733,8 +733,10 @@ static inline void cn10k_aura_freeptr(void *dev, int aura, u64 buf)
+ 	u64 ptrs[2];
+ 
+ 	ptrs[1] = buf;
++	get_cpu();
+ 	/* Free only one buffer at time during init and teardown */
+ 	__cn10k_aura_freeptr(pfvf, aura, ptrs, 2);
++	put_cpu();
+ }
+ 
+ /* Alloc pointer from pool/aura */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index 96417c5feed76..879555ba847dd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -677,6 +677,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
+ 	mutex_lock(&dev->intf_state_mutex);
+ 	if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
+ 		mlx5_core_err(dev, "health works are not permitted at this stage\n");
++		mutex_unlock(&dev->intf_state_mutex);
+ 		return;
+ 	}
+ 	mutex_unlock(&dev->intf_state_mutex);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index fe8dc8e0522b0..cabed1b7b45ed 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2207,28 +2207,6 @@ static int rtl_set_mac_address(struct net_device *dev, void *p)
+ 	return 0;
+ }
+ 
+-static void rtl_wol_enable_rx(struct rtl8169_private *tp)
+-{
+-	if (tp->mac_version >= RTL_GIGA_MAC_VER_25)
+-		RTL_W32(tp, RxConfig, RTL_R32(tp, RxConfig) |
+-			AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
+-}
+-
+-static void rtl_prepare_power_down(struct rtl8169_private *tp)
+-{
+-	if (tp->dash_type != RTL_DASH_NONE)
+-		return;
+-
+-	if (tp->mac_version == RTL_GIGA_MAC_VER_32 ||
+-	    tp->mac_version == RTL_GIGA_MAC_VER_33)
+-		rtl_ephy_write(tp, 0x19, 0xff64);
+-
+-	if (device_may_wakeup(tp_to_dev(tp))) {
+-		phy_speed_down(tp->phydev, false);
+-		rtl_wol_enable_rx(tp);
+-	}
+-}
+-
+ static void rtl_init_rxcfg(struct rtl8169_private *tp)
+ {
+ 	switch (tp->mac_version) {
+@@ -2452,6 +2430,31 @@ static void rtl_enable_rxdvgate(struct rtl8169_private *tp)
+ 	rtl_wait_txrx_fifo_empty(tp);
+ }
+ 
++static void rtl_wol_enable_rx(struct rtl8169_private *tp)
++{
++	if (tp->mac_version >= RTL_GIGA_MAC_VER_25)
++		RTL_W32(tp, RxConfig, RTL_R32(tp, RxConfig) |
++			AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
++
++	if (tp->mac_version >= RTL_GIGA_MAC_VER_40)
++		rtl_disable_rxdvgate(tp);
++}
++
++static void rtl_prepare_power_down(struct rtl8169_private *tp)
++{
++	if (tp->dash_type != RTL_DASH_NONE)
++		return;
++
++	if (tp->mac_version == RTL_GIGA_MAC_VER_32 ||
++	    tp->mac_version == RTL_GIGA_MAC_VER_33)
++		rtl_ephy_write(tp, 0x19, 0xff64);
++
++	if (device_may_wakeup(tp_to_dev(tp))) {
++		phy_speed_down(tp->phydev, false);
++		rtl_wol_enable_rx(tp);
++	}
++}
++
+ static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+ 	u32 val = TX_DMA_BURST << TxDMAShift |
+@@ -3869,7 +3872,7 @@ static void rtl8169_tx_clear(struct rtl8169_private *tp)
+ 	netdev_reset_queue(tp->dev);
+ }
+ 
+-static void rtl8169_cleanup(struct rtl8169_private *tp, bool going_down)
++static void rtl8169_cleanup(struct rtl8169_private *tp)
+ {
+ 	napi_disable(&tp->napi);
+ 
+@@ -3881,9 +3884,6 @@ static void rtl8169_cleanup(struct rtl8169_private *tp, bool going_down)
+ 
+ 	rtl_rx_close(tp);
+ 
+-	if (going_down && tp->dev->wol_enabled)
+-		goto no_reset;
+-
+ 	switch (tp->mac_version) {
+ 	case RTL_GIGA_MAC_VER_28:
+ 	case RTL_GIGA_MAC_VER_31:
+@@ -3904,7 +3904,7 @@ static void rtl8169_cleanup(struct rtl8169_private *tp, bool going_down)
+ 	}
+ 
+ 	rtl_hw_reset(tp);
+-no_reset:
++
+ 	rtl8169_tx_clear(tp);
+ 	rtl8169_init_ring_indexes(tp);
+ }
+@@ -3915,7 +3915,7 @@ static void rtl_reset_work(struct rtl8169_private *tp)
+ 
+ 	netif_stop_queue(tp->dev);
+ 
+-	rtl8169_cleanup(tp, false);
++	rtl8169_cleanup(tp);
+ 
+ 	for (i = 0; i < NUM_RX_DESC; i++)
+ 		rtl8169_mark_to_asic(tp->RxDescArray + i);
+@@ -4601,7 +4601,7 @@ static void rtl8169_down(struct rtl8169_private *tp)
+ 	pci_clear_master(tp->pci_dev);
+ 	rtl_pci_commit(tp);
+ 
+-	rtl8169_cleanup(tp, true);
++	rtl8169_cleanup(tp);
+ 	rtl_disable_exit_l1(tp);
+ 	rtl_prepare_power_down(tp);
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 5630f6e718e12..067ea019b110a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -1218,7 +1218,7 @@ static int brcmf_pcie_init_ringbuffers(struct brcmf_pciedev_info *devinfo)
+ 				BRCMF_NROF_H2D_COMMON_MSGRINGS;
+ 		max_completionrings = BRCMF_NROF_D2H_COMMON_MSGRINGS;
+ 	}
+-	if (max_flowrings > 256) {
++	if (max_flowrings > 512) {
+ 		brcmf_err(bus, "invalid max_flowrings(%d)\n", max_flowrings);
+ 		return -EIO;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index e6d64152c81a7..a02e5a67b7066 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -1106,6 +1106,11 @@ int iwl_read_ppag_table(struct iwl_fw_runtime *fwrt, union iwl_ppag_table_cmd *c
+         int i, j, num_sub_bands;
+         s8 *gain;
+ 
++	/* many firmware images for JF lie about this */
++	if (CSR_HW_RFID_TYPE(fwrt->trans->hw_rf_id) ==
++	    CSR_HW_RFID_TYPE(CSR_HW_RF_ID_TYPE_JF))
++		return -EOPNOTSUPP;
++
+         if (!fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SET_PPAG)) {
+                 IWL_DEBUG_RADIO(fwrt,
+                                 "PPAG capability not supported by FW, command not sent.\n");
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 4f88e8bbdd279..f08b25195ae79 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -1163,18 +1163,32 @@ int __init early_init_dt_scan_chosen(char *cmdline)
+ 	if (node < 0)
+ 		node = fdt_path_offset(fdt, "/chosen@0");
+ 	if (node < 0)
+-		return -ENOENT;
++		/* Handle the cmdline config options even if no /chosen node */
++		goto handle_cmdline;
+ 
+ 	chosen_node_offset = node;
+ 
+ 	early_init_dt_check_for_initrd(node);
+ 	early_init_dt_check_for_elfcorehdr(node);
+ 
++	rng_seed = of_get_flat_dt_prop(node, "rng-seed", &l);
++	if (rng_seed && l > 0) {
++		add_bootloader_randomness(rng_seed, l);
++
++		/* try to clear seed so it won't be found. */
++		fdt_nop_property(initial_boot_params, node, "rng-seed");
++
++		/* update CRC check value */
++		of_fdt_crc32 = crc32_be(~0, initial_boot_params,
++				fdt_totalsize(initial_boot_params));
++	}
++
+ 	/* Retrieve command line */
+ 	p = of_get_flat_dt_prop(node, "bootargs", &l);
+ 	if (p != NULL && l > 0)
+ 		strscpy(cmdline, p, min(l, COMMAND_LINE_SIZE));
+ 
++handle_cmdline:
+ 	/*
+ 	 * CONFIG_CMDLINE is meant to be a default in case nothing else
+ 	 * managed to set the command line, unless CONFIG_CMDLINE_FORCE
+@@ -1195,18 +1209,6 @@ int __init early_init_dt_scan_chosen(char *cmdline)
+ 
+ 	pr_debug("Command line is: %s\n", (char *)cmdline);
+ 
+-	rng_seed = of_get_flat_dt_prop(node, "rng-seed", &l);
+-	if (rng_seed && l > 0) {
+-		add_bootloader_randomness(rng_seed, l);
+-
+-		/* try to clear seed so it won't be found. */
+-		fdt_nop_property(initial_boot_params, node, "rng-seed");
+-
+-		/* update CRC check value */
+-		of_fdt_crc32 = crc32_be(~0, initial_boot_params,
+-				fdt_totalsize(initial_boot_params));
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
+index cd44f17dad3d0..d51abb462ae5d 100644
+--- a/drivers/soc/qcom/apr.c
++++ b/drivers/soc/qcom/apr.c
+@@ -461,9 +461,10 @@ static int apr_add_device(struct device *dev, struct device_node *np,
+ 		goto out;
+ 	}
+ 
++	/* Protection domain is optional, it does not exist on older platforms */
+ 	ret = of_property_read_string_index(np, "qcom,protection-domain",
+ 					    1, &adev->service_path);
+-	if (ret < 0) {
++	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(dev, "Failed to read second value of qcom,protection-domain\n");
+ 		goto out;
+ 	}
+diff --git a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+index db1441c0cc662..690ab7165b2c1 100644
+--- a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
++++ b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+@@ -86,7 +86,7 @@ struct vchiq_service_params_kernel {
+ 
+ struct vchiq_instance;
+ 
+-extern enum vchiq_status vchiq_initialise(struct vchiq_instance **pinstance);
++extern int vchiq_initialise(struct vchiq_instance **pinstance);
+ extern enum vchiq_status vchiq_shutdown(struct vchiq_instance *instance);
+ extern enum vchiq_status vchiq_connect(struct vchiq_instance *instance);
+ extern enum vchiq_status vchiq_open_service(struct vchiq_instance *instance,
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
+index 2851ef6b9cd0f..cd20eb18f2751 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
+@@ -100,10 +100,10 @@ vchiq_dump_platform_use_state(struct vchiq_state *state);
+ extern void
+ vchiq_dump_service_use_state(struct vchiq_state *state);
+ 
+-extern enum vchiq_status
++extern int
+ vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
+ 		   enum USE_TYPE_E use_type);
+-extern enum vchiq_status
++extern int
+ vchiq_release_internal(struct vchiq_state *state,
+ 		       struct vchiq_service *service);
+ 
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 81252e31014a1..56008eb91e2e4 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -427,13 +427,6 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ {
+ 	u32 status[TB_MAX_RETIMER_INDEX + 1] = {};
+ 	int ret, i, last_idx = 0;
+-	struct usb4_port *usb4;
+-
+-	usb4 = port->usb4;
+-	if (!usb4)
+-		return 0;
+-
+-	pm_runtime_get_sync(&usb4->dev);
+ 
+ 	/*
+ 	 * Send broadcast RT to make sure retimer indices facing this
+@@ -441,7 +434,7 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ 	 */
+ 	ret = usb4_port_enumerate_retimers(port);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+ 	/*
+ 	 * Enable sideband channel for each retimer. We can do this
+@@ -471,12 +464,11 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ 			break;
+ 	}
+ 
+-	if (!last_idx) {
+-		ret = 0;
+-		goto out;
+-	}
++	if (!last_idx)
++		return 0;
+ 
+ 	/* Add on-board retimers if they do not exist already */
++	ret = 0;
+ 	for (i = 1; i <= last_idx; i++) {
+ 		struct tb_retimer *rt;
+ 
+@@ -490,10 +482,6 @@ int tb_retimer_scan(struct tb_port *port, bool add)
+ 		}
+ 	}
+ 
+-out:
+-	pm_runtime_mark_last_busy(&usb4->dev);
+-	pm_runtime_put_autosuspend(&usb4->dev);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 4628458044270..3f1ab30c4fb15 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -628,11 +628,15 @@ static void tb_scan_port(struct tb_port *port)
+ 			 * Downstream switch is reachable through two ports.
+ 			 * Only scan on the primary port (link_nr == 0).
+ 			 */
++
++	if (port->usb4)
++		pm_runtime_get_sync(&port->usb4->dev);
++
+ 	if (tb_wait_for_port(port, false) <= 0)
+-		return;
++		goto out_rpm_put;
+ 	if (port->remote) {
+ 		tb_port_dbg(port, "port already has a remote\n");
+-		return;
++		goto out_rpm_put;
+ 	}
+ 
+ 	tb_retimer_scan(port, true);
+@@ -647,12 +651,12 @@ static void tb_scan_port(struct tb_port *port)
+ 		 */
+ 		if (PTR_ERR(sw) == -EIO || PTR_ERR(sw) == -EADDRNOTAVAIL)
+ 			tb_scan_xdomain(port);
+-		return;
++		goto out_rpm_put;
+ 	}
+ 
+ 	if (tb_switch_configure(sw)) {
+ 		tb_switch_put(sw);
+-		return;
++		goto out_rpm_put;
+ 	}
+ 
+ 	/*
+@@ -681,7 +685,7 @@ static void tb_scan_port(struct tb_port *port)
+ 
+ 	if (tb_switch_add(sw)) {
+ 		tb_switch_put(sw);
+-		return;
++		goto out_rpm_put;
+ 	}
+ 
+ 	/* Link the switches using both links if available */
+@@ -733,6 +737,12 @@ static void tb_scan_port(struct tb_port *port)
+ 
+ 	tb_add_dp_resources(sw);
+ 	tb_scan_switch(sw);
++
++out_rpm_put:
++	if (port->usb4) {
++		pm_runtime_mark_last_busy(&port->usb4->dev);
++		pm_runtime_put_autosuspend(&port->usb4->dev);
++	}
+ }
+ 
+ static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel)
+diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
+index 2c3cf7fc33571..1fc3c29b24f83 100644
+--- a/drivers/thunderbolt/tunnel.c
++++ b/drivers/thunderbolt/tunnel.c
+@@ -1275,7 +1275,7 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
+ 		return;
+ 	} else if (!ret) {
+ 		/* Use maximum link rate if the link valid is not set */
+-		ret = usb4_usb3_port_max_link_rate(tunnel->src_port);
++		ret = tb_usb3_max_link_rate(tunnel->dst_port, tunnel->src_port);
+ 		if (ret < 0) {
+ 			tb_tunnel_warn(tunnel, "failed to read maximum link rate\n");
+ 			return;
+diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
+index f00b2f62d8e3c..9a3c52f6b8c97 100644
+--- a/drivers/thunderbolt/xdomain.c
++++ b/drivers/thunderbolt/xdomain.c
+@@ -1419,12 +1419,19 @@ static int tb_xdomain_get_properties(struct tb_xdomain *xd)
+ 	 * registered, we notify the userspace that it has changed.
+ 	 */
+ 	if (!update) {
+-		struct tb_port *port;
++		/*
++		 * Now disable lane 1 if bonding was not enabled. Do
++		 * this only if bonding was possible at the beginning
++		 * (that is we are the connection manager and there are
++		 * two lanes).
++		 */
++		if (xd->bonding_possible) {
++			struct tb_port *port;
+ 
+-		/* Now disable lane 1 if bonding was not enabled */
+-		port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+-		if (!port->bonded)
+-			tb_port_disable(port->dual_link_port);
++			port = tb_port_at(xd->route, tb_xdomain_parent(xd));
++			if (!port->bonded)
++				tb_port_disable(port->dual_link_port);
++		}
+ 
+ 		if (device_add(&xd->dev)) {
+ 			dev_err(&xd->dev, "failed to add XDomain device\n");
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 314a05e009df9..64770c62bbec5 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -43,6 +43,12 @@
+ #define PCI_DEVICE_ID_EXAR_XR17V4358		0x4358
+ #define PCI_DEVICE_ID_EXAR_XR17V8358		0x8358
+ 
++#define PCI_DEVICE_ID_SEALEVEL_710xC		0x1001
++#define PCI_DEVICE_ID_SEALEVEL_720xC		0x1002
++#define PCI_DEVICE_ID_SEALEVEL_740xC		0x1004
++#define PCI_DEVICE_ID_SEALEVEL_780xC		0x1008
++#define PCI_DEVICE_ID_SEALEVEL_716xC		0x1010
++
+ #define UART_EXAR_INT0		0x80
+ #define UART_EXAR_8XMODE	0x88	/* 8X sampling rate select */
+ #define UART_EXAR_SLEEP		0x8b	/* Sleep mode */
+@@ -638,6 +644,8 @@ exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
+ 		nr_ports = BIT(((pcidev->device & 0x38) >> 3) - 1);
+ 	else if (board->num_ports)
+ 		nr_ports = board->num_ports;
++	else if (pcidev->vendor == PCI_VENDOR_ID_SEALEVEL)
++		nr_ports = pcidev->device & 0xff;
+ 	else
+ 		nr_ports = pcidev->device & 0x0f;
+ 
+@@ -864,6 +872,12 @@ static const struct pci_device_id exar_pci_tbl[] = {
+ 	EXAR_DEVICE(COMMTECH, 4224PCI335, pbn_fastcom335_4),
+ 	EXAR_DEVICE(COMMTECH, 2324PCI335, pbn_fastcom335_4),
+ 	EXAR_DEVICE(COMMTECH, 2328PCI335, pbn_fastcom335_8),
++
++	EXAR_DEVICE(SEALEVEL, 710xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 720xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 740xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 780xC, pbn_exar_XR17V35x),
++	EXAR_DEVICE(SEALEVEL, 716xC, pbn_exar_XR17V35x),
+ 	{ 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, exar_pci_tbl);
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index aa0bbb7abeacf..0a1cc36f93aa7 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1467,6 +1467,10 @@ static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq)
+ 	struct circ_buf *xmit = &uap->port.state->xmit;
+ 	int count = uap->fifosize >> 1;
+ 
++	if ((uap->port.rs485.flags & SER_RS485_ENABLED) &&
++	    !uap->rs485_tx_started)
++		pl011_rs485_tx_start(uap);
++
+ 	if (uap->port.x_char) {
+ 		if (!pl011_tx_char(uap, uap->port.x_char, from_irq))
+ 			return true;
+@@ -1478,10 +1482,6 @@ static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq)
+ 		return false;
+ 	}
+ 
+-	if ((uap->port.rs485.flags & SER_RS485_ENABLED) &&
+-	    !uap->rs485_tx_started)
+-		pl011_rs485_tx_start(uap);
+-
+ 	/* If we are using DMA mode, try to send some characters. */
+ 	if (pl011_dma_tx_irq(uap))
+ 		return true;
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index bd07f79a2df91..cff64e5edee26 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -2673,13 +2673,7 @@ static void __init atmel_console_get_options(struct uart_port *port, int *baud,
+ 	else if (mr == ATMEL_US_PAR_ODD)
+ 		*parity = 'o';
+ 
+-	/*
+-	 * The serial core only rounds down when matching this to a
+-	 * supported baud rate. Make sure we don't end up slightly
+-	 * lower than one of those, as it would make us fall through
+-	 * to a much lower baud rate than we really want.
+-	 */
+-	*baud = port->uartclk / (16 * (quot - 1));
++	*baud = port->uartclk / (16 * quot);
+ }
+ 
+ static int __init atmel_console_setup(struct console *co, char *options)
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index b17788cf309b1..83f35b7b0897c 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -752,7 +752,7 @@ static void pch_dma_tx_complete(void *arg)
+ 	}
+ 	xmit->tail &= UART_XMIT_SIZE - 1;
+ 	async_tx_ack(priv->desc_tx);
+-	dma_unmap_sg(port->dev, sg, priv->orig_nent, DMA_TO_DEVICE);
++	dma_unmap_sg(port->dev, priv->sg_tx_p, priv->orig_nent, DMA_TO_DEVICE);
+ 	priv->tx_dma_use = 0;
+ 	priv->nent = 0;
+ 	priv->orig_nent = 0;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 83b66b73303af..7905935b9f1b4 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -864,9 +864,10 @@ out_unlock:
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void get_tx_fifo_size(struct qcom_geni_serial_port *port)
++static int setup_fifos(struct qcom_geni_serial_port *port)
+ {
+ 	struct uart_port *uport;
++	u32 old_rx_fifo_depth = port->rx_fifo_depth;
+ 
+ 	uport = &port->uport;
+ 	port->tx_fifo_depth = geni_se_get_tx_fifo_depth(&port->se);
+@@ -874,6 +875,16 @@ static void get_tx_fifo_size(struct qcom_geni_serial_port *port)
+ 	port->rx_fifo_depth = geni_se_get_rx_fifo_depth(&port->se);
+ 	uport->fifosize =
+ 		(port->tx_fifo_depth * port->tx_fifo_width) / BITS_PER_BYTE;
++
++	if (port->rx_fifo && (old_rx_fifo_depth != port->rx_fifo_depth) && port->rx_fifo_depth) {
++		port->rx_fifo = devm_krealloc(uport->dev, port->rx_fifo,
++					      port->rx_fifo_depth * sizeof(u32),
++					      GFP_KERNEL);
++		if (!port->rx_fifo)
++			return -ENOMEM;
++	}
++
++	return 0;
+ }
+ 
+ 
+@@ -888,6 +899,7 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
+ 	u32 rxstale = DEFAULT_BITS_PER_CHAR * STALE_TIMEOUT;
+ 	u32 proto;
+ 	u32 pin_swap;
++	int ret;
+ 
+ 	proto = geni_se_read_proto(&port->se);
+ 	if (proto != GENI_SE_UART) {
+@@ -897,7 +909,9 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
+ 
+ 	qcom_geni_serial_stop_rx(uport);
+ 
+-	get_tx_fifo_size(port);
++	ret = setup_fifos(port);
++	if (ret)
++		return ret;
+ 
+ 	writel(rxstale, uport->membase + SE_UART_RX_STALE_CNT);
+ 
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index 5adcb349718c3..ccfaebca6faa7 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -2614,6 +2614,7 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
+ 	u8 req_on_hw_ring = 0;
+ 	unsigned long flags;
+ 	int ret = 0;
++	int val;
+ 
+ 	if (!ep || !request || !ep->desc)
+ 		return -EINVAL;
+@@ -2649,6 +2650,13 @@ found:
+ 
+ 	/* Update ring only if removed request is on pending_req_list list */
+ 	if (req_on_hw_ring && link_trb) {
++		/* Stop DMA */
++		writel(EP_CMD_DFLUSH, &priv_dev->regs->ep_cmd);
++
++		/* wait for DFLUSH cleared */
++		readl_poll_timeout_atomic(&priv_dev->regs->ep_cmd, val,
++					  !(val & EP_CMD_DFLUSH), 1, 1000);
++
+ 		link_trb->buffer = cpu_to_le32(TRB_BUFFER(priv_ep->trb_pool_dma +
+ 			((priv_req->end_trb + 1) * TRB_SIZE)));
+ 		link_trb->control = cpu_to_le32((le32_to_cpu(link_trb->control) & TRB_CYCLE) |
+@@ -2660,6 +2668,10 @@ found:
+ 
+ 	cdns3_gadget_giveback(priv_ep, priv_req, -ECONNRESET);
+ 
++	req = cdns3_next_request(&priv_ep->pending_req_list);
++	if (req)
++		cdns3_rearm_transfer(priv_ep, 1);
++
+ not_found:
+ 	spin_unlock_irqrestore(&priv_dev->lock, flags);
+ 	return ret;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index bbab424b0d559..0aaaadb02cc69 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -44,6 +44,9 @@
+ #define USB_PRODUCT_USB5534B			0x5534
+ #define USB_VENDOR_CYPRESS			0x04b4
+ #define USB_PRODUCT_CY7C65632			0x6570
++#define USB_VENDOR_TEXAS_INSTRUMENTS		0x0451
++#define USB_PRODUCT_TUSB8041_USB3		0x8140
++#define USB_PRODUCT_TUSB8041_USB2		0x8142
+ #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+ #define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
+ 
+@@ -5798,6 +5801,16 @@ static const struct usb_device_id hub_id_table[] = {
+       .idVendor = USB_VENDOR_GENESYS_LOGIC,
+       .bInterfaceClass = USB_CLASS_HUB,
+       .driver_info = HUB_QUIRK_CHECK_PORT_AUTOSUSPEND},
++    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++      .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
++      .idProduct = USB_PRODUCT_TUSB8041_USB2,
++      .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
++    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++			| USB_DEVICE_ID_MATCH_PRODUCT,
++      .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
++      .idProduct = USB_PRODUCT_TUSB8041_USB3,
++      .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
+     { .match_flags = USB_DEVICE_ID_MATCH_DEV_CLASS,
+       .bDeviceClass = USB_CLASS_HUB},
+     { .match_flags = USB_DEVICE_ID_MATCH_INT_CLASS,
+diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c
+index 6d93428432f13..533baa85083c2 100644
+--- a/drivers/usb/core/usb-acpi.c
++++ b/drivers/usb/core/usb-acpi.c
+@@ -37,6 +37,71 @@ bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
+ }
+ EXPORT_SYMBOL_GPL(usb_acpi_power_manageable);
+ 
++#define UUID_USB_CONTROLLER_DSM "ce2ee385-00e6-48cb-9f05-2edb927c4899"
++#define USB_DSM_DISABLE_U1_U2_FOR_PORT	5
++
++/**
++ * usb_acpi_port_lpm_incapable - check if lpm should be disabled for a port.
++ * @hdev: USB device belonging to the usb hub
++ * @index: zero based port index
++ *
++ * Some USB3 ports may not support USB3 link power management U1/U2 states
++ * due to different retimer setup. ACPI provides _DSM method which returns 0x01
++ * if U1 and U2 states should be disabled. Evaluate _DSM with:
++ * Arg0: UUID = ce2ee385-00e6-48cb-9f05-2edb927c4899
++ * Arg1: Revision ID = 0
++ * Arg2: Function Index = 5
++ * Arg3: (empty)
++ *
++ * Return 1 if USB3 port is LPM incapable, negative on error, otherwise 0
++ */
++
++int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index)
++{
++	union acpi_object *obj;
++	acpi_handle port_handle;
++	int port1 = index + 1;
++	guid_t guid;
++	int ret;
++
++	ret = guid_parse(UUID_USB_CONTROLLER_DSM, &guid);
++	if (ret)
++		return ret;
++
++	port_handle = usb_get_hub_port_acpi_handle(hdev, port1);
++	if (!port_handle) {
++		dev_dbg(&hdev->dev, "port-%d no acpi handle\n", port1);
++		return -ENODEV;
++	}
++
++	if (!acpi_check_dsm(port_handle, &guid, 0,
++			    BIT(USB_DSM_DISABLE_U1_U2_FOR_PORT))) {
++		dev_dbg(&hdev->dev, "port-%d no _DSM function %d\n",
++			port1, USB_DSM_DISABLE_U1_U2_FOR_PORT);
++		return -ENODEV;
++	}
++
++	obj = acpi_evaluate_dsm(port_handle, &guid, 0,
++				USB_DSM_DISABLE_U1_U2_FOR_PORT, NULL);
++
++	if (!obj)
++		return -ENODEV;
++
++	if (obj->type != ACPI_TYPE_INTEGER) {
++		dev_dbg(&hdev->dev, "evaluate port-%d _DSM failed\n", port1);
++		ACPI_FREE(obj);
++		return -EINVAL;
++	}
++
++	if (obj->integer.value == 0x01)
++		ret = 1;
++
++	ACPI_FREE(obj);
++
++	return ret;
++}
++EXPORT_SYMBOL_GPL(usb_acpi_port_lpm_incapable);
++
+ /**
+  * usb_acpi_set_power_state - control usb port's power via acpi power
+  * resource
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 3a6b4926193ef..7bbc776185469 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -392,6 +392,7 @@ static void gadget_info_attr_release(struct config_item *item)
+ 	WARN_ON(!list_empty(&gi->string_list));
+ 	WARN_ON(!list_empty(&gi->available_func));
+ 	kfree(gi->composite.gadget_driver.function);
++	kfree(gi->composite.gadget_driver.driver.name);
+ 	kfree(gi);
+ }
+ 
+@@ -1571,7 +1572,6 @@ static const struct usb_gadget_driver configfs_driver_template = {
+ 	.max_speed	= USB_SPEED_SUPER_PLUS,
+ 	.driver = {
+ 		.owner          = THIS_MODULE,
+-		.name		= "configfs-gadget",
+ 	},
+ 	.match_existing_only = 1,
+ };
+@@ -1622,13 +1622,21 @@ static struct config_group *gadgets_make(
+ 
+ 	gi->composite.gadget_driver = configfs_driver_template;
+ 
++	gi->composite.gadget_driver.driver.name = kasprintf(GFP_KERNEL,
++							    "configfs-gadget.%s", name);
++	if (!gi->composite.gadget_driver.driver.name)
++		goto err;
++
+ 	gi->composite.gadget_driver.function = kstrdup(name, GFP_KERNEL);
+ 	gi->composite.name = gi->composite.gadget_driver.function;
+ 
+ 	if (!gi->composite.gadget_driver.function)
+-		goto err;
++		goto out_free_driver_name;
+ 
+ 	return &gi->group;
++
++out_free_driver_name:
++	kfree(gi->composite.gadget_driver.driver.name);
+ err:
+ 	kfree(gi);
+ 	return ERR_PTR(-ENOMEM);
+diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
+index c36bcfa0e9b46..424bb3b666dbd 100644
+--- a/drivers/usb/gadget/function/f_ncm.c
++++ b/drivers/usb/gadget/function/f_ncm.c
+@@ -83,7 +83,9 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f)
+ /* peak (theoretical) bulk transfer rate in bits-per-second */
+ static inline unsigned ncm_bitrate(struct usb_gadget *g)
+ {
+-	if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
++	if (!g)
++		return 0;
++	else if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
+ 		return 4250000000U;
+ 	else if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
+ 		return 3750000000U;
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 01c3ead7d1b42..d605bc2e7e8fd 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -229,6 +229,7 @@ static void put_ep (struct ep_data *data)
+  */
+ 
+ static const char *CHIP;
++static DEFINE_MUTEX(sb_mutex);		/* Serialize superblock operations */
+ 
+ /*----------------------------------------------------------------------*/
+ 
+@@ -2010,13 +2011,20 @@ gadgetfs_fill_super (struct super_block *sb, struct fs_context *fc)
+ {
+ 	struct inode	*inode;
+ 	struct dev_data	*dev;
++	int		rc;
+ 
+-	if (the_device)
+-		return -ESRCH;
++	mutex_lock(&sb_mutex);
++
++	if (the_device) {
++		rc = -ESRCH;
++		goto Done;
++	}
+ 
+ 	CHIP = usb_get_gadget_udc_name();
+-	if (!CHIP)
+-		return -ENODEV;
++	if (!CHIP) {
++		rc = -ENODEV;
++		goto Done;
++	}
+ 
+ 	/* superblock */
+ 	sb->s_blocksize = PAGE_SIZE;
+@@ -2053,13 +2061,17 @@ gadgetfs_fill_super (struct super_block *sb, struct fs_context *fc)
+ 	 * from binding to a controller.
+ 	 */
+ 	the_device = dev;
+-	return 0;
++	rc = 0;
++	goto Done;
+ 
+-Enomem:
++ Enomem:
+ 	kfree(CHIP);
+ 	CHIP = NULL;
++	rc = -ENOMEM;
+ 
+-	return -ENOMEM;
++ Done:
++	mutex_unlock(&sb_mutex);
++	return rc;
+ }
+ 
+ /* "mount -t gadgetfs path /dev/gadget" ends up here */
+@@ -2081,6 +2093,7 @@ static int gadgetfs_init_fs_context(struct fs_context *fc)
+ static void
+ gadgetfs_kill_sb (struct super_block *sb)
+ {
++	mutex_lock(&sb_mutex);
+ 	kill_litter_super (sb);
+ 	if (the_device) {
+ 		put_dev (the_device);
+@@ -2088,6 +2101,7 @@ gadgetfs_kill_sb (struct super_block *sb)
+ 	}
+ 	kfree(CHIP);
+ 	CHIP = NULL;
++	mutex_unlock(&sb_mutex);
+ }
+ 
+ /*----------------------------------------------------------------------*/
+diff --git a/drivers/usb/gadget/legacy/webcam.c b/drivers/usb/gadget/legacy/webcam.c
+index 94e22867da1d0..e9b5846b2322c 100644
+--- a/drivers/usb/gadget/legacy/webcam.c
++++ b/drivers/usb/gadget/legacy/webcam.c
+@@ -293,6 +293,7 @@ static const struct uvc_descriptor_header * const uvc_fs_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_format_yuv,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
++	(const struct uvc_descriptor_header *) &uvc_color_matching,
+ 	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+@@ -305,6 +306,7 @@ static const struct uvc_descriptor_header * const uvc_hs_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_format_yuv,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
++	(const struct uvc_descriptor_header *) &uvc_color_matching,
+ 	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+@@ -317,6 +319,7 @@ static const struct uvc_descriptor_header * const uvc_ss_streaming_cls[] = {
+ 	(const struct uvc_descriptor_header *) &uvc_format_yuv,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
++	(const struct uvc_descriptor_header *) &uvc_color_matching,
+ 	(const struct uvc_descriptor_header *) &uvc_format_mjpg,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
+ 	(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
+diff --git a/drivers/usb/host/ehci-fsl.c b/drivers/usb/host/ehci-fsl.c
+index 9cea785934e59..38d06e5abfbb3 100644
+--- a/drivers/usb/host/ehci-fsl.c
++++ b/drivers/usb/host/ehci-fsl.c
+@@ -29,7 +29,7 @@
+ #include "ehci-fsl.h"
+ 
+ #define DRIVER_DESC "Freescale EHCI Host controller driver"
+-#define DRV_NAME "ehci-fsl"
++#define DRV_NAME "fsl-ehci"
+ 
+ static struct hc_driver __read_mostly fsl_ehci_hc_driver;
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index f98cf30a3c1a5..232e175e4e964 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -78,9 +78,12 @@ static const char hcd_name[] = "xhci_hcd";
+ static struct hc_driver __read_mostly xhci_pci_hc_driver;
+ 
+ static int xhci_pci_setup(struct usb_hcd *hcd);
++static int xhci_pci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++				      struct usb_tt *tt, gfp_t mem_flags);
+ 
+ static const struct xhci_driver_overrides xhci_pci_overrides __initconst = {
+ 	.reset = xhci_pci_setup,
++	.update_hub_device = xhci_pci_update_hub_device,
+ };
+ 
+ /* called after powerup, by probe or system-pm "wakeup" */
+@@ -352,8 +355,38 @@ static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev)
+ 				NULL);
+ 	ACPI_FREE(obj);
+ }
++
++static void xhci_find_lpm_incapable_ports(struct usb_hcd *hcd, struct usb_device *hdev)
++{
++	struct xhci_hcd	*xhci = hcd_to_xhci(hcd);
++	struct xhci_hub *rhub = &xhci->usb3_rhub;
++	int ret;
++	int i;
++
++	/* This is not the usb3 roothub we are looking for */
++	if (hcd != rhub->hcd)
++		return;
++
++	if (hdev->maxchild > rhub->num_ports) {
++		dev_err(&hdev->dev, "USB3 roothub port number mismatch\n");
++		return;
++	}
++
++	for (i = 0; i < hdev->maxchild; i++) {
++		ret = usb_acpi_port_lpm_incapable(hdev, i);
++
++		dev_dbg(&hdev->dev, "port-%d disable U1/U2 _DSM: %d\n", i + 1, ret);
++
++		if (ret >= 0) {
++			rhub->ports[i]->lpm_incapable = ret;
++			continue;
++		}
++	}
++}
++
+ #else
+ static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev) { }
++static void xhci_find_lpm_incapable_ports(struct usb_hcd *hcd, struct usb_device *hdev) { }
+ #endif /* CONFIG_ACPI */
+ 
+ /* called during probe() after chip reset completes */
+@@ -386,6 +419,16 @@ static int xhci_pci_setup(struct usb_hcd *hcd)
+ 	return xhci_pci_reinit(xhci, pdev);
+ }
+ 
++static int xhci_pci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++				      struct usb_tt *tt, gfp_t mem_flags)
++{
++	/* Check if acpi claims some USB3 roothub ports are lpm incapable */
++	if (!hdev->parent)
++		xhci_find_lpm_incapable_ports(hcd, hdev);
++
++	return xhci_update_hub_device(hcd, hdev, tt, mem_flags);
++}
++
+ /*
+  * We need to register our own PCI probe function (instead of the USB core's
+  * function) in order to create a second roothub under xHCI.
+@@ -455,6 +498,8 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
+ 		pm_runtime_allow(&dev->dev);
+ 
++	dma_set_max_seg_size(&dev->dev, UINT_MAX);
++
+ 	return 0;
+ 
+ put_usb3_hcd:
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 343709af4c16f..dce02d0aad8d0 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1170,7 +1170,10 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci,
+ 	struct xhci_virt_ep *ep;
+ 	struct xhci_ring *ring;
+ 
+-	ep = &xhci->devs[slot_id]->eps[ep_index];
++	ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
++	if (!ep)
++		return;
++
+ 	if ((ep->ep_state & EP_HAS_STREAMS) ||
+ 			(ep->ep_state & EP_GETTING_NO_STREAMS)) {
+ 		int stream_id;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 79d7931c048a8..2b280beb00115 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3974,6 +3974,7 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 	struct xhci_virt_device *virt_dev;
+ 	struct xhci_slot_ctx *slot_ctx;
++	unsigned long flags;
+ 	int i, ret;
+ 
+ 	/*
+@@ -4000,7 +4001,11 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ 		virt_dev->eps[i].ep_state &= ~EP_STOP_CMD_PENDING;
+ 	virt_dev->udev = NULL;
+ 	xhci_disable_slot(xhci, udev->slot_id);
++
++	spin_lock_irqsave(&xhci->lock, flags);
+ 	xhci_free_virt_device(xhci, udev->slot_id);
++	spin_unlock_irqrestore(&xhci->lock, flags);
++
+ }
+ 
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+@@ -5044,6 +5049,7 @@ static int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+ 			struct usb_device *udev, enum usb3_link_state state)
+ {
+ 	struct xhci_hcd	*xhci;
++	struct xhci_port *port;
+ 	u16 hub_encoded_timeout;
+ 	int mel;
+ 	int ret;
+@@ -5060,6 +5066,13 @@ static int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
+ 	if (xhci_check_tier_policy(xhci, udev, state) < 0)
+ 		return USB3_LPM_DISABLED;
+ 
++	/* If connected to root port then check port can handle lpm */
++	if (udev->parent && !udev->parent->parent) {
++		port = xhci->usb3_rhub.ports[udev->portnum - 1];
++		if (port->lpm_incapable)
++			return USB3_LPM_DISABLED;
++	}
++
+ 	hub_encoded_timeout = xhci_calculate_lpm_timeout(hcd, udev, state);
+ 	mel = calculate_max_exit_latency(udev, state, hub_encoded_timeout);
+ 	if (mel < 0) {
+@@ -5119,7 +5132,7 @@ static int xhci_disable_usb3_lpm_timeout(struct usb_hcd *hcd,
+ /* Once a hub descriptor is fetched for a device, we need to update the xHC's
+  * internal data structures for the device.
+  */
+-static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ 			struct usb_tt *tt, gfp_t mem_flags)
+ {
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+@@ -5219,6 +5232,7 @@ static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ 	xhci_free_command(xhci, config_cmd);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(xhci_update_hub_device);
+ 
+ static int xhci_get_frame(struct usb_hcd *hcd)
+ {
+@@ -5502,6 +5516,8 @@ void xhci_init_driver(struct hc_driver *drv,
+ 			drv->check_bandwidth = over->check_bandwidth;
+ 		if (over->reset_bandwidth)
+ 			drv->reset_bandwidth = over->reset_bandwidth;
++		if (over->update_hub_device)
++			drv->update_hub_device = over->update_hub_device;
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(xhci_init_driver);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index c9f06c5e4e9d2..dcee7f3207add 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1735,6 +1735,7 @@ struct xhci_port {
+ 	int			hcd_portnum;
+ 	struct xhci_hub		*rhub;
+ 	struct xhci_port_cap	*port_cap;
++	unsigned int		lpm_incapable:1;
+ };
+ 
+ struct xhci_hub {
+@@ -1943,6 +1944,8 @@ struct xhci_driver_overrides {
+ 			     struct usb_host_endpoint *ep);
+ 	int (*check_bandwidth)(struct usb_hcd *, struct usb_device *);
+ 	void (*reset_bandwidth)(struct usb_hcd *, struct usb_device *);
++	int (*update_hub_device)(struct usb_hcd *hcd, struct usb_device *hdev,
++			    struct usb_tt *tt, gfp_t mem_flags);
+ };
+ 
+ #define	XHCI_CFC_DELAY		10
+@@ -2122,6 +2125,8 @@ int xhci_drop_endpoint(struct usb_hcd *hcd, struct usb_device *udev,
+ 		       struct usb_host_endpoint *ep);
+ int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
++int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
++			   struct usb_tt *tt, gfp_t mem_flags);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+ 
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 988a8c02e7e24..b421f13260875 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -814,7 +814,7 @@ static int iowarrior_probe(struct usb_interface *interface,
+ 			break;
+ 
+ 		case USB_DEVICE_ID_CODEMERCS_IOW100:
+-			dev->report_size = 13;
++			dev->report_size = 12;
+ 			break;
+ 		}
+ 	}
+diff --git a/drivers/usb/misc/onboard_usb_hub.c b/drivers/usb/misc/onboard_usb_hub.c
+index d63c63942af1b..044e75ad4d20c 100644
+--- a/drivers/usb/misc/onboard_usb_hub.c
++++ b/drivers/usb/misc/onboard_usb_hub.c
+@@ -27,7 +27,10 @@
+ 
+ #include "onboard_usb_hub.h"
+ 
++static void onboard_hub_attach_usb_driver(struct work_struct *work);
++
+ static struct usb_device_driver onboard_hub_usbdev_driver;
++static DECLARE_WORK(attach_usb_driver_work, onboard_hub_attach_usb_driver);
+ 
+ /************************** Platform driver **************************/
+ 
+@@ -45,7 +48,6 @@ struct onboard_hub {
+ 	bool is_powered_on;
+ 	bool going_away;
+ 	struct list_head udev_list;
+-	struct work_struct attach_usb_driver_work;
+ 	struct mutex lock;
+ };
+ 
+@@ -271,8 +273,7 @@ static int onboard_hub_probe(struct platform_device *pdev)
+ 	 * This needs to be done deferred to avoid self-deadlocks on systems
+ 	 * with nested onboard hubs.
+ 	 */
+-	INIT_WORK(&hub->attach_usb_driver_work, onboard_hub_attach_usb_driver);
+-	schedule_work(&hub->attach_usb_driver_work);
++	schedule_work(&attach_usb_driver_work);
+ 
+ 	return 0;
+ }
+@@ -285,9 +286,6 @@ static int onboard_hub_remove(struct platform_device *pdev)
+ 
+ 	hub->going_away = true;
+ 
+-	if (&hub->attach_usb_driver_work != current_work())
+-		cancel_work_sync(&hub->attach_usb_driver_work);
+-
+ 	mutex_lock(&hub->lock);
+ 
+ 	/* unbind the USB devices to avoid dangling references to this device */
+@@ -431,13 +429,13 @@ static int __init onboard_hub_init(void)
+ {
+ 	int ret;
+ 
+-	ret = platform_driver_register(&onboard_hub_driver);
++	ret = usb_register_device_driver(&onboard_hub_usbdev_driver, THIS_MODULE);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = usb_register_device_driver(&onboard_hub_usbdev_driver, THIS_MODULE);
++	ret = platform_driver_register(&onboard_hub_driver);
+ 	if (ret)
+-		platform_driver_unregister(&onboard_hub_driver);
++		usb_deregister_device_driver(&onboard_hub_usbdev_driver);
+ 
+ 	return ret;
+ }
+@@ -447,6 +445,8 @@ static void __exit onboard_hub_exit(void)
+ {
+ 	usb_deregister_device_driver(&onboard_hub_usbdev_driver);
+ 	platform_driver_unregister(&onboard_hub_driver);
++
++	cancel_work_sync(&attach_usb_driver_work);
+ }
+ module_exit(onboard_hub_exit);
+ 
+diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
+index 476f55d1fec30..44a21ec865fb2 100644
+--- a/drivers/usb/musb/omap2430.c
++++ b/drivers/usb/musb/omap2430.c
+@@ -411,8 +411,10 @@ static int omap2430_probe(struct platform_device *pdev)
+ 		memset(musb_res, 0, sizeof(*musb_res) * ARRAY_SIZE(musb_res));
+ 
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-		if (!res)
++		if (!res) {
++			ret = -EINVAL;
+ 			goto err2;
++		}
+ 
+ 		musb_res[i].start = res->start;
+ 		musb_res[i].end = res->end;
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index f6fb23620e87a..ba5638471de49 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -60,6 +60,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x0846, 0x1100) }, /* NetGear Managed Switch M4100 series, M5300 series, M7100 series */
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ 	{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
++	{ USB_DEVICE(0x0908, 0x0070) }, /* Siemens SCALANCE LPE-9000 USB Serial Console */
+ 	{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
+ 	{ USB_DEVICE(0x0988, 0x0578) }, /* Teraoka AD2000 */
+ 	{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index dee79c7d82d5c..6b69d05e2fb06 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -255,10 +255,16 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM05G			0x030a
+ #define QUECTEL_PRODUCT_EM060K			0x030b
++#define QUECTEL_PRODUCT_EM05G_CS		0x030c
++#define QUECTEL_PRODUCT_EM05CN_SG		0x0310
+ #define QUECTEL_PRODUCT_EM05G_SG		0x0311
++#define QUECTEL_PRODUCT_EM05CN			0x0312
++#define QUECTEL_PRODUCT_EM05G_GR		0x0313
++#define QUECTEL_PRODUCT_EM05G_RS		0x0314
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_RM520N			0x0801
++#define QUECTEL_PRODUCT_EC200U			0x0901
+ #define QUECTEL_PRODUCT_EC200S_CN		0x6002
+ #define QUECTEL_PRODUCT_EC200T			0x6026
+ #define QUECTEL_PRODUCT_RM500K			0x7001
+@@ -1159,8 +1165,18 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05CN, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05CN_SG, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_GR, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_CS, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_RS, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_SG, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+@@ -1180,6 +1196,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+diff --git a/drivers/usb/storage/uas-detect.h b/drivers/usb/storage/uas-detect.h
+index 3f720faa6f97c..d73282c0ec501 100644
+--- a/drivers/usb/storage/uas-detect.h
++++ b/drivers/usb/storage/uas-detect.h
+@@ -116,6 +116,19 @@ static int uas_use_uas_driver(struct usb_interface *intf,
+ 	if (le16_to_cpu(udev->descriptor.idVendor) == 0x0bc2)
+ 		flags |= US_FL_NO_ATA_1X;
+ 
++	/*
++	 * RTL9210-based enclosure from HIKSEMI, MD202 reportedly have issues
++	 * with UAS.  This isn't distinguishable with just idVendor and
++	 * idProduct, use manufacturer and product too.
++	 *
++	 * Reported-by: Hongling Zeng <zenghongling@kylinos.cn>
++	 */
++	if (le16_to_cpu(udev->descriptor.idVendor) == 0x0bda &&
++			le16_to_cpu(udev->descriptor.idProduct) == 0x9210 &&
++			(udev->manufacturer && !strcmp(udev->manufacturer, "HIKSEMI")) &&
++			(udev->product && !strcmp(udev->product, "MD202")))
++		flags |= US_FL_IGNORE_UAS;
++
+ 	usb_stor_adjust_quirks(udev, &flags);
+ 
+ 	if (flags & US_FL_IGNORE_UAS) {
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 251778d14e2dd..c7b763d6d1023 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -83,13 +83,6 @@ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_LUNS),
+ 
+-/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
+-UNUSUAL_DEV(0x0bda, 0x9210, 0x0000, 0x9999,
+-		"Hiksemi",
+-		"External HDD",
+-		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_IGNORE_UAS),
+-
+ /* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */
+ UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
+ 		"Initio Corporation",
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index de66a2949e33b..80d8c6c3be369 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -419,6 +419,18 @@ static const char * const pin_assignments[] = {
+ 	[DP_PIN_ASSIGN_F] = "F",
+ };
+ 
++/*
++ * Helper function to extract a peripheral's currently supported
++ * Pin Assignments from its DisplayPort alternate mode state.
++ */
++static u8 get_current_pin_assignments(struct dp_altmode *dp)
++{
++	if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
++		return DP_CAP_PIN_ASSIGN_DFP_D(dp->alt->vdo);
++	else
++		return DP_CAP_PIN_ASSIGN_UFP_D(dp->alt->vdo);
++}
++
+ static ssize_t
+ pin_assignment_store(struct device *dev, struct device_attribute *attr,
+ 		     const char *buf, size_t size)
+@@ -445,10 +457,7 @@ pin_assignment_store(struct device *dev, struct device_attribute *attr,
+ 		goto out_unlock;
+ 	}
+ 
+-	if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
+-		assignments = DP_CAP_UFP_D_PIN_ASSIGN(dp->alt->vdo);
+-	else
+-		assignments = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo);
++	assignments = get_current_pin_assignments(dp);
+ 
+ 	if (!(DP_CONF_GET_PIN_ASSIGN(conf) & assignments)) {
+ 		ret = -EINVAL;
+@@ -485,10 +494,7 @@ static ssize_t pin_assignment_show(struct device *dev,
+ 
+ 	cur = get_count_order(DP_CONF_GET_PIN_ASSIGN(dp->data.conf));
+ 
+-	if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
+-		assignments = DP_CAP_UFP_D_PIN_ASSIGN(dp->alt->vdo);
+-	else
+-		assignments = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo);
++	assignments = get_current_pin_assignments(dp);
+ 
+ 	for (i = 0; assignments; assignments >>= 1, i++) {
+ 		if (assignments & 1) {
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 904c7b4ce2f0c..59b366b5c6144 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -4594,14 +4594,13 @@ static void run_state_machine(struct tcpm_port *port)
+ 		tcpm_set_state(port, ready_state(port), 0);
+ 		break;
+ 	case DR_SWAP_CHANGE_DR:
+-		if (port->data_role == TYPEC_HOST) {
+-			tcpm_unregister_altmodes(port);
++		tcpm_unregister_altmodes(port);
++		if (port->data_role == TYPEC_HOST)
+ 			tcpm_set_roles(port, true, port->pwr_role,
+ 				       TYPEC_DEVICE);
+-		} else {
++		else
+ 			tcpm_set_roles(port, true, port->pwr_role,
+ 				       TYPEC_HOST);
+-		}
+ 		tcpm_ams_finish(port);
+ 		tcpm_set_state(port, ready_state(port), 0);
+ 		break;
+diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+index 6af9fdbb86b7a..058fbe28107e9 100644
+--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
++++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+@@ -116,8 +116,9 @@ int mlx5_vdpa_create_mkey(struct mlx5_vdpa_dev *mvdev, u32 *mkey, u32 *in,
+ 			  int inlen);
+ int mlx5_vdpa_destroy_mkey(struct mlx5_vdpa_dev *mvdev, u32 mkey);
+ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+-			     bool *change_map);
+-int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb);
++			     bool *change_map, unsigned int asid);
++int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
++			unsigned int asid);
+ void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev);
+ 
+ #define mlx5_vdpa_warn(__dev, format, ...)                                                         \
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index a639b9208d414..a4d7ee2339fa5 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -511,7 +511,8 @@ out:
+ 	mutex_unlock(&mr->mkey_mtx);
+ }
+ 
+-static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
++static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
++				struct vhost_iotlb *iotlb, unsigned int asid)
+ {
+ 	struct mlx5_vdpa_mr *mr = &mvdev->mr;
+ 	int err;
+@@ -519,42 +520,49 @@ static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
+ 	if (mr->initialized)
+ 		return 0;
+ 
+-	if (iotlb)
+-		err = create_user_mr(mvdev, iotlb);
+-	else
+-		err = create_dma_mr(mvdev, mr);
++	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
++		if (iotlb)
++			err = create_user_mr(mvdev, iotlb);
++		else
++			err = create_dma_mr(mvdev, mr);
+ 
+-	if (err)
+-		return err;
++		if (err)
++			return err;
++	}
+ 
+-	err = dup_iotlb(mvdev, iotlb);
+-	if (err)
+-		goto out_err;
++	if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid) {
++		err = dup_iotlb(mvdev, iotlb);
++		if (err)
++			goto out_err;
++	}
+ 
+ 	mr->initialized = true;
+ 	return 0;
+ 
+ out_err:
+-	if (iotlb)
+-		destroy_user_mr(mvdev, mr);
+-	else
+-		destroy_dma_mr(mvdev, mr);
++	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
++		if (iotlb)
++			destroy_user_mr(mvdev, mr);
++		else
++			destroy_dma_mr(mvdev, mr);
++	}
+ 
+ 	return err;
+ }
+ 
+-int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
++int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
++			unsigned int asid)
+ {
+ 	int err;
+ 
+ 	mutex_lock(&mvdev->mr.mkey_mtx);
+-	err = _mlx5_vdpa_create_mr(mvdev, iotlb);
++	err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
+ 	mutex_unlock(&mvdev->mr.mkey_mtx);
+ 	return err;
+ }
+ 
+ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+-			     bool *change_map)
++			     bool *change_map, unsigned int asid)
+ {
+ 	struct mlx5_vdpa_mr *mr = &mvdev->mr;
+ 	int err = 0;
+@@ -566,7 +574,7 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
+ 		*change_map = true;
+ 	}
+ 	if (!*change_map)
+-		err = _mlx5_vdpa_create_mr(mvdev, iotlb);
++		err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
+ 	mutex_unlock(&mr->mkey_mtx);
+ 
+ 	return err;
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 444d6572b2d05..3a6dbbc6440d4 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1823,6 +1823,9 @@ static virtio_net_ctrl_ack handle_ctrl_vlan(struct mlx5_vdpa_dev *mvdev, u8 cmd)
+ 	size_t read;
+ 	u16 id;
+ 
++	if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VLAN)))
++		return status;
++
+ 	switch (cmd) {
+ 	case VIRTIO_NET_CTRL_VLAN_ADD:
+ 		read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, &vlan, sizeof(vlan));
+@@ -2391,7 +2394,8 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
+ 	}
+ }
+ 
+-static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
++static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
++				struct vhost_iotlb *iotlb, unsigned int asid)
+ {
+ 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+ 	int err;
+@@ -2403,7 +2407,7 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb
+ 
+ 	teardown_driver(ndev);
+ 	mlx5_vdpa_destroy_mr(mvdev);
+-	err = mlx5_vdpa_create_mr(mvdev, iotlb);
++	err = mlx5_vdpa_create_mr(mvdev, iotlb, asid);
+ 	if (err)
+ 		goto err_mr;
+ 
+@@ -2584,7 +2588,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
+ 	++mvdev->generation;
+ 
+ 	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
+-		if (mlx5_vdpa_create_mr(mvdev, NULL))
++		if (mlx5_vdpa_create_mr(mvdev, NULL, 0))
+ 			mlx5_vdpa_warn(mvdev, "create MR failed\n");
+ 	}
+ 	up_write(&ndev->reslock);
+@@ -2620,41 +2624,20 @@ static u32 mlx5_vdpa_get_generation(struct vdpa_device *vdev)
+ 	return mvdev->generation;
+ }
+ 
+-static int set_map_control(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
+-{
+-	u64 start = 0ULL, last = 0ULL - 1;
+-	struct vhost_iotlb_map *map;
+-	int err = 0;
+-
+-	spin_lock(&mvdev->cvq.iommu_lock);
+-	vhost_iotlb_reset(mvdev->cvq.iotlb);
+-
+-	for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
+-	     map = vhost_iotlb_itree_next(map, start, last)) {
+-		err = vhost_iotlb_add_range(mvdev->cvq.iotlb, map->start,
+-					    map->last, map->addr, map->perm);
+-		if (err)
+-			goto out;
+-	}
+-
+-out:
+-	spin_unlock(&mvdev->cvq.iommu_lock);
+-	return err;
+-}
+-
+-static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
++static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
++			unsigned int asid)
+ {
+ 	bool change_map;
+ 	int err;
+ 
+-	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
++	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map, asid);
+ 	if (err) {
+ 		mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
+ 		return err;
+ 	}
+ 
+ 	if (change_map)
+-		err = mlx5_vdpa_change_map(mvdev, iotlb);
++		err = mlx5_vdpa_change_map(mvdev, iotlb, asid);
+ 
+ 	return err;
+ }
+@@ -2667,16 +2650,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
+ 	int err = -EINVAL;
+ 
+ 	down_write(&ndev->reslock);
+-	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] == asid) {
+-		err = set_map_data(mvdev, iotlb);
+-		if (err)
+-			goto out;
+-	}
+-
+-	if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] == asid)
+-		err = set_map_control(mvdev, iotlb);
+-
+-out:
++	err = set_map_data(mvdev, iotlb, asid);
+ 	up_write(&ndev->reslock);
+ 	return err;
+ }
+@@ -2842,8 +2816,8 @@ static int mlx5_vdpa_suspend(struct vdpa_device *vdev)
+ 	int i;
+ 
+ 	down_write(&ndev->reslock);
+-	mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+ 	ndev->nb_registered = false;
++	mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+ 	flush_workqueue(ndev->mvdev.wq);
+ 	for (i = 0; i < ndev->cur_num_vqs; i++) {
+ 		mvq = &ndev->vqs[i];
+@@ -3021,7 +2995,7 @@ static void update_carrier(struct work_struct *work)
+ 	else
+ 		ndev->config.status &= cpu_to_mlx5vdpa16(mvdev, ~VIRTIO_NET_S_LINK_UP);
+ 
+-	if (ndev->config_cb.callback)
++	if (ndev->nb_registered && ndev->config_cb.callback)
+ 		ndev->config_cb.callback(ndev->config_cb.private);
+ 
+ 	kfree(wqent);
+@@ -3038,21 +3012,13 @@ static int event_handler(struct notifier_block *nb, unsigned long event, void *p
+ 		switch (eqe->sub_type) {
+ 		case MLX5_PORT_CHANGE_SUBTYPE_DOWN:
+ 		case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE:
+-			down_read(&ndev->reslock);
+-			if (!ndev->nb_registered) {
+-				up_read(&ndev->reslock);
+-				return NOTIFY_DONE;
+-			}
+ 			wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
+-			if (!wqent) {
+-				up_read(&ndev->reslock);
++			if (!wqent)
+ 				return NOTIFY_DONE;
+-			}
+ 
+ 			wqent->mvdev = &ndev->mvdev;
+ 			INIT_WORK(&wqent->work, update_carrier);
+ 			queue_work(ndev->mvdev.wq, &wqent->work);
+-			up_read(&ndev->reslock);
+ 			ret = NOTIFY_OK;
+ 			break;
+ 		default:
+@@ -3187,7 +3153,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ 		goto err_mpfs;
+ 
+ 	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
+-		err = mlx5_vdpa_create_mr(mvdev, NULL);
++		err = mlx5_vdpa_create_mr(mvdev, NULL, 0);
+ 		if (err)
+ 			goto err_res;
+ 	}
+@@ -3239,8 +3205,8 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
+ 	struct workqueue_struct *wq;
+ 
+ 	if (ndev->nb_registered) {
+-		mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+ 		ndev->nb_registered = false;
++		mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+ 	}
+ 	wq = mvdev->wq;
+ 	mvdev->wq = NULL;
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
+index 11f5a121df243..584b975a98a7e 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
+@@ -62,6 +62,9 @@ static bool receive_filter(struct vdpasim *vdpasim, size_t len)
+ 	if (len < ETH_ALEN + hdr_len)
+ 		return false;
+ 
++	if (is_broadcast_ether_addr(vdpasim->buffer + hdr_len) ||
++	    is_multicast_ether_addr(vdpasim->buffer + hdr_len))
++		return true;
+ 	if (!strncmp(vdpasim->buffer + hdr_len, vio_config->mac, ETH_ALEN))
+ 		return true;
+ 
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 35dceee3ed560..31017ebc4d7c7 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -1440,6 +1440,9 @@ static bool vduse_validate_config(struct vduse_dev_config *config)
+ 	if (config->config_size > PAGE_SIZE)
+ 		return false;
+ 
++	if (config->vq_num > 0xffff)
++		return false;
++
+ 	if (!device_is_allowed(config->device_id))
+ 		return false;
+ 
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+index 54b0f034c2edf..7cddb7b8ae344 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+@@ -1536,22 +1536,28 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
+ {
+ 	struct dsi_data *dsi = dsi_get_dsidrv_data(dsidev);
+ 	unsigned long flags;
+-	struct dsi_irq_stats stats;
++	struct dsi_irq_stats *stats;
++
++	stats = kzalloc(sizeof(*stats), GFP_KERNEL);
++	if (!stats) {
++		seq_printf(s, "out of memory\n");
++		return;
++	}
+ 
+ 	spin_lock_irqsave(&dsi->irq_stats_lock, flags);
+ 
+-	stats = dsi->irq_stats;
++	*stats = dsi->irq_stats;
+ 	memset(&dsi->irq_stats, 0, sizeof(dsi->irq_stats));
+ 	dsi->irq_stats.last_reset = jiffies;
+ 
+ 	spin_unlock_irqrestore(&dsi->irq_stats_lock, flags);
+ 
+ 	seq_printf(s, "period %u ms\n",
+-			jiffies_to_msecs(jiffies - stats.last_reset));
++			jiffies_to_msecs(jiffies - stats->last_reset));
+ 
+-	seq_printf(s, "irqs %d\n", stats.irq_count);
++	seq_printf(s, "irqs %d\n", stats->irq_count);
+ #define PIS(x) \
+-	seq_printf(s, "%-20s %10d\n", #x, stats.dsi_irqs[ffs(DSI_IRQ_##x)-1])
++	seq_printf(s, "%-20s %10d\n", #x, stats->dsi_irqs[ffs(DSI_IRQ_##x)-1])
+ 
+ 	seq_printf(s, "-- DSI%d interrupts --\n", dsi->module_id + 1);
+ 	PIS(VC0);
+@@ -1575,10 +1581,10 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
+ 
+ #define PIS(x) \
+ 	seq_printf(s, "%-20s %10d %10d %10d %10d\n", #x, \
+-			stats.vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
+-			stats.vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
+-			stats.vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
+-			stats.vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
++			stats->vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
++			stats->vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
++			stats->vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
++			stats->vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
+ 
+ 	seq_printf(s, "-- VC interrupts --\n");
+ 	PIS(CS);
+@@ -1594,7 +1600,7 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
+ 
+ #define PIS(x) \
+ 	seq_printf(s, "%-20s %10d\n", #x, \
+-			stats.cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
++			stats->cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
+ 
+ 	seq_printf(s, "-- CIO interrupts --\n");
+ 	PIS(ERRSYNCESC1);
+@@ -1618,6 +1624,8 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
+ 	PIS(ULPSACTIVENOT_ALL0);
+ 	PIS(ULPSACTIVENOT_ALL1);
+ #undef PIS
++
++	kfree(stats);
+ }
+ 
+ static void dsi1_dump_irqs(struct seq_file *s)
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index c3b9f27618497..edf2e18014cdf 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -303,7 +303,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	int err;
+ 
+ 	if (index >= vp_modern_get_num_queues(mdev))
+-		return ERR_PTR(-ENOENT);
++		return ERR_PTR(-EINVAL);
+ 
+ 	/* Check if queue is either not available or already active. */
+ 	num = vp_modern_get_queue_size(mdev, index);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 6538f52262ca4..883a3671a9774 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -344,7 +344,14 @@ error:
+ 	btrfs_print_tree(eb, 0);
+ 	btrfs_err(fs_info, "block=%llu write time tree block corruption detected",
+ 		  eb->start);
+-	WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++	/*
++	 * Be noisy if this is an extent buffer from a log tree. We don't abort
++	 * a transaction in case there's a bad log tree extent buffer, we just
++	 * fallback to a transaction commit. Still we want to know when there is
++	 * a bad log tree extent buffer, as that may signal a bug somewhere.
++	 */
++	WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG) ||
++		btrfs_header_owner(eb) == BTRFS_TREE_LOG_OBJECTID);
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 2801c991814f5..571fcc5ae4dcf 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -1706,6 +1706,11 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
+ 		BUG();
+ 	if (ret && insert_reserved)
+ 		btrfs_pin_extent(trans, node->bytenr, node->num_bytes, 1);
++	if (ret < 0)
++		btrfs_err(trans->fs_info,
++"failed to run delayed ref for logical %llu num_bytes %llu type %u action %u ref_mod %d: %d",
++			  node->bytenr, node->num_bytes, node->type,
++			  node->action, node->ref_mod, ret);
+ 	return ret;
+ }
+ 
+@@ -1947,8 +1952,6 @@ static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
+ 		if (ret) {
+ 			unselect_delayed_ref_head(delayed_refs, locked_ref);
+ 			btrfs_put_delayed_ref(ref);
+-			btrfs_debug(fs_info, "run_one_delayed_ref returned %d",
+-				    ret);
+ 			return ret;
+ 		}
+ 
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 9bef8eaa074a0..23056d9914d84 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -3838,6 +3838,7 @@ static loff_t find_desired_extent(struct btrfs_inode *inode, loff_t offset,
+ 		struct extent_buffer *leaf = path->nodes[0];
+ 		struct btrfs_file_extent_item *extent;
+ 		u64 extent_end;
++		u8 type;
+ 
+ 		if (path->slots[0] >= btrfs_header_nritems(leaf)) {
+ 			ret = btrfs_next_leaf(root, path);
+@@ -3892,10 +3893,16 @@ static loff_t find_desired_extent(struct btrfs_inode *inode, loff_t offset,
+ 
+ 		extent = btrfs_item_ptr(leaf, path->slots[0],
+ 					struct btrfs_file_extent_item);
++		type = btrfs_file_extent_type(leaf, extent);
+ 
+-		if (btrfs_file_extent_disk_bytenr(leaf, extent) == 0 ||
+-		    btrfs_file_extent_type(leaf, extent) ==
+-		    BTRFS_FILE_EXTENT_PREALLOC) {
++		/*
++		 * Can't access the extent's disk_bytenr field if this is an
++		 * inline extent, since at that offset, it's where the extent
++		 * data starts.
++		 */
++		if (type == BTRFS_FILE_EXTENT_PREALLOC ||
++		    (type == BTRFS_FILE_EXTENT_REG &&
++		     btrfs_file_extent_disk_bytenr(leaf, extent) == 0)) {
+ 			/*
+ 			 * Explicit hole or prealloc extent, search for delalloc.
+ 			 * A prealloc extent is treated like a hole.
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index b74105a10f16c..5ac65384c9471 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2751,9 +2751,19 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
+ 			      BTRFS_QGROUP_RUNTIME_FLAG_NO_ACCOUNTING)) {
+ 			/*
+ 			 * Old roots should be searched when inserting qgroup
+-			 * extent record
++			 * extent record.
++			 *
++			 * But for INCONSISTENT (NO_ACCOUNTING) -> rescan case,
++			 * we may have some record inserted during
++			 * NO_ACCOUNTING (thus no old_roots populated), but
++			 * later we start rescan, which clears NO_ACCOUNTING,
++			 * leaving some inserted records without old_roots
++			 * populated.
++			 *
++			 * Those cases are rare and should not cause too much
++			 * time spent during commit_transaction().
+ 			 */
+-			if (WARN_ON(!record->old_roots)) {
++			if (!record->old_roots) {
+ 				/* Search commit root to find old_roots */
+ 				ret = btrfs_find_all_roots(NULL, fs_info,
+ 						record->bytenr, 0,
+@@ -3338,6 +3348,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 	int err = -ENOMEM;
+ 	int ret = 0;
+ 	bool stopped = false;
++	bool did_leaf_rescans = false;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+@@ -3358,6 +3369,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
+ 		}
+ 
+ 		err = qgroup_rescan_leaf(trans, path);
++		did_leaf_rescans = true;
+ 
+ 		if (err > 0)
+ 			btrfs_commit_transaction(trans);
+@@ -3378,16 +3390,23 @@ out:
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+ 
+ 	/*
+-	 * only update status, since the previous part has already updated the
+-	 * qgroup info.
++	 * Only update status, since the previous part has already updated the
++	 * qgroup info, and only if we did any actual work. This also prevents
++	 * race with a concurrent quota disable, which has already set
++	 * fs_info->quota_root to NULL and cleared BTRFS_FS_QUOTA_ENABLED at
++	 * btrfs_quota_disable().
+ 	 */
+-	trans = btrfs_start_transaction(fs_info->quota_root, 1);
+-	if (IS_ERR(trans)) {
+-		err = PTR_ERR(trans);
++	if (did_leaf_rescans) {
++		trans = btrfs_start_transaction(fs_info->quota_root, 1);
++		if (IS_ERR(trans)) {
++			err = PTR_ERR(trans);
++			trans = NULL;
++			btrfs_err(fs_info,
++				  "fail to start transaction for status update: %d",
++				  err);
++		}
++	} else {
+ 		trans = NULL;
+-		btrfs_err(fs_info,
+-			  "fail to start transaction for status update: %d",
+-			  err);
+ 	}
+ 
+ 	mutex_lock(&fs_info->qgroup_rescan_lock);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index c3cf3dabe0b1b..7535857f4c8fb 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3011,7 +3011,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 		ret = 0;
+ 	if (ret) {
+ 		blk_finish_plug(&plug);
+-		btrfs_abort_transaction(trans, ret);
+ 		btrfs_set_log_full_commit(trans);
+ 		mutex_unlock(&root->log_mutex);
+ 		goto out;
+@@ -3076,15 +3075,12 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 
+ 		blk_finish_plug(&plug);
+ 		btrfs_set_log_full_commit(trans);
+-
+-		if (ret != -ENOSPC) {
+-			btrfs_abort_transaction(trans, ret);
+-			mutex_unlock(&log_root_tree->log_mutex);
+-			goto out;
+-		}
++		if (ret != -ENOSPC)
++			btrfs_err(fs_info,
++				  "failed to update log for root %llu ret %d",
++				  root->root_key.objectid, ret);
+ 		btrfs_wait_tree_log_extents(log, mark);
+ 		mutex_unlock(&log_root_tree->log_mutex);
+-		ret = BTRFS_LOG_FORCE_COMMIT;
+ 		goto out;
+ 	}
+ 
+@@ -3143,7 +3139,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 		goto out_wake_log_root;
+ 	} else if (ret) {
+ 		btrfs_set_log_full_commit(trans);
+-		btrfs_abort_transaction(trans, ret);
+ 		mutex_unlock(&log_root_tree->log_mutex);
+ 		goto out_wake_log_root;
+ 	}
+@@ -3857,7 +3852,10 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ 					      path->slots[0]);
+ 			if (tmp.type == BTRFS_DIR_INDEX_KEY)
+ 				last_old_dentry_offset = tmp.offset;
++		} else if (ret < 0) {
++			err = ret;
+ 		}
++
+ 		goto done;
+ 	}
+ 
+@@ -3877,19 +3875,34 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ 		 */
+ 		if (tmp.type == BTRFS_DIR_INDEX_KEY)
+ 			last_old_dentry_offset = tmp.offset;
++	} else if (ret < 0) {
++		err = ret;
++		goto done;
+ 	}
++
+ 	btrfs_release_path(path);
+ 
+ 	/*
+-	 * Find the first key from this transaction again.  See the note for
+-	 * log_new_dir_dentries, if we're logging a directory recursively we
+-	 * won't be holding its i_mutex, which means we can modify the directory
+-	 * while we're logging it.  If we remove an entry between our first
+-	 * search and this search we'll not find the key again and can just
+-	 * bail.
++	 * Find the first key from this transaction again or the one we were at
++	 * in the loop below in case we had to reschedule. We may be logging the
++	 * directory without holding its VFS lock, which happen when logging new
++	 * dentries (through log_new_dir_dentries()) or in some cases when we
++	 * need to log the parent directory of an inode. This means a dir index
++	 * key might be deleted from the inode's root, and therefore we may not
++	 * find it anymore. If we can't find it, just move to the next key. We
++	 * can not bail out and ignore, because if we do that we will simply
++	 * not log dir index keys that come after the one that was just deleted
++	 * and we can end up logging a dir index range that ends at (u64)-1
++	 * (@last_offset is initialized to that), resulting in removing dir
++	 * entries we should not remove at log replay time.
+ 	 */
+ search:
+ 	ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
++	if (ret > 0)
++		ret = btrfs_next_item(root, path);
++	if (ret < 0)
++		err = ret;
++	/* If ret is 1, there are no more keys in the inode's root. */
+ 	if (ret != 0)
+ 		goto done;
+ 
+@@ -5608,8 +5621,10 @@ static int add_conflicting_inode(struct btrfs_trans_handle *trans,
+ 	 * LOG_INODE_EXISTS mode) and slow down other fsyncs or transaction
+ 	 * commits.
+ 	 */
+-	if (ctx->num_conflict_inodes >= MAX_CONFLICT_INODES)
++	if (ctx->num_conflict_inodes >= MAX_CONFLICT_INODES) {
++		btrfs_set_log_full_commit(trans);
+ 		return BTRFS_LOG_FORCE_COMMIT;
++	}
+ 
+ 	inode = btrfs_iget(root->fs_info->sb, ino, root);
+ 	/*
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index dba087ad40ea2..65e4e887605f9 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -770,8 +770,11 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 					BTRFS_SUPER_FLAG_CHANGING_FSID_V2);
+ 
+ 	error = lookup_bdev(path, &path_devt);
+-	if (error)
++	if (error) {
++		btrfs_err(NULL, "failed to lookup block device for path %s: %d",
++			  path, error);
+ 		return ERR_PTR(error);
++	}
+ 
+ 	if (fsid_change_in_progress) {
+ 		if (!has_metadata_uuid)
+@@ -836,6 +839,9 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 
+ 	if (!device) {
+ 		if (fs_devices->opened) {
++			btrfs_err(NULL,
++		"device %s belongs to fsid %pU, and the fs is already mounted",
++				  path, fs_devices->fsid);
+ 			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EBUSY);
+ 		}
+@@ -910,6 +916,9 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 			 * generation are equal.
+ 			 */
+ 			mutex_unlock(&fs_devices->device_list_mutex);
++			btrfs_err(NULL,
++"device %s already registered with a higher generation, found %llu expect %llu",
++				  path, found_transid, device->generation);
+ 			return ERR_PTR(-EEXIST);
+ 		}
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index fde1c371605a1..eab36e4ea1300 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3554,9 +3554,6 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ 	struct cifs_tcon *tcon = mnt_ctx->tcon;
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ 	char *full_path;
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+-	bool nodfs = cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS;
+-#endif
+ 
+ 	if (!server->ops->is_path_accessible)
+ 		return -EOPNOTSUPP;
+@@ -3573,19 +3570,6 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ 
+ 	rc = server->ops->is_path_accessible(xid, tcon, cifs_sb,
+ 					     full_path);
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+-	if (nodfs) {
+-		if (rc == -EREMOTE)
+-			rc = -EOPNOTSUPP;
+-		goto out;
+-	}
+-
+-	/* path *might* exist with non-ASCII characters in DFS root
+-	 * try again with full path (only if nodfs is not set) */
+-	if (rc == -ENOENT && is_tcon_dfs(tcon))
+-		rc = cifs_dfs_query_info_nonascii_quirk(xid, tcon, cifs_sb,
+-							full_path);
+-#endif
+ 	if (rc != 0 && rc != -EREMOTE)
+ 		goto out;
+ 
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 4e2ca3c6e5c00..0c9b619e4386b 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -991,12 +991,6 @@ int cifs_get_inode_info(struct inode **inode, const char *full_path,
+ 		}
+ 		rc = server->ops->query_path_info(xid, tcon, cifs_sb, full_path, &tmp_data,
+ 						  &adjust_tz, &is_reparse_point);
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+-		if (rc == -ENOENT && is_tcon_dfs(tcon))
+-			rc = cifs_dfs_query_info_nonascii_quirk(xid, tcon,
+-								cifs_sb,
+-								full_path);
+-#endif
+ 		data = &tmp_data;
+ 	}
+ 
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 1cbecd64d697f..062175994e879 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -1314,49 +1314,4 @@ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix)
+ 	cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
+ 	return 0;
+ }
+-
+-/** cifs_dfs_query_info_nonascii_quirk
+- * Handle weird Windows SMB server behaviour. It responds with
+- * STATUS_OBJECT_NAME_INVALID code to SMB2 QUERY_INFO request
+- * for "\<server>\<dfsname>\<linkpath>" DFS reference,
+- * where <dfsname> contains non-ASCII unicode symbols.
+- *
+- * Check such DFS reference.
+- */
+-int cifs_dfs_query_info_nonascii_quirk(const unsigned int xid,
+-				       struct cifs_tcon *tcon,
+-				       struct cifs_sb_info *cifs_sb,
+-				       const char *linkpath)
+-{
+-	char *treename, *dfspath, sep;
+-	int treenamelen, linkpathlen, rc;
+-
+-	treename = tcon->tree_name;
+-	/* MS-DFSC: All paths in REQ_GET_DFS_REFERRAL and RESP_GET_DFS_REFERRAL
+-	 * messages MUST be encoded with exactly one leading backslash, not two
+-	 * leading backslashes.
+-	 */
+-	sep = CIFS_DIR_SEP(cifs_sb);
+-	if (treename[0] == sep && treename[1] == sep)
+-		treename++;
+-	linkpathlen = strlen(linkpath);
+-	treenamelen = strnlen(treename, MAX_TREE_SIZE + 1);
+-	dfspath = kzalloc(treenamelen + linkpathlen + 1, GFP_KERNEL);
+-	if (!dfspath)
+-		return -ENOMEM;
+-	if (treenamelen)
+-		memcpy(dfspath, treename, treenamelen);
+-	memcpy(dfspath + treenamelen, linkpath, linkpathlen);
+-	rc = dfs_cache_find(xid, tcon->ses, cifs_sb->local_nls,
+-			    cifs_remap(cifs_sb), dfspath, NULL, NULL);
+-	if (rc == 0) {
+-		cifs_dbg(FYI, "DFS ref '%s' is found, emulate -EREMOTE\n",
+-			 dfspath);
+-		rc = -EREMOTE;
+-	} else {
+-		cifs_dbg(FYI, "%s: dfs_cache_find returned %d\n", __func__, rc);
+-	}
+-	kfree(dfspath);
+-	return rc;
+-}
+ #endif
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index 68e08c85fbb87..6c84d2983166a 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -540,22 +540,41 @@ int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 	rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, FILE_READ_ATTRIBUTES, FILE_OPEN,
+ 			      create_options, ACL_NO_MODE, data, SMB2_OP_QUERY_INFO, cfile,
+ 			      err_iov, err_buftype);
+-	if (rc == -EOPNOTSUPP) {
+-		if (err_iov[0].iov_base && err_buftype[0] != CIFS_NO_BUFFER &&
+-		    ((struct smb2_hdr *)err_iov[0].iov_base)->Command == SMB2_CREATE &&
+-		    ((struct smb2_hdr *)err_iov[0].iov_base)->Status == STATUS_STOPPED_ON_SYMLINK) {
+-			rc = smb2_parse_symlink_response(cifs_sb, err_iov, &data->symlink_target);
++	if (rc) {
++		struct smb2_hdr *hdr = err_iov[0].iov_base;
++
++		if (unlikely(!hdr || err_buftype[0] == CIFS_NO_BUFFER))
++			goto out;
++		if (rc == -EOPNOTSUPP && hdr->Command == SMB2_CREATE &&
++		    hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
++			rc = smb2_parse_symlink_response(cifs_sb, err_iov,
++							 &data->symlink_target);
+ 			if (rc)
+ 				goto out;
+-		}
+-		*reparse = true;
+-		create_options |= OPEN_REPARSE_POINT;
+ 
+-		/* Failed on a symbolic link - query a reparse point info */
+-		cifs_get_readable_path(tcon, full_path, &cfile);
+-		rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, FILE_READ_ATTRIBUTES,
+-				      FILE_OPEN, create_options, ACL_NO_MODE, data,
+-				      SMB2_OP_QUERY_INFO, cfile, NULL, NULL);
++			*reparse = true;
++			create_options |= OPEN_REPARSE_POINT;
++
++			/* Failed on a symbolic link - query a reparse point info */
++			cifs_get_readable_path(tcon, full_path, &cfile);
++			rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
++					      FILE_READ_ATTRIBUTES, FILE_OPEN,
++					      create_options, ACL_NO_MODE, data,
++					      SMB2_OP_QUERY_INFO, cfile, NULL, NULL);
++			goto out;
++		} else if (rc != -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) &&
++			   hdr->Status == STATUS_OBJECT_NAME_INVALID) {
++			/*
++			 * Handle weird Windows SMB server behaviour. It responds with
++			 * STATUS_OBJECT_NAME_INVALID code to SMB2 QUERY_INFO request
++			 * for "\<server>\<dfsname>\<linkpath>" DFS reference,
++			 * where <dfsname> contains non-ASCII unicode symbols.
++			 */
++			rc = -EREMOTE;
++		}
++		if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
++		    (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))
++			rc = -EOPNOTSUPP;
+ 	}
+ 
+ out:
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 8da98918cf86b..1ff5b6b0e07a1 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -797,7 +797,9 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ 	int rc;
+ 	__le16 *utf16_path;
+ 	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
++	int err_buftype = CIFS_NO_BUFFER;
+ 	struct cifs_open_parms oparms;
++	struct kvec err_iov = {};
+ 	struct cifs_fid fid;
+ 	struct cached_fid *cfid;
+ 
+@@ -821,14 +823,32 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+-	rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL, NULL,
+-		       NULL);
++	rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
++		       &err_iov, &err_buftype);
+ 	if (rc) {
+-		kfree(utf16_path);
+-		return rc;
++		struct smb2_hdr *hdr = err_iov.iov_base;
++
++		if (unlikely(!hdr || err_buftype == CIFS_NO_BUFFER))
++			goto out;
++		/*
++		 * Handle weird Windows SMB server behaviour. It responds with
++		 * STATUS_OBJECT_NAME_INVALID code to SMB2 QUERY_INFO request
++		 * for "\<server>\<dfsname>\<linkpath>" DFS reference,
++		 * where <dfsname> contains non-ASCII unicode symbols.
++		 */
++		if (rc != -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) &&
++		    hdr->Status == STATUS_OBJECT_NAME_INVALID)
++			rc = -EREMOTE;
++		if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
++		    (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))
++			rc = -EOPNOTSUPP;
++		goto out;
+ 	}
+ 
+ 	rc = SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
++
++out:
++	free_rsp_buf(err_buftype, err_iov.iov_base);
+ 	kfree(utf16_path);
+ 	return rc;
+ }
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 4ac5b1bfaf781..92f39052d3117 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -541,9 +541,10 @@ static void
+ assemble_neg_contexts(struct smb2_negotiate_req *req,
+ 		      struct TCP_Server_Info *server, unsigned int *total_len)
+ {
+-	char *pneg_ctxt;
+-	char *hostname = NULL;
+ 	unsigned int ctxt_len, neg_context_count;
++	struct TCP_Server_Info *pserver;
++	char *pneg_ctxt;
++	char *hostname;
+ 
+ 	if (*total_len > 200) {
+ 		/* In case length corrupted don't want to overrun smb buffer */
+@@ -574,8 +575,9 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
+ 	 * secondary channels don't have the hostname field populated
+ 	 * use the hostname field in the primary channel instead
+ 	 */
+-	hostname = CIFS_SERVER_IS_CHAN(server) ?
+-		server->primary_server->hostname : server->hostname;
++	pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
++	cifs_server_lock(pserver);
++	hostname = pserver->hostname;
+ 	if (hostname && (hostname[0] != 0)) {
+ 		ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt,
+ 					      hostname);
+@@ -584,6 +586,7 @@ assemble_neg_contexts(struct smb2_negotiate_req *req,
+ 		neg_context_count = 3;
+ 	} else
+ 		neg_context_count = 2;
++	cifs_server_unlock(pserver);
+ 
+ 	build_posix_ctxt((struct smb2_posix_neg_context *)pneg_ctxt);
+ 	*total_len += sizeof(struct smb2_posix_neg_context);
+@@ -4159,12 +4162,15 @@ smb2_readv_callback(struct mid_q_entry *mid)
+ 				(struct smb2_hdr *)rdata->iov[0].iov_base;
+ 	struct cifs_credits credits = { .value = 0, .instance = 0 };
+ 	struct smb_rqst rqst = { .rq_iov = &rdata->iov[1],
+-				 .rq_nvec = 1,
+-				 .rq_pages = rdata->pages,
+-				 .rq_offset = rdata->page_offset,
+-				 .rq_npages = rdata->nr_pages,
+-				 .rq_pagesz = rdata->pagesz,
+-				 .rq_tailsz = rdata->tailsz };
++				 .rq_nvec = 1, };
++
++	if (rdata->got_bytes) {
++		rqst.rq_pages = rdata->pages;
++		rqst.rq_offset = rdata->page_offset;
++		rqst.rq_npages = rdata->nr_pages;
++		rqst.rq_pagesz = rdata->pagesz;
++		rqst.rq_tailsz = rdata->tailsz;
++	}
+ 
+ 	WARN_ONCE(rdata->server != mid->server,
+ 		  "rdata server %p != mid server %p",
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 932c070173b97..6c9e6f78a3e37 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -415,7 +415,8 @@ static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
+ 	struct extent_node *en;
+ 	bool ret = false;
+ 
+-	f2fs_bug_on(sbi, !et);
++	if (!et)
++		return false;
+ 
+ 	trace_f2fs_lookup_extent_tree_start(inode, pgofs);
+ 
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index ad34a33b0737c..4974cd18ca468 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -783,6 +783,12 @@ filelayout_alloc_lseg(struct pnfs_layout_hdr *layoutid,
+ 	return &fl->generic_hdr;
+ }
+ 
++static bool
++filelayout_lseg_is_striped(const struct nfs4_filelayout_segment *flseg)
++{
++	return flseg->num_fh > 1;
++}
++
+ /*
+  * filelayout_pg_test(). Called by nfs_can_coalesce_requests()
+  *
+@@ -803,6 +809,8 @@ filelayout_pg_test(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
+ 	size = pnfs_generic_pg_test(pgio, prev, req);
+ 	if (!size)
+ 		return 0;
++	else if (!filelayout_lseg_is_striped(FILELAYOUT_LSEG(pgio->pg_lseg)))
++		return size;
+ 
+ 	/* see if req and prev are in the same stripe */
+ 	if (prev) {
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index b9d15c3df3cc1..40ce92a332fe7 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -480,9 +480,18 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
+ 	ret = nilfs_btnode_submit_block(btnc, ptr, 0, REQ_OP_READ, &bh,
+ 					&submit_ptr);
+ 	if (ret) {
+-		if (ret != -EEXIST)
+-			return ret;
+-		goto out_check;
++		if (likely(ret == -EEXIST))
++			goto out_check;
++		if (ret == -ENOENT) {
++			/*
++			 * Block address translation failed due to invalid
++			 * value of 'ptr'.  In this case, return internal code
++			 * -EINVAL (broken bmap) to notify bmap layer of fatal
++			 * metadata corruption.
++			 */
++			ret = -EINVAL;
++		}
++		return ret;
+ 	}
+ 
+ 	if (ra) {
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 578c2bcfb1d93..63169529b52c4 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -2038,7 +2038,7 @@ int attr_punch_hole(struct ntfs_inode *ni, u64 vbo, u64 bytes, u32 *frame_size)
+ 		return -ENOENT;
+ 
+ 	if (!attr_b->non_res) {
+-		u32 data_size = le32_to_cpu(attr->res.data_size);
++		u32 data_size = le32_to_cpu(attr_b->res.data_size);
+ 		u32 from, to;
+ 
+ 		if (vbo > data_size)
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 98ac37e34e3d4..cc694846617a5 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -108,6 +108,21 @@ static bool userfaultfd_is_initialized(struct userfaultfd_ctx *ctx)
+ 	return ctx->features & UFFD_FEATURE_INITIALIZED;
+ }
+ 
++static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
++				     vm_flags_t flags)
++{
++	const bool uffd_wp_changed = (vma->vm_flags ^ flags) & VM_UFFD_WP;
++
++	vma->vm_flags = flags;
++	/*
++	 * For shared mappings, we want to enable writenotify while
++	 * userfaultfd-wp is enabled (see vma_wants_writenotify()). We'll simply
++	 * recalculate vma->vm_page_prot whenever userfaultfd-wp changes.
++	 */
++	if ((vma->vm_flags & VM_SHARED) && uffd_wp_changed)
++		vma_set_page_prot(vma);
++}
++
+ static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode,
+ 				     int wake_flags, void *key)
+ {
+@@ -618,7 +633,8 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
+ 		for_each_vma(vmi, vma) {
+ 			if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
+ 				vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+-				vma->vm_flags &= ~__VM_UFFD_FLAGS;
++				userfaultfd_set_vm_flags(vma,
++							 vma->vm_flags & ~__VM_UFFD_FLAGS);
+ 			}
+ 		}
+ 		mmap_write_unlock(mm);
+@@ -652,7 +668,7 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
+ 	octx = vma->vm_userfaultfd_ctx.ctx;
+ 	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
+ 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+-		vma->vm_flags &= ~__VM_UFFD_FLAGS;
++		userfaultfd_set_vm_flags(vma, vma->vm_flags & ~__VM_UFFD_FLAGS);
+ 		return 0;
+ 	}
+ 
+@@ -733,7 +749,7 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
+ 	} else {
+ 		/* Drop uffd context if remap feature not enabled */
+ 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+-		vma->vm_flags &= ~__VM_UFFD_FLAGS;
++		userfaultfd_set_vm_flags(vma, vma->vm_flags & ~__VM_UFFD_FLAGS);
+ 	}
+ }
+ 
+@@ -895,7 +911,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
+ 			prev = vma;
+ 		}
+ 
+-		vma->vm_flags = new_flags;
++		userfaultfd_set_vm_flags(vma, new_flags);
+ 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ 	}
+ 	mmap_write_unlock(mm);
+@@ -1463,7 +1479,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
+ 		 * the next vma was merged into the current one and
+ 		 * the current one has not been updated yet.
+ 		 */
+-		vma->vm_flags = new_flags;
++		userfaultfd_set_vm_flags(vma, new_flags);
+ 		vma->vm_userfaultfd_ctx.ctx = ctx;
+ 
+ 		if (is_vm_hugetlb_page(vma) && uffd_disable_huge_pmd_share(vma))
+@@ -1651,7 +1667,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ 		 * the next vma was merged into the current one and
+ 		 * the current one has not been updated yet.
+ 		 */
+-		vma->vm_flags = new_flags;
++		userfaultfd_set_vm_flags(vma, new_flags);
+ 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ 
+ 	skip:
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 2c53fbb8d918e..a9c5c3f720adf 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -442,6 +442,10 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
+ 			data_size = zonefs_check_zone_condition(inode, zone,
+ 								false, false);
+ 		}
++	} else if (sbi->s_mount_opts & ZONEFS_MNTOPT_ERRORS_RO &&
++		   data_size > isize) {
++		/* Do not expose garbage data */
++		data_size = isize;
+ 	}
+ 
+ 	/*
+@@ -805,6 +809,24 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
+ 
+ 	ret = submit_bio_wait(bio);
+ 
++	/*
++	 * If the file zone was written underneath the file system, the zone
++	 * write pointer may not be where we expect it to be, but the zone
++	 * append write can still succeed. So check manually that we wrote where
++	 * we intended to, that is, at zi->i_wpoffset.
++	 */
++	if (!ret) {
++		sector_t wpsector =
++			zi->i_zsector + (zi->i_wpoffset >> SECTOR_SHIFT);
++
++		if (bio->bi_iter.bi_sector != wpsector) {
++			zonefs_warn(inode->i_sb,
++				"Corrupted write pointer %llu for zone at %llu\n",
++				wpsector, zi->i_zsector);
++			ret = -EIO;
++		}
++	}
++
+ 	zonefs_file_write_dio_end_io(iocb, size, ret, 0);
+ 	trace_zonefs_file_dio_append(inode, size, ret);
+ 
+diff --git a/include/linux/panic.h b/include/linux/panic.h
+index c7759b3f20452..979b776e3bcb3 100644
+--- a/include/linux/panic.h
++++ b/include/linux/panic.h
+@@ -11,6 +11,7 @@ extern long (*panic_blink)(int state);
+ __printf(1, 2)
+ void panic(const char *fmt, ...) __noreturn __cold;
+ void nmi_panic(struct pt_regs *regs, const char *msg);
++void check_panic_on_warn(const char *origin);
+ extern void oops_enter(void);
+ extern void oops_exit(void);
+ extern bool oops_may_print(void);
+diff --git a/include/linux/soc/ti/omap1-io.h b/include/linux/soc/ti/omap1-io.h
+index f7f12728d4a63..9a60f45899d3c 100644
+--- a/include/linux/soc/ti/omap1-io.h
++++ b/include/linux/soc/ti/omap1-io.h
+@@ -5,7 +5,7 @@
+ #ifndef __ASSEMBLER__
+ #include <linux/types.h>
+ 
+-#ifdef CONFIG_ARCH_OMAP1_ANY
++#ifdef CONFIG_ARCH_OMAP1
+ /*
+  * NOTE: Please use ioremap + __raw_read/write where possible instead of these
+  */
+@@ -15,7 +15,7 @@ extern u32 omap_readl(u32 pa);
+ extern void omap_writeb(u8 v, u32 pa);
+ extern void omap_writew(u16 v, u32 pa);
+ extern void omap_writel(u32 v, u32 pa);
+-#else
++#elif defined(CONFIG_COMPILE_TEST)
+ static inline u8 omap_readb(u32 pa)  { return 0; }
+ static inline u16 omap_readw(u32 pa) { return 0; }
+ static inline u32 omap_readl(u32 pa) { return 0; }
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 9ff1ad4dfad12..6c95af3317f73 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -751,11 +751,14 @@ extern struct device *usb_intf_get_dma_device(struct usb_interface *intf);
+ extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+ 	bool enable);
+ extern bool usb_acpi_power_manageable(struct usb_device *hdev, int index);
++extern int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index);
+ #else
+ static inline int usb_acpi_set_power_state(struct usb_device *hdev, int index,
+ 	bool enable) { return 0; }
+ static inline bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
+ 	{ return true; }
++static inline int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index)
++	{ return 0; }
+ #endif
+ 
+ /* USB autosuspend and autoresume */
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index ed50e81174bf4..5e10b5b1d16c0 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -98,7 +98,7 @@ struct raid56_bio_trace_info;
+ 	EM( FLUSH_DELALLOC_WAIT,	"FLUSH_DELALLOC_WAIT")		\
+ 	EM( FLUSH_DELALLOC_FULL,	"FLUSH_DELALLOC_FULL")		\
+ 	EM( FLUSH_DELAYED_REFS_NR,	"FLUSH_DELAYED_REFS_NR")	\
+-	EM( FLUSH_DELAYED_REFS,		"FLUSH_ELAYED_REFS")		\
++	EM( FLUSH_DELAYED_REFS,		"FLUSH_DELAYED_REFS")		\
+ 	EM( ALLOC_CHUNK,		"ALLOC_CHUNK")			\
+ 	EM( ALLOC_CHUNK_FORCE,		"ALLOC_CHUNK_FORCE")		\
+ 	EM( RUN_DELAYED_IPUTS,		"RUN_DELAYED_IPUTS")		\
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index f2f9f174fc620..ab5ae475840f4 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -281,8 +281,12 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
+ 			 * to the waitqueue, so if we get nothing back, we
+ 			 * should be safe and attempt a reissue.
+ 			 */
+-			if (unlikely(!req->cqe.res))
++			if (unlikely(!req->cqe.res)) {
++				/* Multishot armed need not reissue */
++				if (!(req->apoll_events & EPOLLONESHOT))
++					continue;
+ 				return IOU_POLL_REISSUE;
++			}
+ 		}
+ 		if (req->apoll_events & EPOLLONESHOT)
+ 			return IOU_POLL_DONE;
+diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
+index 13e4efc971e6d..190d9f9dc9870 100644
+--- a/kernel/bpf/offload.c
++++ b/kernel/bpf/offload.c
+@@ -216,9 +216,6 @@ static void __bpf_prog_offload_destroy(struct bpf_prog *prog)
+ 	if (offload->dev_state)
+ 		offload->offdev->ops->destroy(prog);
+ 
+-	/* Make sure BPF_PROG_GET_NEXT_ID can't find this dead program */
+-	bpf_prog_free_id(prog, true);
+-
+ 	list_del_init(&offload->offloads);
+ 	kfree(offload);
+ 	prog->aux->offload = NULL;
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 439ed7e5a82b8..6c61dba26f4d9 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1958,7 +1958,7 @@ static void bpf_audit_prog(const struct bpf_prog *prog, unsigned int op)
+ 		return;
+ 	if (audit_enabled == AUDIT_OFF)
+ 		return;
+-	if (op == BPF_AUDIT_LOAD)
++	if (!in_irq() && !irqs_disabled())
+ 		ctx = audit_context();
+ 	ab = audit_log_start(ctx, GFP_ATOMIC, AUDIT_BPF);
+ 	if (unlikely(!ab))
+@@ -2053,6 +2053,7 @@ static void bpf_prog_put_deferred(struct work_struct *work)
+ 	prog = aux->prog;
+ 	perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0);
+ 	bpf_audit_prog(prog, BPF_AUDIT_UNLOAD);
++	bpf_prog_free_id(prog, true);
+ 	__bpf_prog_put_noref(prog, true);
+ }
+ 
+@@ -2061,9 +2062,6 @@ static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)
+ 	struct bpf_prog_aux *aux = prog->aux;
+ 
+ 	if (atomic64_dec_and_test(&aux->refcnt)) {
+-		/* bpf_prog_free_id() must be called first */
+-		bpf_prog_free_id(prog, do_idr_lock);
+-
+ 		if (in_irq() || irqs_disabled()) {
+ 			INIT_WORK(&aux->work, bpf_prog_put_deferred);
+ 			schedule_work(&aux->work);
+diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
+index c2a2182ce5702..c4ab9d6cdbe9c 100644
+--- a/kernel/bpf/task_iter.c
++++ b/kernel/bpf/task_iter.c
+@@ -438,6 +438,7 @@ struct bpf_iter_seq_task_vma_info {
+ 	 */
+ 	struct bpf_iter_seq_task_common common;
+ 	struct task_struct *task;
++	struct mm_struct *mm;
+ 	struct vm_area_struct *vma;
+ 	u32 tid;
+ 	unsigned long prev_vm_start;
+@@ -456,16 +457,19 @@ task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
+ 	enum bpf_task_vma_iter_find_op op;
+ 	struct vm_area_struct *curr_vma;
+ 	struct task_struct *curr_task;
++	struct mm_struct *curr_mm;
+ 	u32 saved_tid = info->tid;
+ 
+ 	/* If this function returns a non-NULL vma, it holds a reference to
+-	 * the task_struct, and holds read lock on vma->mm->mmap_lock.
++	 * the task_struct, holds a refcount on mm->mm_users, and holds
++	 * read lock on vma->mm->mmap_lock.
+ 	 * If this function returns NULL, it does not hold any reference or
+ 	 * lock.
+ 	 */
+ 	if (info->task) {
+ 		curr_task = info->task;
+ 		curr_vma = info->vma;
++		curr_mm = info->mm;
+ 		/* In case of lock contention, drop mmap_lock to unblock
+ 		 * the writer.
+ 		 *
+@@ -504,13 +508,15 @@ task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
+ 		 *    4.2) VMA2 and VMA2' covers different ranges, process
+ 		 *         VMA2'.
+ 		 */
+-		if (mmap_lock_is_contended(curr_task->mm)) {
++		if (mmap_lock_is_contended(curr_mm)) {
+ 			info->prev_vm_start = curr_vma->vm_start;
+ 			info->prev_vm_end = curr_vma->vm_end;
+ 			op = task_vma_iter_find_vma;
+-			mmap_read_unlock(curr_task->mm);
+-			if (mmap_read_lock_killable(curr_task->mm))
++			mmap_read_unlock(curr_mm);
++			if (mmap_read_lock_killable(curr_mm)) {
++				mmput(curr_mm);
+ 				goto finish;
++			}
+ 		} else {
+ 			op = task_vma_iter_next_vma;
+ 		}
+@@ -535,42 +541,47 @@ again:
+ 			op = task_vma_iter_find_vma;
+ 		}
+ 
+-		if (!curr_task->mm)
++		curr_mm = get_task_mm(curr_task);
++		if (!curr_mm)
+ 			goto next_task;
+ 
+-		if (mmap_read_lock_killable(curr_task->mm))
++		if (mmap_read_lock_killable(curr_mm)) {
++			mmput(curr_mm);
+ 			goto finish;
++		}
+ 	}
+ 
+ 	switch (op) {
+ 	case task_vma_iter_first_vma:
+-		curr_vma = find_vma(curr_task->mm, 0);
++		curr_vma = find_vma(curr_mm, 0);
+ 		break;
+ 	case task_vma_iter_next_vma:
+-		curr_vma = find_vma(curr_task->mm, curr_vma->vm_end);
++		curr_vma = find_vma(curr_mm, curr_vma->vm_end);
+ 		break;
+ 	case task_vma_iter_find_vma:
+ 		/* We dropped mmap_lock so it is necessary to use find_vma
+ 		 * to find the next vma. This is similar to the  mechanism
+ 		 * in show_smaps_rollup().
+ 		 */
+-		curr_vma = find_vma(curr_task->mm, info->prev_vm_end - 1);
++		curr_vma = find_vma(curr_mm, info->prev_vm_end - 1);
+ 		/* case 1) and 4.2) above just use curr_vma */
+ 
+ 		/* check for case 2) or case 4.1) above */
+ 		if (curr_vma &&
+ 		    curr_vma->vm_start == info->prev_vm_start &&
+ 		    curr_vma->vm_end == info->prev_vm_end)
+-			curr_vma = find_vma(curr_task->mm, curr_vma->vm_end);
++			curr_vma = find_vma(curr_mm, curr_vma->vm_end);
+ 		break;
+ 	}
+ 	if (!curr_vma) {
+ 		/* case 3) above, or case 2) 4.1) with vma->next == NULL */
+-		mmap_read_unlock(curr_task->mm);
++		mmap_read_unlock(curr_mm);
++		mmput(curr_mm);
+ 		goto next_task;
+ 	}
+ 	info->task = curr_task;
+ 	info->vma = curr_vma;
++	info->mm = curr_mm;
+ 	return curr_vma;
+ 
+ next_task:
+@@ -579,6 +590,7 @@ next_task:
+ 
+ 	put_task_struct(curr_task);
+ 	info->task = NULL;
++	info->mm = NULL;
+ 	info->tid++;
+ 	goto again;
+ 
+@@ -587,6 +599,7 @@ finish:
+ 		put_task_struct(curr_task);
+ 	info->task = NULL;
+ 	info->vma = NULL;
++	info->mm = NULL;
+ 	return NULL;
+ }
+ 
+@@ -658,7 +671,9 @@ static void task_vma_seq_stop(struct seq_file *seq, void *v)
+ 		 */
+ 		info->prev_vm_start = ~0UL;
+ 		info->prev_vm_end = info->vma->vm_end;
+-		mmap_read_unlock(info->task->mm);
++		mmap_read_unlock(info->mm);
++		mmput(info->mm);
++		info->mm = NULL;
+ 		put_task_struct(info->task);
+ 		info->task = NULL;
+ 	}
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 35e0a31a0315c..15dc2ec80c467 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -67,11 +67,58 @@
+ #include <linux/io_uring.h>
+ #include <linux/kprobes.h>
+ #include <linux/rethook.h>
++#include <linux/sysfs.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/unistd.h>
+ #include <asm/mmu_context.h>
+ 
++/*
++ * The default value should be high enough to not crash a system that randomly
++ * crashes its kernel from time to time, but low enough to at least not permit
++ * overflowing 32-bit refcounts or the ldsem writer count.
++ */
++static unsigned int oops_limit = 10000;
++
++#ifdef CONFIG_SYSCTL
++static struct ctl_table kern_exit_table[] = {
++	{
++		.procname       = "oops_limit",
++		.data           = &oops_limit,
++		.maxlen         = sizeof(oops_limit),
++		.mode           = 0644,
++		.proc_handler   = proc_douintvec,
++	},
++	{ }
++};
++
++static __init int kernel_exit_sysctls_init(void)
++{
++	register_sysctl_init("kernel", kern_exit_table);
++	return 0;
++}
++late_initcall(kernel_exit_sysctls_init);
++#endif
++
++static atomic_t oops_count = ATOMIC_INIT(0);
++
++#ifdef CONFIG_SYSFS
++static ssize_t oops_count_show(struct kobject *kobj, struct kobj_attribute *attr,
++			       char *page)
++{
++	return sysfs_emit(page, "%d\n", atomic_read(&oops_count));
++}
++
++static struct kobj_attribute oops_count_attr = __ATTR_RO(oops_count);
++
++static __init int kernel_exit_sysfs_init(void)
++{
++	sysfs_add_file_to_group(kernel_kobj, &oops_count_attr.attr, NULL);
++	return 0;
++}
++late_initcall(kernel_exit_sysfs_init);
++#endif
++
+ static void __unhash_process(struct task_struct *p, bool group_dead)
+ {
+ 	nr_threads--;
+@@ -884,6 +931,7 @@ void __noreturn make_task_dead(int signr)
+ 	 * Then do everything else.
+ 	 */
+ 	struct task_struct *tsk = current;
++	unsigned int limit;
+ 
+ 	if (unlikely(in_interrupt()))
+ 		panic("Aiee, killing interrupt handler!");
+@@ -897,6 +945,20 @@ void __noreturn make_task_dead(int signr)
+ 		preempt_count_set(PREEMPT_ENABLED);
+ 	}
+ 
++	/*
++	 * Every time the system oopses, if the oops happens while a reference
++	 * to an object was held, the reference leaks.
++	 * If the oops doesn't also leak memory, repeated oopsing can cause
++	 * reference counters to wrap around (if they're not using refcount_t).
++	 * This means that repeated oopsing can make unexploitable-looking bugs
++	 * exploitable through repeated oopsing.
++	 * To make sure this can't happen, place an upper bound on how often the
++	 * kernel may oops without panic().
++	 */
++	limit = READ_ONCE(oops_limit);
++	if (atomic_inc_return(&oops_count) >= limit && limit)
++		panic("Oopsed too often (kernel.oops_limit is %d)", limit);
++
+ 	/*
+ 	 * We're taking recursive faults here in make_task_dead. Safest is to just
+ 	 * leave this task alone and wait for reboot.
+diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
+index 67794404042a5..e95ce7d7a76e7 100644
+--- a/kernel/kcsan/report.c
++++ b/kernel/kcsan/report.c
+@@ -492,8 +492,7 @@ static void print_report(enum kcsan_value_change value_change,
+ 	dump_stack_print_info(KERN_DEFAULT);
+ 	pr_err("==================================================================\n");
+ 
+-	if (panic_on_warn)
+-		panic("panic_on_warn set ...\n");
++	check_panic_on_warn("KCSAN");
+ }
+ 
+ static void release_report(unsigned long *flags, struct other_info *other_info)
+diff --git a/kernel/panic.c b/kernel/panic.c
+index da323209f5833..7834c9854e026 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -32,6 +32,7 @@
+ #include <linux/bug.h>
+ #include <linux/ratelimit.h>
+ #include <linux/debugfs.h>
++#include <linux/sysfs.h>
+ #include <trace/events/error_report.h>
+ #include <asm/sections.h>
+ 
+@@ -58,6 +59,7 @@ bool crash_kexec_post_notifiers;
+ int panic_on_warn __read_mostly;
+ unsigned long panic_on_taint;
+ bool panic_on_taint_nousertaint = false;
++static unsigned int warn_limit __read_mostly;
+ 
+ int panic_timeout = CONFIG_PANIC_TIMEOUT;
+ EXPORT_SYMBOL_GPL(panic_timeout);
+@@ -75,8 +77,9 @@ ATOMIC_NOTIFIER_HEAD(panic_notifier_list);
+ 
+ EXPORT_SYMBOL(panic_notifier_list);
+ 
+-#if defined(CONFIG_SMP) && defined(CONFIG_SYSCTL)
++#ifdef CONFIG_SYSCTL
+ static struct ctl_table kern_panic_table[] = {
++#ifdef CONFIG_SMP
+ 	{
+ 		.procname       = "oops_all_cpu_backtrace",
+ 		.data           = &sysctl_oops_all_cpu_backtrace,
+@@ -86,6 +89,14 @@ static struct ctl_table kern_panic_table[] = {
+ 		.extra1         = SYSCTL_ZERO,
+ 		.extra2         = SYSCTL_ONE,
+ 	},
++#endif
++	{
++		.procname       = "warn_limit",
++		.data           = &warn_limit,
++		.maxlen         = sizeof(warn_limit),
++		.mode           = 0644,
++		.proc_handler   = proc_douintvec,
++	},
+ 	{ }
+ };
+ 
+@@ -97,6 +108,25 @@ static __init int kernel_panic_sysctls_init(void)
+ late_initcall(kernel_panic_sysctls_init);
+ #endif
+ 
++static atomic_t warn_count = ATOMIC_INIT(0);
++
++#ifdef CONFIG_SYSFS
++static ssize_t warn_count_show(struct kobject *kobj, struct kobj_attribute *attr,
++			       char *page)
++{
++	return sysfs_emit(page, "%d\n", atomic_read(&warn_count));
++}
++
++static struct kobj_attribute warn_count_attr = __ATTR_RO(warn_count);
++
++static __init int kernel_panic_sysfs_init(void)
++{
++	sysfs_add_file_to_group(kernel_kobj, &warn_count_attr.attr, NULL);
++	return 0;
++}
++late_initcall(kernel_panic_sysfs_init);
++#endif
++
+ static long no_blink(int state)
+ {
+ 	return 0;
+@@ -199,6 +229,19 @@ static void panic_print_sys_info(bool console_flush)
+ 		ftrace_dump(DUMP_ALL);
+ }
+ 
++void check_panic_on_warn(const char *origin)
++{
++	unsigned int limit;
++
++	if (panic_on_warn)
++		panic("%s: panic_on_warn set ...\n", origin);
++
++	limit = READ_ONCE(warn_limit);
++	if (atomic_inc_return(&warn_count) >= limit && limit)
++		panic("%s: system warned too often (kernel.warn_limit is %d)",
++		      origin, limit);
++}
++
+ /**
+  *	panic - halt the system
+  *	@fmt: The text string to print
+@@ -617,8 +660,7 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
+ 	if (regs)
+ 		show_regs(regs);
+ 
+-	if (panic_on_warn)
+-		panic("panic_on_warn set ...\n");
++	check_panic_on_warn("kernel");
+ 
+ 	if (!regs)
+ 		dump_stack();
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 172ec79b66f6c..f730b6fe94a7f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5778,8 +5778,7 @@ static noinline void __schedule_bug(struct task_struct *prev)
+ 		pr_err("Preemption disabled at:");
+ 		print_ip_sym(KERN_ERR, preempt_disable_ip);
+ 	}
+-	if (panic_on_warn)
+-		panic("scheduling while atomic\n");
++	check_panic_on_warn("scheduling while atomic");
+ 
+ 	dump_stack();
+ 	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 5fd54bf0e8867..88b31f096fb2d 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1442,6 +1442,8 @@ static int do_prlimit(struct task_struct *tsk, unsigned int resource,
+ 
+ 	if (resource >= RLIM_NLIMITS)
+ 		return -EINVAL;
++	resource = array_index_nospec(resource, RLIM_NLIMITS);
++
+ 	if (new_rlim) {
+ 		if (new_rlim->rlim_cur > new_rlim->rlim_max)
+ 			return -EINVAL;
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index 36bd75e334263..60c7099857a05 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -154,8 +154,7 @@ static void ubsan_epilogue(void)
+ 
+ 	current->in_ubsan--;
+ 
+-	if (panic_on_warn)
+-		panic("panic_on_warn set ...\n");
++	check_panic_on_warn("UBSAN");
+ }
+ 
+ void __ubsan_handle_divrem_overflow(void *_data, void *lhs, void *rhs)
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 9c251faeb6f59..52475c4262e45 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -94,6 +94,8 @@ static int hugetlb_acct_memory(struct hstate *h, long delta);
+ static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
+ static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
+ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
++static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
++		unsigned long start, unsigned long end);
+ 
+ static inline bool subpool_is_free(struct hugepage_subpool *spool)
+ {
+@@ -4825,6 +4827,25 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
+ {
+ 	if (addr & ~(huge_page_mask(hstate_vma(vma))))
+ 		return -EINVAL;
++
++	/*
++	 * PMD sharing is only possible for PUD_SIZE-aligned address ranges
++	 * in HugeTLB VMAs. If we will lose PUD_SIZE alignment due to this
++	 * split, unshare PMDs in the PUD_SIZE interval surrounding addr now.
++	 */
++	if (addr & ~PUD_MASK) {
++		/*
++		 * hugetlb_vm_op_split is called right before we attempt to
++		 * split the VMA. We will need to unshare PMDs in the old and
++		 * new VMAs, so let's unshare before we split.
++		 */
++		unsigned long floor = addr & PUD_MASK;
++		unsigned long ceil = floor + PUD_SIZE;
++
++		if (floor >= vma->vm_start && ceil <= vma->vm_end)
++			hugetlb_unshare_pmds(vma, floor, ceil);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -6583,8 +6604,17 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ 		spinlock_t *ptl;
+ 		ptep = huge_pte_offset(mm, address, psize);
+ 		if (!ptep) {
+-			address |= last_addr_mask;
+-			continue;
++			if (!uffd_wp) {
++				address |= last_addr_mask;
++				continue;
++			}
++			/*
++			 * Userfaultfd wr-protect requires pgtable
++			 * pre-allocations to install pte markers.
++			 */
++			ptep = huge_pte_alloc(mm, vma, address, psize);
++			if (!ptep)
++				break;
+ 		}
+ 		ptl = huge_pte_lock(h, mm, ptep);
+ 		if (huge_pmd_unshare(mm, vma, address, ptep)) {
+@@ -6602,16 +6632,13 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ 		}
+ 		pte = huge_ptep_get(ptep);
+ 		if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
+-			spin_unlock(ptl);
+-			continue;
+-		}
+-		if (unlikely(is_hugetlb_entry_migration(pte))) {
++			/* Nothing to do. */
++		} else if (unlikely(is_hugetlb_entry_migration(pte))) {
+ 			swp_entry_t entry = pte_to_swp_entry(pte);
+ 			struct page *page = pfn_swap_entry_to_page(entry);
++			pte_t newpte = pte;
+ 
+-			if (!is_readable_migration_entry(entry)) {
+-				pte_t newpte;
+-
++			if (is_writable_migration_entry(entry)) {
+ 				if (PageAnon(page))
+ 					entry = make_readable_exclusive_migration_entry(
+ 								swp_offset(entry));
+@@ -6619,25 +6646,22 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+ 					entry = make_readable_migration_entry(
+ 								swp_offset(entry));
+ 				newpte = swp_entry_to_pte(entry);
+-				if (uffd_wp)
+-					newpte = pte_swp_mkuffd_wp(newpte);
+-				else if (uffd_wp_resolve)
+-					newpte = pte_swp_clear_uffd_wp(newpte);
+-				set_huge_pte_at(mm, address, ptep, newpte);
+ 				pages++;
+ 			}
+-			spin_unlock(ptl);
+-			continue;
+-		}
+-		if (unlikely(pte_marker_uffd_wp(pte))) {
+-			/*
+-			 * This is changing a non-present pte into a none pte,
+-			 * no need for huge_ptep_modify_prot_start/commit().
+-			 */
++
++			if (uffd_wp)
++				newpte = pte_swp_mkuffd_wp(newpte);
++			else if (uffd_wp_resolve)
++				newpte = pte_swp_clear_uffd_wp(newpte);
++			if (!pte_same(pte, newpte))
++				set_huge_pte_at(mm, address, ptep, newpte);
++		} else if (unlikely(is_pte_marker(pte))) {
++			/* No other markers apply for now. */
++			WARN_ON_ONCE(!pte_marker_uffd_wp(pte));
+ 			if (uffd_wp_resolve)
++				/* Safe to modify directly (non-present->none). */
+ 				huge_pte_clear(mm, address, ptep, psize);
+-		}
+-		if (!huge_pte_none(pte)) {
++		} else if (!huge_pte_none(pte)) {
+ 			pte_t old_pte;
+ 			unsigned int shift = huge_page_shift(hstate_vma(vma));
+ 
+@@ -7386,26 +7410,21 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
+ 	}
+ }
+ 
+-/*
+- * This function will unconditionally remove all the shared pmd pgtable entries
+- * within the specific vma for a hugetlbfs memory range.
+- */
+-void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
++static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
++				   unsigned long start,
++				   unsigned long end)
+ {
+ 	struct hstate *h = hstate_vma(vma);
+ 	unsigned long sz = huge_page_size(h);
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	struct mmu_notifier_range range;
+-	unsigned long address, start, end;
++	unsigned long address;
+ 	spinlock_t *ptl;
+ 	pte_t *ptep;
+ 
+ 	if (!(vma->vm_flags & VM_MAYSHARE))
+ 		return;
+ 
+-	start = ALIGN(vma->vm_start, PUD_SIZE);
+-	end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
+-
+ 	if (start >= end)
+ 		return;
+ 
+@@ -7437,6 +7456,16 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+ 	mmu_notifier_invalidate_range_end(&range);
+ }
+ 
++/*
++ * This function will unconditionally remove all the shared pmd pgtable entries
++ * within the specific vma for a hugetlbfs memory range.
++ */
++void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
++{
++	hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
++			ALIGN_DOWN(vma->vm_end, PUD_SIZE));
++}
++
+ #ifdef CONFIG_CMA
+ static bool cma_reserve_called __initdata;
+ 
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index df3602062bfd6..cc98dfdd3ed2f 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -164,8 +164,8 @@ static void end_report(unsigned long *flags, void *addr)
+ 				       (unsigned long)addr);
+ 	pr_err("==================================================================\n");
+ 	spin_unlock_irqrestore(&report_lock, *flags);
+-	if (panic_on_warn && !test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+-		panic("panic_on_warn set ...\n");
++	if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
++		check_panic_on_warn("KASAN");
+ 	if (kasan_arg_fault == KASAN_ARG_FAULT_PANIC)
+ 		panic("kasan.fault=panic set ...\n");
+ 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+diff --git a/mm/kfence/report.c b/mm/kfence/report.c
+index 46ecea18c4ca0..60205f1257ef2 100644
+--- a/mm/kfence/report.c
++++ b/mm/kfence/report.c
+@@ -273,8 +273,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
+ 
+ 	lockdep_on();
+ 
+-	if (panic_on_warn)
+-		panic("panic_on_warn set ...\n");
++	check_panic_on_warn("KFENCE");
+ 
+ 	/* We encountered a memory safety error, taint the kernel! */
+ 	add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 3703a56571c12..c982b250aa317 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1467,14 +1467,6 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
+ 	if (!hugepage_vma_check(vma, vma->vm_flags, false, false, false))
+ 		return SCAN_VMA_CHECK;
+ 
+-	/*
+-	 * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
+-	 * that got written to. Without this, we'd have to also lock the
+-	 * anon_vma if one exists.
+-	 */
+-	if (vma->anon_vma)
+-		return SCAN_VMA_CHECK;
+-
+ 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
+ 	if (userfaultfd_wp(vma))
+ 		return SCAN_PTE_UFFD_WP;
+@@ -1574,8 +1566,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
+ 	}
+ 
+ 	/* step 4: remove pte entries */
++	/* we make no change to anon, but protect concurrent anon page lookup */
++	if (vma->anon_vma)
++		anon_vma_lock_write(vma->anon_vma);
++
+ 	collapse_and_free_pmd(mm, vma, haddr, pmd);
+ 
++	if (vma->anon_vma)
++		anon_vma_unlock_write(vma->anon_vma);
+ 	i_mmap_unlock_write(vma->vm_file->f_mapping);
+ 
+ maybe_install_pmd:
+@@ -2646,7 +2644,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
+ 				goto out_nolock;
+ 			}
+ 
+-			hend = vma->vm_end & HPAGE_PMD_MASK;
++			hend = min(hend, vma->vm_end & HPAGE_PMD_MASK);
+ 		}
+ 		mmap_assert_locked(mm);
+ 		memset(cc->node_load, 0, sizeof(cc->node_load));
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 54abd46e60078..1777148868494 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1524,6 +1524,10 @@ int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
+ 	if (vma_soft_dirty_enabled(vma) && !is_vm_hugetlb_page(vma))
+ 		return 1;
+ 
++	/* Do we need write faults for uffd-wp tracking? */
++	if (userfaultfd_wp(vma))
++		return 1;
++
+ 	/* Specialty mapping? */
+ 	if (vm_flags & VM_PFNMAP)
+ 		return 0;
+diff --git a/mm/nommu.c b/mm/nommu.c
+index 214c70e1d0594..5b83938ecb67c 100644
+--- a/mm/nommu.c
++++ b/mm/nommu.c
+@@ -559,7 +559,6 @@ void vma_mas_remove(struct vm_area_struct *vma, struct ma_state *mas)
+ 
+ static void setup_vma_to_mm(struct vm_area_struct *vma, struct mm_struct *mm)
+ {
+-	mm->map_count++;
+ 	vma->vm_mm = mm;
+ 
+ 	/* add the VMA to the mapping */
+@@ -587,6 +586,7 @@ static void mas_add_vma_to_mm(struct ma_state *mas, struct mm_struct *mm,
+ 	BUG_ON(!vma->vm_region);
+ 
+ 	setup_vma_to_mm(vma, mm);
++	mm->map_count++;
+ 
+ 	/* add the VMA to the tree */
+ 	vma_mas_store(vma, mas);
+@@ -1240,6 +1240,7 @@ share:
+ error_just_free:
+ 	up_write(&nommu_region_sem);
+ error:
++	mas_destroy(&mas);
+ 	if (region->vm_file)
+ 		fput(region->vm_file);
+ 	kmem_cache_free(vm_region_jar, region);
+@@ -1250,7 +1251,6 @@ error:
+ 
+ sharing_violation:
+ 	up_write(&nommu_region_sem);
+-	mas_destroy(&mas);
+ 	pr_warn("Attempt to share mismatched mappings\n");
+ 	ret = -EINVAL;
+ 	goto error;
+@@ -1347,6 +1347,7 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	if (vma->vm_file)
+ 		return -ENOMEM;
+ 
++	mm = vma->vm_mm;
+ 	if (mm->map_count >= sysctl_max_map_count)
+ 		return -ENOMEM;
+ 
+@@ -1398,6 +1399,7 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	mas_set_range(&mas, vma->vm_start, vma->vm_end - 1);
+ 	mas_store(&mas, vma);
+ 	vma_mas_store(new, &mas);
++	mm->map_count++;
+ 	return 0;
+ 
+ err_mas_preallocate:
+@@ -1509,7 +1511,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len, struct list
+ erase_whole_vma:
+ 	if (delete_vma_from_mm(vma))
+ 		ret = -ENOMEM;
+-	delete_vma(mm, vma);
++	else
++		delete_vma(mm, vma);
+ 	return ret;
+ }
+ 
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 82911fefc2d5e..a8d9fd039d0aa 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -472,12 +472,10 @@ bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode,
+ 	if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) ||
+ 	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
+ 		return false;
+-	if (shmem_huge_force)
+-		return true;
+-	if (shmem_huge == SHMEM_HUGE_FORCE)
+-		return true;
+ 	if (shmem_huge == SHMEM_HUGE_DENY)
+ 		return false;
++	if (shmem_huge_force || shmem_huge == SHMEM_HUGE_FORCE)
++		return true;
+ 
+ 	switch (SHMEM_SB(inode->i_sb)->huge) {
+ 	case SHMEM_HUGE_ALWAYS:
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 3a68d9bc43b8f..8d6c8cbfe1de4 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -3554,7 +3554,7 @@ static const struct hci_init_stage hci_init2[] = {
+ static int hci_le_read_buffer_size_sync(struct hci_dev *hdev)
+ {
+ 	/* Use Read LE Buffer Size V2 if supported */
+-	if (hdev->commands[41] & 0x20)
++	if (iso_capable(hdev) && hdev->commands[41] & 0x20)
+ 		return __hci_cmd_sync_status(hdev,
+ 					     HCI_OP_LE_READ_BUFFER_SIZE_V2,
+ 					     0, NULL, HCI_CMD_TIMEOUT);
+@@ -3579,10 +3579,10 @@ static int hci_le_read_supported_states_sync(struct hci_dev *hdev)
+ 
+ /* LE Controller init stage 2 command sequence */
+ static const struct hci_init_stage le_init2[] = {
+-	/* HCI_OP_LE_READ_BUFFER_SIZE */
+-	HCI_INIT(hci_le_read_buffer_size_sync),
+ 	/* HCI_OP_LE_READ_LOCAL_FEATURES */
+ 	HCI_INIT(hci_le_read_local_features_sync),
++	/* HCI_OP_LE_READ_BUFFER_SIZE */
++	HCI_INIT(hci_le_read_buffer_size_sync),
+ 	/* HCI_OP_LE_READ_SUPPORTED_STATES */
+ 	HCI_INIT(hci_le_read_supported_states_sync),
+ 	{}
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 81fe2422fe58a..038398d41a937 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -2094,7 +2094,8 @@ static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
+ 		return n_stats;
+ 	if (n_stats > S32_MAX / sizeof(u64))
+ 		return -ENOMEM;
+-	WARN_ON_ONCE(!n_stats);
++	if (WARN_ON_ONCE(!n_stats))
++		return -EOPNOTSUPP;
+ 
+ 	if (copy_from_user(&stats, useraddr, sizeof(stats)))
+ 		return -EFAULT;
+diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
+index 05b6077b9f2c3..2aa442128630e 100644
+--- a/net/ipv4/tcp_ulp.c
++++ b/net/ipv4/tcp_ulp.c
+@@ -139,7 +139,7 @@ static int __tcp_set_ulp(struct sock *sk, const struct tcp_ulp_ops *ulp_ops)
+ 	if (sk->sk_socket)
+ 		clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
+ 
+-	err = -EINVAL;
++	err = -ENOTCONN;
+ 	if (!ulp_ops->clone && sk->sk_state == TCP_LISTEN)
+ 		goto out_err;
+ 
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index 07c892aa8c73f..b2e40465289d6 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -491,7 +491,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ {
+ 	struct tid_ampdu_tx *tid_tx;
+ 	struct ieee80211_local *local = sta->local;
+-	struct ieee80211_sub_if_data *sdata = sta->sdata;
++	struct ieee80211_sub_if_data *sdata;
+ 	struct ieee80211_ampdu_params params = {
+ 		.sta = &sta->sta,
+ 		.action = IEEE80211_AMPDU_TX_START,
+@@ -521,6 +521,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 	 */
+ 	synchronize_net();
+ 
++	sdata = sta->sdata;
+ 	params.ssn = sta->tid_seq[tid] >> 4;
+ 	ret = drv_ampdu_action(local, sdata, &params);
+ 	tid_tx->ssn = params.ssn;
+@@ -534,6 +535,9 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 		 */
+ 		set_bit(HT_AGG_STATE_DRV_READY, &tid_tx->state);
+ 	} else if (ret) {
++		if (!sdata)
++			return;
++
+ 		ht_dbg(sdata,
+ 		       "BA request denied - HW unavailable for %pM tid %d\n",
+ 		       sta->sta.addr, tid);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 8c8ef87997a8a..3c66e03774fbe 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -147,6 +147,7 @@ static int ieee80211_set_ap_mbssid_options(struct ieee80211_sub_if_data *sdata,
+ 	link_conf->bssid_index = 0;
+ 	link_conf->nontransmitted = false;
+ 	link_conf->ema_ap = false;
++	link_conf->bssid_indicator = 0;
+ 
+ 	if (sdata->vif.type != NL80211_IFTYPE_AP || !params.tx_wdev)
+ 		return -EINVAL;
+@@ -1511,6 +1512,12 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
+ 	kfree(link_conf->ftmr_params);
+ 	link_conf->ftmr_params = NULL;
+ 
++	sdata->vif.mbssid_tx_vif = NULL;
++	link_conf->bssid_index = 0;
++	link_conf->nontransmitted = false;
++	link_conf->ema_ap = false;
++	link_conf->bssid_indicator = 0;
++
+ 	__sta_info_flush(sdata, true);
+ 	ieee80211_free_keys(sdata, true);
+ 
+diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
+index 5392ffa182704..c08d3c9a4a177 100644
+--- a/net/mac80211/driver-ops.c
++++ b/net/mac80211/driver-ops.c
+@@ -391,6 +391,9 @@ int drv_ampdu_action(struct ieee80211_local *local,
+ 
+ 	might_sleep();
+ 
++	if (!sdata)
++		return -EIO;
++
+ 	sdata = get_bss_sdata(sdata);
+ 	if (!check_sdata_in_driver(sdata))
+ 		return -EIO;
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 46f08ec5ed760..8dd3c10a99e0b 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -364,7 +364,9 @@ static int ieee80211_check_concurrent_iface(struct ieee80211_sub_if_data *sdata,
+ 
+ 			/* No support for VLAN with MLO yet */
+ 			if (iftype == NL80211_IFTYPE_AP_VLAN &&
+-			    nsdata->wdev.use_4addr)
++			    sdata->wdev.use_4addr &&
++			    nsdata->vif.type == NL80211_IFTYPE_AP &&
++			    nsdata->vif.valid_links)
+ 				return -EOPNOTSUPP;
+ 
+ 			/*
+@@ -2258,7 +2260,6 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		ret = cfg80211_register_netdevice(ndev);
+ 		if (ret) {
+-			ieee80211_if_free(ndev);
+ 			free_netdev(ndev);
+ 			return ret;
+ 		}
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index f99416d2e1441..3262ebb240926 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4070,6 +4070,58 @@ static void ieee80211_invoke_rx_handlers(struct ieee80211_rx_data *rx)
+ #undef CALL_RXH
+ }
+ 
++static bool
++ieee80211_rx_is_valid_sta_link_id(struct ieee80211_sta *sta, u8 link_id)
++{
++	if (!sta->mlo)
++		return false;
++
++	return !!(sta->valid_links & BIT(link_id));
++}
++
++static bool ieee80211_rx_data_set_link(struct ieee80211_rx_data *rx,
++				       u8 link_id)
++{
++	rx->link_id = link_id;
++	rx->link = rcu_dereference(rx->sdata->link[link_id]);
++
++	if (!rx->sta)
++		return rx->link;
++
++	if (!ieee80211_rx_is_valid_sta_link_id(&rx->sta->sta, link_id))
++		return false;
++
++	rx->link_sta = rcu_dereference(rx->sta->link[link_id]);
++
++	return rx->link && rx->link_sta;
++}
++
++static bool ieee80211_rx_data_set_sta(struct ieee80211_rx_data *rx,
++				      struct ieee80211_sta *pubsta,
++				      int link_id)
++{
++	struct sta_info *sta;
++
++	sta = container_of(pubsta, struct sta_info, sta);
++
++	rx->link_id = link_id;
++	rx->sta = sta;
++
++	if (sta) {
++		rx->local = sta->sdata->local;
++		if (!rx->sdata)
++			rx->sdata = sta->sdata;
++		rx->link_sta = &sta->deflink;
++	}
++
++	if (link_id < 0)
++		rx->link = &rx->sdata->deflink;
++	else if (!ieee80211_rx_data_set_link(rx, link_id))
++		return false;
++
++	return true;
++}
++
+ /*
+  * This function makes calls into the RX path, therefore
+  * it has to be invoked under RCU read lock.
+@@ -4078,16 +4130,19 @@ void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid)
+ {
+ 	struct sk_buff_head frames;
+ 	struct ieee80211_rx_data rx = {
+-		.sta = sta,
+-		.sdata = sta->sdata,
+-		.local = sta->local,
+ 		/* This is OK -- must be QoS data frame */
+ 		.security_idx = tid,
+ 		.seqno_idx = tid,
+-		.link_id = -1,
+ 	};
+ 	struct tid_ampdu_rx *tid_agg_rx;
+-	u8 link_id;
++	int link_id = -1;
++
++	/* FIXME: statistics won't be right with this */
++	if (sta->sta.valid_links)
++		link_id = ffs(sta->sta.valid_links) - 1;
++
++	if (!ieee80211_rx_data_set_sta(&rx, &sta->sta, link_id))
++		return;
+ 
+ 	tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
+ 	if (!tid_agg_rx)
+@@ -4107,10 +4162,6 @@ void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid)
+ 		};
+ 		drv_event_callback(rx.local, rx.sdata, &event);
+ 	}
+-	/* FIXME: statistics won't be right with this */
+-	link_id = sta->sta.valid_links ? ffs(sta->sta.valid_links) - 1 : 0;
+-	rx.link = rcu_dereference(sta->sdata->link[link_id]);
+-	rx.link_sta = rcu_dereference(sta->link[link_id]);
+ 
+ 	ieee80211_rx_handlers(&rx, &frames);
+ }
+@@ -4126,7 +4177,6 @@ void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
+ 		/* This is OK -- must be QoS data frame */
+ 		.security_idx = tid,
+ 		.seqno_idx = tid,
+-		.link_id = -1,
+ 	};
+ 	int i, diff;
+ 
+@@ -4137,10 +4187,8 @@ void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
+ 
+ 	sta = container_of(pubsta, struct sta_info, sta);
+ 
+-	rx.sta = sta;
+-	rx.sdata = sta->sdata;
+-	rx.link = &rx.sdata->deflink;
+-	rx.local = sta->local;
++	if (!ieee80211_rx_data_set_sta(&rx, pubsta, -1))
++		return;
+ 
+ 	rcu_read_lock();
+ 	tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
+@@ -4527,15 +4575,6 @@ void ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
+ 	mutex_unlock(&local->sta_mtx);
+ }
+ 
+-static bool
+-ieee80211_rx_is_valid_sta_link_id(struct ieee80211_sta *sta, u8 link_id)
+-{
+-	if (!sta->mlo)
+-		return false;
+-
+-	return !!(sta->valid_links & BIT(link_id));
+-}
+-
+ static void ieee80211_rx_8023(struct ieee80211_rx_data *rx,
+ 			      struct ieee80211_fast_rx *fast_rx,
+ 			      int orig_len)
+@@ -4646,7 +4685,6 @@ static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,
+ 	struct sk_buff *skb = rx->skb;
+ 	struct ieee80211_hdr *hdr = (void *)skb->data;
+ 	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+-	struct sta_info *sta = rx->sta;
+ 	int orig_len = skb->len;
+ 	int hdrlen = ieee80211_hdrlen(hdr->frame_control);
+ 	int snap_offs = hdrlen;
+@@ -4658,7 +4696,6 @@ static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,
+ 		u8 da[ETH_ALEN];
+ 		u8 sa[ETH_ALEN];
+ 	} addrs __aligned(2);
+-	struct link_sta_info *link_sta;
+ 	struct ieee80211_sta_rx_stats *stats;
+ 
+ 	/* for parallel-rx, we need to have DUP_VALIDATED, otherwise we write
+@@ -4761,18 +4798,10 @@ static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,
+  drop:
+ 	dev_kfree_skb(skb);
+ 
+-	if (rx->link_id >= 0) {
+-		link_sta = rcu_dereference(sta->link[rx->link_id]);
+-		if (!link_sta)
+-			return true;
+-	} else {
+-		link_sta = &sta->deflink;
+-	}
+-
+ 	if (fast_rx->uses_rss)
+-		stats = this_cpu_ptr(link_sta->pcpu_rx_stats);
++		stats = this_cpu_ptr(rx->link_sta->pcpu_rx_stats);
+ 	else
+-		stats = &link_sta->rx_stats;
++		stats = &rx->link_sta->rx_stats;
+ 
+ 	stats->dropped++;
+ 	return true;
+@@ -4790,8 +4819,8 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx,
+ 	struct ieee80211_local *local = rx->local;
+ 	struct ieee80211_sub_if_data *sdata = rx->sdata;
+ 	struct ieee80211_hdr *hdr = (void *)skb->data;
+-	struct link_sta_info *link_sta = NULL;
+-	struct ieee80211_link_data *link;
++	struct link_sta_info *link_sta = rx->link_sta;
++	struct ieee80211_link_data *link = rx->link;
+ 
+ 	rx->skb = skb;
+ 
+@@ -4813,35 +4842,6 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx,
+ 	if (!ieee80211_accept_frame(rx))
+ 		return false;
+ 
+-	if (rx->link_id >= 0) {
+-		link = rcu_dereference(rx->sdata->link[rx->link_id]);
+-
+-		/* we might race link removal */
+-		if (!link)
+-			return true;
+-		rx->link = link;
+-
+-		if (rx->sta) {
+-			rx->link_sta =
+-				rcu_dereference(rx->sta->link[rx->link_id]);
+-			if (!rx->link_sta)
+-				return true;
+-		}
+-	} else {
+-		if (rx->sta)
+-			rx->link_sta = &rx->sta->deflink;
+-
+-		rx->link = &sdata->deflink;
+-	}
+-
+-	if (unlikely(!is_multicast_ether_addr(hdr->addr1) &&
+-		     rx->link_id >= 0 && rx->sta && rx->sta->sta.mlo)) {
+-		link_sta = rcu_dereference(rx->sta->link[rx->link_id]);
+-
+-		if (WARN_ON_ONCE(!link_sta))
+-			return true;
+-	}
+-
+ 	if (!consume) {
+ 		struct skb_shared_hwtstamps *shwt;
+ 
+@@ -4861,7 +4861,7 @@ static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx,
+ 		shwt->hwtstamp = skb_hwtstamps(skb)->hwtstamp;
+ 	}
+ 
+-	if (unlikely(link_sta)) {
++	if (unlikely(rx->sta && rx->sta->sta.mlo)) {
+ 		/* translate to MLD addresses */
+ 		if (ether_addr_equal(link->conf->addr, hdr->addr1))
+ 			ether_addr_copy(hdr->addr1, rx->sdata->vif.addr);
+@@ -4891,6 +4891,7 @@ static void __ieee80211_rx_handle_8023(struct ieee80211_hw *hw,
+ 	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+ 	struct ieee80211_fast_rx *fast_rx;
+ 	struct ieee80211_rx_data rx;
++	int link_id = -1;
+ 
+ 	memset(&rx, 0, sizeof(rx));
+ 	rx.skb = skb;
+@@ -4907,12 +4908,8 @@ static void __ieee80211_rx_handle_8023(struct ieee80211_hw *hw,
+ 	if (!pubsta)
+ 		goto drop;
+ 
+-	rx.sta = container_of(pubsta, struct sta_info, sta);
+-	rx.sdata = rx.sta->sdata;
+-
+-	if (status->link_valid &&
+-	    !ieee80211_rx_is_valid_sta_link_id(pubsta, status->link_id))
+-		goto drop;
++	if (status->link_valid)
++		link_id = status->link_id;
+ 
+ 	/*
+ 	 * TODO: Should the frame be dropped if the right link_id is not
+@@ -4921,19 +4918,8 @@ static void __ieee80211_rx_handle_8023(struct ieee80211_hw *hw,
+ 	 * link_id is used only for stats purpose and updating the stats on
+ 	 * the deflink is fine?
+ 	 */
+-	if (status->link_valid)
+-		rx.link_id = status->link_id;
+-
+-	if (rx.link_id >= 0) {
+-		struct ieee80211_link_data *link;
+-
+-		link =  rcu_dereference(rx.sdata->link[rx.link_id]);
+-		if (!link)
+-			goto drop;
+-		rx.link = link;
+-	} else {
+-		rx.link = &rx.sdata->deflink;
+-	}
++	if (!ieee80211_rx_data_set_sta(&rx, pubsta, link_id))
++		goto drop;
+ 
+ 	fast_rx = rcu_dereference(rx.sta->fast_rx);
+ 	if (!fast_rx)
+@@ -4951,6 +4937,8 @@ static bool ieee80211_rx_for_interface(struct ieee80211_rx_data *rx,
+ {
+ 	struct link_sta_info *link_sta;
+ 	struct ieee80211_hdr *hdr = (void *)skb->data;
++	struct sta_info *sta;
++	int link_id = -1;
+ 
+ 	/*
+ 	 * Look up link station first, in case there's a
+@@ -4960,24 +4948,19 @@ static bool ieee80211_rx_for_interface(struct ieee80211_rx_data *rx,
+ 	 */
+ 	link_sta = link_sta_info_get_bss(rx->sdata, hdr->addr2);
+ 	if (link_sta) {
+-		rx->sta = link_sta->sta;
+-		rx->link_id = link_sta->link_id;
++		sta = link_sta->sta;
++		link_id = link_sta->link_id;
+ 	} else {
+ 		struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
+ 
+-		rx->sta = sta_info_get_bss(rx->sdata, hdr->addr2);
+-		if (rx->sta) {
+-			if (status->link_valid &&
+-			    !ieee80211_rx_is_valid_sta_link_id(&rx->sta->sta,
+-							       status->link_id))
+-				return false;
+-
+-			rx->link_id = status->link_valid ? status->link_id : -1;
+-		} else {
+-			rx->link_id = -1;
+-		}
++		sta = sta_info_get_bss(rx->sdata, hdr->addr2);
++		if (status->link_valid)
++			link_id = status->link_id;
+ 	}
+ 
++	if (!ieee80211_rx_data_set_sta(rx, &sta->sta, link_id))
++		return false;
++
+ 	return ieee80211_prepare_and_rx_handle(rx, skb, consume);
+ }
+ 
+@@ -5036,19 +5019,15 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+ 
+ 	if (ieee80211_is_data(fc)) {
+ 		struct sta_info *sta, *prev_sta;
+-		u8 link_id = status->link_id;
++		int link_id = -1;
+ 
+-		if (pubsta) {
+-			rx.sta = container_of(pubsta, struct sta_info, sta);
+-			rx.sdata = rx.sta->sdata;
++		if (status->link_valid)
++			link_id = status->link_id;
+ 
+-			if (status->link_valid &&
+-			    !ieee80211_rx_is_valid_sta_link_id(pubsta, link_id))
++		if (pubsta) {
++			if (!ieee80211_rx_data_set_sta(&rx, pubsta, link_id))
+ 				goto out;
+ 
+-			if (status->link_valid)
+-				rx.link_id = status->link_id;
+-
+ 			/*
+ 			 * In MLO connection, fetch the link_id using addr2
+ 			 * when the driver does not pass link_id in status.
+@@ -5066,7 +5045,7 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+ 				if (!link_sta)
+ 					goto out;
+ 
+-				rx.link_id = link_sta->link_id;
++				ieee80211_rx_data_set_link(&rx, link_sta->link_id);
+ 			}
+ 
+ 			if (ieee80211_prepare_and_rx_handle(&rx, skb, true))
+@@ -5082,30 +5061,27 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+ 				continue;
+ 			}
+ 
+-			if ((status->link_valid &&
+-			     !ieee80211_rx_is_valid_sta_link_id(&prev_sta->sta,
+-								link_id)) ||
+-			    (!status->link_valid && prev_sta->sta.mlo))
++			rx.sdata = prev_sta->sdata;
++			if (!ieee80211_rx_data_set_sta(&rx, &prev_sta->sta,
++						       link_id))
++				goto out;
++
++			if (!status->link_valid && prev_sta->sta.mlo)
+ 				continue;
+ 
+-			rx.link_id = status->link_valid ? link_id : -1;
+-			rx.sta = prev_sta;
+-			rx.sdata = prev_sta->sdata;
+ 			ieee80211_prepare_and_rx_handle(&rx, skb, false);
+ 
+ 			prev_sta = sta;
+ 		}
+ 
+ 		if (prev_sta) {
+-			if ((status->link_valid &&
+-			     !ieee80211_rx_is_valid_sta_link_id(&prev_sta->sta,
+-								link_id)) ||
+-			    (!status->link_valid && prev_sta->sta.mlo))
++			rx.sdata = prev_sta->sdata;
++			if (!ieee80211_rx_data_set_sta(&rx, &prev_sta->sta,
++						       link_id))
+ 				goto out;
+ 
+-			rx.link_id = status->link_valid ? link_id : -1;
+-			rx.sta = prev_sta;
+-			rx.sdata = prev_sta->sdata;
++			if (!status->link_valid && prev_sta->sta.mlo)
++				goto out;
+ 
+ 			if (ieee80211_prepare_and_rx_handle(&rx, skb, true))
+ 				return;
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 45e2a48397b95..70f0ced3ca86e 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -420,6 +420,31 @@ void mptcp_pm_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk)
+ 	}
+ }
+ 
++/* if sk is ipv4 or ipv6_only allows only same-family local and remote addresses,
++ * otherwise allow any matching local/remote pair
++ */
++bool mptcp_pm_addr_families_match(const struct sock *sk,
++				  const struct mptcp_addr_info *loc,
++				  const struct mptcp_addr_info *rem)
++{
++	bool mptcp_is_v4 = sk->sk_family == AF_INET;
++
++#if IS_ENABLED(CONFIG_MPTCP_IPV6)
++	bool loc_is_v4 = loc->family == AF_INET || ipv6_addr_v4mapped(&loc->addr6);
++	bool rem_is_v4 = rem->family == AF_INET || ipv6_addr_v4mapped(&rem->addr6);
++
++	if (mptcp_is_v4)
++		return loc_is_v4 && rem_is_v4;
++
++	if (ipv6_only_sock(sk))
++		return !loc_is_v4 && !rem_is_v4;
++
++	return loc_is_v4 == rem_is_v4;
++#else
++	return mptcp_is_v4 && loc->family == AF_INET && rem->family == AF_INET;
++#endif
++}
++
+ void mptcp_pm_data_reset(struct mptcp_sock *msk)
+ {
+ 	u8 pm_type = mptcp_get_pm_type(sock_net((struct sock *)msk));
+diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
+index 0430415357ba3..c1d6cd5b188c2 100644
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -294,6 +294,13 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	sk = &msk->sk.icsk_inet.sk;
++
++	if (!mptcp_pm_addr_families_match(sk, &addr_l, &addr_r)) {
++		GENL_SET_ERR_MSG(info, "families mismatch");
++		err = -EINVAL;
++		goto create_err;
++	}
++
+ 	lock_sock(sk);
+ 
+ 	err = __mptcp_subflow_connect(sk, &addr_l, &addr_r);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index e97465f0c6672..29849d77e4bf8 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -107,7 +107,7 @@ static int __mptcp_socket_create(struct mptcp_sock *msk)
+ 	struct socket *ssock;
+ 	int err;
+ 
+-	err = mptcp_subflow_create_socket(sk, &ssock);
++	err = mptcp_subflow_create_socket(sk, sk->sk_family, &ssock);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 62e9ff237b6e8..6f22ae13c9848 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -626,7 +626,8 @@ bool mptcp_addresses_equal(const struct mptcp_addr_info *a,
+ /* called with sk socket lock held */
+ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+ 			    const struct mptcp_addr_info *remote);
+-int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock);
++int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
++				struct socket **new_sock);
+ void mptcp_info2sockaddr(const struct mptcp_addr_info *info,
+ 			 struct sockaddr_storage *addr,
+ 			 unsigned short family);
+@@ -761,6 +762,9 @@ int mptcp_pm_parse_addr(struct nlattr *attr, struct genl_info *info,
+ int mptcp_pm_parse_entry(struct nlattr *attr, struct genl_info *info,
+ 			 bool require_family,
+ 			 struct mptcp_pm_addr_entry *entry);
++bool mptcp_pm_addr_families_match(const struct sock *sk,
++				  const struct mptcp_addr_info *loc,
++				  const struct mptcp_addr_info *rem);
+ void mptcp_pm_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk);
+ void mptcp_pm_nl_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk);
+ void mptcp_pm_new_connection(struct mptcp_sock *msk, const struct sock *ssk, int server_side);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 9d3701fdb2937..5220435d8e34d 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1492,7 +1492,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+ 	if (!mptcp_is_fully_established(sk))
+ 		goto err_out;
+ 
+-	err = mptcp_subflow_create_socket(sk, &sf);
++	err = mptcp_subflow_create_socket(sk, loc->family, &sf);
+ 	if (err)
+ 		goto err_out;
+ 
+@@ -1604,7 +1604,9 @@ static void mptcp_subflow_ops_undo_override(struct sock *ssk)
+ #endif
+ 		ssk->sk_prot = &tcp_prot;
+ }
+-int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
++
++int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
++				struct socket **new_sock)
+ {
+ 	struct mptcp_subflow_context *subflow;
+ 	struct net *net = sock_net(sk);
+@@ -1617,8 +1619,7 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
+ 	if (unlikely(!sk->sk_socket))
+ 		return -EINVAL;
+ 
+-	err = sock_create_kern(net, sk->sk_family, SOCK_STREAM, IPPROTO_TCP,
+-			       &sf);
++	err = sock_create_kern(net, family, SOCK_STREAM, IPPROTO_TCP, &sf);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/tools/testing/memblock/.gitignore b/tools/testing/memblock/.gitignore
+index 654338e0be52e..4cc7cd5aac2b1 100644
+--- a/tools/testing/memblock/.gitignore
++++ b/tools/testing/memblock/.gitignore
+@@ -1,4 +1,5 @@
+ main
+ memblock.c
+ linux/memblock.h
++asm/asm.h
+ asm/cmpxchg.h
+diff --git a/tools/testing/memblock/Makefile b/tools/testing/memblock/Makefile
+index 246f7ac8489b4..575e98fddc21c 100644
+--- a/tools/testing/memblock/Makefile
++++ b/tools/testing/memblock/Makefile
+@@ -29,13 +29,14 @@ include: ../../../include/linux/memblock.h ../../include/linux/*.h \
+ 
+ 	@mkdir -p linux
+ 	test -L linux/memblock.h || ln -s ../../../../include/linux/memblock.h linux/memblock.h
++	test -L asm/asm.h || ln -s ../../../arch/x86/include/asm/asm.h asm/asm.h
+ 	test -L asm/cmpxchg.h || ln -s ../../../arch/x86/include/asm/cmpxchg.h asm/cmpxchg.h
+ 
+ memblock.c: $(EXTR_SRC)
+ 	test -L memblock.c || ln -s $(EXTR_SRC) memblock.c
+ 
+ clean:
+-	$(RM) $(TARGETS) $(OFILES) linux/memblock.h memblock.c asm/cmpxchg.h
++	$(RM) $(TARGETS) $(OFILES) linux/memblock.h memblock.c asm/asm.h asm/cmpxchg.h
+ 
+ help:
+ 	@echo  'Memblock simulator'
+diff --git a/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c b/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c
+new file mode 100644
+index 0000000000000..3add34df57678
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/jeq_infer_not_null.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <test_progs.h>
++#include "jeq_infer_not_null_fail.skel.h"
++
++void test_jeq_infer_not_null(void)
++{
++	RUN_TESTS(jeq_infer_not_null_fail);
++}
+diff --git a/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c b/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c
+new file mode 100644
+index 0000000000000..f46965053acb2
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c
+@@ -0,0 +1,42 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include "vmlinux.h"
++#include <bpf/bpf_helpers.h>
++#include "bpf_misc.h"
++
++char _license[] SEC("license") = "GPL";
++
++struct {
++	__uint(type, BPF_MAP_TYPE_HASH);
++	__uint(max_entries, 1);
++	__type(key, u64);
++	__type(value, u64);
++} m_hash SEC(".maps");
++
++SEC("?raw_tp")
++__failure __msg("R8 invalid mem access 'map_value_or_null")
++int jeq_infer_not_null_ptr_to_btfid(void *ctx)
++{
++	struct bpf_map *map = (struct bpf_map *)&m_hash;
++	struct bpf_map *inner_map = map->inner_map_meta;
++	u64 key = 0, ret = 0, *val;
++
++	val = bpf_map_lookup_elem(map, &key);
++	/* Do not mark ptr as non-null if one of them is
++	 * PTR_TO_BTF_ID (R9), reject because of invalid
++	 * access to map value (R8).
++	 *
++	 * Here, we need to inline those insns to access
++	 * R8 directly, since compiler may use other reg
++	 * once it figures out val==inner_map.
++	 */
++	asm volatile("r8 = %[val];\n"
++		     "r9 = %[inner_map];\n"
++		     "if r8 != r9 goto +1;\n"
++		     "%[ret] = *(u64 *)(r8 +0);\n"
++		     : [ret] "+r"(ret)
++		     : [inner_map] "r"(inner_map), [val] "r"(val)
++		     : "r8", "r9");
++
++	return ret;
++}
+diff --git a/tools/testing/selftests/net/cmsg_sender.c b/tools/testing/selftests/net/cmsg_sender.c
+index 75dd83e39207b..24b21b15ed3fb 100644
+--- a/tools/testing/selftests/net/cmsg_sender.c
++++ b/tools/testing/selftests/net/cmsg_sender.c
+@@ -110,7 +110,7 @@ static void __attribute__((noreturn)) cs_usage(const char *bin)
+ 
+ static void cs_parse_args(int argc, char *argv[])
+ {
+-	char o;
++	int o;
+ 
+ 	while ((o = getopt(argc, argv, "46sS:p:m:M:d:tf:F:c:C:l:L:H:")) != -1) {
+ 		switch (o) {
+diff --git a/tools/testing/selftests/net/mptcp/userspace_pm.sh b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+index 3229725b64b0a..0040e3bc7b16e 100755
+--- a/tools/testing/selftests/net/mptcp/userspace_pm.sh
++++ b/tools/testing/selftests/net/mptcp/userspace_pm.sh
+@@ -776,6 +776,52 @@ test_subflows()
+ 	rm -f "$evts"
+ }
+ 
++test_subflows_v4_v6_mix()
++{
++	# Attempt to add a listener at 10.0.2.1:<subflow-port>
++	ip netns exec "$ns1" ./pm_nl_ctl listen 10.0.2.1\
++	   $app6_port > /dev/null 2>&1 &
++	local listener_pid=$!
++
++	# ADD_ADDR4 from server to client machine reusing the subflow port on
++	# the established v6 connection
++	:>"$client_evts"
++	ip netns exec "$ns1" ./pm_nl_ctl ann 10.0.2.1 token "$server6_token" id\
++	   $server_addr_id dev ns1eth2 > /dev/null 2>&1
++	stdbuf -o0 -e0 printf "ADD_ADDR4 id:%d 10.0.2.1 (ns1) => ns2, reuse port\t\t" $server_addr_id
++	sleep 0.5
++	verify_announce_event "$client_evts" "$ANNOUNCED" "$client6_token" "10.0.2.1"\
++			      "$server_addr_id" "$app6_port"
++
++	# CREATE_SUBFLOW from client to server machine
++	:>"$client_evts"
++	ip netns exec "$ns2" ./pm_nl_ctl csf lip 10.0.2.2 lid 23 rip 10.0.2.1 rport\
++	   $app6_port token "$client6_token" > /dev/null 2>&1
++	sleep 0.5
++	verify_subflow_events "$client_evts" "$SUB_ESTABLISHED" "$client6_token"\
++			      "$AF_INET" "10.0.2.2" "10.0.2.1" "$app6_port" "23"\
++			      "$server_addr_id" "ns2" "ns1"
++
++	# Delete the listener from the server ns, if one was created
++	kill_wait $listener_pid
++
++	sport=$(sed --unbuffered -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q' "$client_evts")
++
++	# DESTROY_SUBFLOW from client to server machine
++	:>"$client_evts"
++	ip netns exec "$ns2" ./pm_nl_ctl dsf lip 10.0.2.2 lport "$sport" rip 10.0.2.1 rport\
++	   $app6_port token "$client6_token" > /dev/null 2>&1
++	sleep 0.5
++	verify_subflow_events "$client_evts" "$SUB_CLOSED" "$client6_token" \
++			      "$AF_INET" "10.0.2.2" "10.0.2.1" "$app6_port" "23"\
++			      "$server_addr_id" "ns2" "ns1"
++
++	# RM_ADDR from server to client machine
++	ip netns exec "$ns1" ./pm_nl_ctl rem id $server_addr_id token\
++	   "$server6_token" > /dev/null 2>&1
++	sleep 0.5
++}
++
+ test_prio()
+ {
+ 	local count
+@@ -812,6 +858,7 @@ make_connection "v6"
+ test_announce
+ test_remove
+ test_subflows
++test_subflows_v4_v6_mix
+ test_prio
+ 
+ exit 0
+diff --git a/tools/testing/selftests/proc/proc-empty-vm.c b/tools/testing/selftests/proc/proc-empty-vm.c
+index d95b1cb43d9d0..7588428b8fcd7 100644
+--- a/tools/testing/selftests/proc/proc-empty-vm.c
++++ b/tools/testing/selftests/proc/proc-empty-vm.c
+@@ -25,6 +25,7 @@
+ #undef NDEBUG
+ #include <assert.h>
+ #include <errno.h>
++#include <stdint.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+@@ -41,7 +42,7 @@
+  * 1: vsyscall VMA is --xp		vsyscall=xonly
+  * 2: vsyscall VMA is r-xp		vsyscall=emulate
+  */
+-static int g_vsyscall;
++static volatile int g_vsyscall;
+ static const char *g_proc_pid_maps_vsyscall;
+ static const char *g_proc_pid_smaps_vsyscall;
+ 
+@@ -147,11 +148,12 @@ static void vsyscall(void)
+ 
+ 		g_vsyscall = 0;
+ 		/* gettimeofday(NULL, NULL); */
++		uint64_t rax = 0xffffffffff600000;
+ 		asm volatile (
+-			"call %P0"
+-			:
+-			: "i" (0xffffffffff600000), "D" (NULL), "S" (NULL)
+-			: "rax", "rcx", "r11"
++			"call *%[rax]"
++			: [rax] "+a" (rax)
++			: "D" (NULL), "S" (NULL)
++			: "rcx", "r11"
+ 		);
+ 
+ 		g_vsyscall = 1;
+diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c
+index 69551bfa215c4..cacbd2a4aec91 100644
+--- a/tools/testing/selftests/proc/proc-pid-vm.c
++++ b/tools/testing/selftests/proc/proc-pid-vm.c
+@@ -257,11 +257,12 @@ static void vsyscall(void)
+ 
+ 		g_vsyscall = 0;
+ 		/* gettimeofday(NULL, NULL); */
++		uint64_t rax = 0xffffffffff600000;
+ 		asm volatile (
+-			"call %P0"
+-			:
+-			: "i" (0xffffffffff600000), "D" (NULL), "S" (NULL)
+-			: "rax", "rcx", "r11"
++			"call *%[rax]"
++			: [rax] "+a" (rax)
++			: "D" (NULL), "S" (NULL)
++			: "rcx", "r11"
+ 		);
+ 
+ 		g_vsyscall = 1;
+diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
+index fa87b58bd5fa5..98ff808d6f0c2 100644
+--- a/tools/virtio/vringh_test.c
++++ b/tools/virtio/vringh_test.c
+@@ -308,6 +308,7 @@ static int parallel_test(u64 features,
+ 
+ 		gvdev.vdev.features = features;
+ 		INIT_LIST_HEAD(&gvdev.vdev.vqs);
++		spin_lock_init(&gvdev.vdev.vqs_list_lock);
+ 		gvdev.to_host_fd = to_host[1];
+ 		gvdev.notifies = 0;
+ 
+@@ -455,6 +456,7 @@ int main(int argc, char *argv[])
+ 	getrange = getrange_iov;
+ 	vdev.features = 0;
+ 	INIT_LIST_HEAD(&vdev.vqs);
++	spin_lock_init(&vdev.vqs_list_lock);
+ 
+ 	while (argv[1]) {
+ 		if (strcmp(argv[1], "--indirect") == 0)


2023-01-24  7:19 Alice Ferrazzi [this message]