From: "Arisu Tachibana"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Arisu Tachibana"
Message-ID: <1756394551.8e74cc0d197a9fac87cabebab148db4f96ffeb96.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.16 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1003_linux-6.16.4.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Arisu Tachibana
X-VCS-Revision: 8e74cc0d197a9fac87cabebab148db4f96ffeb96
X-VCS-Branch: 6.16
Date: Thu, 28 Aug 2025 15:31:14 +0000 (UTC)

commit:     8e74cc0d197a9fac87cabebab148db4f96ffeb96
Author:     Arisu Tachibana gentoo org>
AuthorDate: Thu Aug 28 15:22:31 2025 +0000
Commit:     Arisu Tachibana gentoo org>
CommitDate: Thu Aug 28 15:22:31 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e74cc0d

Linux patch 6.16.4

Signed-off-by: Arisu Tachibana gentoo.org>

 0000_README | 4 + 1003_linux-6.16.4.patch | 20047 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 20051 insertions(+) diff --git a/0000_README b/0000_README index eda565a9..32d61205 100644 --- a/0000_README +++ b/0000_README @@ -55,6 +55,10 @@ Patch: 1002_linux-6.16.3.patch From: https://www.kernel.org Desc: Linux 6.16.3 +Patch: 1003_linux-6.16.4.patch +From: https://www.kernel.org +Desc: Linux 6.16.4 + Patch: 
1510_fs-enable-link-security-restrictions-by-default.patch From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/ Desc: Enable link security restrictions by default. diff --git a/1003_linux-6.16.4.patch b/1003_linux-6.16.4.patch new file mode 100644 index 00000000..00c38fe4 --- /dev/null +++ b/1003_linux-6.16.4.patch @@ -0,0 +1,20047 @@ +diff --git a/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml b/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml +index f546d481b7e5f4..93da1fb9adc47b 100644 +--- a/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml ++++ b/Documentation/devicetree/bindings/display/rockchip/rockchip-vop2.yaml +@@ -64,10 +64,10 @@ properties: + - description: Pixel clock for video port 0. + - description: Pixel clock for video port 1. + - description: Pixel clock for video port 2. +- - description: Pixel clock for video port 3. +- - description: Peripheral(vop grf/dsi) clock. +- - description: Alternative pixel clock provided by HDMI0 PHY PLL. +- - description: Alternative pixel clock provided by HDMI1 PHY PLL. ++ - {} ++ - {} ++ - {} ++ - {} + + clock-names: + minItems: 5 +@@ -77,10 +77,10 @@ properties: + - const: dclk_vp0 + - const: dclk_vp1 + - const: dclk_vp2 +- - const: dclk_vp3 +- - const: pclk_vop +- - const: pll_hdmiphy0 +- - const: pll_hdmiphy1 ++ - {} ++ - {} ++ - {} ++ - {} + + rockchip,grf: + $ref: /schemas/types.yaml#/definitions/phandle +@@ -175,10 +175,24 @@ allOf: + then: + properties: + clocks: +- maxItems: 5 ++ minItems: 5 ++ items: ++ - {} ++ - {} ++ - {} ++ - {} ++ - {} ++ - description: Alternative pixel clock provided by HDMI PHY PLL. + + clock-names: +- maxItems: 5 ++ minItems: 5 ++ items: ++ - {} ++ - {} ++ - {} ++ - {} ++ - {} ++ - const: pll_hdmiphy0 + + interrupts: + minItems: 4 +@@ -208,11 +222,29 @@ allOf: + properties: + clocks: + minItems: 7 +- maxItems: 9 ++ items: ++ - {} ++ - {} ++ - {} ++ - {} ++ - {} ++ - description: Pixel clock for video port 3. ++ - description: Peripheral(vop grf/dsi) clock. ++ - description: Alternative pixel clock provided by HDMI0 PHY PLL. ++ - description: Alternative pixel clock provided by HDMI1 PHY PLL. 
+ + clock-names: + minItems: 7 +- maxItems: 9 ++ items: ++ - {} ++ - {} ++ - {} ++ - {} ++ - {} ++ - const: dclk_vp3 ++ - const: pclk_vop ++ - const: pll_hdmiphy0 ++ - const: pll_hdmiphy1 + + interrupts: + maxItems: 1 +diff --git a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml +index 4ebea60b8c5ba5..8c52fa0ea5f8ee 100644 +--- a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml ++++ b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dpu.yaml +@@ -25,7 +25,7 @@ properties: + maxItems: 1 + + clocks: +- minItems: 2 ++ maxItems: 2 + + clock-names: + items: +diff --git a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml +index bc5594d1864301..300bf2252c3e8e 100644 +--- a/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml ++++ b/Documentation/devicetree/bindings/display/sprd/sprd,sharkl3-dsi-host.yaml +@@ -20,7 +20,7 @@ properties: + maxItems: 2 + + clocks: +- minItems: 1 ++ maxItems: 1 + + clock-names: + items: +diff --git a/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml b/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml +index 32fd535a514ad1..20f341d25ebc3f 100644 +--- a/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml ++++ b/Documentation/devicetree/bindings/ufs/mediatek,ufs.yaml +@@ -33,6 +33,10 @@ properties: + + vcc-supply: true + ++ mediatek,ufs-disable-mcq: ++ $ref: /schemas/types.yaml#/definitions/flag ++ description: The mask to disable MCQ (Multi-Circular Queue) for UFS host. ++ + required: + - compatible + - clocks +diff --git a/Documentation/networking/mptcp-sysctl.rst b/Documentation/networking/mptcp-sysctl.rst +index 5bfab01eff5a9d..1683c139821e3b 100644 +--- a/Documentation/networking/mptcp-sysctl.rst ++++ b/Documentation/networking/mptcp-sysctl.rst +@@ -12,6 +12,8 @@ add_addr_timeout - INTEGER (seconds) + resent to an MPTCP peer that has not acknowledged a previous + ADD_ADDR message. + ++ Do not retransmit if set to 0. ++ + The default value matches TCP_RTO_MAX. This is a per-namespace + sysctl. 
+ +diff --git a/Makefile b/Makefile +index df121383064380..e5509045fe3f3a 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 16 +-SUBLEVEL = 3 ++SUBLEVEL = 4 + EXTRAVERSION = + NAME = Baby Opossum Posse + +@@ -1134,7 +1134,7 @@ KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD + KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) + + # userspace programs are linked via the compiler, use the correct linker +-ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy) ++ifdef CONFIG_CC_IS_CLANG + KBUILD_USERLDFLAGS += --ld-path=$(LD) + endif + +diff --git a/arch/arm/lib/crypto/poly1305-glue.c b/arch/arm/lib/crypto/poly1305-glue.c +index 2603b0771f2c4c..ca6dc553370546 100644 +--- a/arch/arm/lib/crypto/poly1305-glue.c ++++ b/arch/arm/lib/crypto/poly1305-glue.c +@@ -7,6 +7,7 @@ + + #include + #include ++#include + #include + #include + #include +@@ -39,7 +40,7 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src, + { + len = round_down(len, POLY1305_BLOCK_SIZE); + if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && +- static_branch_likely(&have_neon)) { ++ static_branch_likely(&have_neon) && likely(may_use_simd())) { + do { + unsigned int todo = min_t(unsigned int, len, SZ_4K); + +diff --git a/arch/arm64/boot/dts/apple/t8012-j132.dts b/arch/arm64/boot/dts/apple/t8012-j132.dts +index 778a69be18dd81..7dcac51703ff60 100644 +--- a/arch/arm64/boot/dts/apple/t8012-j132.dts ++++ b/arch/arm64/boot/dts/apple/t8012-j132.dts +@@ -7,6 +7,7 @@ + /dts-v1/; + + #include "t8012-jxxx.dtsi" ++#include "t8012-touchbar.dtsi" + + / { + model = "Apple T2 MacBookPro15,2 (j132)"; +diff --git a/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts b/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts +index 61eec1aff32ef3..b8ce433b93b1b4 100644 +--- a/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts ++++ b/arch/arm64/boot/dts/exynos/exynos7870-j6lte.dts +@@ -89,7 +89,7 @@ key-volup { + memory@40000000 { + device_type = "memory"; + reg = <0x0 0x40000000 0x3d800000>, +- <0x0 0x80000000 0x7d800000>; ++ <0x0 0x80000000 0x40000000>; + }; + + pwrseq_mmc1: pwrseq-mmc1 { +diff --git a/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts b/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts +index eb97dcc415423f..b1d9eff5a82702 100644 +--- a/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts ++++ b/arch/arm64/boot/dts/exynos/exynos7870-on7xelte.dts +@@ -78,7 +78,7 @@ key-volup { + memory@40000000 { + device_type = "memory"; + reg = <0x0 0x40000000 0x3e400000>, +- <0x0 0x80000000 0xbe400000>; ++ <0x0 0x80000000 0x80000000>; + }; + + pwrseq_mmc1: pwrseq-mmc1 { +diff --git a/arch/arm64/boot/dts/exynos/exynos7870.dtsi b/arch/arm64/boot/dts/exynos/exynos7870.dtsi +index 5cba8c9bb40340..d5d347623b9038 100644 +--- a/arch/arm64/boot/dts/exynos/exynos7870.dtsi ++++ b/arch/arm64/boot/dts/exynos/exynos7870.dtsi +@@ -327,6 +327,7 @@ usb@0 { + phys = <&usbdrd_phy 0>; + + usb-role-switch; ++ snps,usb2-gadget-lpm-disable; + }; + }; + +diff --git a/arch/arm64/boot/dts/exynos/google/gs101.dtsi b/arch/arm64/boot/dts/exynos/google/gs101.dtsi +index 94aa0ffb9a9760..0f6592658b982d 100644 +--- a/arch/arm64/boot/dts/exynos/google/gs101.dtsi ++++ b/arch/arm64/boot/dts/exynos/google/gs101.dtsi +@@ -1371,6 +1371,7 @@ ufs_0: ufs@14700000 { + <&cmu_hsi2 CLK_GOUT_HSI2_SYSREG_HSI2_PCLK>; + clock-names = "core_clk", "sclk_unipro_main", "fmp", + "aclk", "pclk", "sysreg"; ++ dma-coherent; + freq-table-hz = <0 0>, <0 0>, <0 0>, <0 
0>, <0 0>, <0 0>; + pinctrl-0 = <&ufs_rst_n &ufs_refclk_out>; + pinctrl-names = "default"; +diff --git a/arch/arm64/boot/dts/rockchip/rk3576.dtsi b/arch/arm64/boot/dts/rockchip/rk3576.dtsi +index 64812e3bcb613c..036b50936cb25f 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3576.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3576.dtsi +@@ -1155,12 +1155,14 @@ vop: vop@27d00000 { + <&cru HCLK_VOP>, + <&cru DCLK_VP0>, + <&cru DCLK_VP1>, +- <&cru DCLK_VP2>; ++ <&cru DCLK_VP2>, ++ <&hdptxphy>; + clock-names = "aclk", + "hclk", + "dclk_vp0", + "dclk_vp1", +- "dclk_vp2"; ++ "dclk_vp2", ++ "pll_hdmiphy0"; + iommus = <&vop_mmu>; + power-domains = <&power RK3576_PD_VOP>; + rockchip,grf = <&sys_grf>; +@@ -2391,6 +2393,7 @@ hdptxphy: hdmiphy@2b000000 { + reg = <0x0 0x2b000000 0x0 0x2000>; + clocks = <&cru CLK_PHY_REF_SRC>, <&cru PCLK_HDPTX_APB>; + clock-names = "ref", "apb"; ++ #clock-cells = <0>; + resets = <&cru SRST_P_HDPTX_APB>, <&cru SRST_HDPTX_INIT>, + <&cru SRST_HDPTX_CMN>, <&cru SRST_HDPTX_LANE>; + reset-names = "apb", "init", "cmn", "lane"; +diff --git a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi +index 60ad272982ad51..6daea8961fdd65 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3588-turing-rk1.dtsi +@@ -398,17 +398,6 @@ rk806_dvs3_null: dvs3-null-pins { + + regulators { + vdd_gpu_s0: vdd_gpu_mem_s0: dcdc-reg1 { +- /* +- * RK3588's GPU power domain cannot be enabled +- * without this regulator active, but it +- * doesn't have to be on when the GPU PD is +- * disabled. Because the PD binding does not +- * currently allow us to express this +- * relationship, we have no choice but to do +- * this instead: +- */ +- regulator-always-on; +- + regulator-boot-on; + regulator-min-microvolt = <550000>; + regulator-max-microvolt = <950000>; +diff --git a/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts b/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts +index aafdb90c0eb700..4609f366006e4c 100644 +--- a/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts ++++ b/arch/arm64/boot/dts/ti/k3-am62-lp-sk.dts +@@ -74,6 +74,22 @@ vddshv_sdio: regulator-4 { + }; + + &main_pmx0 { ++ main_mmc0_pins_default: main-mmc0-default-pins { ++ bootph-all; ++ pinctrl-single,pins = < ++ AM62X_IOPAD(0x220, PIN_INPUT, 0) /* (V3) MMC0_CMD */ ++ AM62X_IOPAD(0x218, PIN_INPUT, 0) /* (Y1) MMC0_CLK */ ++ AM62X_IOPAD(0x214, PIN_INPUT, 0) /* (V2) MMC0_DAT0 */ ++ AM62X_IOPAD(0x210, PIN_INPUT, 0) /* (V1) MMC0_DAT1 */ ++ AM62X_IOPAD(0x20c, PIN_INPUT, 0) /* (W2) MMC0_DAT2 */ ++ AM62X_IOPAD(0x208, PIN_INPUT, 0) /* (W1) MMC0_DAT3 */ ++ AM62X_IOPAD(0x204, PIN_INPUT, 0) /* (Y2) MMC0_DAT4 */ ++ AM62X_IOPAD(0x200, PIN_INPUT, 0) /* (W3) MMC0_DAT5 */ ++ AM62X_IOPAD(0x1fc, PIN_INPUT, 0) /* (W4) MMC0_DAT6 */ ++ AM62X_IOPAD(0x1f8, PIN_INPUT, 0) /* (V4) MMC0_DAT7 */ ++ >; ++ }; ++ + vddshv_sdio_pins_default: vddshv-sdio-default-pins { + pinctrl-single,pins = < + AM62X_IOPAD(0x07c, PIN_OUTPUT, 7) /* (M19) GPMC0_CLK.GPIO0_31 */ +@@ -144,6 +160,14 @@ exp2: gpio@23 { + }; + }; + ++&sdhci0 { ++ bootph-all; ++ non-removable; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&main_mmc0_pins_default>; ++ status = "okay"; ++}; ++ + &sdhci1 { + vmmc-supply = <&vdd_mmc1>; + vqmmc-supply = <&vddshv_sdio>; +diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi +index 9e0b6eee9ac77d..120ba8f9dd0e7e 100644 +--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi ++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi +@@ -553,7 +553,6 @@ sdhci0: mmc@fa10000 
{ + clocks = <&k3_clks 57 5>, <&k3_clks 57 6>; + clock-names = "clk_ahb", "clk_xin"; + bus-width = <8>; +- mmc-ddr-1_8v; + mmc-hs200-1_8v; + ti,clkbuf-sel = <0x7>; + ti,otap-del-sel-legacy = <0x0>; +diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi +index 1ea8f64b1b3bd3..bc2289d7477457 100644 +--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi ++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi +@@ -507,16 +507,16 @@ AM62X_IOPAD(0x01ec, PIN_INPUT_PULLUP, 0) /* (A17) I2C1_SDA */ /* SODIMM 12 */ + /* Verdin I2C_2_DSI */ + pinctrl_i2c2: main-i2c2-default-pins { + pinctrl-single,pins = < +- AM62X_IOPAD(0x00b0, PIN_INPUT, 1) /* (K22) GPMC0_CSn2.I2C2_SCL */ /* SODIMM 55 */ +- AM62X_IOPAD(0x00b4, PIN_INPUT, 1) /* (K24) GPMC0_CSn3.I2C2_SDA */ /* SODIMM 53 */ ++ AM62X_IOPAD(0x00b0, PIN_INPUT_PULLUP, 1) /* (K22) GPMC0_CSn2.I2C2_SCL */ /* SODIMM 55 */ ++ AM62X_IOPAD(0x00b4, PIN_INPUT_PULLUP, 1) /* (K24) GPMC0_CSn3.I2C2_SDA */ /* SODIMM 53 */ + >; + }; + + /* Verdin I2C_4_CSI */ + pinctrl_i2c3: main-i2c3-default-pins { + pinctrl-single,pins = < +- AM62X_IOPAD(0x01d0, PIN_INPUT, 2) /* (A15) UART0_CTSn.I2C3_SCL */ /* SODIMM 95 */ +- AM62X_IOPAD(0x01d4, PIN_INPUT, 2) /* (B15) UART0_RTSn.I2C3_SDA */ /* SODIMM 93 */ ++ AM62X_IOPAD(0x01d0, PIN_INPUT_PULLUP, 2) /* (A15) UART0_CTSn.I2C3_SCL */ /* SODIMM 95 */ ++ AM62X_IOPAD(0x01d4, PIN_INPUT_PULLUP, 2) /* (B15) UART0_RTSn.I2C3_SDA */ /* SODIMM 93 */ + >; + }; + +@@ -786,8 +786,8 @@ AM62X_MCU_IOPAD(0x0010, PIN_INPUT, 7) /* (C9) MCU_SPI0_D1.MCU_GPIO0_4 */ /* SODI + /* Verdin I2C_3_HDMI */ + pinctrl_mcu_i2c0: mcu-i2c0-default-pins { + pinctrl-single,pins = < +- AM62X_MCU_IOPAD(0x0044, PIN_INPUT, 0) /* (A8) MCU_I2C0_SCL */ /* SODIMM 59 */ +- AM62X_MCU_IOPAD(0x0048, PIN_INPUT, 0) /* (D10) MCU_I2C0_SDA */ /* SODIMM 57 */ ++ AM62X_MCU_IOPAD(0x0044, PIN_INPUT_PULLUP, 0) /* (A8) MCU_I2C0_SCL */ /* SODIMM 59 */ ++ AM62X_MCU_IOPAD(0x0048, PIN_INPUT_PULLUP, 0) /* (D10) MCU_I2C0_SDA */ /* SODIMM 57 */ + >; + }; + +diff --git a/arch/arm64/boot/dts/ti/k3-am625-sk.dts b/arch/arm64/boot/dts/ti/k3-am625-sk.dts +index 2fbfa371934575..d240165bda9c57 100644 +--- a/arch/arm64/boot/dts/ti/k3-am625-sk.dts ++++ b/arch/arm64/boot/dts/ti/k3-am625-sk.dts +@@ -106,6 +106,22 @@ vcc_1v8: regulator-5 { + }; + + &main_pmx0 { ++ main_mmc0_pins_default: main-mmc0-default-pins { ++ bootph-all; ++ pinctrl-single,pins = < ++ AM62X_IOPAD(0x220, PIN_INPUT, 0) /* (Y3) MMC0_CMD */ ++ AM62X_IOPAD(0x218, PIN_INPUT, 0) /* (AB1) MMC0_CLK */ ++ AM62X_IOPAD(0x214, PIN_INPUT, 0) /* (AA2) MMC0_DAT0 */ ++ AM62X_IOPAD(0x210, PIN_INPUT_PULLUP, 0) /* (AA1) MMC0_DAT1 */ ++ AM62X_IOPAD(0x20c, PIN_INPUT_PULLUP, 0) /* (AA3) MMC0_DAT2 */ ++ AM62X_IOPAD(0x208, PIN_INPUT_PULLUP, 0) /* (Y4) MMC0_DAT3 */ ++ AM62X_IOPAD(0x204, PIN_INPUT_PULLUP, 0) /* (AB2) MMC0_DAT4 */ ++ AM62X_IOPAD(0x200, PIN_INPUT_PULLUP, 0) /* (AC1) MMC0_DAT5 */ ++ AM62X_IOPAD(0x1fc, PIN_INPUT_PULLUP, 0) /* (AD2) MMC0_DAT6 */ ++ AM62X_IOPAD(0x1f8, PIN_INPUT_PULLUP, 0) /* (AC2) MMC0_DAT7 */ ++ >; ++ }; ++ + main_rgmii2_pins_default: main-rgmii2-default-pins { + bootph-all; + pinctrl-single,pins = < +@@ -195,6 +211,14 @@ exp1: gpio@22 { + }; + }; + ++&sdhci0 { ++ bootph-all; ++ non-removable; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&main_mmc0_pins_default>; ++ status = "okay"; ++}; ++ + &sdhci1 { + vmmc-supply = <&vdd_mmc1>; + vqmmc-supply = <&vdd_sd_dv>; +diff --git a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts +index b2775902601495..2129da4d7185b4 100644 +--- 
a/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts ++++ b/arch/arm64/boot/dts/ti/k3-am62a7-sk.dts +@@ -301,8 +301,8 @@ AM62AX_IOPAD(0x1cc, PIN_OUTPUT, 0) /* (D15) UART0_TXD */ + + main_uart1_pins_default: main-uart1-default-pins { + pinctrl-single,pins = < +- AM62AX_IOPAD(0x01e8, PIN_INPUT, 1) /* (C17) I2C1_SCL.UART1_RXD */ +- AM62AX_IOPAD(0x01ec, PIN_OUTPUT, 1) /* (E17) I2C1_SDA.UART1_TXD */ ++ AM62AX_IOPAD(0x01ac, PIN_INPUT, 2) /* (B21) MCASP0_AFSR.UART1_RXD */ ++ AM62AX_IOPAD(0x01b0, PIN_OUTPUT, 2) /* (A21) MCASP0_ACLKR.UART1_TXD */ + AM62AX_IOPAD(0x0194, PIN_INPUT, 2) /* (C19) MCASP0_AXR3.UART1_CTSn */ + AM62AX_IOPAD(0x0198, PIN_OUTPUT, 2) /* (B19) MCASP0_AXR2.UART1_RTSn */ + >; +diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi +index ee8337bfbbfd3a..13e1d36123d51f 100644 +--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi ++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi +@@ -203,22 +203,6 @@ AM62X_IOPAD(0x0b4, PIN_INPUT_PULLUP, 1) /* (K24/H19) GPMC0_CSn3.I2C2_SDA */ + >; + }; + +- main_mmc0_pins_default: main-mmc0-default-pins { +- bootph-all; +- pinctrl-single,pins = < +- AM62X_IOPAD(0x220, PIN_INPUT, 0) /* (Y3/V3) MMC0_CMD */ +- AM62X_IOPAD(0x218, PIN_INPUT, 0) /* (AB1/Y1) MMC0_CLK */ +- AM62X_IOPAD(0x214, PIN_INPUT, 0) /* (AA2/V2) MMC0_DAT0 */ +- AM62X_IOPAD(0x210, PIN_INPUT, 0) /* (AA1/V1) MMC0_DAT1 */ +- AM62X_IOPAD(0x20c, PIN_INPUT, 0) /* (AA3/W2) MMC0_DAT2 */ +- AM62X_IOPAD(0x208, PIN_INPUT, 0) /* (Y4/W1) MMC0_DAT3 */ +- AM62X_IOPAD(0x204, PIN_INPUT, 0) /* (AB2/Y2) MMC0_DAT4 */ +- AM62X_IOPAD(0x200, PIN_INPUT, 0) /* (AC1/W3) MMC0_DAT5 */ +- AM62X_IOPAD(0x1fc, PIN_INPUT, 0) /* (AD2/W4) MMC0_DAT6 */ +- AM62X_IOPAD(0x1f8, PIN_INPUT, 0) /* (AC2/V4) MMC0_DAT7 */ +- >; +- }; +- + main_mmc1_pins_default: main-mmc1-default-pins { + bootph-all; + pinctrl-single,pins = < +@@ -457,14 +441,6 @@ &main_i2c2 { + clock-frequency = <400000>; + }; + +-&sdhci0 { +- bootph-all; +- status = "okay"; +- non-removable; +- pinctrl-names = "default"; +- pinctrl-0 = <&main_mmc0_pins_default>; +-}; +- + &sdhci1 { + /* SD/MMC */ + bootph-all; +diff --git a/arch/arm64/boot/dts/ti/k3-pinctrl.h b/arch/arm64/boot/dts/ti/k3-pinctrl.h +index cac7cccc111212..38590188dd51ca 100644 +--- a/arch/arm64/boot/dts/ti/k3-pinctrl.h ++++ b/arch/arm64/boot/dts/ti/k3-pinctrl.h +@@ -8,6 +8,7 @@ + #ifndef DTS_ARM64_TI_K3_PINCTRL_H + #define DTS_ARM64_TI_K3_PINCTRL_H + ++#define ST_EN_SHIFT (14) + #define PULLUDEN_SHIFT (16) + #define PULLTYPESEL_SHIFT (17) + #define RXACTIVE_SHIFT (18) +@@ -19,6 +20,10 @@ + #define DS_PULLUD_EN_SHIFT (27) + #define DS_PULLTYPE_SEL_SHIFT (28) + ++/* Schmitt trigger configuration */ ++#define ST_DISABLE (0 << ST_EN_SHIFT) ++#define ST_ENABLE (1 << ST_EN_SHIFT) ++ + #define PULL_DISABLE (1 << PULLUDEN_SHIFT) + #define PULL_ENABLE (0 << PULLUDEN_SHIFT) + +@@ -32,9 +37,13 @@ + #define PIN_OUTPUT (INPUT_DISABLE | PULL_DISABLE) + #define PIN_OUTPUT_PULLUP (INPUT_DISABLE | PULL_UP) + #define PIN_OUTPUT_PULLDOWN (INPUT_DISABLE | PULL_DOWN) +-#define PIN_INPUT (INPUT_EN | PULL_DISABLE) +-#define PIN_INPUT_PULLUP (INPUT_EN | PULL_UP) +-#define PIN_INPUT_PULLDOWN (INPUT_EN | PULL_DOWN) ++#define PIN_INPUT (INPUT_EN | ST_ENABLE | PULL_DISABLE) ++#define PIN_INPUT_PULLUP (INPUT_EN | ST_ENABLE | PULL_UP) ++#define PIN_INPUT_PULLDOWN (INPUT_EN | ST_ENABLE | PULL_DOWN) ++/* Input configurations with Schmitt Trigger disabled */ ++#define PIN_INPUT_NOST (INPUT_EN | PULL_DISABLE) ++#define PIN_INPUT_PULLUP_NOST (INPUT_EN | PULL_UP) ++#define 
PIN_INPUT_PULLDOWN_NOST (INPUT_EN | PULL_DOWN) + + #define PIN_DEBOUNCE_DISABLE (0 << DEBOUNCE_SHIFT) + #define PIN_DEBOUNCE_CONF1 (1 << DEBOUNCE_SHIFT) +diff --git a/arch/arm64/lib/crypto/poly1305-glue.c b/arch/arm64/lib/crypto/poly1305-glue.c +index c9a74766785bd7..31aea21ce42f79 100644 +--- a/arch/arm64/lib/crypto/poly1305-glue.c ++++ b/arch/arm64/lib/crypto/poly1305-glue.c +@@ -7,6 +7,7 @@ + + #include + #include ++#include + #include + #include + #include +@@ -33,7 +34,7 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src, + unsigned int len, u32 padbit) + { + len = round_down(len, POLY1305_BLOCK_SIZE); +- if (static_branch_likely(&have_neon)) { ++ if (static_branch_likely(&have_neon) && likely(may_use_simd())) { + do { + unsigned int todo = min_t(unsigned int, len, SZ_4K); + +diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile +index b0703a4e02a253..a3a9759414f40f 100644 +--- a/arch/loongarch/Makefile ++++ b/arch/loongarch/Makefile +@@ -102,7 +102,13 @@ KBUILD_CFLAGS += $(call cc-option,-mthin-add-sub) $(call cc-option,-Wa$(comma) + + ifdef CONFIG_OBJTOOL + ifdef CONFIG_CC_HAS_ANNOTATE_TABLEJUMP ++# The annotate-tablejump option can not be passed to LLVM backend when LTO is enabled. ++# Ensure it is aware of linker with LTO, '--loongarch-annotate-tablejump' also needs to ++# be passed via '-mllvm' to ld.lld. + KBUILD_CFLAGS += -mannotate-tablejump ++ifdef CONFIG_LTO_CLANG ++KBUILD_LDFLAGS += -mllvm --loongarch-annotate-tablejump ++endif + else + KBUILD_CFLAGS += -fno-jump-tables # keep compatibility with older compilers + endif +diff --git a/arch/loongarch/kernel/module-sections.c b/arch/loongarch/kernel/module-sections.c +index e2f30ff9afde82..a43ba7f9f9872a 100644 +--- a/arch/loongarch/kernel/module-sections.c ++++ b/arch/loongarch/kernel/module-sections.c +@@ -8,6 +8,7 @@ + #include + #include + #include ++#include + + Elf_Addr module_emit_got_entry(struct module *mod, Elf_Shdr *sechdrs, Elf_Addr val) + { +@@ -61,39 +62,38 @@ Elf_Addr module_emit_plt_entry(struct module *mod, Elf_Shdr *sechdrs, Elf_Addr v + return (Elf_Addr)&plt[nr]; + } + +-static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y) +-{ +- return x->r_info == y->r_info && x->r_addend == y->r_addend; +-} ++#define cmp_3way(a, b) ((a) < (b) ? -1 : (a) > (b)) + +-static bool duplicate_rela(const Elf_Rela *rela, int idx) ++static int compare_rela(const void *x, const void *y) + { +- int i; ++ int ret; ++ const Elf_Rela *rela_x = x, *rela_y = y; + +- for (i = 0; i < idx; i++) { +- if (is_rela_equal(&rela[i], &rela[idx])) +- return true; +- } ++ ret = cmp_3way(rela_x->r_info, rela_y->r_info); ++ if (ret == 0) ++ ret = cmp_3way(rela_x->r_addend, rela_y->r_addend); + +- return false; ++ return ret; + } + + static void count_max_entries(Elf_Rela *relas, int num, + unsigned int *plts, unsigned int *gots) + { +- unsigned int i, type; ++ unsigned int i; ++ ++ sort(relas, num, sizeof(Elf_Rela), compare_rela, NULL); + + for (i = 0; i < num; i++) { +- type = ELF_R_TYPE(relas[i].r_info); +- switch (type) { ++ if (i && !compare_rela(&relas[i-1], &relas[i])) ++ continue; ++ ++ switch (ELF_R_TYPE(relas[i].r_info)) { + case R_LARCH_SOP_PUSH_PLT_PCREL: + case R_LARCH_B26: +- if (!duplicate_rela(relas, i)) +- (*plts)++; ++ (*plts)++; + break; + case R_LARCH_GOT_PC_HI20: +- if (!duplicate_rela(relas, i)) +- (*gots)++; ++ (*gots)++; + break; + default: + break; /* Do nothing. 
*/ +diff --git a/arch/loongarch/kvm/intc/eiointc.c b/arch/loongarch/kvm/intc/eiointc.c +index a75f865d6fb96c..0207cfe1dbd6c7 100644 +--- a/arch/loongarch/kvm/intc/eiointc.c ++++ b/arch/loongarch/kvm/intc/eiointc.c +@@ -9,7 +9,7 @@ + + static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s) + { +- int ipnum, cpu, cpuid, irq_index, irq_mask, irq; ++ int ipnum, cpu, cpuid, irq; + struct kvm_vcpu *vcpu; + + for (irq = 0; irq < EIOINTC_IRQS; irq++) { +@@ -18,8 +18,6 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s) + ipnum = count_trailing_zeros(ipnum); + ipnum = (ipnum >= 0 && ipnum < 4) ? ipnum : 0; + } +- irq_index = irq / 32; +- irq_mask = BIT(irq & 0x1f); + + cpuid = s->coremap.reg_u8[irq]; + vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid); +@@ -27,16 +25,16 @@ static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s) + continue; + + cpu = vcpu->vcpu_id; +- if (!!(s->coreisr.reg_u32[cpu][irq_index] & irq_mask)) +- set_bit(irq, s->sw_coreisr[cpu][ipnum]); ++ if (test_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu])) ++ __set_bit(irq, s->sw_coreisr[cpu][ipnum]); + else +- clear_bit(irq, s->sw_coreisr[cpu][ipnum]); ++ __clear_bit(irq, s->sw_coreisr[cpu][ipnum]); + } + } + + static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level) + { +- int ipnum, cpu, found, irq_index, irq_mask; ++ int ipnum, cpu, found; + struct kvm_vcpu *vcpu; + struct kvm_interrupt vcpu_irq; + +@@ -47,20 +45,22 @@ static void eiointc_update_irq(struct loongarch_eiointc *s, int irq, int level) + } + + cpu = s->sw_coremap[irq]; +- vcpu = kvm_get_vcpu(s->kvm, cpu); +- irq_index = irq / 32; +- irq_mask = BIT(irq & 0x1f); ++ vcpu = kvm_get_vcpu_by_id(s->kvm, cpu); ++ if (unlikely(vcpu == NULL)) { ++ kvm_err("%s: invalid target cpu: %d\n", __func__, cpu); ++ return; ++ } + + if (level) { + /* if not enable return false */ +- if (((s->enable.reg_u32[irq_index]) & irq_mask) == 0) ++ if (!test_bit(irq, (unsigned long *)s->enable.reg_u32)) + return; +- s->coreisr.reg_u32[cpu][irq_index] |= irq_mask; ++ __set_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]); + found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS); +- set_bit(irq, s->sw_coreisr[cpu][ipnum]); ++ __set_bit(irq, s->sw_coreisr[cpu][ipnum]); + } else { +- s->coreisr.reg_u32[cpu][irq_index] &= ~irq_mask; +- clear_bit(irq, s->sw_coreisr[cpu][ipnum]); ++ __clear_bit(irq, (unsigned long *)s->coreisr.reg_u32[cpu]); ++ __clear_bit(irq, s->sw_coreisr[cpu][ipnum]); + found = find_first_bit(s->sw_coreisr[cpu][ipnum], EIOINTC_IRQS); + } + +@@ -110,8 +110,8 @@ void eiointc_set_irq(struct loongarch_eiointc *s, int irq, int level) + unsigned long flags; + unsigned long *isr = (unsigned long *)s->isr.reg_u8; + +- level ? set_bit(irq, isr) : clear_bit(irq, isr); + spin_lock_irqsave(&s->lock, flags); ++ level ? __set_bit(irq, isr) : __clear_bit(irq, isr); + eiointc_update_irq(s, irq, level); + spin_unlock_irqrestore(&s->lock, flags); + } +diff --git a/arch/loongarch/kvm/intc/ipi.c b/arch/loongarch/kvm/intc/ipi.c +index fe734dc062ed47..4859e320e3a166 100644 +--- a/arch/loongarch/kvm/intc/ipi.c ++++ b/arch/loongarch/kvm/intc/ipi.c +@@ -99,7 +99,7 @@ static void write_mailbox(struct kvm_vcpu *vcpu, int offset, uint64_t data, int + static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data) + { + int i, idx, ret; +- uint32_t val = 0, mask = 0; ++ uint64_t val = 0, mask = 0; + + /* + * Bit 27-30 is mask for byte writing. 
+@@ -108,7 +108,7 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data) + if ((data >> 27) & 0xf) { + /* Read the old val */ + idx = srcu_read_lock(&vcpu->kvm->srcu); +- ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); ++ ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, 4, &val); + srcu_read_unlock(&vcpu->kvm->srcu, idx); + if (unlikely(ret)) { + kvm_err("%s: : read data from addr %llx failed\n", __func__, addr); +@@ -124,7 +124,7 @@ static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data) + } + val |= ((uint32_t)(data >> 32) & ~mask); + idx = srcu_read_lock(&vcpu->kvm->srcu); +- ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val); ++ ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, 4, &val); + srcu_read_unlock(&vcpu->kvm->srcu, idx); + if (unlikely(ret)) + kvm_err("%s: : write data to addr %llx failed\n", __func__, addr); +@@ -318,7 +318,7 @@ static int kvm_ipi_regs_access(struct kvm_device *dev, + cpu = (attr->attr >> 16) & 0x3ff; + addr = attr->attr & 0xff; + +- vcpu = kvm_get_vcpu(dev->kvm, cpu); ++ vcpu = kvm_get_vcpu_by_id(dev->kvm, cpu); + if (unlikely(vcpu == NULL)) { + kvm_err("%s: invalid target cpu: %d\n", __func__, cpu); + return -EINVAL; +diff --git a/arch/loongarch/kvm/intc/pch_pic.c b/arch/loongarch/kvm/intc/pch_pic.c +index 08fce845f66803..ef5044796b7a6e 100644 +--- a/arch/loongarch/kvm/intc/pch_pic.c ++++ b/arch/loongarch/kvm/intc/pch_pic.c +@@ -195,6 +195,11 @@ static int kvm_pch_pic_read(struct kvm_vcpu *vcpu, + return -EINVAL; + } + ++ if (addr & (len - 1)) { ++ kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len); ++ return -EINVAL; ++ } ++ + /* statistics of pch pic reading */ + vcpu->kvm->stat.pch_pic_read_exits++; + ret = loongarch_pch_pic_read(s, addr, len, val); +@@ -302,6 +307,11 @@ static int kvm_pch_pic_write(struct kvm_vcpu *vcpu, + return -EINVAL; + } + ++ if (addr & (len - 1)) { ++ kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len); ++ return -EINVAL; ++ } ++ + /* statistics of pch pic writing */ + vcpu->kvm->stat.pch_pic_write_exits++; + ret = loongarch_pch_pic_write(s, addr, len, val); +diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c +index 5af32ec62cb16a..ca35c01fa36382 100644 +--- a/arch/loongarch/kvm/vcpu.c ++++ b/arch/loongarch/kvm/vcpu.c +@@ -1277,9 +1277,11 @@ int kvm_own_lbt(struct kvm_vcpu *vcpu) + return -EINVAL; + + preempt_disable(); +- set_csr_euen(CSR_EUEN_LBTEN); +- _restore_lbt(&vcpu->arch.lbt); +- vcpu->arch.aux_inuse |= KVM_LARCH_LBT; ++ if (!(vcpu->arch.aux_inuse & KVM_LARCH_LBT)) { ++ set_csr_euen(CSR_EUEN_LBTEN); ++ _restore_lbt(&vcpu->arch.lbt); ++ vcpu->arch.aux_inuse |= KVM_LARCH_LBT; ++ } + preempt_enable(); + + return 0; +diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S +index ba22bc2f3d6d86..d96685489aac98 100644 +--- a/arch/m68k/kernel/head.S ++++ b/arch/m68k/kernel/head.S +@@ -3400,6 +3400,7 @@ L(console_clear_loop): + + movel %d4,%d1 /* screen height in pixels */ + divul %a0@(FONT_DESC_HEIGHT),%d1 /* d1 = max num rows */ ++ subql #1,%d1 /* row range is 0 to num - 1 */ + + movel %d0,%a2@(Lconsole_struct_num_columns) + movel %d1,%a2@(Lconsole_struct_num_rows) +@@ -3546,15 +3547,14 @@ func_start console_putc,%a0/%a1/%d0-%d7 + cmpib #10,%d7 + jne L(console_not_lf) + movel %a0@(Lconsole_struct_cur_row),%d0 +- addil #1,%d0 +- movel %d0,%a0@(Lconsole_struct_cur_row) + movel %a0@(Lconsole_struct_num_rows),%d1 + cmpl %d1,%d0 + jcs 1f +- subil #1,%d0 +- movel 
%d0,%a0@(Lconsole_struct_cur_row) + console_scroll ++ jra L(console_exit) + 1: ++ addql #1,%d0 ++ movel %d0,%a0@(Lconsole_struct_cur_row) + jra L(console_exit) + + L(console_not_lf): +@@ -3581,12 +3581,6 @@ L(console_not_cr): + */ + L(console_not_home): + movel %a0@(Lconsole_struct_cur_column),%d0 +- addql #1,%a0@(Lconsole_struct_cur_column) +- movel %a0@(Lconsole_struct_num_columns),%d1 +- cmpl %d1,%d0 +- jcs 1f +- console_putc #'\n' /* recursion is OK! */ +-1: + movel %a0@(Lconsole_struct_cur_row),%d1 + + /* +@@ -3633,6 +3627,23 @@ L(console_do_font_scanline): + addq #1,%d1 + dbra %d7,L(console_read_char_scanline) + ++ /* ++ * Register usage in the code below: ++ * a0 = pointer to console globals ++ * d0 = cursor column ++ * d1 = cursor column limit ++ */ ++ ++ lea %pc@(L(console_globals)),%a0 ++ ++ movel %a0@(Lconsole_struct_cur_column),%d0 ++ addql #1,%d0 ++ movel %d0,%a0@(Lconsole_struct_cur_column) /* Update cursor pos */ ++ movel %a0@(Lconsole_struct_num_columns),%d1 ++ cmpl %d1,%d0 ++ jcs L(console_exit) ++ console_putc #'\n' /* Line wrap using tail recursion */ ++ + L(console_exit): + func_return console_putc + +diff --git a/arch/mips/lib/crypto/chacha-core.S b/arch/mips/lib/crypto/chacha-core.S +index 5755f69cfe0074..706aeb850fb0d6 100644 +--- a/arch/mips/lib/crypto/chacha-core.S ++++ b/arch/mips/lib/crypto/chacha-core.S +@@ -55,17 +55,13 @@ + #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ + #define MSB 0 + #define LSB 3 +-#define ROTx rotl +-#define ROTR(n) rotr n, 24 + #define CPU_TO_LE32(n) \ +- wsbh n; \ ++ wsbh n, n; \ + rotr n, 16; + #else + #define MSB 3 + #define LSB 0 +-#define ROTx rotr + #define CPU_TO_LE32(n) +-#define ROTR(n) + #endif + + #define FOR_EACH_WORD(x) \ +@@ -192,10 +188,10 @@ CONCAT3(.Lchacha_mips_xor_aligned_, PLUS_ONE(x), _b: ;) \ + xor X(W), X(B); \ + xor X(Y), X(C); \ + xor X(Z), X(D); \ +- rotl X(V), S; \ +- rotl X(W), S; \ +- rotl X(Y), S; \ +- rotl X(Z), S; ++ rotr X(V), 32 - S; \ ++ rotr X(W), 32 - S; \ ++ rotr X(Y), 32 - S; \ ++ rotr X(Z), 32 - S; + + .text + .set reorder +@@ -372,21 +368,19 @@ chacha_crypt_arch: + /* First byte */ + lbu T1, 0(IN) + addiu $at, BYTES, 1 +- CPU_TO_LE32(SAVED_X) +- ROTR(SAVED_X) + xor T1, SAVED_X + sb T1, 0(OUT) + beqz $at, .Lchacha_mips_xor_done + /* Second byte */ + lbu T1, 1(IN) + addiu $at, BYTES, 2 +- ROTx SAVED_X, 8 ++ rotr SAVED_X, 8 + xor T1, SAVED_X + sb T1, 1(OUT) + beqz $at, .Lchacha_mips_xor_done + /* Third byte */ + lbu T1, 2(IN) +- ROTx SAVED_X, 8 ++ rotr SAVED_X, 8 + xor T1, SAVED_X + sb T1, 2(OUT) + b .Lchacha_mips_xor_done +diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile +index 9cd9aa3d16f29a..48ae3c79557a51 100644 +--- a/arch/parisc/Makefile ++++ b/arch/parisc/Makefile +@@ -39,7 +39,9 @@ endif + + export LD_BFD + +-# Set default 32 bits cross compilers for vdso ++# Set default 32 bits cross compilers for vdso. ++# This means that for 64BIT, both the 64-bit tools and the 32-bit tools ++# need to be in the path. 
+ CC_ARCHES_32 = hppa hppa2.0 hppa1.1 + CC_SUFFIXES = linux linux-gnu unknown-linux-gnu suse-linux + CROSS32_COMPILE := $(call cc-cross-prefix, \ +diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h +index 1a86a4370b298a..2c139a4dbf4b86 100644 +--- a/arch/parisc/include/asm/pgtable.h ++++ b/arch/parisc/include/asm/pgtable.h +@@ -276,7 +276,7 @@ extern unsigned long *empty_zero_page; + #define pte_none(x) (pte_val(x) == 0) + #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) + #define pte_user(x) (pte_val(x) & _PAGE_USER) +-#define pte_clear(mm, addr, xp) set_pte(xp, __pte(0)) ++#define pte_clear(mm, addr, xp) set_pte_at((mm), (addr), (xp), __pte(0)) + + #define pmd_flag(x) (pmd_val(x) & PxD_FLAG_MASK) + #define pmd_address(x) ((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT) +@@ -392,6 +392,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + } + } + #define set_ptes set_ptes ++#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + + /* Used for deferring calls to flush_dcache_page() */ + +@@ -456,7 +457,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned + if (!pte_young(pte)) { + return 0; + } +- set_pte(ptep, pte_mkold(pte)); ++ set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte)); + return 1; + } + +@@ -466,7 +467,7 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *pt + struct mm_struct; + static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) + { +- set_pte(ptep, pte_wrprotect(*ptep)); ++ set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep)); + } + + #define pte_same(A,B) (pte_val(A) == pte_val(B)) +diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h +index 51f40eaf778065..1013eeba31e5bb 100644 +--- a/arch/parisc/include/asm/special_insns.h ++++ b/arch/parisc/include/asm/special_insns.h +@@ -32,6 +32,34 @@ + pa; \ + }) + ++/** ++ * prober_user() - Probe user read access ++ * @sr: Space regster. ++ * @va: Virtual address. ++ * ++ * Return: Non-zero if address is accessible. ++ * ++ * Due to the way _PAGE_READ is handled in TLB entries, we need ++ * a special check to determine whether a user address is accessible. ++ * The ldb instruction does the initial access check. If it is ++ * successful, the probe instruction checks user access rights. 
++ */ ++#define prober_user(sr, va) ({ \ ++ unsigned long read_allowed; \ ++ __asm__ __volatile__( \ ++ "copy %%r0,%0\n" \ ++ "8:\tldb 0(%%sr%1,%2),%%r0\n" \ ++ "\tproberi (%%sr%1,%2),%3,%0\n" \ ++ "9:\n" \ ++ ASM_EXCEPTIONTABLE_ENTRY(8b, 9b, \ ++ "or %%r0,%%r0,%%r0") \ ++ : "=&r" (read_allowed) \ ++ : "i" (sr), "r" (va), "i" (PRIV_USER) \ ++ : "memory" \ ++ ); \ ++ read_allowed; \ ++}) ++ + #define CR_EIEM 15 /* External Interrupt Enable Mask */ + #define CR_CR16 16 /* CR16 Interval Timer */ + #define CR_EIRR 23 /* External Interrupt Request Register */ +diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h +index 88d0ae5769dde5..6c531d2c847eb1 100644 +--- a/arch/parisc/include/asm/uaccess.h ++++ b/arch/parisc/include/asm/uaccess.h +@@ -42,9 +42,24 @@ + __gu_err; \ + }) + +-#define __get_user(val, ptr) \ +-({ \ +- __get_user_internal(SR_USER, val, ptr); \ ++#define __probe_user_internal(sr, error, ptr) \ ++({ \ ++ __asm__("\tproberi (%%sr%1,%2),%3,%0\n" \ ++ "\tcmpiclr,= 1,%0,%0\n" \ ++ "\tldi %4,%0\n" \ ++ : "=r"(error) \ ++ : "i"(sr), "r"(ptr), "i"(PRIV_USER), \ ++ "i"(-EFAULT)); \ ++}) ++ ++#define __get_user(val, ptr) \ ++({ \ ++ register long __gu_err; \ ++ \ ++ __gu_err = __get_user_internal(SR_USER, val, ptr); \ ++ if (likely(!__gu_err)) \ ++ __probe_user_internal(SR_USER, __gu_err, ptr); \ ++ __gu_err; \ + }) + + #define __get_user_asm(sr, val, ldx, ptr) \ +diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c +index db531e58d70ef0..37ca484cc49511 100644 +--- a/arch/parisc/kernel/cache.c ++++ b/arch/parisc/kernel/cache.c +@@ -429,7 +429,7 @@ static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr) + return ptep; + } + +-static inline bool pte_needs_flush(pte_t pte) ++static inline bool pte_needs_cache_flush(pte_t pte) + { + return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_NO_CACHE)) + == (_PAGE_PRESENT | _PAGE_ACCESSED); +@@ -630,7 +630,7 @@ static void flush_cache_page_if_present(struct vm_area_struct *vma, + ptep = get_ptep(vma->vm_mm, vmaddr); + if (ptep) { + pte = ptep_get(ptep); +- needs_flush = pte_needs_flush(pte); ++ needs_flush = pte_needs_cache_flush(pte); + pte_unmap(ptep); + } + if (needs_flush) +@@ -841,7 +841,7 @@ void flush_cache_vmap(unsigned long start, unsigned long end) + } + + vm = find_vm_area((void *)start); +- if (WARN_ON_ONCE(!vm)) { ++ if (!vm) { + flush_cache_all(); + return; + } +diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S +index ea57bcc21dc5fe..f4bf61a34701e5 100644 +--- a/arch/parisc/kernel/entry.S ++++ b/arch/parisc/kernel/entry.S +@@ -499,6 +499,12 @@ + * this happens is quite subtle, read below */ + .macro make_insert_tlb spc,pte,prot,tmp + space_to_prot \spc \prot /* create prot id from space */ ++ ++#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT ++ /* need to drop DMB bit, as it's used as SPECIAL flag */ ++ depi 0,_PAGE_SPECIAL_BIT,1,\pte ++#endif ++ + /* The following is the real subtlety. 
This is depositing + * T <-> _PAGE_REFTRAP + * D <-> _PAGE_DIRTY +@@ -511,17 +517,18 @@ + * Finally, _PAGE_READ goes in the top bit of PL1 (so we + * trigger an access rights trap in user space if the user + * tries to read an unreadable page */ +-#if _PAGE_SPECIAL_BIT == _PAGE_DMB_BIT +- /* need to drop DMB bit, as it's used as SPECIAL flag */ +- depi 0,_PAGE_SPECIAL_BIT,1,\pte +-#endif + depd \pte,8,7,\prot + + /* PAGE_USER indicates the page can be read with user privileges, + * so deposit X1|11 to PL1|PL2 (remember the upper bit of PL1 +- * contains _PAGE_READ) */ ++ * contains _PAGE_READ). While the kernel can't directly write ++ * user pages which have _PAGE_WRITE zero, it can read pages ++ * which have _PAGE_READ zero (PL <= PL1). Thus, the kernel ++ * exception fault handler doesn't trigger when reading pages ++ * that aren't user read accessible */ + extrd,u,*= \pte,_PAGE_USER_BIT+32,1,%r0 + depdi 7,11,3,\prot ++ + /* If we're a gateway page, drop PL2 back to zero for promotion + * to kernel privilege (so we can execute the page as kernel). + * Any privilege promotion page always denys read and write */ +diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S +index 0fa81bf1466b15..f58c4bccfbce0e 100644 +--- a/arch/parisc/kernel/syscall.S ++++ b/arch/parisc/kernel/syscall.S +@@ -613,6 +613,9 @@ lws_compare_and_swap32: + lws_compare_and_swap: + /* Trigger memory reference interruptions without writing to memory */ + 1: ldw 0(%r26), %r28 ++ proberi (%r26), PRIV_USER, %r28 ++ comb,=,n %r28, %r0, lws_fault /* backwards, likely not taken */ ++ nop + 2: stbys,e %r0, 0(%r26) + + /* Calculate 8-bit hash index from virtual address */ +@@ -767,6 +770,9 @@ cas2_lock_start: + copy %r26, %r28 + depi_safe 0, 31, 2, %r28 + 10: ldw 0(%r28), %r1 ++ proberi (%r28), PRIV_USER, %r1 ++ comb,=,n %r1, %r0, lws_fault /* backwards, likely not taken */ ++ nop + 11: stbys,e %r0, 0(%r28) + + /* Calculate 8-bit hash index from virtual address */ +@@ -951,41 +957,47 @@ atomic_xchg_begin: + + /* 8-bit exchange */ + 1: ldb 0(%r24), %r20 ++ proberi (%r24), PRIV_USER, %r20 ++ comb,=,n %r20, %r0, lws_fault /* backwards, likely not taken */ ++ nop + copy %r23, %r20 + depi_safe 0, 31, 2, %r20 + b atomic_xchg_start + 2: stbys,e %r0, 0(%r20) +- nop +- nop +- nop + + /* 16-bit exchange */ + 3: ldh 0(%r24), %r20 ++ proberi (%r24), PRIV_USER, %r20 ++ comb,=,n %r20, %r0, lws_fault /* backwards, likely not taken */ ++ nop + copy %r23, %r20 + depi_safe 0, 31, 2, %r20 + b atomic_xchg_start + 4: stbys,e %r0, 0(%r20) +- nop +- nop +- nop + + /* 32-bit exchange */ + 5: ldw 0(%r24), %r20 ++ proberi (%r24), PRIV_USER, %r20 ++ comb,=,n %r20, %r0, lws_fault /* backwards, likely not taken */ ++ nop + b atomic_xchg_start + 6: stbys,e %r0, 0(%r23) + nop + nop +- nop +- nop +- nop + + /* 64-bit exchange */ + #ifdef CONFIG_64BIT + 7: ldd 0(%r24), %r20 ++ proberi (%r24), PRIV_USER, %r20 ++ comb,=,n %r20, %r0, lws_fault /* backwards, likely not taken */ ++ nop + 8: stdby,e %r0, 0(%r23) + #else + 7: ldw 0(%r24), %r20 + 8: ldw 4(%r24), %r20 ++ proberi (%r24), PRIV_USER, %r20 ++ comb,=,n %r20, %r0, lws_fault /* backwards, likely not taken */ ++ nop + copy %r23, %r20 + depi_safe 0, 31, 2, %r20 + 9: stbys,e %r0, 0(%r20) +diff --git a/arch/parisc/lib/memcpy.c b/arch/parisc/lib/memcpy.c +index 5fc0c852c84c8d..69d65ffab31263 100644 +--- a/arch/parisc/lib/memcpy.c ++++ b/arch/parisc/lib/memcpy.c +@@ -12,6 +12,7 @@ + #include + #include + #include ++#include + + #define get_user_space() mfsp(SR_USER) + #define get_kernel_space() 
SR_KERNEL +@@ -32,9 +33,25 @@ EXPORT_SYMBOL(raw_copy_to_user); + unsigned long raw_copy_from_user(void *dst, const void __user *src, + unsigned long len) + { ++ unsigned long start = (unsigned long) src; ++ unsigned long end = start + len; ++ unsigned long newlen = len; ++ + mtsp(get_user_space(), SR_TEMP1); + mtsp(get_kernel_space(), SR_TEMP2); +- return pa_memcpy(dst, (void __force *)src, len); ++ ++ /* Check region is user accessible */ ++ if (start) ++ while (start < end) { ++ if (!prober_user(SR_TEMP1, start)) { ++ newlen = (start - (unsigned long) src); ++ break; ++ } ++ start += PAGE_SIZE; ++ /* align to page boundry which may have different permission */ ++ start = PAGE_ALIGN_DOWN(start); ++ } ++ return len - newlen + pa_memcpy(dst, (void __force *)src, newlen); + } + EXPORT_SYMBOL(raw_copy_from_user); + +diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c +index c39de84e98b051..f1785640b049b5 100644 +--- a/arch/parisc/mm/fault.c ++++ b/arch/parisc/mm/fault.c +@@ -363,6 +363,10 @@ void do_page_fault(struct pt_regs *regs, unsigned long code, + mmap_read_unlock(mm); + + bad_area_nosemaphore: ++ if (!user_mode(regs) && fixup_exception(regs)) { ++ return; ++ } ++ + if (user_mode(regs)) { + int signo, si_code; + +diff --git a/arch/s390/boot/vmem.c b/arch/s390/boot/vmem.c +index 1d073acd05a7b8..cea3de4dce8c32 100644 +--- a/arch/s390/boot/vmem.c ++++ b/arch/s390/boot/vmem.c +@@ -530,6 +530,9 @@ void setup_vmem(unsigned long kernel_start, unsigned long kernel_end, unsigned l + lowcore_address + sizeof(struct lowcore), + POPULATE_LOWCORE); + for_each_physmem_usable_range(i, &start, &end) { ++ /* Do not map lowcore with identity mapping */ ++ if (!start) ++ start = sizeof(struct lowcore); + pgtable_populate((unsigned long)__identity_va(start), + (unsigned long)__identity_va(end), + POPULATE_IDENTITY); +diff --git a/arch/s390/hypfs/hypfs_dbfs.c b/arch/s390/hypfs/hypfs_dbfs.c +index 5d9effb0867cde..41a0d2066fa002 100644 +--- a/arch/s390/hypfs/hypfs_dbfs.c ++++ b/arch/s390/hypfs/hypfs_dbfs.c +@@ -6,6 +6,7 @@ + * Author(s): Michael Holzheu + */ + ++#include + #include + #include "hypfs.h" + +@@ -66,23 +67,27 @@ static long dbfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + long rc; + + mutex_lock(&df->lock); +- if (df->unlocked_ioctl) +- rc = df->unlocked_ioctl(file, cmd, arg); +- else +- rc = -ENOTTY; ++ rc = df->unlocked_ioctl(file, cmd, arg); + mutex_unlock(&df->lock); + return rc; + } + +-static const struct file_operations dbfs_ops = { ++static const struct file_operations dbfs_ops_ioctl = { + .read = dbfs_read, + .unlocked_ioctl = dbfs_ioctl, + }; + ++static const struct file_operations dbfs_ops = { ++ .read = dbfs_read, ++}; ++ + void hypfs_dbfs_create_file(struct hypfs_dbfs_file *df) + { +- df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, +- &dbfs_ops); ++ const struct file_operations *fops = &dbfs_ops; ++ ++ if (df->unlocked_ioctl && !security_locked_down(LOCKDOWN_DEBUGFS)) ++ fops = &dbfs_ops_ioctl; ++ df->dentry = debugfs_create_file(df->name, 0400, dbfs_dir, df, fops); + mutex_init(&df->lock); + } + +diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c +index f1b6d40154e352..f1adfba1a76ea9 100644 +--- a/arch/x86/crypto/aegis128-aesni-glue.c ++++ b/arch/x86/crypto/aegis128-aesni-glue.c +@@ -104,10 +104,12 @@ static void crypto_aegis128_aesni_process_ad( + } + } + +-static __always_inline void ++static __always_inline int + crypto_aegis128_aesni_process_crypt(struct aegis_state *state, + struct skcipher_walk 
*walk, bool enc) + { ++ int err = 0; ++ + while (walk->nbytes >= AEGIS128_BLOCK_SIZE) { + if (enc) + aegis128_aesni_enc(state, walk->src.virt.addr, +@@ -119,7 +121,10 @@ crypto_aegis128_aesni_process_crypt(struct aegis_state *state, + walk->dst.virt.addr, + round_down(walk->nbytes, + AEGIS128_BLOCK_SIZE)); +- skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE); ++ kernel_fpu_end(); ++ err = skcipher_walk_done(walk, ++ walk->nbytes % AEGIS128_BLOCK_SIZE); ++ kernel_fpu_begin(); + } + + if (walk->nbytes) { +@@ -131,8 +136,11 @@ crypto_aegis128_aesni_process_crypt(struct aegis_state *state, + aegis128_aesni_dec_tail(state, walk->src.virt.addr, + walk->dst.virt.addr, + walk->nbytes); +- skcipher_walk_done(walk, 0); ++ kernel_fpu_end(); ++ err = skcipher_walk_done(walk, 0); ++ kernel_fpu_begin(); + } ++ return err; + } + + static struct aegis_ctx *crypto_aegis128_aesni_ctx(struct crypto_aead *aead) +@@ -165,7 +173,7 @@ static int crypto_aegis128_aesni_setauthsize(struct crypto_aead *tfm, + return 0; + } + +-static __always_inline void ++static __always_inline int + crypto_aegis128_aesni_crypt(struct aead_request *req, + struct aegis_block *tag_xor, + unsigned int cryptlen, bool enc) +@@ -174,20 +182,24 @@ crypto_aegis128_aesni_crypt(struct aead_request *req, + struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm); + struct skcipher_walk walk; + struct aegis_state state; ++ int err; + + if (enc) +- skcipher_walk_aead_encrypt(&walk, req, true); ++ err = skcipher_walk_aead_encrypt(&walk, req, false); + else +- skcipher_walk_aead_decrypt(&walk, req, true); ++ err = skcipher_walk_aead_decrypt(&walk, req, false); ++ if (err) ++ return err; + + kernel_fpu_begin(); + + aegis128_aesni_init(&state, &ctx->key, req->iv); + crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen); +- crypto_aegis128_aesni_process_crypt(&state, &walk, enc); +- aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen); +- ++ err = crypto_aegis128_aesni_process_crypt(&state, &walk, enc); ++ if (err == 0) ++ aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen); + kernel_fpu_end(); ++ return err; + } + + static int crypto_aegis128_aesni_encrypt(struct aead_request *req) +@@ -196,8 +208,11 @@ static int crypto_aegis128_aesni_encrypt(struct aead_request *req) + struct aegis_block tag = {}; + unsigned int authsize = crypto_aead_authsize(tfm); + unsigned int cryptlen = req->cryptlen; ++ int err; + +- crypto_aegis128_aesni_crypt(req, &tag, cryptlen, true); ++ err = crypto_aegis128_aesni_crypt(req, &tag, cryptlen, true); ++ if (err) ++ return err; + + scatterwalk_map_and_copy(tag.bytes, req->dst, + req->assoclen + cryptlen, authsize, 1); +@@ -212,11 +227,14 @@ static int crypto_aegis128_aesni_decrypt(struct aead_request *req) + struct aegis_block tag; + unsigned int authsize = crypto_aead_authsize(tfm); + unsigned int cryptlen = req->cryptlen - authsize; ++ int err; + + scatterwalk_map_and_copy(tag.bytes, req->src, + req->assoclen + cryptlen, authsize, 0); + +- crypto_aegis128_aesni_crypt(req, &tag, cryptlen, false); ++ err = crypto_aegis128_aesni_crypt(req, &tag, cryptlen, false); ++ if (err) ++ return err; + + return crypto_memneq(tag.bytes, zeros.bytes, authsize) ? 
-EBADMSG : 0; + } +diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h +index 59a62c3780a2f2..a16d4631547ce1 100644 +--- a/arch/x86/include/asm/xen/hypercall.h ++++ b/arch/x86/include/asm/xen/hypercall.h +@@ -94,12 +94,13 @@ DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func); + #ifdef MODULE + #define __ADDRESSABLE_xen_hypercall + #else +-#define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall) ++#define __ADDRESSABLE_xen_hypercall \ ++ __stringify(.global STATIC_CALL_KEY(xen_hypercall);) + #endif + + #define __HYPERCALL \ + __ADDRESSABLE_xen_hypercall \ +- "call __SCT__xen_hypercall" ++ __stringify(call STATIC_CALL_TRAMP(xen_hypercall)) + + #define __HYPERCALL_ENTRY(x) "a" (x) + +diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c +index 329ee185d8ccaf..9cc1738f59cf56 100644 +--- a/arch/x86/kernel/cpu/amd.c ++++ b/arch/x86/kernel/cpu/amd.c +@@ -1324,8 +1324,8 @@ static const char * const s5_reset_reason_txt[] = { + + static __init int print_s5_reset_status_mmio(void) + { +- unsigned long value; + void __iomem *addr; ++ u32 value; + int i; + + if (!cpu_feature_enabled(X86_FEATURE_ZEN)) +@@ -1338,12 +1338,16 @@ static __init int print_s5_reset_status_mmio(void) + value = ioread32(addr); + iounmap(addr); + ++ /* Value with "all bits set" is an error response and should be ignored. */ ++ if (value == U32_MAX) ++ return 0; ++ + for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) { + if (!(value & BIT(i))) + continue; + + if (s5_reset_reason_txt[i]) { +- pr_info("x86/amd: Previous system reset reason [0x%08lx]: %s\n", ++ pr_info("x86/amd: Previous system reset reason [0x%08x]: %s\n", + value, s5_reset_reason_txt[i]); + } + } +diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c +index 2154f12766fb5a..1fda6c3a2b65a7 100644 +--- a/arch/x86/kernel/cpu/hygon.c ++++ b/arch/x86/kernel/cpu/hygon.c +@@ -16,6 +16,7 @@ + #include + #include + #include ++#include + + #include "cpu.h" + +@@ -117,6 +118,8 @@ static void bsp_init_hygon(struct cpuinfo_x86 *c) + x86_amd_ls_cfg_ssbd_mask = 1ULL << 10; + } + } ++ ++ resctrl_cpu_detect(c); + } + + static void early_init_hygon(struct cpuinfo_x86 *c) +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index d68da9e92e1eee..9483c529c95823 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -7229,22 +7229,16 @@ static void bfq_init_root_group(struct bfq_group *root_group, + root_group->sched_data.bfq_class_idle_last_service = jiffies; + } + +-static int bfq_init_queue(struct request_queue *q, struct elevator_type *e) ++static int bfq_init_queue(struct request_queue *q, struct elevator_queue *eq) + { + struct bfq_data *bfqd; +- struct elevator_queue *eq; + unsigned int i; + struct blk_independent_access_ranges *ia_ranges = q->disk->ia_ranges; + +- eq = elevator_alloc(q, e); +- if (!eq) +- return -ENOMEM; +- + bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node); +- if (!bfqd) { +- kobject_put(&eq->kobj); ++ if (!bfqd) + return -ENOMEM; +- } ++ + eq->elevator_data = bfqd; + + spin_lock_irq(&q->queue_lock); +@@ -7402,7 +7396,6 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e) + + out_free: + kfree(bfqd); +- kobject_put(&eq->kobj); + return -ENOMEM; + } + +diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c +index 29b3540dd1804d..bdcb27ab560682 100644 +--- a/block/blk-mq-debugfs.c ++++ b/block/blk-mq-debugfs.c +@@ -95,6 +95,7 @@ static const char *const blk_queue_flag_name[] = { + QUEUE_FLAG_NAME(SQ_SCHED), + 
QUEUE_FLAG_NAME(DISABLE_WBT_DEF), + QUEUE_FLAG_NAME(NO_ELV_SWITCH), ++ QUEUE_FLAG_NAME(QOS_ENABLED), + }; + #undef QUEUE_FLAG_NAME + +diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c +index 55a0fd10514796..e2ce4a28e6c9e0 100644 +--- a/block/blk-mq-sched.c ++++ b/block/blk-mq-sched.c +@@ -374,64 +374,17 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq, + } + EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge); + +-static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q, +- struct blk_mq_hw_ctx *hctx, +- unsigned int hctx_idx) +-{ +- if (blk_mq_is_shared_tags(q->tag_set->flags)) { +- hctx->sched_tags = q->sched_shared_tags; +- return 0; +- } +- +- hctx->sched_tags = blk_mq_alloc_map_and_rqs(q->tag_set, hctx_idx, +- q->nr_requests); +- +- if (!hctx->sched_tags) +- return -ENOMEM; +- return 0; +-} +- +-static void blk_mq_exit_sched_shared_tags(struct request_queue *queue) +-{ +- blk_mq_free_rq_map(queue->sched_shared_tags); +- queue->sched_shared_tags = NULL; +-} +- + /* called in queue's release handler, tagset has gone away */ + static void blk_mq_sched_tags_teardown(struct request_queue *q, unsigned int flags) + { + struct blk_mq_hw_ctx *hctx; + unsigned long i; + +- queue_for_each_hw_ctx(q, hctx, i) { +- if (hctx->sched_tags) { +- if (!blk_mq_is_shared_tags(flags)) +- blk_mq_free_rq_map(hctx->sched_tags); +- hctx->sched_tags = NULL; +- } +- } ++ queue_for_each_hw_ctx(q, hctx, i) ++ hctx->sched_tags = NULL; + + if (blk_mq_is_shared_tags(flags)) +- blk_mq_exit_sched_shared_tags(q); +-} +- +-static int blk_mq_init_sched_shared_tags(struct request_queue *queue) +-{ +- struct blk_mq_tag_set *set = queue->tag_set; +- +- /* +- * Set initial depth at max so that we don't need to reallocate for +- * updating nr_requests. +- */ +- queue->sched_shared_tags = blk_mq_alloc_map_and_rqs(set, +- BLK_MQ_NO_HCTX_IDX, +- MAX_SCHED_RQ); +- if (!queue->sched_shared_tags) +- return -ENOMEM; +- +- blk_mq_tag_update_sched_shared_tags(queue); +- +- return 0; ++ q->sched_shared_tags = NULL; + } + + void blk_mq_sched_reg_debugfs(struct request_queue *q) +@@ -458,8 +411,140 @@ void blk_mq_sched_unreg_debugfs(struct request_queue *q) + mutex_unlock(&q->debugfs_mutex); + } + ++void blk_mq_free_sched_tags(struct elevator_tags *et, ++ struct blk_mq_tag_set *set) ++{ ++ unsigned long i; ++ ++ /* Shared tags are stored at index 0 in @tags. */ ++ if (blk_mq_is_shared_tags(set->flags)) ++ blk_mq_free_map_and_rqs(set, et->tags[0], BLK_MQ_NO_HCTX_IDX); ++ else { ++ for (i = 0; i < et->nr_hw_queues; i++) ++ blk_mq_free_map_and_rqs(set, et->tags[i], i); ++ } ++ ++ kfree(et); ++} ++ ++void blk_mq_free_sched_tags_batch(struct xarray *et_table, ++ struct blk_mq_tag_set *set) ++{ ++ struct request_queue *q; ++ struct elevator_tags *et; ++ ++ lockdep_assert_held_write(&set->update_nr_hwq_lock); ++ ++ list_for_each_entry(q, &set->tag_list, tag_set_list) { ++ /* ++ * Accessing q->elevator without holding q->elevator_lock is ++ * safe because we're holding here set->update_nr_hwq_lock in ++ * the writer context. So, scheduler update/switch code (which ++ * acquires the same lock but in the reader context) can't run ++ * concurrently. 
++ */ ++ if (q->elevator) { ++ et = xa_load(et_table, q->id); ++ if (unlikely(!et)) ++ WARN_ON_ONCE(1); ++ else ++ blk_mq_free_sched_tags(et, set); ++ } ++ } ++} ++ ++struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set, ++ unsigned int nr_hw_queues) ++{ ++ unsigned int nr_tags; ++ int i; ++ struct elevator_tags *et; ++ gfp_t gfp = GFP_NOIO | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY; ++ ++ if (blk_mq_is_shared_tags(set->flags)) ++ nr_tags = 1; ++ else ++ nr_tags = nr_hw_queues; ++ ++ et = kmalloc(sizeof(struct elevator_tags) + ++ nr_tags * sizeof(struct blk_mq_tags *), gfp); ++ if (!et) ++ return NULL; ++ /* ++ * Default to double of smaller one between hw queue_depth and ++ * 128, since we don't split into sync/async like the old code ++ * did. Additionally, this is a per-hw queue depth. ++ */ ++ et->nr_requests = 2 * min_t(unsigned int, set->queue_depth, ++ BLKDEV_DEFAULT_RQ); ++ et->nr_hw_queues = nr_hw_queues; ++ ++ if (blk_mq_is_shared_tags(set->flags)) { ++ /* Shared tags are stored at index 0 in @tags. */ ++ et->tags[0] = blk_mq_alloc_map_and_rqs(set, BLK_MQ_NO_HCTX_IDX, ++ MAX_SCHED_RQ); ++ if (!et->tags[0]) ++ goto out; ++ } else { ++ for (i = 0; i < et->nr_hw_queues; i++) { ++ et->tags[i] = blk_mq_alloc_map_and_rqs(set, i, ++ et->nr_requests); ++ if (!et->tags[i]) ++ goto out_unwind; ++ } ++ } ++ ++ return et; ++out_unwind: ++ while (--i >= 0) ++ blk_mq_free_map_and_rqs(set, et->tags[i], i); ++out: ++ kfree(et); ++ return NULL; ++} ++ ++int blk_mq_alloc_sched_tags_batch(struct xarray *et_table, ++ struct blk_mq_tag_set *set, unsigned int nr_hw_queues) ++{ ++ struct request_queue *q; ++ struct elevator_tags *et; ++ gfp_t gfp = GFP_NOIO | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY; ++ ++ lockdep_assert_held_write(&set->update_nr_hwq_lock); ++ ++ list_for_each_entry(q, &set->tag_list, tag_set_list) { ++ /* ++ * Accessing q->elevator without holding q->elevator_lock is ++ * safe because we're holding here set->update_nr_hwq_lock in ++ * the writer context. So, scheduler update/switch code (which ++ * acquires the same lock but in the reader context) can't run ++ * concurrently. ++ */ ++ if (q->elevator) { ++ et = blk_mq_alloc_sched_tags(set, nr_hw_queues); ++ if (!et) ++ goto out_unwind; ++ if (xa_insert(et_table, q->id, et, gfp)) ++ goto out_free_tags; ++ } ++ } ++ return 0; ++out_free_tags: ++ blk_mq_free_sched_tags(et, set); ++out_unwind: ++ list_for_each_entry_continue_reverse(q, &set->tag_list, tag_set_list) { ++ if (q->elevator) { ++ et = xa_load(et_table, q->id); ++ if (et) ++ blk_mq_free_sched_tags(et, set); ++ } ++ } ++ return -ENOMEM; ++} ++ + /* caller must have a reference to @e, will grab another one if successful */ +-int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) ++int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e, ++ struct elevator_tags *et) + { + unsigned int flags = q->tag_set->flags; + struct blk_mq_hw_ctx *hctx; +@@ -467,36 +552,33 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) + unsigned long i; + int ret; + +- /* +- * Default to double of smaller one between hw queue_depth and 128, +- * since we don't split into sync/async like the old code did. +- * Additionally, this is a per-hw queue depth. 
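The blk_mq_alloc_sched_tags() hunk above sizes struct elevator_tags by hand: a fixed header plus nr_tags trailing blk_mq_tags pointers in a flexible array. A minimal standalone sketch of that allocation idiom follows; the names are illustrative, not the kernel's, and kernel code would normally use the struct_size() helper to make the same size computation overflow-safe.

    #include <stdlib.h>

    struct tag_table {
            unsigned int nr_tags;   /* number of pointer slots that follow */
            void *tags[];           /* flexible array member, sized at allocation */
    };

    /* Allocate a zeroed tag_table with room for nr_tags slot pointers. */
    struct tag_table *tag_table_alloc(unsigned int nr_tags)
    {
            struct tag_table *t;

            t = calloc(1, sizeof(*t) + nr_tags * sizeof(t->tags[0]));
            if (!t)
                    return NULL;
            t->nr_tags = nr_tags;
            return t;
    }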
+- */ +- q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth, +- BLKDEV_DEFAULT_RQ); ++ eq = elevator_alloc(q, e, et); ++ if (!eq) ++ return -ENOMEM; ++ ++ q->nr_requests = et->nr_requests; + + if (blk_mq_is_shared_tags(flags)) { +- ret = blk_mq_init_sched_shared_tags(q); +- if (ret) +- return ret; ++ /* Shared tags are stored at index 0 in @et->tags. */ ++ q->sched_shared_tags = et->tags[0]; ++ blk_mq_tag_update_sched_shared_tags(q); + } + + queue_for_each_hw_ctx(q, hctx, i) { +- ret = blk_mq_sched_alloc_map_and_rqs(q, hctx, i); +- if (ret) +- goto err_free_map_and_rqs; ++ if (blk_mq_is_shared_tags(flags)) ++ hctx->sched_tags = q->sched_shared_tags; ++ else ++ hctx->sched_tags = et->tags[i]; + } + +- ret = e->ops.init_sched(q, e); ++ ret = e->ops.init_sched(q, eq); + if (ret) +- goto err_free_map_and_rqs; ++ goto out; + + queue_for_each_hw_ctx(q, hctx, i) { + if (e->ops.init_hctx) { + ret = e->ops.init_hctx(hctx, i); + if (ret) { +- eq = q->elevator; +- blk_mq_sched_free_rqs(q); + blk_mq_exit_sched(q, eq); + kobject_put(&eq->kobj); + return ret; +@@ -505,10 +587,9 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e) + } + return 0; + +-err_free_map_and_rqs: +- blk_mq_sched_free_rqs(q); ++out: + blk_mq_sched_tags_teardown(q, flags); +- ++ kobject_put(&eq->kobj); + q->elevator = NULL; + return ret; + } +diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h +index 1326526bb7338c..b554e1d559508c 100644 +--- a/block/blk-mq-sched.h ++++ b/block/blk-mq-sched.h +@@ -18,10 +18,20 @@ void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx); + + void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx); + +-int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e); ++int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e, ++ struct elevator_tags *et); + void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e); + void blk_mq_sched_free_rqs(struct request_queue *q); + ++struct elevator_tags *blk_mq_alloc_sched_tags(struct blk_mq_tag_set *set, ++ unsigned int nr_hw_queues); ++int blk_mq_alloc_sched_tags_batch(struct xarray *et_table, ++ struct blk_mq_tag_set *set, unsigned int nr_hw_queues); ++void blk_mq_free_sched_tags(struct elevator_tags *et, ++ struct blk_mq_tag_set *set); ++void blk_mq_free_sched_tags_batch(struct xarray *et_table, ++ struct blk_mq_tag_set *set); ++ + static inline void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx) + { + if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 32d11305d51bb2..355db0abe44b86 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -4972,12 +4972,13 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr) + * Switch back to the elevator type stored in the xarray. + */ + static void blk_mq_elv_switch_back(struct request_queue *q, +- struct xarray *elv_tbl) ++ struct xarray *elv_tbl, struct xarray *et_tbl) + { + struct elevator_type *e = xa_load(elv_tbl, q->id); ++ struct elevator_tags *t = xa_load(et_tbl, q->id); + + /* The elv_update_nr_hw_queues unfreezes the queue. */ +- elv_update_nr_hw_queues(q, e); ++ elv_update_nr_hw_queues(q, e, t); + + /* Drop the reference acquired in blk_mq_elv_switch_none. 
*/ + if (e) +@@ -5029,7 +5030,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, + int prev_nr_hw_queues = set->nr_hw_queues; + unsigned int memflags; + int i; +- struct xarray elv_tbl; ++ struct xarray elv_tbl, et_tbl; ++ bool queues_frozen = false; + + lockdep_assert_held(&set->tag_list_lock); + +@@ -5042,6 +5044,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, + + memflags = memalloc_noio_save(); + ++ xa_init(&et_tbl); ++ if (blk_mq_alloc_sched_tags_batch(&et_tbl, set, nr_hw_queues) < 0) ++ goto out_memalloc_restore; ++ + xa_init(&elv_tbl); + + list_for_each_entry(q, &set->tag_list, tag_set_list) { +@@ -5049,9 +5055,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, + blk_mq_sysfs_unregister_hctxs(q); + } + +- list_for_each_entry(q, &set->tag_list, tag_set_list) +- blk_mq_freeze_queue_nomemsave(q); +- + /* + * Switch IO scheduler to 'none', cleaning up the data associated + * with the previous scheduler. We will switch back once we are done +@@ -5061,6 +5064,9 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, + if (blk_mq_elv_switch_none(q, &elv_tbl)) + goto switch_back; + ++ list_for_each_entry(q, &set->tag_list, tag_set_list) ++ blk_mq_freeze_queue_nomemsave(q); ++ queues_frozen = true; + if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0) + goto switch_back; + +@@ -5084,8 +5090,12 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, + } + switch_back: + /* The blk_mq_elv_switch_back unfreezes queue for us. */ +- list_for_each_entry(q, &set->tag_list, tag_set_list) +- blk_mq_elv_switch_back(q, &elv_tbl); ++ list_for_each_entry(q, &set->tag_list, tag_set_list) { ++ /* switch_back expects queue to be frozen */ ++ if (!queues_frozen) ++ blk_mq_freeze_queue_nomemsave(q); ++ blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl); ++ } + + list_for_each_entry(q, &set->tag_list, tag_set_list) { + blk_mq_sysfs_register_hctxs(q); +@@ -5096,7 +5106,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, + } + + xa_destroy(&elv_tbl); +- ++ xa_destroy(&et_tbl); ++out_memalloc_restore: + memalloc_noio_restore(memflags); + + /* Free the excess tags when nr_hw_queues shrink. */ +diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c +index 848591fb3c57b6..654478dfbc2044 100644 +--- a/block/blk-rq-qos.c ++++ b/block/blk-rq-qos.c +@@ -2,8 +2,6 @@ + + #include "blk-rq-qos.h" + +-__read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos); +- + /* + * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded, + * false if 'v' + 1 would be bigger than 'below'. 
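The blk-rq-qos.c and blk-rq-qos.h hunks below drop the global block_rq_qos static branch in favour of a per-queue QUEUE_FLAG_QOS_ENABLED bit, so enabling QoS on one queue no longer lights up the fast-path check for every queue in the system. A small self-contained sketch of the resulting two-step guard, flag first and pointer second; the struct and helper are stand-ins invented for illustration, only the flag name comes from the patch.

    #include <stdbool.h>
    #include <stdio.h>

    #define QUEUE_FLAG_QOS_ENABLED 0   /* illustrative bit index */

    struct queue {
            unsigned long flags;       /* stand-in for q->queue_flags */
            void *rq_qos;              /* stand-in for the QoS policy list head */
    };

    /* Cheap per-queue flag test first, then the pointer check. */
    static bool qos_enabled(const struct queue *q)
    {
            return (q->flags & (1UL << QUEUE_FLAG_QOS_ENABLED)) && q->rq_qos;
    }

    int main(void)
    {
            struct queue q = { .flags = 1UL << QUEUE_FLAG_QOS_ENABLED };

            q.rq_qos = &q;             /* pretend a policy is attached */
            printf("qos enabled: %d\n", qos_enabled(&q));
            return 0;
    }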
+@@ -319,8 +317,8 @@ void rq_qos_exit(struct request_queue *q) + struct rq_qos *rqos = q->rq_qos; + q->rq_qos = rqos->next; + rqos->ops->exit(rqos); +- static_branch_dec(&block_rq_qos); + } ++ blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q); + mutex_unlock(&q->rq_qos_mutex); + } + +@@ -346,7 +344,7 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id, + goto ebusy; + rqos->next = q->rq_qos; + q->rq_qos = rqos; +- static_branch_inc(&block_rq_qos); ++ blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q); + + blk_mq_unfreeze_queue(q, memflags); + +@@ -377,6 +375,8 @@ void rq_qos_del(struct rq_qos *rqos) + break; + } + } ++ if (!q->rq_qos) ++ blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q); + blk_mq_unfreeze_queue(q, memflags); + + mutex_lock(&q->debugfs_mutex); +diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h +index 39749f4066fb10..1fe22000a3790e 100644 +--- a/block/blk-rq-qos.h ++++ b/block/blk-rq-qos.h +@@ -12,7 +12,6 @@ + #include "blk-mq-debugfs.h" + + struct blk_mq_debugfs_attr; +-extern struct static_key_false block_rq_qos; + + enum rq_qos_id { + RQ_QOS_WBT, +@@ -113,43 +112,55 @@ void __rq_qos_queue_depth_changed(struct rq_qos *rqos); + + static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) + __rq_qos_cleanup(q->rq_qos, bio); + } + + static inline void rq_qos_done(struct request_queue *q, struct request *rq) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos && +- !blk_rq_is_passthrough(rq)) ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos && !blk_rq_is_passthrough(rq)) + __rq_qos_done(q->rq_qos, rq); + } + + static inline void rq_qos_issue(struct request_queue *q, struct request *rq) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) + __rq_qos_issue(q->rq_qos, rq); + } + + static inline void rq_qos_requeue(struct request_queue *q, struct request *rq) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) + __rq_qos_requeue(q->rq_qos, rq); + } + + static inline void rq_qos_done_bio(struct bio *bio) + { +- if (static_branch_unlikely(&block_rq_qos) && +- bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) || +- bio_flagged(bio, BIO_QOS_MERGED))) { +- struct request_queue *q = bdev_get_queue(bio->bi_bdev); +- if (q->rq_qos) +- __rq_qos_done_bio(q->rq_qos, bio); +- } ++ struct request_queue *q; ++ ++ if (!bio->bi_bdev || (!bio_flagged(bio, BIO_QOS_THROTTLED) && ++ !bio_flagged(bio, BIO_QOS_MERGED))) ++ return; ++ ++ q = bdev_get_queue(bio->bi_bdev); ++ ++ /* ++ * If a bio has BIO_QOS_xxx set, it implicitly implies that ++ * q->rq_qos is present. So, we skip re-checking q->rq_qos ++ * here as an extra optimization and directly call ++ * __rq_qos_done_bio(). 
++ */ ++ __rq_qos_done_bio(q->rq_qos, bio); + } + + static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) { ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) { + bio_set_flag(bio, BIO_QOS_THROTTLED); + __rq_qos_throttle(q->rq_qos, bio); + } +@@ -158,14 +169,16 @@ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) + static inline void rq_qos_track(struct request_queue *q, struct request *rq, + struct bio *bio) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) + __rq_qos_track(q->rq_qos, rq, bio); + } + + static inline void rq_qos_merge(struct request_queue *q, struct request *rq, + struct bio *bio) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) { ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) { + bio_set_flag(bio, BIO_QOS_MERGED); + __rq_qos_merge(q->rq_qos, rq, bio); + } +@@ -173,7 +186,8 @@ static inline void rq_qos_merge(struct request_queue *q, struct request *rq, + + static inline void rq_qos_queue_depth_changed(struct request_queue *q) + { +- if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) ++ if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && ++ q->rq_qos) + __rq_qos_queue_depth_changed(q->rq_qos); + } + +diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c +index de39746de18b48..7d1a03c36502f8 100644 +--- a/block/blk-sysfs.c ++++ b/block/blk-sysfs.c +@@ -876,9 +876,9 @@ int blk_register_queue(struct gendisk *disk) + + if (queue_is_mq(q)) + elevator_set_default(q); +- wbt_enable_default(disk); + + blk_queue_flag_set(QUEUE_FLAG_REGISTERED, q); ++ wbt_enable_default(disk); + + /* Now everything is ready and send out KOBJ_ADD uevent */ + kobject_uevent(&disk->queue_kobj, KOBJ_ADD); +diff --git a/block/blk.h b/block/blk.h +index 4746a7704856af..5d9ca8c951932e 100644 +--- a/block/blk.h ++++ b/block/blk.h +@@ -12,6 +12,7 @@ + #include "blk-crypto-internal.h" + + struct elevator_type; ++struct elevator_tags; + + #define BLK_DEV_MAX_SECTORS (LLONG_MAX >> 9) + #define BLK_MIN_SEGMENT_SIZE 4096 +@@ -322,7 +323,8 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list, + + bool blk_insert_flush(struct request *rq); + +-void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e); ++void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e, ++ struct elevator_tags *t); + void elevator_set_default(struct request_queue *q); + void elevator_set_none(struct request_queue *q); + +diff --git a/block/elevator.c b/block/elevator.c +index 88f8f36bed9818..fe96c6f4753ca2 100644 +--- a/block/elevator.c ++++ b/block/elevator.c +@@ -54,6 +54,8 @@ struct elv_change_ctx { + struct elevator_queue *old; + /* for registering new elevator */ + struct elevator_queue *new; ++ /* holds sched tags data */ ++ struct elevator_tags *et; + }; + + static DEFINE_SPINLOCK(elv_list_lock); +@@ -132,7 +134,7 @@ static struct elevator_type *elevator_find_get(const char *name) + static const struct kobj_type elv_ktype; + + struct elevator_queue *elevator_alloc(struct request_queue *q, +- struct elevator_type *e) ++ struct elevator_type *e, struct elevator_tags *et) + { + struct elevator_queue *eq; + +@@ -145,10 +147,10 @@ struct elevator_queue *elevator_alloc(struct request_queue *q, + kobject_init(&eq->kobj, &elv_ktype); + mutex_init(&eq->sysfs_lock); + 
hash_init(eq->hash); ++ eq->et = et; + + return eq; + } +-EXPORT_SYMBOL(elevator_alloc); + + static void elevator_release(struct kobject *kobj) + { +@@ -166,7 +168,6 @@ static void elevator_exit(struct request_queue *q) + lockdep_assert_held(&q->elevator_lock); + + ioc_clear_queue(q); +- blk_mq_sched_free_rqs(q); + + mutex_lock(&e->sysfs_lock); + blk_mq_exit_sched(q, e); +@@ -592,7 +593,7 @@ static int elevator_switch(struct request_queue *q, struct elv_change_ctx *ctx) + } + + if (new_e) { +- ret = blk_mq_init_sched(q, new_e); ++ ret = blk_mq_init_sched(q, new_e, ctx->et); + if (ret) + goto out_unfreeze; + ctx->new = q->elevator; +@@ -627,8 +628,10 @@ static void elv_exit_and_release(struct request_queue *q) + elevator_exit(q); + mutex_unlock(&q->elevator_lock); + blk_mq_unfreeze_queue(q, memflags); +- if (e) ++ if (e) { ++ blk_mq_free_sched_tags(e->et, q->tag_set); + kobject_put(&e->kobj); ++ } + } + + static int elevator_change_done(struct request_queue *q, +@@ -641,6 +644,7 @@ static int elevator_change_done(struct request_queue *q, + &ctx->old->flags); + + elv_unregister_queue(q, ctx->old); ++ blk_mq_free_sched_tags(ctx->old->et, q->tag_set); + kobject_put(&ctx->old->kobj); + if (enable_wbt) + wbt_enable_default(q->disk); +@@ -659,9 +663,16 @@ static int elevator_change_done(struct request_queue *q, + static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx) + { + unsigned int memflags; ++ struct blk_mq_tag_set *set = q->tag_set; + int ret = 0; + +- lockdep_assert_held(&q->tag_set->update_nr_hwq_lock); ++ lockdep_assert_held(&set->update_nr_hwq_lock); ++ ++ if (strncmp(ctx->name, "none", 4)) { ++ ctx->et = blk_mq_alloc_sched_tags(set, set->nr_hw_queues); ++ if (!ctx->et) ++ return -ENOMEM; ++ } + + memflags = blk_mq_freeze_queue(q); + /* +@@ -681,6 +692,11 @@ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx) + blk_mq_unfreeze_queue(q, memflags); + if (!ret) + ret = elevator_change_done(q, ctx); ++ /* ++ * Free sched tags if it's allocated but we couldn't switch elevator. ++ */ ++ if (ctx->et && !ctx->new) ++ blk_mq_free_sched_tags(ctx->et, set); + + return ret; + } +@@ -689,8 +705,10 @@ static int elevator_change(struct request_queue *q, struct elv_change_ctx *ctx) + * The I/O scheduler depends on the number of hardware queues, this forces a + * reattachment when nr_hw_queues changes. + */ +-void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e) ++void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e, ++ struct elevator_tags *t) + { ++ struct blk_mq_tag_set *set = q->tag_set; + struct elv_change_ctx ctx = {}; + int ret = -ENODEV; + +@@ -698,6 +716,7 @@ void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e) + + if (e && !blk_queue_dying(q) && blk_queue_registered(q)) { + ctx.name = e->elevator_name; ++ ctx.et = t; + + mutex_lock(&q->elevator_lock); + /* force to reattach elevator after nr_hw_queue is updated */ +@@ -707,6 +726,11 @@ void elv_update_nr_hw_queues(struct request_queue *q, struct elevator_type *e) + blk_mq_unfreeze_queue_nomemrestore(q); + if (!ret) + WARN_ON_ONCE(elevator_change_done(q, &ctx)); ++ /* ++ * Free sched tags if it's allocated but we couldn't switch elevator. 
++ */ ++ if (t && !ctx.new) ++ blk_mq_free_sched_tags(t, set); + } + + /* +diff --git a/block/elevator.h b/block/elevator.h +index a07ce773a38f79..adc5c157e17e51 100644 +--- a/block/elevator.h ++++ b/block/elevator.h +@@ -23,8 +23,17 @@ enum elv_merge { + struct blk_mq_alloc_data; + struct blk_mq_hw_ctx; + ++struct elevator_tags { ++ /* num. of hardware queues for which tags are allocated */ ++ unsigned int nr_hw_queues; ++ /* depth used while allocating tags */ ++ unsigned int nr_requests; ++ /* shared tag is stored at index 0 */ ++ struct blk_mq_tags *tags[]; ++}; ++ + struct elevator_mq_ops { +- int (*init_sched)(struct request_queue *, struct elevator_type *); ++ int (*init_sched)(struct request_queue *, struct elevator_queue *); + void (*exit_sched)(struct elevator_queue *); + int (*init_hctx)(struct blk_mq_hw_ctx *, unsigned int); + void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int); +@@ -113,6 +122,7 @@ struct request *elv_rqhash_find(struct request_queue *q, sector_t offset); + struct elevator_queue + { + struct elevator_type *type; ++ struct elevator_tags *et; + void *elevator_data; + struct kobject kobj; + struct mutex sysfs_lock; +@@ -152,8 +162,8 @@ ssize_t elv_iosched_show(struct gendisk *disk, char *page); + ssize_t elv_iosched_store(struct gendisk *disk, const char *page, size_t count); + + extern bool elv_bio_merge_ok(struct request *, struct bio *); +-extern struct elevator_queue *elevator_alloc(struct request_queue *, +- struct elevator_type *); ++struct elevator_queue *elevator_alloc(struct request_queue *, ++ struct elevator_type *, struct elevator_tags *); + + /* + * Helper functions. +diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c +index bfd9a40bb33d44..70cbc7b2deb40b 100644 +--- a/block/kyber-iosched.c ++++ b/block/kyber-iosched.c +@@ -399,20 +399,13 @@ static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q) + return ERR_PTR(ret); + } + +-static int kyber_init_sched(struct request_queue *q, struct elevator_type *e) ++static int kyber_init_sched(struct request_queue *q, struct elevator_queue *eq) + { + struct kyber_queue_data *kqd; +- struct elevator_queue *eq; +- +- eq = elevator_alloc(q, e); +- if (!eq) +- return -ENOMEM; + + kqd = kyber_queue_data_alloc(q); +- if (IS_ERR(kqd)) { +- kobject_put(&eq->kobj); ++ if (IS_ERR(kqd)) + return PTR_ERR(kqd); +- } + + blk_stat_enable_accounting(q); + +diff --git a/block/mq-deadline.c b/block/mq-deadline.c +index 9ab6c62566952b..b9b7cdf1d3c980 100644 +--- a/block/mq-deadline.c ++++ b/block/mq-deadline.c +@@ -554,20 +554,14 @@ static void dd_exit_sched(struct elevator_queue *e) + /* + * initialize elevator private data (deadline_data). 
+ */ +-static int dd_init_sched(struct request_queue *q, struct elevator_type *e) ++static int dd_init_sched(struct request_queue *q, struct elevator_queue *eq) + { + struct deadline_data *dd; +- struct elevator_queue *eq; + enum dd_prio prio; +- int ret = -ENOMEM; +- +- eq = elevator_alloc(q, e); +- if (!eq) +- return ret; + + dd = kzalloc_node(sizeof(*dd), GFP_KERNEL, q->node); + if (!dd) +- goto put_eq; ++ return -ENOMEM; + + eq->elevator_data = dd; + +@@ -594,10 +588,6 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e) + + q->elevator = eq; + return 0; +- +-put_eq: +- kobject_put(&eq->kobj); +- return ret; + } + + /* +diff --git a/crypto/deflate.c b/crypto/deflate.c +index fe8e4ad0fee106..21404515dc77ec 100644 +--- a/crypto/deflate.c ++++ b/crypto/deflate.c +@@ -48,9 +48,14 @@ static void *deflate_alloc_stream(void) + return ctx; + } + ++static void deflate_free_stream(void *ctx) ++{ ++ kvfree(ctx); ++} ++ + static struct crypto_acomp_streams deflate_streams = { + .alloc_ctx = deflate_alloc_stream, +- .cfree_ctx = kvfree, ++ .free_ctx = deflate_free_stream, + }; + + static int deflate_compress_one(struct acomp_req *req, +diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c +index a38b88baadf2ba..5722e4128d3cee 100644 +--- a/drivers/accel/habanalabs/gaudi2/gaudi2.c ++++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c +@@ -10437,7 +10437,7 @@ static int gaudi2_memset_device_memory(struct hl_device *hdev, u64 addr, u64 siz + (u64 *)(lin_dma_pkts_arr), DEBUGFS_WRITE64); + WREG32(sob_addr, 0); + +- kfree(lin_dma_pkts_arr); ++ kvfree(lin_dma_pkts_arr); + + return rc; + } +diff --git a/drivers/acpi/apei/einj-core.c b/drivers/acpi/apei/einj-core.c +index 9b041415a9d018..aded7d8f8cafdd 100644 +--- a/drivers/acpi/apei/einj-core.c ++++ b/drivers/acpi/apei/einj-core.c +@@ -842,7 +842,7 @@ static int __init einj_probe(struct faux_device *fdev) + return rc; + } + +-static void __exit einj_remove(struct faux_device *fdev) ++static void einj_remove(struct faux_device *fdev) + { + struct apei_exec_context ctx; + +@@ -864,15 +864,9 @@ static void __exit einj_remove(struct faux_device *fdev) + } + + static struct faux_device *einj_dev; +-/* +- * einj_remove() lives in .exit.text. For drivers registered via +- * platform_driver_probe() this is ok because they cannot get unbound at +- * runtime. So mark the driver struct with __refdata to prevent modpost +- * triggering a section mismatch warning. 
+- */
+-static struct faux_device_ops einj_device_ops __refdata = {
++static struct faux_device_ops einj_device_ops = {
+ .probe = einj_probe,
+- .remove = __exit_p(einj_remove),
++ .remove = einj_remove,
+ };
+ 
+ static int __init einj_init(void)
+diff --git a/drivers/acpi/pfr_update.c b/drivers/acpi/pfr_update.c
+index 031d1ba81b866d..08b9b2bc2d9779 100644
+--- a/drivers/acpi/pfr_update.c
++++ b/drivers/acpi/pfr_update.c
+@@ -310,7 +310,7 @@ static bool applicable_image(const void *data, struct pfru_update_cap_info *cap,
+ if (type == PFRU_CODE_INJECT_TYPE)
+ return payload_hdr->rt_ver >= cap->code_rt_version;
+ 
+- return payload_hdr->rt_ver >= cap->drv_rt_version;
++ return payload_hdr->svn_ver >= cap->drv_svn;
+ }
+ 
+ static void print_update_debug_info(struct pfru_updated_result *result,
+diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
+index e00536b495529b..120a2b7067fc7b 100644
+--- a/drivers/ata/Kconfig
++++ b/drivers/ata/Kconfig
+@@ -117,23 +117,39 @@ config SATA_AHCI
+ 
+ config SATA_MOBILE_LPM_POLICY
+ int "Default SATA Link Power Management policy"
+- range 0 4
++ range 0 5
+ default 3
+ depends on SATA_AHCI
+ help
+ Select the Default SATA Link Power Management (LPM) policy to use
+ for chipsets / "South Bridges" supporting low-power modes. Such
+ chipsets are ubiquitous across laptops, desktops and servers.
+-
+- The value set has the following meanings:
++ Each policy combines power saving states and features:
++ - Partial: The Phy logic is powered but is in a reduced power
++ state. The exit latency from this state is no longer than
++ 10us.
++ - Slumber: The Phy logic is powered but is in an even lower power
++ state. The exit latency from this state is potentially
++ longer, but no longer than 10ms.
++ - DevSleep: The Phy logic may be powered down. The exit latency from
++ this state is no longer than 20ms, unless otherwise
++ specified by DETO in the device Identify Device Data log.
++ - HIPM: Host Initiated Power Management (host automatically
++ transitions to partial and slumber).
++ - DIPM: Device Initiated Power Management (device automatically
++ transitions to partial and slumber).
++
++ The possible values for the default SATA link power management
++ policies are:
+ 0 => Keep firmware settings
+- 1 => Maximum performance
+- 2 => Medium power
+- 3 => Medium power with Device Initiated PM enabled
+- 4 => Minimum power
+-
+- Note "Minimum power" is known to cause issues, including disk
+- corruption, with some disks and should not be used.
++ 1 => No power savings (maximum performance)
++ 2 => HIPM (Partial)
++ 3 => HIPM (Partial) and DIPM (Partial and Slumber)
++ 4 => HIPM (Partial and DevSleep) and DIPM (Partial and Slumber)
++ 5 => HIPM (Slumber and DevSleep) and DIPM (Partial and Slumber)
++
++ Excluding the value 0, higher values represent policies with higher
++ power savings.
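For orientation, the 0 to 5 values in the help text above line up with enum ata_lpm_policy from include/linux/libata.h; the enum is reproduced here from the kernel tree purely as a reference, it is not part of this patch, and the trailing comments are the editor's summary of the help text.

    enum ata_lpm_policy {
            ATA_LPM_UNKNOWN,                /* 0: keep firmware settings */
            ATA_LPM_MAX_POWER,              /* 1: no power savings */
            ATA_LPM_MED_POWER,              /* 2: HIPM (Partial) */
            ATA_LPM_MED_POWER_WITH_DIPM,    /* 3: adds DIPM */
            ATA_LPM_MIN_POWER_WITH_PARTIAL, /* 4: Partial + DevSleep */
            ATA_LPM_MIN_POWER,              /* 5: Slumber + DevSleep */
    };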
+ + config SATA_AHCI_PLATFORM + tristate "Platform AHCI SATA support" +diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c +index a21c9895408dc5..0573d9b6363a52 100644 +--- a/drivers/ata/libata-scsi.c ++++ b/drivers/ata/libata-scsi.c +@@ -859,18 +859,14 @@ static void ata_to_sense_error(u8 drv_stat, u8 drv_err, u8 *sk, u8 *asc, + {0xFF, 0xFF, 0xFF, 0xFF}, // END mark + }; + static const unsigned char stat_table[][4] = { +- /* Must be first because BUSY means no other bits valid */ +- {0x80, ABORTED_COMMAND, 0x47, 0x00}, +- // Busy, fake parity for now +- {0x40, ILLEGAL_REQUEST, 0x21, 0x04}, +- // Device ready, unaligned write command +- {0x20, HARDWARE_ERROR, 0x44, 0x00}, +- // Device fault, internal target failure +- {0x08, ABORTED_COMMAND, 0x47, 0x00}, +- // Timed out in xfer, fake parity for now +- {0x04, RECOVERED_ERROR, 0x11, 0x00}, +- // Recovered ECC error Medium error, recovered +- {0xFF, 0xFF, 0xFF, 0xFF}, // END mark ++ /* Busy: must be first because BUSY means no other bits valid */ ++ { ATA_BUSY, ABORTED_COMMAND, 0x00, 0x00 }, ++ /* Device fault: INTERNAL TARGET FAILURE */ ++ { ATA_DF, HARDWARE_ERROR, 0x44, 0x00 }, ++ /* Corrected data error */ ++ { ATA_CORR, RECOVERED_ERROR, 0x00, 0x00 }, ++ ++ { 0xFF, 0xFF, 0xFF, 0xFF }, /* END mark */ + }; + + /* +@@ -942,6 +938,8 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc) + if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) { + ata_dev_dbg(dev, + "missing result TF: can't generate ATA PT sense data\n"); ++ if (qc->err_mask) ++ ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0); + return; + } + +@@ -996,8 +994,8 @@ static void ata_gen_ata_sense(struct ata_queued_cmd *qc) + + if (!(qc->flags & ATA_QCFLAG_RTF_FILLED)) { + ata_dev_dbg(dev, +- "missing result TF: can't generate sense data\n"); +- return; ++ "Missing result TF: reporting aborted command\n"); ++ goto aborted; + } + + /* Use ata_to_sense_error() to map status register bits +@@ -1008,13 +1006,15 @@ static void ata_gen_ata_sense(struct ata_queued_cmd *qc) + ata_to_sense_error(tf->status, tf->error, + &sense_key, &asc, &ascq); + ata_scsi_set_sense(dev, cmd, sense_key, asc, ascq); +- } else { +- /* Could not decode error */ +- ata_dev_warn(dev, "could not decode error status 0x%x err_mask 0x%x\n", +- tf->status, qc->err_mask); +- ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0); + return; + } ++ ++ /* Could not decode error */ ++ ata_dev_warn(dev, ++ "Could not decode error 0x%x, status 0x%x (err_mask=0x%x)\n", ++ tf->error, tf->status, qc->err_mask); ++aborted: ++ ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0); + } + + void ata_scsi_sdev_config(struct scsi_device *sdev) +@@ -3905,21 +3905,16 @@ static int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc, + /* Check cdl_ctrl */ + switch (buf[0] & 0x03) { + case 0: +- /* Disable CDL if it is enabled */ +- if (!(dev->flags & ATA_DFLAG_CDL_ENABLED)) +- return 0; ++ /* Disable CDL */ + ata_dev_dbg(dev, "Disabling CDL\n"); + cdl_action = 0; + dev->flags &= ~ATA_DFLAG_CDL_ENABLED; + break; + case 0x02: + /* +- * Enable CDL if not already enabled. Since this is mutually +- * exclusive with NCQ priority, allow this only if NCQ priority +- * is disabled. ++ * Enable CDL. Since CDL is mutually exclusive with NCQ ++ * priority, allow this only if NCQ priority is disabled. 
+ */
+- if (dev->flags & ATA_DFLAG_CDL_ENABLED)
+- return 0;
+ if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) {
+ ata_dev_err(dev,
+ "NCQ priority must be disabled to enable CDL\n");
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 1ef26216f9718a..5f9491ab6a43cc 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1191,10 +1191,12 @@ EXPORT_SYMBOL_GPL(__pm_runtime_resume);
+ *
+ * Return -EINVAL if runtime PM is disabled for @dev.
+ *
+- * Otherwise, if the runtime PM status of @dev is %RPM_ACTIVE and either
+- * @ign_usage_count is %true or the runtime PM usage counter of @dev is not
+- * zero, increment the usage counter of @dev and return 1. Otherwise, return 0
+- * without changing the usage counter.
++ * Otherwise, if its runtime PM status is %RPM_ACTIVE and (1) @ign_usage_count
++ * is set, or (2) @dev is not ignoring children and its active child count is
++ * nonzero, or (3) the runtime PM usage counter of @dev is not zero, increment
++ * the usage counter of @dev and return 1.
++ *
++ * Otherwise, return 0 without changing the usage counter.
+ *
+ * If @ign_usage_count is %true, this function can be used to prevent suspending
+ * the device when its runtime PM status is %RPM_ACTIVE.
+@@ -1216,7 +1218,8 @@ static int pm_runtime_get_conditional(struct device *dev, bool ign_usage_count)
+ retval = -EINVAL;
+ } else if (dev->power.runtime_status != RPM_ACTIVE) {
+ retval = 0;
+- } else if (ign_usage_count) {
++ } else if (ign_usage_count || (!dev->power.ignore_children &&
++ atomic_read(&dev->power.child_count) > 0)) {
+ retval = 1;
+ atomic_inc(&dev->power.usage_count);
+ } else {
+@@ -1249,10 +1252,16 @@ EXPORT_SYMBOL_GPL(pm_runtime_get_if_active);
+ * @dev: Target device.
+ *
+ * Increment the runtime PM usage counter of @dev if its runtime PM status is
+- * %RPM_ACTIVE and its runtime PM usage counter is greater than 0, in which case
+- * it returns 1. If the device is in a different state or its usage_count is 0,
+- * 0 is returned. -EINVAL is returned if runtime PM is disabled for the device,
+- * in which case also the usage_count will remain unmodified.
++ * %RPM_ACTIVE and its runtime PM usage counter is greater than 0 or it is not
++ * ignoring children and its active child count is nonzero. 1 is returned in
++ * this case.
++ *
++ * If @dev is in a different state or it is not in use (that is, its usage
++ * counter is 0, or it is ignoring children, or its active child count is 0),
++ * 0 is returned.
++ *
++ * -EINVAL is returned if runtime PM is disabled for the device, in which case
++ * also the usage counter of @dev is not updated.
+ */
+ int pm_runtime_get_if_in_use(struct device *dev)
+ {
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 4390fd571dbd15..a8c520dc09e19c 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -642,12 +642,7 @@ static int btmtk_usb_hci_wmt_sync(struct hci_dev *hdev,
+ * WMT command.
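The reworked pm_runtime_get_if_in_use() kernel-doc above is easiest to read against a call site. A hypothetical driver fragment showing the intended pattern (kernel context assumed; pm_runtime_get_if_in_use() and pm_runtime_put() are the real APIs, the surrounding function is invented for illustration):

    static void poke_hw_if_busy(struct device *dev)
    {
            /*
             * Returns 1, and takes a usage-count reference, only if the
             * device is RPM_ACTIVE and in use; returns 0 or -EINVAL
             * otherwise, leaving the usage count untouched.
             */
            if (pm_runtime_get_if_in_use(dev) <= 0)
                    return;

            /* ... safe to touch the hardware here ... */

            pm_runtime_put(dev);    /* balance the reference taken above */
    }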
+ */ + err = wait_on_bit_timeout(&data->flags, BTMTK_TX_WAIT_VND_EVT, +- TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT); +- if (err == -EINTR) { +- bt_dev_err(hdev, "Execution of wmt command interrupted"); +- clear_bit(BTMTK_TX_WAIT_VND_EVT, &data->flags); +- goto err_free_wc; +- } ++ TASK_UNINTERRUPTIBLE, HCI_INIT_TIMEOUT); + + if (err) { + bt_dev_err(hdev, "Execution of wmt command timed out"); +diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c +index efa3b6dddf4d2f..205d83ac069f15 100644 +--- a/drivers/bus/mhi/host/boot.c ++++ b/drivers/bus/mhi/host/boot.c +@@ -31,8 +31,8 @@ int mhi_rddm_prepare(struct mhi_controller *mhi_cntrl, + int ret; + + for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) { +- bhi_vec->dma_addr = mhi_buf->dma_addr; +- bhi_vec->size = mhi_buf->len; ++ bhi_vec->dma_addr = cpu_to_le64(mhi_buf->dma_addr); ++ bhi_vec->size = cpu_to_le64(mhi_buf->len); + } + + dev_dbg(dev, "BHIe programming for RDDM\n"); +@@ -431,8 +431,8 @@ static void mhi_firmware_copy_bhie(struct mhi_controller *mhi_cntrl, + while (remainder) { + to_cpy = min(remainder, mhi_buf->len); + memcpy(mhi_buf->buf, buf, to_cpy); +- bhi_vec->dma_addr = mhi_buf->dma_addr; +- bhi_vec->size = to_cpy; ++ bhi_vec->dma_addr = cpu_to_le64(mhi_buf->dma_addr); ++ bhi_vec->size = cpu_to_le64(to_cpy); + + buf += to_cpy; + remainder -= to_cpy; +diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h +index ce566f7d2e9240..1dbc3f736161d6 100644 +--- a/drivers/bus/mhi/host/internal.h ++++ b/drivers/bus/mhi/host/internal.h +@@ -25,8 +25,8 @@ struct mhi_ctxt { + }; + + struct bhi_vec_entry { +- u64 dma_addr; +- u64 size; ++ __le64 dma_addr; ++ __le64 size; + }; + + enum mhi_fw_load_type { +diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c +index 9bb0df43ceef1e..d7b266bafb1087 100644 +--- a/drivers/bus/mhi/host/main.c ++++ b/drivers/bus/mhi/host/main.c +@@ -602,7 +602,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl, + { + dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event); + struct mhi_ring_element *local_rp, *ev_tre; +- void *dev_rp; ++ void *dev_rp, *next_rp; + struct mhi_buf_info *buf_info; + u16 xfer_len; + +@@ -621,6 +621,16 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl, + result.dir = mhi_chan->dir; + + local_rp = tre_ring->rp; ++ ++ next_rp = local_rp + 1; ++ if (next_rp >= tre_ring->base + tre_ring->len) ++ next_rp = tre_ring->base; ++ if (dev_rp != next_rp && !MHI_TRE_DATA_GET_CHAIN(local_rp)) { ++ dev_err(&mhi_cntrl->mhi_dev->dev, ++ "Event element points to an unexpected TRE\n"); ++ break; ++ } ++ + while (local_rp != dev_rp) { + buf_info = buf_ring->rp; + /* If it's the last TRE, get length from the event */ +diff --git a/drivers/cdx/controller/cdx_rpmsg.c b/drivers/cdx/controller/cdx_rpmsg.c +index 04b578a0be17c2..61f1a290ff0890 100644 +--- a/drivers/cdx/controller/cdx_rpmsg.c ++++ b/drivers/cdx/controller/cdx_rpmsg.c +@@ -129,8 +129,7 @@ static int cdx_rpmsg_probe(struct rpmsg_device *rpdev) + + chinfo.src = RPMSG_ADDR_ANY; + chinfo.dst = rpdev->dst; +- strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, +- strlen(cdx_rpmsg_id_table[0].name)); ++ strscpy(chinfo.name, cdx_rpmsg_id_table[0].name, sizeof(chinfo.name)); + + cdx_mcdi->ept = rpmsg_create_ept(rpdev, cdx_rpmsg_cb, NULL, chinfo); + if (!cdx_mcdi->ept) { +diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c +index 23b7178522ae03..7e2f2b1a1c362e 100644 +--- a/drivers/comedi/comedi_fops.c ++++ b/drivers/comedi/comedi_fops.c +@@ -1587,6 +1587,9 
@@ static int do_insnlist_ioctl(struct comedi_device *dev, + memset(&data[n], 0, (MIN_SAMPLES - n) * + sizeof(unsigned int)); + } ++ } else { ++ memset(data, 0, max_t(unsigned int, n, MIN_SAMPLES) * ++ sizeof(unsigned int)); + } + ret = parse_insn(dev, insns + i, data, file); + if (ret < 0) +@@ -1670,6 +1673,8 @@ static int do_insn_ioctl(struct comedi_device *dev, + memset(&data[insn->n], 0, + (MIN_SAMPLES - insn->n) * sizeof(unsigned int)); + } ++ } else { ++ memset(data, 0, n_data * sizeof(unsigned int)); + } + ret = parse_insn(dev, insn, data, file); + if (ret < 0) +diff --git a/drivers/comedi/drivers.c b/drivers/comedi/drivers.c +index f1dc854928c176..c9ebaadc5e82af 100644 +--- a/drivers/comedi/drivers.c ++++ b/drivers/comedi/drivers.c +@@ -620,11 +620,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev, + unsigned int chan = CR_CHAN(insn->chanspec); + unsigned int base_chan = (chan < 32) ? 0 : chan; + unsigned int _data[2]; ++ unsigned int i; + int ret; + +- if (insn->n == 0) +- return 0; +- + memset(_data, 0, sizeof(_data)); + memset(&_insn, 0, sizeof(_insn)); + _insn.insn = INSN_BITS; +@@ -635,18 +633,21 @@ static int insn_rw_emulate_bits(struct comedi_device *dev, + if (insn->insn == INSN_WRITE) { + if (!(s->subdev_flags & SDF_WRITABLE)) + return -EINVAL; +- _data[0] = 1U << (chan - base_chan); /* mask */ +- _data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */ ++ _data[0] = 1U << (chan - base_chan); /* mask */ + } ++ for (i = 0; i < insn->n; i++) { ++ if (insn->insn == INSN_WRITE) ++ _data[1] = data[i] ? _data[0] : 0; /* bits */ + +- ret = s->insn_bits(dev, s, &_insn, _data); +- if (ret < 0) +- return ret; ++ ret = s->insn_bits(dev, s, &_insn, _data); ++ if (ret < 0) ++ return ret; + +- if (insn->insn == INSN_READ) +- data[0] = (_data[1] >> (chan - base_chan)) & 1; ++ if (insn->insn == INSN_READ) ++ data[i] = (_data[1] >> (chan - base_chan)) & 1; ++ } + +- return 1; ++ return insn->n; + } + + static int __comedi_device_postconfig_async(struct comedi_device *dev, +diff --git a/drivers/comedi/drivers/pcl726.c b/drivers/comedi/drivers/pcl726.c +index 0430630e6ebb90..b542896fa0e427 100644 +--- a/drivers/comedi/drivers/pcl726.c ++++ b/drivers/comedi/drivers/pcl726.c +@@ -328,7 +328,8 @@ static int pcl726_attach(struct comedi_device *dev, + * Hook up the external trigger source interrupt only if the + * user config option is valid and the board supports interrupts. 
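The pcl726 hunk above is the standard hardening pattern for a user-supplied bit index: range-check the value before it feeds a shift, and shift an explicitly unsigned constant so the result cannot hit signed-overflow behaviour. A standalone sketch of the same check, with illustrative names:

    #include <stdbool.h>

    /* Accept the option only if it names an IRQ line the board supports. */
    bool irq_option_valid(int option, unsigned int board_irq_mask)
    {
            if (option <= 0 || option >= 16)   /* reject out-of-range input */
                    return false;
            return (board_irq_mask & (1U << option)) != 0;
    }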
+ */ +- if (it->options[1] && (board->irq_mask & (1 << it->options[1]))) { ++ if (it->options[1] > 0 && it->options[1] < 16 && ++ (board->irq_mask & (1U << it->options[1]))) { + ret = request_irq(it->options[1], pcl726_interrupt, 0, + dev->board_name, dev); + if (ret == 0) { +diff --git a/drivers/cpufreq/armada-8k-cpufreq.c b/drivers/cpufreq/armada-8k-cpufreq.c +index 006f4c554dd7e9..d96c1718f7f87c 100644 +--- a/drivers/cpufreq/armada-8k-cpufreq.c ++++ b/drivers/cpufreq/armada-8k-cpufreq.c +@@ -103,7 +103,7 @@ static void armada_8k_cpufreq_free_table(struct freq_table *freq_tables) + { + int opps_index, nb_cpus = num_possible_cpus(); + +- for (opps_index = 0 ; opps_index <= nb_cpus; opps_index++) { ++ for (opps_index = 0 ; opps_index < nb_cpus; opps_index++) { + int i; + + /* If cpu_dev is NULL then we reached the end of the array */ +diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c +index 81306612a5c67d..b2e3d0b0a116dc 100644 +--- a/drivers/cpuidle/governors/menu.c ++++ b/drivers/cpuidle/governors/menu.c +@@ -287,20 +287,15 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + return 0; + } + +- if (tick_nohz_tick_stopped()) { +- /* +- * If the tick is already stopped, the cost of possible short +- * idle duration misprediction is much higher, because the CPU +- * may be stuck in a shallow idle state for a long time as a +- * result of it. In that case say we might mispredict and use +- * the known time till the closest timer event for the idle +- * state selection. +- */ +- if (predicted_ns < TICK_NSEC) +- predicted_ns = data->next_timer_ns; +- } else if (latency_req > predicted_ns) { +- latency_req = predicted_ns; +- } ++ /* ++ * If the tick is already stopped, the cost of possible short idle ++ * duration misprediction is much higher, because the CPU may be stuck ++ * in a shallow idle state for a long time as a result of it. In that ++ * case, say we might mispredict and use the known time till the closest ++ * timer event for the idle state selection. ++ */ ++ if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC) ++ predicted_ns = data->next_timer_ns; + + /* + * Find the idle state with the lowest power while satisfying +@@ -316,13 +311,15 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + if (idx == -1) + idx = i; /* first enabled state */ + ++ if (s->exit_latency_ns > latency_req) ++ break; ++ + if (s->target_residency_ns > predicted_ns) { + /* + * Use a physical idle state, not busy polling, unless + * a timer is going to trigger soon enough. 
+ */ + if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) && +- s->exit_latency_ns <= latency_req && + s->target_residency_ns <= data->next_timer_ns) { + predicted_ns = s->target_residency_ns; + idx = i; +@@ -354,8 +351,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + + return idx; + } +- if (s->exit_latency_ns > latency_req) +- break; + + idx = i; + } +diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c +index 9cd5e3d54d9d0e..ce7b99019537ef 100644 +--- a/drivers/crypto/caam/ctrl.c ++++ b/drivers/crypto/caam/ctrl.c +@@ -831,7 +831,7 @@ static int caam_ctrl_suspend(struct device *dev) + { + const struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev); + +- if (ctrlpriv->caam_off_during_pm && !ctrlpriv->optee_en) ++ if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0) + caam_state_save(dev); + + return 0; +@@ -842,7 +842,7 @@ static int caam_ctrl_resume(struct device *dev) + struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev); + int ret = 0; + +- if (ctrlpriv->caam_off_during_pm && !ctrlpriv->optee_en) { ++ if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0) { + caam_state_restore(dev); + + /* HW and rng will be reset so deinstantiation can be removed */ +@@ -908,6 +908,7 @@ static int caam_probe(struct platform_device *pdev) + + imx_soc_data = imx_soc_match->data; + reg_access = reg_access && imx_soc_data->page0_access; ++ ctrlpriv->no_page0 = !reg_access; + /* + * CAAM clocks cannot be controlled from kernel. + */ +diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h +index e5132015087209..51c90d17a40d23 100644 +--- a/drivers/crypto/caam/intern.h ++++ b/drivers/crypto/caam/intern.h +@@ -115,6 +115,7 @@ struct caam_drv_private { + u8 blob_present; /* Nonzero if BLOB support present in device */ + u8 mc_en; /* Nonzero if MC f/w is active */ + u8 optee_en; /* Nonzero if OP-TEE f/w is active */ ++ u8 no_page0; /* Nonzero if register page 0 is not controlled by Linux */ + bool pr_support; /* RNG prediction resistance available */ + int secvio_irq; /* Security violation interrupt number */ + int virt_en; /* Virtualization enabled in CAAM */ +diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c +index 539c303beb3a28..e058ba02779296 100644 +--- a/drivers/crypto/ccp/sev-dev.c ++++ b/drivers/crypto/ccp/sev-dev.c +@@ -1787,8 +1787,14 @@ static int __sev_snp_shutdown_locked(int *error, bool panic) + sev->snp_initialized = false; + dev_dbg(sev->dev, "SEV-SNP firmware shutdown\n"); + +- atomic_notifier_chain_unregister(&panic_notifier_list, +- &snp_panic_notifier); ++ /* ++ * __sev_snp_shutdown_locked() deadlocks when it tries to unregister ++ * itself during panic as the panic notifier is called with RCU read ++ * lock held and notifier unregistration does RCU synchronization. 
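The sev-dev hunk above encodes a general notifier-chain rule: atomic chains are walked under rcu_read_lock(), and atomic_notifier_chain_unregister() waits for an RCU grace period, so a callback must never unregister itself from inside its own panic-time invocation. A kernel-context sketch of the shape of the fix; panic_notifier_list and snp_panic_notifier follow the patch, the function body is reduced for illustration:

    static int shutdown_firmware(bool panic)
    {
            /* ... normal teardown ... */

            /*
             * Unregistering here would synchronize_rcu() against the very
             * RCU read-side section that is invoking us from the panic
             * notifier chain, a self-deadlock. Skip it on the panic path;
             * the system is going down anyway.
             */
            if (!panic)
                    atomic_notifier_chain_unregister(&panic_notifier_list,
                                                     &snp_panic_notifier);
            return 0;
    }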
++ */ ++ if (!panic) ++ atomic_notifier_chain_unregister(&panic_notifier_list, ++ &snp_panic_notifier); + + /* Reset TMR size back to default */ + sev_es_tmr_size = SEV_TMR_SIZE; +diff --git a/drivers/crypto/intel/qat/qat_common/adf_common_drv.h b/drivers/crypto/intel/qat/qat_common/adf_common_drv.h +index eaa6388a6678b0..7a022bd4ae07ca 100644 +--- a/drivers/crypto/intel/qat/qat_common/adf_common_drv.h ++++ b/drivers/crypto/intel/qat/qat_common/adf_common_drv.h +@@ -189,6 +189,7 @@ void adf_exit_misc_wq(void); + bool adf_misc_wq_queue_work(struct work_struct *work); + bool adf_misc_wq_queue_delayed_work(struct delayed_work *work, + unsigned long delay); ++void adf_misc_wq_flush(void); + #if defined(CONFIG_PCI_IOV) + int adf_sriov_configure(struct pci_dev *pdev, int numvfs); + void adf_disable_sriov(struct adf_accel_dev *accel_dev); +diff --git a/drivers/crypto/intel/qat/qat_common/adf_init.c b/drivers/crypto/intel/qat/qat_common/adf_init.c +index f189cce7d15358..46491048e0bb42 100644 +--- a/drivers/crypto/intel/qat/qat_common/adf_init.c ++++ b/drivers/crypto/intel/qat/qat_common/adf_init.c +@@ -404,6 +404,7 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev) + hw_data->exit_admin_comms(accel_dev); + + adf_cleanup_etr_data(accel_dev); ++ adf_misc_wq_flush(); + adf_dev_restore(accel_dev); + } + +diff --git a/drivers/crypto/intel/qat/qat_common/adf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_isr.c +index cae1aee5479aff..12e5656136610c 100644 +--- a/drivers/crypto/intel/qat/qat_common/adf_isr.c ++++ b/drivers/crypto/intel/qat/qat_common/adf_isr.c +@@ -407,3 +407,8 @@ bool adf_misc_wq_queue_delayed_work(struct delayed_work *work, + { + return queue_delayed_work(adf_misc_wq, work, delay); + } ++ ++void adf_misc_wq_flush(void) ++{ ++ flush_workqueue(adf_misc_wq); ++} +diff --git a/drivers/crypto/intel/qat/qat_common/qat_algs.c b/drivers/crypto/intel/qat/qat_common/qat_algs.c +index c03a698511142e..43e6dd9b77b7d4 100644 +--- a/drivers/crypto/intel/qat/qat_common/qat_algs.c ++++ b/drivers/crypto/intel/qat/qat_common/qat_algs.c +@@ -1277,7 +1277,7 @@ static struct aead_alg qat_aeads[] = { { + .base = { + .cra_name = "authenc(hmac(sha1),cbc(aes))", + .cra_driver_name = "qat_aes_cbc_hmac_sha1", +- .cra_priority = 4001, ++ .cra_priority = 100, + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct qat_alg_aead_ctx), +@@ -1294,7 +1294,7 @@ static struct aead_alg qat_aeads[] = { { + .base = { + .cra_name = "authenc(hmac(sha256),cbc(aes))", + .cra_driver_name = "qat_aes_cbc_hmac_sha256", +- .cra_priority = 4001, ++ .cra_priority = 100, + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct qat_alg_aead_ctx), +@@ -1311,7 +1311,7 @@ static struct aead_alg qat_aeads[] = { { + .base = { + .cra_name = "authenc(hmac(sha512),cbc(aes))", + .cra_driver_name = "qat_aes_cbc_hmac_sha512", +- .cra_priority = 4001, ++ .cra_priority = 100, + .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct qat_alg_aead_ctx), +@@ -1329,7 +1329,7 @@ static struct aead_alg qat_aeads[] = { { + static struct skcipher_alg qat_skciphers[] = { { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "qat_aes_cbc", +- .base.cra_priority = 4001, ++ .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .base.cra_blocksize = AES_BLOCK_SIZE, + .base.cra_ctxsize = 
sizeof(struct qat_alg_skcipher_ctx), +@@ -1347,7 +1347,7 @@ static struct skcipher_alg qat_skciphers[] = { { + }, { + .base.cra_name = "ctr(aes)", + .base.cra_driver_name = "qat_aes_ctr", +- .base.cra_priority = 4001, ++ .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .base.cra_blocksize = 1, + .base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx), +@@ -1365,7 +1365,7 @@ static struct skcipher_alg qat_skciphers[] = { { + }, { + .base.cra_name = "xts(aes)", + .base.cra_driver_name = "qat_aes_xts", +- .base.cra_priority = 4001, ++ .base.cra_priority = 100, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK | + CRYPTO_ALG_ALLOCATES_MEMORY, + .base.cra_blocksize = AES_BLOCK_SIZE, +diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +index e27e849b01dfc0..90a031421aacbf 100644 +--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h ++++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +@@ -34,6 +34,9 @@ + #define SG_COMP_2 2 + #define SG_COMP_1 1 + ++#define OTX2_CPT_DPTR_RPTR_ALIGN 8 ++#define OTX2_CPT_RES_ADDR_ALIGN 32 ++ + union otx2_cpt_opcode { + u16 flags; + struct { +@@ -347,22 +350,48 @@ static inline struct otx2_cpt_inst_info * + cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + gfp_t gfp) + { +- u32 dlen = 0, g_len, sg_len, info_len; +- int align = OTX2_CPT_DMA_MINALIGN; ++ u32 dlen = 0, g_len, s_len, sg_len, info_len; + struct otx2_cpt_inst_info *info; +- u16 g_sz_bytes, s_sz_bytes; + u32 total_mem_len; + int i; + +- g_sz_bytes = ((req->in_cnt + 2) / 3) * +- sizeof(struct cn10kb_cpt_sglist_component); +- s_sz_bytes = ((req->out_cnt + 2) / 3) * +- sizeof(struct cn10kb_cpt_sglist_component); ++ /* Allocate memory to meet below alignment requirement: ++ * ------------------------------------ ++ * | struct otx2_cpt_inst_info | ++ * | (No alignment required) | ++ * | --------------------------------| ++ * | | padding for ARCH_DMA_MINALIGN | ++ * | | alignment | ++ * |------------------------------------| ++ * | SG List Gather/Input memory | ++ * | Length = multiple of 32Bytes | ++ * | Alignment = 8Byte | ++ * |---------------------------------- | ++ * | SG List Scatter/Output memory | ++ * | Length = multiple of 32Bytes | ++ * | Alignment = 8Byte | ++ * | -------------------------------| ++ * | | padding for 32B alignment | ++ * |------------------------------------| ++ * | Result response memory | ++ * | Alignment = 32Byte | ++ * ------------------------------------ ++ */ ++ ++ info_len = sizeof(*info); ++ ++ g_len = ((req->in_cnt + 2) / 3) * ++ sizeof(struct cn10kb_cpt_sglist_component); ++ s_len = ((req->out_cnt + 2) / 3) * ++ sizeof(struct cn10kb_cpt_sglist_component); ++ sg_len = g_len + s_len; + +- g_len = ALIGN(g_sz_bytes, align); +- sg_len = ALIGN(g_len + s_sz_bytes, align); +- info_len = ALIGN(sizeof(*info), align); +- total_mem_len = sg_len + info_len + sizeof(union otx2_cpt_res_s); ++ /* Allocate extra memory for SG and response address alignment */ ++ total_mem_len = ALIGN(info_len, OTX2_CPT_DPTR_RPTR_ALIGN); ++ total_mem_len += (ARCH_DMA_MINALIGN - 1) & ++ ~(OTX2_CPT_DPTR_RPTR_ALIGN - 1); ++ total_mem_len += ALIGN(sg_len, OTX2_CPT_RES_ADDR_ALIGN); ++ total_mem_len += sizeof(union otx2_cpt_res_s); + + info = kzalloc(total_mem_len, gfp); + if (unlikely(!info)) +@@ -372,7 +401,8 @@ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + dlen += req->in[i].size; + + info->dlen = dlen; +- 
info->in_buffer = (u8 *)info + info_len; ++ info->in_buffer = PTR_ALIGN((u8 *)info + info_len, ARCH_DMA_MINALIGN); ++ info->out_buffer = info->in_buffer + g_len; + info->gthr_sz = req->in_cnt; + info->sctr_sz = req->out_cnt; + +@@ -384,7 +414,7 @@ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + } + + if (sgv2io_components_setup(pdev, req->out, req->out_cnt, +- &info->in_buffer[g_len])) { ++ info->out_buffer)) { + dev_err(&pdev->dev, "Failed to setup scatter list\n"); + goto destroy_info; + } +@@ -401,8 +431,10 @@ cn10k_sgv2_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + * Get buffer for union otx2_cpt_res_s response + * structure and its physical address + */ +- info->completion_addr = info->in_buffer + sg_len; +- info->comp_baddr = info->dptr_baddr + sg_len; ++ info->completion_addr = PTR_ALIGN((info->in_buffer + sg_len), ++ OTX2_CPT_RES_ADDR_ALIGN); ++ info->comp_baddr = ALIGN((info->dptr_baddr + sg_len), ++ OTX2_CPT_RES_ADDR_ALIGN); + + return info; + +@@ -417,10 +449,9 @@ static inline struct otx2_cpt_inst_info * + otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + gfp_t gfp) + { +- int align = OTX2_CPT_DMA_MINALIGN; + struct otx2_cpt_inst_info *info; +- u32 dlen, align_dlen, info_len; +- u16 g_sz_bytes, s_sz_bytes; ++ u32 dlen, info_len; ++ u16 g_len, s_len; + u32 total_mem_len; + + if (unlikely(req->in_cnt > OTX2_CPT_MAX_SG_IN_CNT || +@@ -429,22 +460,54 @@ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + return NULL; + } + +- g_sz_bytes = ((req->in_cnt + 3) / 4) * +- sizeof(struct otx2_cpt_sglist_component); +- s_sz_bytes = ((req->out_cnt + 3) / 4) * +- sizeof(struct otx2_cpt_sglist_component); ++ /* Allocate memory to meet below alignment requirement: ++ * ------------------------------------ ++ * | struct otx2_cpt_inst_info | ++ * | (No alignment required) | ++ * | --------------------------------| ++ * | | padding for ARCH_DMA_MINALIGN | ++ * | | alignment | ++ * |------------------------------------| ++ * | SG List Header of 8 Byte | ++ * |------------------------------------| ++ * | SG List Gather/Input memory | ++ * | Length = multiple of 32Bytes | ++ * | Alignment = 8Byte | ++ * |---------------------------------- | ++ * | SG List Scatter/Output memory | ++ * | Length = multiple of 32Bytes | ++ * | Alignment = 8Byte | ++ * | -------------------------------| ++ * | | padding for 32B alignment | ++ * |------------------------------------| ++ * | Result response memory | ++ * | Alignment = 32Byte | ++ * ------------------------------------ ++ */ ++ ++ info_len = sizeof(*info); ++ ++ g_len = ((req->in_cnt + 3) / 4) * ++ sizeof(struct otx2_cpt_sglist_component); ++ s_len = ((req->out_cnt + 3) / 4) * ++ sizeof(struct otx2_cpt_sglist_component); ++ ++ dlen = g_len + s_len + SG_LIST_HDR_SIZE; + +- dlen = g_sz_bytes + s_sz_bytes + SG_LIST_HDR_SIZE; +- align_dlen = ALIGN(dlen, align); +- info_len = ALIGN(sizeof(*info), align); +- total_mem_len = align_dlen + info_len + sizeof(union otx2_cpt_res_s); ++ /* Allocate extra memory for SG and response address alignment */ ++ total_mem_len = ALIGN(info_len, OTX2_CPT_DPTR_RPTR_ALIGN); ++ total_mem_len += (ARCH_DMA_MINALIGN - 1) & ++ ~(OTX2_CPT_DPTR_RPTR_ALIGN - 1); ++ total_mem_len += ALIGN(dlen, OTX2_CPT_RES_ADDR_ALIGN); ++ total_mem_len += sizeof(union otx2_cpt_res_s); + + info = kzalloc(total_mem_len, gfp); + if (unlikely(!info)) + return NULL; + + info->dlen = dlen; +- info->in_buffer = (u8 *)info + info_len; ++ info->in_buffer = 
PTR_ALIGN((u8 *)info + info_len, ARCH_DMA_MINALIGN); ++ info->out_buffer = info->in_buffer + SG_LIST_HDR_SIZE + g_len; + + ((u16 *)info->in_buffer)[0] = req->out_cnt; + ((u16 *)info->in_buffer)[1] = req->in_cnt; +@@ -460,7 +523,7 @@ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + } + + if (setup_sgio_components(pdev, req->out, req->out_cnt, +- &info->in_buffer[8 + g_sz_bytes])) { ++ info->out_buffer)) { + dev_err(&pdev->dev, "Failed to setup scatter list\n"); + goto destroy_info; + } +@@ -476,8 +539,10 @@ otx2_sg_info_create(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + * Get buffer for union otx2_cpt_res_s response + * structure and its physical address + */ +- info->completion_addr = info->in_buffer + align_dlen; +- info->comp_baddr = info->dptr_baddr + align_dlen; ++ info->completion_addr = PTR_ALIGN((info->in_buffer + dlen), ++ OTX2_CPT_RES_ADDR_ALIGN); ++ info->comp_baddr = ALIGN((info->dptr_baddr + dlen), ++ OTX2_CPT_RES_ADDR_ALIGN); + + return info; + +diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c +index 9095dea2748d5e..56645b3eb71757 100644 +--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c ++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c +@@ -1491,12 +1491,13 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf) + union otx2_cpt_opcode opcode; + union otx2_cpt_res_s *result; + union otx2_cpt_inst_s inst; ++ dma_addr_t result_baddr; + dma_addr_t rptr_baddr; + struct pci_dev *pdev; +- u32 len, compl_rlen; + int timeout = 10000; ++ void *base, *rptr; + int ret, etype; +- void *rptr; ++ u32 len; + + /* + * We don't get capabilities if it was already done +@@ -1519,22 +1520,28 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf) + if (ret) + goto delete_grps; + +- compl_rlen = ALIGN(sizeof(union otx2_cpt_res_s), OTX2_CPT_DMA_MINALIGN); +- len = compl_rlen + LOADFVC_RLEN; ++ /* Allocate extra memory for "rptr" and "result" pointer alignment */ ++ len = LOADFVC_RLEN + ARCH_DMA_MINALIGN + ++ sizeof(union otx2_cpt_res_s) + OTX2_CPT_RES_ADDR_ALIGN; + +- result = kzalloc(len, GFP_KERNEL); +- if (!result) { ++ base = kzalloc(len, GFP_KERNEL); ++ if (!base) { + ret = -ENOMEM; + goto lf_cleanup; + } +- rptr_baddr = dma_map_single(&pdev->dev, (void *)result, len, +- DMA_BIDIRECTIONAL); ++ ++ rptr = PTR_ALIGN(base, ARCH_DMA_MINALIGN); ++ rptr_baddr = dma_map_single(&pdev->dev, rptr, len, DMA_BIDIRECTIONAL); + if (dma_mapping_error(&pdev->dev, rptr_baddr)) { + dev_err(&pdev->dev, "DMA mapping failed\n"); + ret = -EFAULT; +- goto free_result; ++ goto free_rptr; + } +- rptr = (u8 *)result + compl_rlen; ++ ++ result = (union otx2_cpt_res_s *)PTR_ALIGN(rptr + LOADFVC_RLEN, ++ OTX2_CPT_RES_ADDR_ALIGN); ++ result_baddr = ALIGN(rptr_baddr + LOADFVC_RLEN, ++ OTX2_CPT_RES_ADDR_ALIGN); + + /* Fill in the command */ + opcode.s.major = LOADFVC_MAJOR_OP; +@@ -1546,14 +1553,14 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf) + /* 64-bit swap for microcode data reads, not needed for addresses */ + cpu_to_be64s(&iq_cmd.cmd.u); + iq_cmd.dptr = 0; +- iq_cmd.rptr = rptr_baddr + compl_rlen; ++ iq_cmd.rptr = rptr_baddr; + iq_cmd.cptr.u = 0; + + for (etype = 1; etype < OTX2_CPT_MAX_ENG_TYPES; etype++) { + result->s.compcode = OTX2_CPT_COMPLETION_CODE_INIT; + iq_cmd.cptr.s.grp = otx2_cpt_get_eng_grp(&cptpf->eng_grps, + etype); +- otx2_cpt_fill_inst(&inst, &iq_cmd, rptr_baddr); ++ otx2_cpt_fill_inst(&inst, &iq_cmd, result_baddr); + 
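Every buffer carved out above depends on round-up alignment, so it is worth seeing the arithmetic once in isolation. The macros below are simplified userspace forms of the kernel's ALIGN() and PTR_ALIGN(), valid only for power-of-two alignments; the values are arbitrary examples.

    #include <stdint.h>
    #include <stdio.h>

    /* Round x (an integer or pointer) up to the next multiple of a. */
    #define ALIGN_UP(x, a)     (((uintptr_t)(x) + ((a) - 1)) & ~(uintptr_t)((a) - 1))
    #define PTR_ALIGN_UP(p, a) ((void *)ALIGN_UP(p, a))

    int main(void)
    {
            unsigned char buf[128];

            /* 70 rounded up to a 32-byte boundary is 96. */
            printf("ALIGN_UP(70, 32) = %lu\n", (unsigned long)ALIGN_UP(70, 32));

            /* The same rounding applied to a pointer. */
            printf("%p -> %p\n", (void *)(buf + 3), PTR_ALIGN_UP(buf + 3, 32));
            return 0;
    }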
lfs->ops->send_cmd(&inst, 1, &cptpf->lfs.lf[0]); + timeout = 10000; + +@@ -1576,8 +1583,8 @@ int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf) + + error_no_response: + dma_unmap_single(&pdev->dev, rptr_baddr, len, DMA_BIDIRECTIONAL); +-free_result: +- kfree(result); ++free_rptr: ++ kfree(base); + lf_cleanup: + otx2_cptlf_shutdown(lfs); + delete_grps: +diff --git a/drivers/fpga/zynq-fpga.c b/drivers/fpga/zynq-fpga.c +index f7e08f7ea9ef3c..b7629a0e481340 100644 +--- a/drivers/fpga/zynq-fpga.c ++++ b/drivers/fpga/zynq-fpga.c +@@ -405,12 +405,12 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt) + } + } + +- priv->dma_nelms = +- dma_map_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE); +- if (priv->dma_nelms == 0) { ++ err = dma_map_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0); ++ if (err) { + dev_err(&mgr->dev, "Unable to DMA map (TO_DEVICE)\n"); +- return -ENOMEM; ++ return err; + } ++ priv->dma_nelms = sgt->nents; + + /* enable clock */ + err = clk_enable(priv->clk); +@@ -478,7 +478,7 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr, struct sg_table *sgt) + clk_disable(priv->clk); + + out_free: +- dma_unmap_sg(mgr->dev.parent, sgt->sgl, sgt->nents, DMA_TO_DEVICE); ++ dma_unmap_sgtable(mgr->dev.parent, sgt, DMA_TO_DEVICE, 0); + return err; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h +index a5ccd0ada16ab0..e1d79f48304992 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h +@@ -886,6 +886,7 @@ struct amdgpu_mqd_prop { + uint64_t csa_addr; + uint64_t fence_address; + bool tmz_queue; ++ bool kernel_queue; + }; + + struct amdgpu_mqd { +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c +index 15dde1f5032842..25252231a68a94 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cper.c +@@ -459,7 +459,7 @@ static u32 amdgpu_cper_ring_get_ent_sz(struct amdgpu_ring *ring, u64 pos) + + void amdgpu_cper_ring_write(struct amdgpu_ring *ring, void *src, int count) + { +- u64 pos, wptr_old, rptr = *ring->rptr_cpu_addr & ring->ptr_mask; ++ u64 pos, wptr_old, rptr; + int rec_cnt_dw = count >> 2; + u32 chunk, ent_sz; + u8 *s = (u8 *)src; +@@ -472,9 +472,11 @@ void amdgpu_cper_ring_write(struct amdgpu_ring *ring, void *src, int count) + return; + } + ++ mutex_lock(&ring->adev->cper.ring_lock); ++ + wptr_old = ring->wptr; ++ rptr = *ring->rptr_cpu_addr & ring->ptr_mask; + +- mutex_lock(&ring->adev->cper.ring_lock); + while (count) { + ent_sz = amdgpu_cper_ring_get_ent_sz(ring, ring->wptr); + chunk = umin(ent_sz, count); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +index 9ea0d9b71f48db..82cb68114ae991 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +@@ -1138,6 +1138,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) + } + } + ++ if (!amdgpu_vm_ready(vm)) ++ return -EINVAL; ++ + r = amdgpu_vm_clear_freed(adev, vm, NULL); + if (r) + return r; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +index 54ea8e8d781215..a57e8c5474bb00 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +@@ -2561,9 +2561,6 @@ static int amdgpu_device_parse_gpu_info_fw(struct amdgpu_device *adev) + + adev->firmware.gpu_info_fw = NULL; + +- if (adev->mman.discovery_bin) +- 
return 0; +- + switch (adev->asic_type) { + default: + return 0; +@@ -2585,6 +2582,8 @@ static int amdgpu_device_parse_gpu_info_fw(struct amdgpu_device *adev) + chip_name = "arcturus"; + break; + case CHIP_NAVI12: ++ if (adev->mman.discovery_bin) ++ return 0; + chip_name = "navi12"; + break; + } +@@ -3235,6 +3234,7 @@ static bool amdgpu_device_check_vram_lost(struct amdgpu_device *adev) + * always assumed to be lost. + */ + switch (amdgpu_asic_reset_method(adev)) { ++ case AMD_RESET_METHOD_LEGACY: + case AMD_RESET_METHOD_LINK: + case AMD_RESET_METHOD_BACO: + case AMD_RESET_METHOD_MODE1: +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c +index 81b3443c8d7f48..efe0058b48ca85 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c +@@ -276,7 +276,7 @@ static int amdgpu_discovery_read_binary_from_mem(struct amdgpu_device *adev, + u32 msg; + + if (!amdgpu_sriov_vf(adev)) { +- /* It can take up to a second for IFWI init to complete on some dGPUs, ++ /* It can take up to two second for IFWI init to complete on some dGPUs, + * but generally it should be in the 60-100ms range. Normally this starts + * as soon as the device gets power so by the time the OS loads this has long + * completed. However, when a card is hotplugged via e.g., USB4, we need to +@@ -284,7 +284,7 @@ static int amdgpu_discovery_read_binary_from_mem(struct amdgpu_device *adev, + * continue. + */ + +- for (i = 0; i < 1000; i++) { ++ for (i = 0; i < 2000; i++) { + msg = RREG32(mmMP0_SMN_C2PMSG_33); + if (msg & 0x80000000) + break; +@@ -2555,40 +2555,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + + switch (adev->asic_type) { + case CHIP_VEGA10: +- case CHIP_VEGA12: +- case CHIP_RAVEN: +- case CHIP_VEGA20: +- case CHIP_ARCTURUS: +- case CHIP_ALDEBARAN: +- /* this is not fatal. We have a fallback below +- * if the new firmwares are not present. some of +- * this will be overridden below to keep things +- * consistent with the current behavior. ++ /* This is not fatal. We only need the discovery ++ * binary for sysfs. We don't need it for a ++ * functional system. + */ +- r = amdgpu_discovery_reg_base_init(adev); +- if (!r) { +- amdgpu_discovery_harvest_ip(adev); +- amdgpu_discovery_get_gfx_info(adev); +- amdgpu_discovery_get_mall_info(adev); +- amdgpu_discovery_get_vcn_info(adev); +- } +- break; +- default: +- r = amdgpu_discovery_reg_base_init(adev); +- if (r) { +- drm_err(&adev->ddev, "discovery failed: %d\n", r); +- return r; +- } +- +- amdgpu_discovery_harvest_ip(adev); +- amdgpu_discovery_get_gfx_info(adev); +- amdgpu_discovery_get_mall_info(adev); +- amdgpu_discovery_get_vcn_info(adev); +- break; +- } +- +- switch (adev->asic_type) { +- case CHIP_VEGA10: ++ amdgpu_discovery_init(adev); + vega10_reg_base_init(adev); + adev->sdma.num_instances = 2; + adev->gmc.num_umc = 4; +@@ -2611,6 +2582,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + adev->ip_versions[DCI_HWIP][0] = IP_VERSION(12, 0, 0); + break; + case CHIP_VEGA12: ++ /* This is not fatal. We only need the discovery ++ * binary for sysfs. We don't need it for a ++ * functional system. ++ */ ++ amdgpu_discovery_init(adev); + vega10_reg_base_init(adev); + adev->sdma.num_instances = 2; + adev->gmc.num_umc = 4; +@@ -2633,6 +2609,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + adev->ip_versions[DCI_HWIP][0] = IP_VERSION(12, 0, 1); + break; + case CHIP_RAVEN: ++ /* This is not fatal. 
We only need the discovery ++ * binary for sysfs. We don't need it for a ++ * functional system. ++ */ ++ amdgpu_discovery_init(adev); + vega10_reg_base_init(adev); + adev->sdma.num_instances = 1; + adev->vcn.num_vcn_inst = 1; +@@ -2674,6 +2655,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + } + break; + case CHIP_VEGA20: ++ /* This is not fatal. We only need the discovery ++ * binary for sysfs. We don't need it for a ++ * functional system. ++ */ ++ amdgpu_discovery_init(adev); + vega20_reg_base_init(adev); + adev->sdma.num_instances = 2; + adev->gmc.num_umc = 8; +@@ -2697,6 +2683,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + adev->ip_versions[DCI_HWIP][0] = IP_VERSION(12, 1, 0); + break; + case CHIP_ARCTURUS: ++ /* This is not fatal. We only need the discovery ++ * binary for sysfs. We don't need it for a ++ * functional system. ++ */ ++ amdgpu_discovery_init(adev); + arct_reg_base_init(adev); + adev->sdma.num_instances = 8; + adev->vcn.num_vcn_inst = 2; +@@ -2725,6 +2716,11 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + adev->ip_versions[UVD_HWIP][1] = IP_VERSION(2, 5, 0); + break; + case CHIP_ALDEBARAN: ++ /* This is not fatal. We only need the discovery ++ * binary for sysfs. We don't need it for a ++ * functional system. ++ */ ++ amdgpu_discovery_init(adev); + aldebaran_reg_base_init(adev); + adev->sdma.num_instances = 5; + adev->vcn.num_vcn_inst = 2; +@@ -2751,6 +2747,16 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev) + adev->ip_versions[XGMI_HWIP][0] = IP_VERSION(6, 1, 0); + break; + default: ++ r = amdgpu_discovery_reg_base_init(adev); ++ if (r) { ++ drm_err(&adev->ddev, "discovery failed: %d\n", r); ++ return r; ++ } ++ ++ amdgpu_discovery_harvest_ip(adev); ++ amdgpu_discovery_get_gfx_info(adev); ++ amdgpu_discovery_get_mall_info(adev); ++ amdgpu_discovery_get_vcn_info(adev); + break; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +index 3528a27c7c1ddd..bee6609eab2043 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +@@ -387,13 +387,6 @@ amdgpu_job_prepare_job(struct drm_sched_job *sched_job, + dev_err(ring->adev->dev, "Error getting VM ID (%d)\n", r); + goto error; + } +- /* +- * The VM structure might be released after the VMID is +- * assigned, we had multiple problems with people trying to use +- * the VM pointer so better set it to NULL. +- */ +- if (!fence) +- job->vm = NULL; + return fence; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +index 6ac0ce361a2d8c..7c5584742471e9 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +@@ -687,6 +687,7 @@ static void amdgpu_ring_to_mqd_prop(struct amdgpu_ring *ring, + prop->eop_gpu_addr = ring->eop_gpu_addr; + prop->use_doorbell = ring->use_doorbell; + prop->doorbell_index = ring->doorbell_index; ++ prop->kernel_queue = true; + + /* map_queues packet doesn't need activate the queue, + * so only kiq need set this field. 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +index eaddc441c51ab5..024f2121a0997b 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +@@ -32,6 +32,7 @@ + + static const struct kicker_device kicker_device_list[] = { + {0x744B, 0x00}, ++ {0x7551, 0xC8} + }; + + static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr) +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +index 3911c78f828279..9b364502de44f9 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +@@ -654,11 +654,10 @@ int amdgpu_vm_validate(struct amdgpu_device *adev, struct amdgpu_vm *vm, + * Check if all VM PDs/PTs are ready for updates + * + * Returns: +- * True if VM is not evicting. ++ * True if VM is not evicting and all VM entities are not stopped + */ + bool amdgpu_vm_ready(struct amdgpu_vm *vm) + { +- bool empty; + bool ret; + + amdgpu_vm_eviction_lock(vm); +@@ -666,10 +665,18 @@ bool amdgpu_vm_ready(struct amdgpu_vm *vm) + amdgpu_vm_eviction_unlock(vm); + + spin_lock(&vm->status_lock); +- empty = list_empty(&vm->evicted); ++ ret &= list_empty(&vm->evicted); + spin_unlock(&vm->status_lock); + +- return ret && empty; ++ spin_lock(&vm->immediate.lock); ++ ret &= !vm->immediate.stopped; ++ spin_unlock(&vm->immediate.lock); ++ ++ spin_lock(&vm->delayed.lock); ++ ret &= !vm->delayed.stopped; ++ spin_unlock(&vm->delayed.lock); ++ ++ return ret; + } + + /** +@@ -2409,13 +2416,11 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size, + */ + long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout) + { +- timeout = dma_resv_wait_timeout(vm->root.bo->tbo.base.resv, +- DMA_RESV_USAGE_BOOKKEEP, +- true, timeout); ++ timeout = drm_sched_entity_flush(&vm->immediate, timeout); + if (timeout <= 0) + return timeout; + +- return dma_fence_wait_timeout(vm->last_unlocked, true, timeout); ++ return drm_sched_entity_flush(&vm->delayed, timeout); + } + + static void amdgpu_vm_destroy_task_info(struct kref *kref) +diff --git a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c +index 1c083304ae7767..c67c4705d662e6 100644 +--- a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c ++++ b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c +@@ -453,6 +453,7 @@ static int __aqua_vanjaram_get_px_mode_info(struct amdgpu_xcp_mgr *xcp_mgr, + uint16_t *nps_modes) + { + struct amdgpu_device *adev = xcp_mgr->adev; ++ uint32_t gc_ver = amdgpu_ip_version(adev, GC_HWIP, 0); + + if (!num_xcp || !nps_modes || !(xcp_mgr->supp_xcp_modes & BIT(px_mode))) + return -EINVAL; +@@ -476,12 +477,14 @@ static int __aqua_vanjaram_get_px_mode_info(struct amdgpu_xcp_mgr *xcp_mgr, + *num_xcp = 4; + *nps_modes = BIT(AMDGPU_NPS1_PARTITION_MODE) | + BIT(AMDGPU_NPS4_PARTITION_MODE); ++ if (gc_ver == IP_VERSION(9, 5, 0)) ++ *nps_modes |= BIT(AMDGPU_NPS2_PARTITION_MODE); + break; + case AMDGPU_CPX_PARTITION_MODE: + *num_xcp = NUM_XCC(adev->gfx.xcc_mask); + *nps_modes = BIT(AMDGPU_NPS1_PARTITION_MODE) | + BIT(AMDGPU_NPS4_PARTITION_MODE); +- if (amdgpu_sriov_vf(adev)) ++ if (gc_ver == IP_VERSION(9, 5, 0)) + *nps_modes |= BIT(AMDGPU_NPS2_PARTITION_MODE); + break; + default: +diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c +index 50f04c2c0b8c0c..097ec7d99c5abb 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c +@@ -79,6 +79,7 
@@ MODULE_FIRMWARE("amdgpu/gc_12_0_1_pfp.bin"); + MODULE_FIRMWARE("amdgpu/gc_12_0_1_me.bin"); + MODULE_FIRMWARE("amdgpu/gc_12_0_1_mec.bin"); + MODULE_FIRMWARE("amdgpu/gc_12_0_1_rlc.bin"); ++MODULE_FIRMWARE("amdgpu/gc_12_0_1_rlc_kicker.bin"); + MODULE_FIRMWARE("amdgpu/gc_12_0_1_toc.bin"); + + static const struct amdgpu_hwip_reg_entry gc_reg_list_12_0[] = { +@@ -586,7 +587,7 @@ static int gfx_v12_0_init_toc_microcode(struct amdgpu_device *adev, const char * + + static int gfx_v12_0_init_microcode(struct amdgpu_device *adev) + { +- char ucode_prefix[15]; ++ char ucode_prefix[30]; + int err; + const struct rlc_firmware_header_v2_0 *rlc_hdr; + uint16_t version_major; +@@ -613,9 +614,14 @@ static int gfx_v12_0_init_microcode(struct amdgpu_device *adev) + amdgpu_gfx_cp_init_microcode(adev, AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK); + + if (!amdgpu_sriov_vf(adev)) { +- err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, +- AMDGPU_UCODE_REQUIRED, +- "amdgpu/%s_rlc.bin", ucode_prefix); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, ++ AMDGPU_UCODE_REQUIRED, ++ "amdgpu/%s_rlc_kicker.bin", ucode_prefix); ++ else ++ err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, ++ AMDGPU_UCODE_REQUIRED, ++ "amdgpu/%s_rlc.bin", ucode_prefix); + if (err) + goto out; + rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data; +diff --git a/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c b/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c +index df898dbb746e3f..58cd87db80619f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/imu_v12_0.c +@@ -34,12 +34,13 @@ + + MODULE_FIRMWARE("amdgpu/gc_12_0_0_imu.bin"); + MODULE_FIRMWARE("amdgpu/gc_12_0_1_imu.bin"); ++MODULE_FIRMWARE("amdgpu/gc_12_0_1_imu_kicker.bin"); + + #define TRANSFER_RAM_MASK 0x001c0000 + + static int imu_v12_0_init_microcode(struct amdgpu_device *adev) + { +- char ucode_prefix[15]; ++ char ucode_prefix[30]; + int err; + const struct imu_firmware_header_v1_0 *imu_hdr; + struct amdgpu_firmware_info *info = NULL; +@@ -47,8 +48,12 @@ static int imu_v12_0_init_microcode(struct amdgpu_device *adev) + DRM_DEBUG("\n"); + + amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); +- err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, +- "amdgpu/%s_imu.bin", ucode_prefix); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, ++ "amdgpu/%s_imu_kicker.bin", ucode_prefix); ++ else ++ err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, ++ "amdgpu/%s_imu.bin", ucode_prefix); + if (err) + goto out; + +@@ -362,7 +367,7 @@ static void program_imu_rlc_ram(struct amdgpu_device *adev, + static void imu_v12_0_program_rlc_ram(struct amdgpu_device *adev) + { + u32 reg_data, size = 0; +- const u32 *data; ++ const u32 *data = NULL; + int r = -EINVAL; + + WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_INDEX, 0x2); +diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c +index 134c4ec1088785..910337dc28d105 100644 +--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c ++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c +@@ -36,40 +36,47 @@ + + static const char *mmhub_client_ids_v3_0_1[][2] = { + [0][0] = "VMC", ++ [1][0] = "ISPXT", ++ [2][0] = "ISPIXT", + [4][0] = "DCEDMC", + [5][0] = "DCEVGA", + [6][0] = "MP0", + [7][0] = "MP1", +- [8][0] = "MPIO", +- [16][0] = "HDP", +- [17][0] = "LSDMA", +- [18][0] = "JPEG", +- [19][0] = "VCNU0", +- [21][0] = "VSCH", +- 
[22][0] = "VCNU1", +- [23][0] = "VCN1", +- [32+20][0] = "VCN0", +- [2][1] = "DBGUNBIO", ++ [8][0] = "MPM", ++ [12][0] = "ISPTNR", ++ [14][0] = "ISPCRD0", ++ [15][0] = "ISPCRD1", ++ [16][0] = "ISPCRD2", ++ [22][0] = "HDP", ++ [23][0] = "LSDMA", ++ [24][0] = "JPEG", ++ [27][0] = "VSCH", ++ [28][0] = "VCNU", ++ [29][0] = "VCN", ++ [1][1] = "ISPXT", ++ [2][1] = "ISPIXT", + [3][1] = "DCEDWB", + [4][1] = "DCEDMC", + [5][1] = "DCEVGA", + [6][1] = "MP0", + [7][1] = "MP1", +- [8][1] = "MPIO", +- [10][1] = "DBGU0", +- [11][1] = "DBGU1", +- [12][1] = "DBGU2", +- [13][1] = "DBGU3", +- [14][1] = "XDP", +- [15][1] = "OSSSYS", +- [16][1] = "HDP", +- [17][1] = "LSDMA", +- [18][1] = "JPEG", +- [19][1] = "VCNU0", +- [20][1] = "VCN0", +- [21][1] = "VSCH", +- [22][1] = "VCNU1", +- [23][1] = "VCN1", ++ [8][1] = "MPM", ++ [10][1] = "ISPMWR0", ++ [11][1] = "ISPMWR1", ++ [12][1] = "ISPTNR", ++ [13][1] = "ISPSWR", ++ [14][1] = "ISPCWR0", ++ [15][1] = "ISPCWR1", ++ [16][1] = "ISPCWR2", ++ [17][1] = "ISPCWR3", ++ [18][1] = "XDP", ++ [21][1] = "OSSSYS", ++ [22][1] = "HDP", ++ [23][1] = "LSDMA", ++ [24][1] = "JPEG", ++ [27][1] = "VSCH", ++ [28][1] = "VCNU", ++ [29][1] = "VCN", + }; + + static uint32_t mmhub_v3_0_1_get_invalidate_req(unsigned int vmid, +diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c +index bc3d6c2fc87a42..f6fc9778bc3059 100644 +--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c ++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v3_3.c +@@ -40,30 +40,129 @@ + + static const char *mmhub_client_ids_v3_3[][2] = { + [0][0] = "VMC", ++ [1][0] = "ISPXT", ++ [2][0] = "ISPIXT", + [4][0] = "DCEDMC", + [6][0] = "MP0", + [7][0] = "MP1", + [8][0] = "MPM", ++ [9][0] = "ISPPDPRD", ++ [10][0] = "ISPCSTATRD", ++ [11][0] = "ISPBYRPRD", ++ [12][0] = "ISPRGBPRD", ++ [13][0] = "ISPMCFPRD", ++ [14][0] = "ISPMCFPRD1", ++ [15][0] = "ISPYUVPRD", ++ [16][0] = "ISPMCSCRD", ++ [17][0] = "ISPGDCRD", ++ [18][0] = "ISPLMERD", ++ [22][0] = "ISPXT1", ++ [23][0] = "ISPIXT1", + [24][0] = "HDP", + [25][0] = "LSDMA", + [26][0] = "JPEG", + [27][0] = "VPE", ++ [28][0] = "VSCH", + [29][0] = "VCNU", + [30][0] = "VCN", ++ [1][1] = "ISPXT", ++ [2][1] = "ISPIXT", + [3][1] = "DCEDWB", + [4][1] = "DCEDMC", ++ [5][1] = "ISPCSISWR", + [6][1] = "MP0", + [7][1] = "MP1", + [8][1] = "MPM", ++ [9][1] = "ISPPDPWR", ++ [10][1] = "ISPCSTATWR", ++ [11][1] = "ISPBYRPWR", ++ [12][1] = "ISPRGBPWR", ++ [13][1] = "ISPMCFPWR", ++ [14][1] = "ISPMWR0", ++ [15][1] = "ISPYUVPWR", ++ [16][1] = "ISPMCSCWR", ++ [17][1] = "ISPGDCWR", ++ [18][1] = "ISPLMEWR", ++ [20][1] = "ISPMWR2", + [21][1] = "OSSSYS", ++ [22][1] = "ISPXT1", ++ [23][1] = "ISPIXT1", + [24][1] = "HDP", + [25][1] = "LSDMA", + [26][1] = "JPEG", + [27][1] = "VPE", ++ [28][1] = "VSCH", + [29][1] = "VCNU", + [30][1] = "VCN", + }; + ++static const char *mmhub_client_ids_v3_3_1[][2] = { ++ [0][0] = "VMC", ++ [4][0] = "DCEDMC", ++ [6][0] = "MP0", ++ [7][0] = "MP1", ++ [8][0] = "MPM", ++ [24][0] = "HDP", ++ [25][0] = "LSDMA", ++ [26][0] = "JPEG0", ++ [27][0] = "VPE0", ++ [28][0] = "VSCH", ++ [29][0] = "VCNU0", ++ [30][0] = "VCN0", ++ [32+1][0] = "ISPXT", ++ [32+2][0] = "ISPIXT", ++ [32+9][0] = "ISPPDPRD", ++ [32+10][0] = "ISPCSTATRD", ++ [32+11][0] = "ISPBYRPRD", ++ [32+12][0] = "ISPRGBPRD", ++ [32+13][0] = "ISPMCFPRD", ++ [32+14][0] = "ISPMCFPRD1", ++ [32+15][0] = "ISPYUVPRD", ++ [32+16][0] = "ISPMCSCRD", ++ [32+17][0] = "ISPGDCRD", ++ [32+18][0] = "ISPLMERD", ++ [32+22][0] = "ISPXT1", ++ [32+23][0] = "ISPIXT1", ++ [32+26][0] = "JPEG1", ++ [32+27][0] = "VPE1", ++ [32+29][0] = 
"VCNU1", ++ [32+30][0] = "VCN1", ++ [3][1] = "DCEDWB", ++ [4][1] = "DCEDMC", ++ [6][1] = "MP0", ++ [7][1] = "MP1", ++ [8][1] = "MPM", ++ [21][1] = "OSSSYS", ++ [24][1] = "HDP", ++ [25][1] = "LSDMA", ++ [26][1] = "JPEG0", ++ [27][1] = "VPE0", ++ [28][1] = "VSCH", ++ [29][1] = "VCNU0", ++ [30][1] = "VCN0", ++ [32+1][1] = "ISPXT", ++ [32+2][1] = "ISPIXT", ++ [32+5][1] = "ISPCSISWR", ++ [32+9][1] = "ISPPDPWR", ++ [32+10][1] = "ISPCSTATWR", ++ [32+11][1] = "ISPBYRPWR", ++ [32+12][1] = "ISPRGBPWR", ++ [32+13][1] = "ISPMCFPWR", ++ [32+14][1] = "ISPMWR0", ++ [32+15][1] = "ISPYUVPWR", ++ [32+16][1] = "ISPMCSCWR", ++ [32+17][1] = "ISPGDCWR", ++ [32+18][1] = "ISPLMEWR", ++ [32+19][1] = "ISPMWR1", ++ [32+20][1] = "ISPMWR2", ++ [32+22][1] = "ISPXT1", ++ [32+23][1] = "ISPIXT1", ++ [32+26][1] = "JPEG1", ++ [32+27][1] = "VPE1", ++ [32+29][1] = "VCNU1", ++ [32+30][1] = "VCN1", ++}; ++ + static uint32_t mmhub_v3_3_get_invalidate_req(unsigned int vmid, + uint32_t flush_type) + { +@@ -102,12 +201,16 @@ mmhub_v3_3_print_l2_protection_fault_status(struct amdgpu_device *adev, + + switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { + case IP_VERSION(3, 3, 0): +- case IP_VERSION(3, 3, 1): + case IP_VERSION(3, 3, 2): + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_3) ? + mmhub_client_ids_v3_3[cid][rw] : + cid == 0x140 ? "UMSCH" : NULL; + break; ++ case IP_VERSION(3, 3, 1): ++ mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_3_1) ? ++ mmhub_client_ids_v3_3_1[cid][rw] : ++ cid == 0x140 ? "UMSCH" : NULL; ++ break; + default: + mmhub_cid = NULL; + break; +diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c +index f2ab5001b49249..951998454b2572 100644 +--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c +@@ -37,39 +37,31 @@ + static const char *mmhub_client_ids_v4_1_0[][2] = { + [0][0] = "VMC", + [4][0] = "DCEDMC", +- [5][0] = "DCEVGA", + [6][0] = "MP0", + [7][0] = "MP1", + [8][0] = "MPIO", +- [16][0] = "HDP", +- [17][0] = "LSDMA", +- [18][0] = "JPEG", +- [19][0] = "VCNU0", +- [21][0] = "VSCH", +- [22][0] = "VCNU1", +- [23][0] = "VCN1", +- [32+20][0] = "VCN0", +- [2][1] = "DBGUNBIO", ++ [16][0] = "LSDMA", ++ [17][0] = "JPEG", ++ [19][0] = "VCNU", ++ [22][0] = "VSCH", ++ [23][0] = "HDP", ++ [32+23][0] = "VCNRD", + [3][1] = "DCEDWB", + [4][1] = "DCEDMC", +- [5][1] = "DCEVGA", + [6][1] = "MP0", + [7][1] = "MP1", + [8][1] = "MPIO", + [10][1] = "DBGU0", + [11][1] = "DBGU1", +- [12][1] = "DBGU2", +- [13][1] = "DBGU3", ++ [12][1] = "DBGUNBIO", + [14][1] = "XDP", + [15][1] = "OSSSYS", +- [16][1] = "HDP", +- [17][1] = "LSDMA", +- [18][1] = "JPEG", +- [19][1] = "VCNU0", +- [20][1] = "VCN0", +- [21][1] = "VSCH", +- [22][1] = "VCNU1", +- [23][1] = "VCN1", ++ [16][1] = "LSDMA", ++ [17][1] = "JPEG", ++ [18][1] = "VCNWR", ++ [19][1] = "VCNU", ++ [22][1] = "VSCH", ++ [23][1] = "HDP", + }; + + static uint32_t mmhub_v4_1_0_get_invalidate_req(unsigned int vmid, +diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c +index ffa47c7d24c919..b029f301aaccaf 100644 +--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c +@@ -34,7 +34,9 @@ + MODULE_FIRMWARE("amdgpu/psp_14_0_2_sos.bin"); + MODULE_FIRMWARE("amdgpu/psp_14_0_2_ta.bin"); + MODULE_FIRMWARE("amdgpu/psp_14_0_3_sos.bin"); ++MODULE_FIRMWARE("amdgpu/psp_14_0_3_sos_kicker.bin"); + MODULE_FIRMWARE("amdgpu/psp_14_0_3_ta.bin"); ++MODULE_FIRMWARE("amdgpu/psp_14_0_3_ta_kicker.bin"); + MODULE_FIRMWARE("amdgpu/psp_14_0_5_toc.bin"); 
+ MODULE_FIRMWARE("amdgpu/psp_14_0_5_ta.bin"); + +diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c +index c457be3a3c56f5..9e74c9822e622a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/soc15.c ++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c +@@ -1218,6 +1218,8 @@ static int soc15_common_early_init(struct amdgpu_ip_block *ip_block) + AMD_PG_SUPPORT_JPEG; + /*TODO: need a new external_rev_id for GC 9.4.4? */ + adev->external_rev_id = adev->rev_id + 0x46; ++ if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 5, 0)) ++ adev->external_rev_id = adev->rev_id + 0x50; + break; + default: + /* FIXME: not supported yet */ +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +index 76359c6a3f3a44..91ce313ac43abe 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +@@ -2716,7 +2716,7 @@ static void get_queue_checkpoint_info(struct device_queue_manager *dqm, + + dqm_lock(dqm); + mqd_mgr = dqm->mqd_mgrs[mqd_type]; +- *mqd_size = mqd_mgr->mqd_size; ++ *mqd_size = mqd_mgr->mqd_size * NUM_XCC(mqd_mgr->dev->xcc_mask); + *ctl_stack_size = 0; + + if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE && mqd_mgr->get_checkpoint_info) +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_module.c b/drivers/gpu/drm/amd/amdkfd/kfd_module.c +index aee2212e52f69a..33aa23450b3f72 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_module.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_module.c +@@ -78,8 +78,8 @@ static int kfd_init(void) + static void kfd_exit(void) + { + kfd_cleanup_processes(); +- kfd_debugfs_fini(); + kfd_process_destroy_wq(); ++ kfd_debugfs_fini(); + kfd_procfs_shutdown(); + kfd_topology_shutdown(); + kfd_chardev_exit(); +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c +index 97933d2a380323..f2dee320fada42 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c +@@ -373,7 +373,7 @@ static void get_checkpoint_info(struct mqd_manager *mm, void *mqd, u32 *ctl_stac + { + struct v9_mqd *m = get_mqd(mqd); + +- *ctl_stack_size = m->cp_hqd_cntl_stack_size; ++ *ctl_stack_size = m->cp_hqd_cntl_stack_size * NUM_XCC(mm->dev->xcc_mask); + } + + static void checkpoint_mqd(struct mqd_manager *mm, void *mqd, void *mqd_dst, void *ctl_stack_dst) +@@ -388,6 +388,24 @@ static void checkpoint_mqd(struct mqd_manager *mm, void *mqd, void *mqd_dst, voi + memcpy(ctl_stack_dst, ctl_stack, m->cp_hqd_cntl_stack_size); + } + ++static void checkpoint_mqd_v9_4_3(struct mqd_manager *mm, ++ void *mqd, ++ void *mqd_dst, ++ void *ctl_stack_dst) ++{ ++ struct v9_mqd *m; ++ int xcc; ++ uint64_t size = get_mqd(mqd)->cp_mqd_stride_size; ++ ++ for (xcc = 0; xcc < NUM_XCC(mm->dev->xcc_mask); xcc++) { ++ m = get_mqd(mqd + size * xcc); ++ ++ checkpoint_mqd(mm, m, ++ (uint8_t *)mqd_dst + sizeof(*m) * xcc, ++ (uint8_t *)ctl_stack_dst + m->cp_hqd_cntl_stack_size * xcc); ++ } ++} ++ + static void restore_mqd(struct mqd_manager *mm, void **mqd, + struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr, + struct queue_properties *qp, +@@ -764,13 +782,35 @@ static void restore_mqd_v9_4_3(struct mqd_manager *mm, void **mqd, + const void *mqd_src, + const void *ctl_stack_src, u32 ctl_stack_size) + { +- restore_mqd(mm, mqd, mqd_mem_obj, gart_addr, qp, mqd_src, ctl_stack_src, ctl_stack_size); +- if (amdgpu_sriov_multi_vf_mode(mm->dev->adev)) { +- struct v9_mqd *m; ++ struct kfd_mem_obj 
xcc_mqd_mem_obj; ++ u32 mqd_ctl_stack_size; ++ struct v9_mqd *m; ++ u32 num_xcc; ++ int xcc; + +- m = (struct v9_mqd *) mqd_mem_obj->cpu_ptr; +- m->cp_hqd_pq_doorbell_control |= 1 << +- CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_MODE__SHIFT; ++ uint64_t offset = mm->mqd_stride(mm, qp); ++ ++ mm->dev->dqm->current_logical_xcc_start++; ++ ++ num_xcc = NUM_XCC(mm->dev->xcc_mask); ++ mqd_ctl_stack_size = ctl_stack_size / num_xcc; ++ ++ memset(&xcc_mqd_mem_obj, 0x0, sizeof(struct kfd_mem_obj)); ++ ++ /* Set the MQD pointer and gart address to XCC0 MQD */ ++ *mqd = mqd_mem_obj->cpu_ptr; ++ if (gart_addr) ++ *gart_addr = mqd_mem_obj->gpu_addr; ++ ++ for (xcc = 0; xcc < num_xcc; xcc++) { ++ get_xcc_mqd(mqd_mem_obj, &xcc_mqd_mem_obj, offset * xcc); ++ restore_mqd(mm, (void **)&m, ++ &xcc_mqd_mem_obj, ++ NULL, ++ qp, ++ (uint8_t *)mqd_src + xcc * sizeof(*m), ++ (uint8_t *)ctl_stack_src + xcc * mqd_ctl_stack_size, ++ mqd_ctl_stack_size); + } + } + static int destroy_mqd_v9_4_3(struct mqd_manager *mm, void *mqd, +@@ -906,7 +946,6 @@ struct mqd_manager *mqd_manager_init_v9(enum KFD_MQD_TYPE type, + mqd->free_mqd = kfd_free_mqd_cp; + mqd->is_occupied = kfd_is_occupied_cp; + mqd->get_checkpoint_info = get_checkpoint_info; +- mqd->checkpoint_mqd = checkpoint_mqd; + mqd->mqd_size = sizeof(struct v9_mqd); + mqd->mqd_stride = mqd_stride_v9; + #if defined(CONFIG_DEBUG_FS) +@@ -918,16 +957,18 @@ struct mqd_manager *mqd_manager_init_v9(enum KFD_MQD_TYPE type, + mqd->init_mqd = init_mqd_v9_4_3; + mqd->load_mqd = load_mqd_v9_4_3; + mqd->update_mqd = update_mqd_v9_4_3; +- mqd->restore_mqd = restore_mqd_v9_4_3; + mqd->destroy_mqd = destroy_mqd_v9_4_3; + mqd->get_wave_state = get_wave_state_v9_4_3; ++ mqd->checkpoint_mqd = checkpoint_mqd_v9_4_3; ++ mqd->restore_mqd = restore_mqd_v9_4_3; + } else { + mqd->init_mqd = init_mqd; + mqd->load_mqd = load_mqd; + mqd->update_mqd = update_mqd; +- mqd->restore_mqd = restore_mqd; + mqd->destroy_mqd = kfd_destroy_mqd_cp; + mqd->get_wave_state = get_wave_state; ++ mqd->checkpoint_mqd = checkpoint_mqd; ++ mqd->restore_mqd = restore_mqd; + } + break; + case KFD_MQD_TYPE_HIQ: +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c +index c643e0ccec52b0..7fbb5c274ccc42 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c +@@ -914,7 +914,10 @@ static int criu_checkpoint_queues_device(struct kfd_process_device *pdd, + + q_data = (struct kfd_criu_queue_priv_data *)q_private_data; + +- /* data stored in this order: priv_data, mqd, ctl_stack */ ++ /* ++ * data stored in this order: ++ * priv_data, mqd[xcc0], mqd[xcc1],..., ctl_stack[xcc0], ctl_stack[xcc1]... 
++ */ + q_data->mqd_size = mqd_size; + q_data->ctl_stack_size = ctl_stack_size; + +@@ -963,7 +966,7 @@ int kfd_criu_checkpoint_queues(struct kfd_process *p, + } + + static void set_queue_properties_from_criu(struct queue_properties *qp, +- struct kfd_criu_queue_priv_data *q_data) ++ struct kfd_criu_queue_priv_data *q_data, uint32_t num_xcc) + { + qp->is_interop = false; + qp->queue_percent = q_data->q_percent; +@@ -976,7 +979,11 @@ static void set_queue_properties_from_criu(struct queue_properties *qp, + qp->eop_ring_buffer_size = q_data->eop_ring_buffer_size; + qp->ctx_save_restore_area_address = q_data->ctx_save_restore_area_address; + qp->ctx_save_restore_area_size = q_data->ctx_save_restore_area_size; +- qp->ctl_stack_size = q_data->ctl_stack_size; ++ if (q_data->type == KFD_QUEUE_TYPE_COMPUTE) ++ qp->ctl_stack_size = q_data->ctl_stack_size / num_xcc; ++ else ++ qp->ctl_stack_size = q_data->ctl_stack_size; ++ + qp->type = q_data->type; + qp->format = q_data->format; + } +@@ -1036,12 +1043,15 @@ int kfd_criu_restore_queue(struct kfd_process *p, + goto exit; + } + +- /* data stored in this order: mqd, ctl_stack */ ++ /* ++ * data stored in this order: ++ * mqd[xcc0], mqd[xcc1],..., ctl_stack[xcc0], ctl_stack[xcc1]... ++ */ + mqd = q_extra_data; + ctl_stack = mqd + q_data->mqd_size; + + memset(&qp, 0, sizeof(qp)); +- set_queue_properties_from_criu(&qp, q_data); ++ set_queue_properties_from_criu(&qp, q_data, NUM_XCC(pdd->dev->adev->gfx.xcc_mask)); + + print_queue_properties(&qp); + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 7b5440bdad2f35..2d94fec5b545d7 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -3343,8 +3343,10 @@ static int dm_resume(struct amdgpu_ip_block *ip_block) + link_enc_cfg_copy(adev->dm.dc->current_state, dc_state); + + r = dm_dmub_hw_init(adev); +- if (r) ++ if (r) { + drm_err(adev_to_drm(adev), "DMUB interface failed to initialize: status=%d\n", r); ++ return r; ++ } + + dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0); + dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0); +@@ -4731,16 +4733,16 @@ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps, + return 1; + } + +-/* Rescale from [min..max] to [0..MAX_BACKLIGHT_LEVEL] */ ++/* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */ + static inline u32 scale_input_to_fw(int min, int max, u64 input) + { +- return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min); ++ return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min); + } + +-/* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */ ++/* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */ + static inline u32 scale_fw_to_input(int min, int max, u64 input) + { +- return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL); ++ return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL); + } + + static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps, +@@ -4952,9 +4954,9 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector) + caps = &dm->backlight_caps[aconnector->bl_idx]; + if (get_brightness_range(caps, &min, &max)) { + if (power_supply_is_system_supplied() > 0) +- props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->ac_level, 100); ++ props.brightness = DIV_ROUND_CLOSEST((max - min) * caps->ac_level, 100); + else +- props.brightness = (max - 
min) * DIV_ROUND_CLOSEST(caps->dc_level, 100); ++ props.brightness = DIV_ROUND_CLOSEST((max - min) * caps->dc_level, 100); + /* min is zero, so max needs to be adjusted */ + props.max_brightness = max - min; + drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max, +@@ -7759,6 +7761,9 @@ amdgpu_dm_connector_atomic_check(struct drm_connector *conn, + struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn); + int ret; + ++ if (WARN_ON(unlikely(!old_con_state || !new_con_state))) ++ return -EINVAL; ++ + trace_amdgpu_dm_connector_atomic_check(new_con_state); + + if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) { +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +index 2551823382f8b6..45feb404b0979e 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +@@ -299,6 +299,25 @@ static inline int amdgpu_dm_crtc_set_vblank(struct drm_crtc *crtc, bool enable) + irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id); + + if (enable) { ++ struct dc *dc = adev->dm.dc; ++ struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc); ++ struct psr_settings *psr = &acrtc_state->stream->link->psr_settings; ++ struct replay_settings *pr = &acrtc_state->stream->link->replay_settings; ++ bool sr_supported = (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED) || ++ pr->config.replay_supported; ++ ++ /* ++ * IPS & self-refresh feature can cause vblank counter resets between ++ * vblank disable and enable. ++ * It may cause system stuck due to waiting for the vblank counter. ++ * Call this function to estimate missed vblanks by using timestamps and ++ * update the vblank counter in DRM. 
++ */ ++ if (dc->caps.ips_support && ++ dc->config.disable_ips != DMUB_IPS_DISABLE_ALL && ++ sr_supported && vblank->config.disable_immediate) ++ drm_crtc_vblank_restore(crtc); ++ + /* vblank irq on -> Only need vupdate irq in vrr mode */ + if (amdgpu_dm_crtc_vrr_active(acrtc_state)) + rc = amdgpu_dm_crtc_set_vupdate_irq(crtc, true); +@@ -661,6 +680,15 @@ static int amdgpu_dm_crtc_helper_atomic_check(struct drm_crtc *crtc, + return -EINVAL; + } + ++ if (!state->legacy_cursor_update && amdgpu_dm_crtc_vrr_active(dm_crtc_state)) { ++ struct drm_plane_state *primary_state; ++ ++ /* Pull in primary plane for correct VRR handling */ ++ primary_state = drm_atomic_get_plane_state(state, crtc->primary); ++ if (IS_ERR(primary_state)) ++ return PTR_ERR(primary_state); ++ } ++ + /* In some use cases, like reset, no stream is attached */ + if (!dm_crtc_state->stream) + return 0; +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c +index c7d13e743e6c8c..b726bcd18e2982 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c +@@ -3988,7 +3988,7 @@ static int capabilities_show(struct seq_file *m, void *unused) + + struct hubbub *hubbub = dc->res_pool->hubbub; + +- if (hubbub->funcs->get_mall_en) ++ if (hubbub && hubbub->funcs->get_mall_en) + hubbub->funcs->get_mall_en(hubbub, &mall_in_use); + + if (dc->cap_funcs.get_subvp_en) +diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c +index 67f08495b7e6e2..154fd2c18e8848 100644 +--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c ++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c +@@ -174,11 +174,8 @@ static struct graphics_object_id bios_parser_get_connector_id( + return object_id; + } + +- if (tbl->ucNumberOfObjects <= i) { +- dm_error("Can't find connector id %d in connector table of size %d.\n", +- i, tbl->ucNumberOfObjects); ++ if (tbl->ucNumberOfObjects <= i) + return object_id; +- } + + id = le16_to_cpu(tbl->asObjects[i].usObjectID); + object_id = object_id_from_bios_object_id(id); +diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c +index 2bcae0643e61db..58e88778da7ffd 100644 +--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c ++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c +@@ -993,7 +993,7 @@ static enum bp_result set_pixel_clock_v3( + allocation.sPCLKInput.usFbDiv = + cpu_to_le16((uint16_t)bp_params->feedback_divider); + allocation.sPCLKInput.ucFracFbDiv = +- (uint8_t)bp_params->fractional_feedback_divider; ++ (uint8_t)(bp_params->fractional_feedback_divider / 100000); + allocation.sPCLKInput.ucPostDiv = + (uint8_t)bp_params->pixel_clock_post_divider; + +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c +index 4c3e58c730b11c..a0c1072c59a236 100644 +--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c +@@ -158,7 +158,6 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p + return NULL; + } + dce60_clk_mgr_construct(ctx, clk_mgr); +- dce_clk_mgr_construct(ctx, clk_mgr); + return &clk_mgr->base; + } + #endif +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c +index 26feefbb8990ae..dbd6ef1b60a0b7 100644 +--- 
a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce100/dce_clk_mgr.c +@@ -72,9 +72,9 @@ static const struct state_dependent_clocks dce80_max_clks_by_state[] = { + /* ClocksStateLow */ + { .display_clk_khz = 352000, .pixel_clk_khz = 330000}, + /* ClocksStateNominal */ +-{ .display_clk_khz = 600000, .pixel_clk_khz = 400000 }, ++{ .display_clk_khz = 625000, .pixel_clk_khz = 400000 }, + /* ClocksStatePerformance */ +-{ .display_clk_khz = 600000, .pixel_clk_khz = 400000 } }; ++{ .display_clk_khz = 625000, .pixel_clk_khz = 400000 } }; + + int dentist_get_divider_from_did(int did) + { +@@ -245,6 +245,11 @@ int dce_set_clock( + pxl_clk_params.target_pixel_clock_100hz = requested_clk_khz * 10; + pxl_clk_params.pll_id = CLOCK_SOURCE_ID_DFS; + ++ /* DCE 6.0, DCE 6.4: engine clock is the same as PLL0 */ ++ if (clk_mgr_base->ctx->dce_version == DCE_VERSION_6_0 || ++ clk_mgr_base->ctx->dce_version == DCE_VERSION_6_4) ++ pxl_clk_params.pll_id = CLOCK_SOURCE_ID_PLL0; ++ + if (clk_mgr_dce->dfs_bypass_active) + pxl_clk_params.flags.SET_DISPCLK_DFS_BYPASS = true; + +@@ -386,8 +391,6 @@ static void dce_pplib_apply_display_requirements( + { + struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg; + +- pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); +- + dce110_fill_display_configs(context, pp_display_cfg); + + if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) +@@ -400,11 +403,9 @@ static void dce_update_clocks(struct clk_mgr *clk_mgr_base, + { + struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base); + struct dm_pp_power_level_change_request level_change_req; +- int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz; +- +- /*TODO: W/A for dal3 linux, investigate why this works */ +- if (!clk_mgr_dce->dfs_bypass_active) +- patched_disp_clk = patched_disp_clk * 115 / 100; ++ const int max_disp_clk = ++ clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; ++ int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz); + + level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context); + /* get max clock state from PPLIB */ +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c +index f8409453434c1c..13cf415e38e501 100644 +--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c +@@ -120,9 +120,15 @@ void dce110_fill_display_configs( + const struct dc_state *context, + struct dm_pp_display_configuration *pp_display_cfg) + { ++ struct dc *dc = context->clk_mgr->ctx->dc; + int j; + int num_cfgs = 0; + ++ pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); ++ pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz; ++ pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0; ++ pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator; ++ + for (j = 0; j < context->stream_count; j++) { + int k; + +@@ -164,6 +170,23 @@ void dce110_fill_display_configs( + cfg->v_refresh /= stream->timing.h_total; + cfg->v_refresh = (cfg->v_refresh + stream->timing.v_total / 2) + / stream->timing.v_total; ++ ++ /* Find first CRTC index and calculate its line time. ++ * This is necessary for DPM on SI GPUs. 
++ */ ++ if (cfg->pipe_idx < pp_display_cfg->crtc_index) { ++ const struct dc_crtc_timing *timing = ++ &context->streams[0]->timing; ++ ++ pp_display_cfg->crtc_index = cfg->pipe_idx; ++ pp_display_cfg->line_time_in_us = ++ timing->h_total * 10000 / timing->pix_clk_100hz; ++ } ++ } ++ ++ if (!num_cfgs) { ++ pp_display_cfg->crtc_index = 0; ++ pp_display_cfg->line_time_in_us = 0; + } + + pp_display_cfg->display_count = num_cfgs; +@@ -223,25 +246,8 @@ void dce11_pplib_apply_display_requirements( + pp_display_cfg->min_engine_clock_deep_sleep_khz + = context->bw_ctx.bw.dce.sclk_deep_sleep_khz; + +- pp_display_cfg->avail_mclk_switch_time_us = +- dce110_get_min_vblank_time_us(context); +- /* TODO: dce11.2*/ +- pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0; +- +- pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz; +- + dce110_fill_display_configs(context, pp_display_cfg); + +- /* TODO: is this still applicable?*/ +- if (pp_display_cfg->display_count == 1) { +- const struct dc_crtc_timing *timing = +- &context->streams[0]->timing; +- +- pp_display_cfg->crtc_index = +- pp_display_cfg->disp_configs[0].pipe_idx; +- pp_display_cfg->line_time_in_us = timing->h_total * 10000 / timing->pix_clk_100hz; +- } +- + if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) + dm_pp_apply_display_requirements(dc->ctx, pp_display_cfg); + } +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c +index 0267644717b27a..a39641a0ff09ef 100644 +--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce60/dce60_clk_mgr.c +@@ -83,22 +83,13 @@ static const struct state_dependent_clocks dce60_max_clks_by_state[] = { + static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base) + { + struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); +- int dprefclk_wdivider; +- int dp_ref_clk_khz; +- int target_div; ++ struct dc_context *ctx = clk_mgr_base->ctx; ++ int dp_ref_clk_khz = 0; + +- /* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */ +- +- /* Read the mmDENTIST_DISPCLK_CNTL to get the currently +- * programmed DID DENTIST_DPREFCLK_WDIVIDER*/ +- REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider); +- +- /* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/ +- target_div = dentist_get_divider_from_did(dprefclk_wdivider); +- +- /* Calculate the current DFS clock, in kHz.*/ +- dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR +- * clk_mgr->base.dentist_vco_freq_khz) / target_div; ++ if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev)) ++ dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency; ++ else ++ dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz; + + return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz); + } +@@ -109,8 +100,6 @@ static void dce60_pplib_apply_display_requirements( + { + struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg; + +- pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context); +- + dce110_fill_display_configs(context, pp_display_cfg); + + if (memcmp(&dc->current_state->pp_display_cfg, pp_display_cfg, sizeof(*pp_display_cfg)) != 0) +@@ -123,11 +112,9 @@ static void dce60_update_clocks(struct clk_mgr *clk_mgr_base, + { + struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base); + struct dm_pp_power_level_change_request level_change_req; +- int 
patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz; +- +- /*TODO: W/A for dal3 linux, investigate why this works */ +- if (!clk_mgr_dce->dfs_bypass_active) +- patched_disp_clk = patched_disp_clk * 115 / 100; ++ const int max_disp_clk = ++ clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz; ++ int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz); + + level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context); + /* get max clock state from PPLIB */ +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c +index 0017e9991670bd..eb76611a42a5eb 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c +@@ -217,11 +217,24 @@ static bool create_links( + connectors_num, + num_virtual_links); + +- // condition loop on link_count to allow skipping invalid indices ++ /* When getting the number of connectors, the VBIOS reports the number of valid indices, ++ * but it doesn't say which indices are valid, and not every index has an actual connector. ++ * So, if we don't find a connector on an index, that is not an error. ++ * ++ * - There is no guarantee that the first N indices will be valid ++ * - VBIOS may report a higher amount of valid indices than there are actual connectors ++ * - Some VBIOS have valid configurations for more connectors than there actually are ++ * on the card. This may be because the manufacturer used the same VBIOS for different ++ * variants of the same card. ++ */ + for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) { ++ struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i); + struct link_init_data link_init_params = {0}; + struct dc_link *link; + ++ if (connector_id.id == CONNECTOR_ID_UNKNOWN) ++ continue; ++ + DC_LOG_DC("BIOS object table - printing link object info for connector number: %d, link_index: %d", i, dc->link_count); + + link_init_params.ctx = dc->ctx; +@@ -938,17 +951,18 @@ static void dc_destruct(struct dc *dc) + if (dc->link_srv) + link_destroy_link_service(&dc->link_srv); + +- if (dc->ctx->gpio_service) +- dal_gpio_service_destroy(&dc->ctx->gpio_service); +- +- if (dc->ctx->created_bios) +- dal_bios_parser_destroy(&dc->ctx->dc_bios); ++ if (dc->ctx) { ++ if (dc->ctx->gpio_service) ++ dal_gpio_service_destroy(&dc->ctx->gpio_service); + +- kfree(dc->ctx->logger); +- dc_perf_trace_destroy(&dc->ctx->perf_trace); ++ if (dc->ctx->created_bios) ++ dal_bios_parser_destroy(&dc->ctx->dc_bios); ++ kfree(dc->ctx->logger); ++ dc_perf_trace_destroy(&dc->ctx->perf_trace); + +- kfree(dc->ctx); +- dc->ctx = NULL; ++ kfree(dc->ctx); ++ dc->ctx = NULL; ++ } + + kfree(dc->bw_vbios); + dc->bw_vbios = NULL; +diff --git a/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c +index d9ffdded5ce1e1..6944bac4ea9b2e 100644 +--- a/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/resource/dce60/dce60_resource.c +@@ -373,7 +373,7 @@ static const struct resource_caps res_cap = { + .num_timing_generator = 6, + .num_audio = 6, + .num_stream_encoder = 6, +- .num_pll = 2, ++ .num_pll = 3, + .num_ddc = 6, + }; + +@@ -389,7 +389,7 @@ static const struct resource_caps res_cap_64 = { + .num_timing_generator = 2, + .num_audio = 2, + .num_stream_encoder = 2, +- .num_pll = 2, ++ .num_pll = 3, + .num_ddc = 2, + }; + +@@ -973,21 +973,24 @@ static bool dce60_construct( + + if 
(bp->fw_info_valid && bp->fw_info.external_clock_source_frequency_for_dp != 0) { + pool->base.dp_clock_source = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true); + ++ /* DCE 6.0 and 6.4: PLL0 can only be used with DP. Don't initialize it here. */ + pool->base.clock_sources[0] = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], false); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false); + pool->base.clock_sources[1] = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false); + pool->base.clk_src_count = 2; + + } else { + pool->base.dp_clock_source = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], true); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], true); + + pool->base.clock_sources[0] = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false); +- pool->base.clk_src_count = 1; ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false); ++ pool->base.clock_sources[1] = ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false); ++ pool->base.clk_src_count = 2; + } + + if (pool->base.dp_clock_source == NULL) { +@@ -1365,21 +1368,24 @@ static bool dce64_construct( + + if (bp->fw_info_valid && bp->fw_info.external_clock_source_frequency_for_dp != 0) { + pool->base.dp_clock_source = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_EXTERNAL, NULL, true); + ++ /* DCE 6.0 and 6.4: PLL0 can only be used with DP. Don't initialize it here. 
*/ + pool->base.clock_sources[0] = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[0], false); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false); + pool->base.clock_sources[1] = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[1], false); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false); + pool->base.clk_src_count = 2; + + } else { + pool->base.dp_clock_source = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[0], true); ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL0, &clk_src_regs[0], true); + + pool->base.clock_sources[0] = +- dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[1], false); +- pool->base.clk_src_count = 1; ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL1, &clk_src_regs[1], false); ++ pool->base.clock_sources[1] = ++ dce60_clock_source_create(ctx, bp, CLOCK_SOURCE_ID_PLL2, &clk_src_regs[2], false); ++ pool->base.clk_src_count = 2; + } + + if (pool->base.dp_clock_source == NULL) { +diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c +index e58e7b93810be7..6b7db8ec9a53b2 100644 +--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c ++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp_psp.c +@@ -260,6 +260,9 @@ enum mod_hdcp_status mod_hdcp_hdcp1_create_session(struct mod_hdcp *hdcp) + return MOD_HDCP_STATUS_FAILURE; + } + ++ if (!display) ++ return MOD_HDCP_STATUS_DISPLAY_NOT_FOUND; ++ + hdcp_cmd = (struct ta_hdcp_shared_memory *)psp->hdcp_context.context.mem_context.shared_buf; + + mutex_lock(&psp->hdcp_context.mutex); +diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c +index d79a1d94661a54..26b8e232f85825 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c +@@ -76,6 +76,9 @@ static void smu_power_profile_mode_get(struct smu_context *smu, + enum PP_SMC_POWER_PROFILE profile_mode); + static void smu_power_profile_mode_put(struct smu_context *smu, + enum PP_SMC_POWER_PROFILE profile_mode); ++static int smu_od_edit_dpm_table(void *handle, ++ enum PP_OD_DPM_TABLE_COMMAND type, ++ long *input, uint32_t size); + + static int smu_sys_get_pp_feature_mask(void *handle, + char *buf) +@@ -2144,6 +2147,7 @@ static int smu_resume(struct amdgpu_ip_block *ip_block) + int ret; + struct amdgpu_device *adev = ip_block->adev; + struct smu_context *smu = adev->powerplay.pp_handle; ++ struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm); + + if (amdgpu_sriov_multi_vf_mode(adev)) + return 0; +@@ -2175,6 +2179,18 @@ static int smu_resume(struct amdgpu_ip_block *ip_block) + + adev->pm.dpm_enabled = true; + ++ if (smu->current_power_limit) { ++ ret = smu_set_power_limit(smu, smu->current_power_limit); ++ if (ret && ret != -EOPNOTSUPP) ++ return ret; ++ } ++ ++ if (smu_dpm_ctx->dpm_level == AMD_DPM_FORCED_LEVEL_MANUAL) { ++ ret = smu_od_edit_dpm_table(smu, PP_OD_COMMIT_DPM_TABLE, NULL, 0); ++ if (ret) ++ return ret; ++ } ++ + dev_info(adev->dev, "SMU is resumed successfully!\n"); + + return 0; +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c +index 76c1adda83dbc8..f9b0938c57ea71 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c +@@ -62,13 +62,14 @@ const int decoded_link_width[8] = {0, 1, 2, 4, 8, 12, 16, 32}; + + 
MODULE_FIRMWARE("amdgpu/smu_14_0_2.bin"); + MODULE_FIRMWARE("amdgpu/smu_14_0_3.bin"); ++MODULE_FIRMWARE("amdgpu/smu_14_0_3_kicker.bin"); + + #define ENABLE_IMU_ARG_GFXOFF_ENABLE 1 + + int smu_v14_0_init_microcode(struct smu_context *smu) + { + struct amdgpu_device *adev = smu->adev; +- char ucode_prefix[15]; ++ char ucode_prefix[30]; + int err = 0; + const struct smc_firmware_header_v1_0 *hdr; + const struct common_firmware_header *header; +@@ -79,8 +80,12 @@ int smu_v14_0_init_microcode(struct smu_context *smu) + return 0; + + amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix)); +- err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, +- "amdgpu/%s.bin", ucode_prefix); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, ++ "amdgpu/%s_kicker.bin", ucode_prefix); ++ else ++ err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, ++ "amdgpu/%s.bin", ucode_prefix); + if (err) + goto out; + +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c +index 82c2db972491d4..7c8d19cfa324e3 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c +@@ -1689,9 +1689,11 @@ static int smu_v14_0_2_get_power_limit(struct smu_context *smu, + uint32_t *min_power_limit) + { + struct smu_table_context *table_context = &smu->smu_table; ++ struct smu_14_0_2_powerplay_table *powerplay_table = ++ table_context->power_play_table; + PPTable_t *pptable = table_context->driver_pptable; + CustomSkuTable_t *skutable = &pptable->CustomSkuTable; +- uint32_t power_limit; ++ uint32_t power_limit, od_percent_upper = 0, od_percent_lower = 0; + uint32_t msg_limit = pptable->SkuTable.MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; + + if (smu_v14_0_get_current_power_limit(smu, &power_limit)) +@@ -1704,11 +1706,29 @@ static int smu_v14_0_2_get_power_limit(struct smu_context *smu, + if (default_power_limit) + *default_power_limit = power_limit; + +- if (max_power_limit) +- *max_power_limit = msg_limit; ++ if (powerplay_table) { ++ if (smu->od_enabled && ++ smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) { ++ od_percent_upper = pptable->SkuTable.OverDriveLimitsBasicMax.Ppt; ++ od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt; ++ } else if (smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_PPT_BIT)) { ++ od_percent_upper = 0; ++ od_percent_lower = pptable->SkuTable.OverDriveLimitsBasicMin.Ppt; ++ } ++ } ++ ++ dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", ++ od_percent_upper, od_percent_lower, power_limit); ++ ++ if (max_power_limit) { ++ *max_power_limit = msg_limit * (100 + od_percent_upper); ++ *max_power_limit /= 100; ++ } + +- if (min_power_limit) +- *min_power_limit = 0; ++ if (min_power_limit) { ++ *min_power_limit = power_limit * (100 + od_percent_lower); ++ *min_power_limit /= 100; ++ } + + return 0; + } +diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c +index ea78c6c8ca7a63..dc622c78db9d4a 100644 +--- a/drivers/gpu/drm/display/drm_dp_helper.c ++++ b/drivers/gpu/drm/display/drm_dp_helper.c +@@ -725,7 +725,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset, + * monitor doesn't power down exactly after the throw away read. 
+ */ + if (!aux->is_remote) { +- ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET); ++ ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS); + if (ret < 0) + return ret; + } +diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c +index d36e6cacc575e3..73b5a80771cc9f 100644 +--- a/drivers/gpu/drm/drm_format_helper.c ++++ b/drivers/gpu/drm/drm_format_helper.c +@@ -857,11 +857,33 @@ static void drm_fb_xrgb8888_to_abgr8888_line(void *dbuf, const void *sbuf, unsig + drm_fb_xfrm_line_32to32(dbuf, sbuf, pixels, drm_pixel_xrgb8888_to_abgr8888); + } + +-static void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned int *dst_pitch, +- const struct iosys_map *src, +- const struct drm_framebuffer *fb, +- const struct drm_rect *clip, +- struct drm_format_conv_state *state) ++/** ++ * drm_fb_xrgb8888_to_abgr8888 - Convert XRGB8888 to ABGR8888 clip buffer ++ * @dst: Array of ABGR8888 destination buffers ++ * @dst_pitch: Array of numbers of bytes between the start of two consecutive scanlines ++ * within @dst; can be NULL if scanlines are stored next to each other. ++ * @src: Array of XRGB8888 source buffer ++ * @fb: DRM framebuffer ++ * @clip: Clip rectangle area to copy ++ * @state: Transform and conversion state ++ * ++ * This function copies parts of a framebuffer to display memory and converts the ++ * color format during the process. The parameters @dst, @dst_pitch and @src refer ++ * to arrays. Each array must have at least as many entries as there are planes in ++ * @fb's format. Each entry stores the value for the format's respective color plane ++ * at the same index. ++ * ++ * This function does not apply clipping on @dst (i.e. the destination is at the ++ * top-left corner). ++ * ++ * Drivers can use this function for ABGR8888 devices that don't support XRGB8888 ++ * natively. It sets an opaque alpha channel as part of the conversion. ++ */ ++void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned int *dst_pitch, ++ const struct iosys_map *src, ++ const struct drm_framebuffer *fb, ++ const struct drm_rect *clip, ++ struct drm_format_conv_state *state) + { + static const u8 dst_pixsize[DRM_FORMAT_MAX_PLANES] = { + 4, +@@ -870,17 +892,40 @@ static void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned in + drm_fb_xfrm(dst, dst_pitch, dst_pixsize, src, fb, clip, false, state, + drm_fb_xrgb8888_to_abgr8888_line); + } ++EXPORT_SYMBOL(drm_fb_xrgb8888_to_abgr8888); + + static void drm_fb_xrgb8888_to_xbgr8888_line(void *dbuf, const void *sbuf, unsigned int pixels) + { + drm_fb_xfrm_line_32to32(dbuf, sbuf, pixels, drm_pixel_xrgb8888_to_xbgr8888); + } + +-static void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned int *dst_pitch, +- const struct iosys_map *src, +- const struct drm_framebuffer *fb, +- const struct drm_rect *clip, +- struct drm_format_conv_state *state) ++/** ++ * drm_fb_xrgb8888_to_xbgr8888 - Convert XRGB8888 to XBGR8888 clip buffer ++ * @dst: Array of XBGR8888 destination buffers ++ * @dst_pitch: Array of numbers of bytes between the start of two consecutive scanlines ++ * within @dst; can be NULL if scanlines are stored next to each other. ++ * @src: Array of XRGB8888 source buffer ++ * @fb: DRM framebuffer ++ * @clip: Clip rectangle area to copy ++ * @state: Transform and conversion state ++ * ++ * This function copies parts of a framebuffer to display memory and converts the ++ * color format during the process. The parameters @dst, @dst_pitch and @src refer ++ * to arrays. 
Each array must have at least as many entries as there are planes in ++ * @fb's format. Each entry stores the value for the format's respective color plane ++ * at the same index. ++ * ++ * This function does not apply clipping on @dst (i.e. the destination is at the ++ * top-left corner). ++ * ++ * Drivers can use this function for XBGR8888 devices that don't support XRGB8888 ++ * natively. ++ */ ++void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned int *dst_pitch, ++ const struct iosys_map *src, ++ const struct drm_framebuffer *fb, ++ const struct drm_rect *clip, ++ struct drm_format_conv_state *state) + { + static const u8 dst_pixsize[DRM_FORMAT_MAX_PLANES] = { + 4, +@@ -889,6 +934,49 @@ static void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned in + drm_fb_xfrm(dst, dst_pitch, dst_pixsize, src, fb, clip, false, state, + drm_fb_xrgb8888_to_xbgr8888_line); + } ++EXPORT_SYMBOL(drm_fb_xrgb8888_to_xbgr8888); ++ ++static void drm_fb_xrgb8888_to_bgrx8888_line(void *dbuf, const void *sbuf, unsigned int pixels) ++{ ++ drm_fb_xfrm_line_32to32(dbuf, sbuf, pixels, drm_pixel_xrgb8888_to_bgrx8888); ++} ++ ++/** ++ * drm_fb_xrgb8888_to_bgrx8888 - Convert XRGB8888 to BGRX8888 clip buffer ++ * @dst: Array of BGRX8888 destination buffers ++ * @dst_pitch: Array of numbers of bytes between the start of two consecutive scanlines ++ * within @dst; can be NULL if scanlines are stored next to each other. ++ * @src: Array of XRGB8888 source buffer ++ * @fb: DRM framebuffer ++ * @clip: Clip rectangle area to copy ++ * @state: Transform and conversion state ++ * ++ * This function copies parts of a framebuffer to display memory and converts the ++ * color format during the process. The parameters @dst, @dst_pitch and @src refer ++ * to arrays. Each array must have at least as many entries as there are planes in ++ * @fb's format. Each entry stores the value for the format's respective color plane ++ * at the same index. ++ * ++ * This function does not apply clipping on @dst (i.e. the destination is at the ++ * top-left corner). ++ * ++ * Drivers can use this function for BGRX8888 devices that don't support XRGB8888 ++ * natively. 
++ */ ++void drm_fb_xrgb8888_to_bgrx8888(struct iosys_map *dst, const unsigned int *dst_pitch, ++ const struct iosys_map *src, ++ const struct drm_framebuffer *fb, ++ const struct drm_rect *clip, ++ struct drm_format_conv_state *state) ++{ ++ static const u8 dst_pixsize[DRM_FORMAT_MAX_PLANES] = { ++ 4, ++ }; ++ ++ drm_fb_xfrm(dst, dst_pitch, dst_pixsize, src, fb, clip, false, state, ++ drm_fb_xrgb8888_to_bgrx8888_line); ++} ++EXPORT_SYMBOL(drm_fb_xrgb8888_to_bgrx8888); + + static void drm_fb_xrgb8888_to_xrgb2101010_line(void *dbuf, const void *sbuf, unsigned int pixels) + { +diff --git a/drivers/gpu/drm/drm_format_internal.h b/drivers/gpu/drm/drm_format_internal.h +index 9f857bfa368d10..0aa458b8a3e0af 100644 +--- a/drivers/gpu/drm/drm_format_internal.h ++++ b/drivers/gpu/drm/drm_format_internal.h +@@ -111,6 +111,14 @@ static inline u32 drm_pixel_xrgb8888_to_xbgr8888(u32 pix) + ((pix & 0x000000ff) << 16); + } + ++static inline u32 drm_pixel_xrgb8888_to_bgrx8888(u32 pix) ++{ ++ return ((pix & 0xff000000) >> 24) | /* also copy filler bits */ ++ ((pix & 0x00ff0000) >> 8) | ++ ((pix & 0x0000ff00) << 8) | ++ ((pix & 0x000000ff) << 24); ++} ++ + static inline u32 drm_pixel_xrgb8888_to_abgr8888(u32 pix) + { + return GENMASK(31, 24) | /* fill alpha bits */ +diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs +index 18492daae4b345..b9cc64458437d2 100644 +--- a/drivers/gpu/drm/drm_panic_qr.rs ++++ b/drivers/gpu/drm/drm_panic_qr.rs +@@ -381,6 +381,26 @@ struct DecFifo { + len: usize, + } + ++// On arm32 architecture, dividing an `u64` by a constant will generate a call ++// to `__aeabi_uldivmod` which is not present in the kernel. ++// So use the multiply by inverse method for this architecture. ++fn div10(val: u64) -> u64 { ++ if cfg!(target_arch = "arm") { ++ let val_h = val >> 32; ++ let val_l = val & 0xFFFFFFFF; ++ let b_h: u64 = 0x66666666; ++ let b_l: u64 = 0x66666667; ++ ++ let tmp1 = val_h * b_l + ((val_l * b_l) >> 32); ++ let tmp2 = val_l * b_h + (tmp1 & 0xffffffff); ++ let tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32); ++ ++ tmp3 >> 2 ++ } else { ++ val / 10 ++ } ++} ++ + impl DecFifo { + fn push(&mut self, data: u64, len: usize) { + let mut chunk = data; +@@ -389,7 +409,7 @@ fn push(&mut self, data: u64, len: usize) { + } + for i in 0..len { + self.decimals[i] = (chunk % 10) as u8; +- chunk /= 10; ++ chunk = div10(chunk); + } + self.len += len; + } +diff --git a/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c b/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c +index 74f7832ea53eae..0726cb5b736e60 100644 +--- a/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c ++++ b/drivers/gpu/drm/hisilicon/hibmc/dp/dp_link.c +@@ -325,6 +325,17 @@ static int hibmc_dp_link_downgrade_training_eq(struct hibmc_dp_dev *dp) + return hibmc_dp_link_reduce_rate(dp); + } + ++static void hibmc_dp_update_caps(struct hibmc_dp_dev *dp) ++{ ++ dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE]; ++ if (dp->link.cap.link_rate > DP_LINK_BW_8_1 || !dp->link.cap.link_rate) ++ dp->link.cap.link_rate = DP_LINK_BW_8_1; ++ ++ dp->link.cap.lanes = dp->dpcd[DP_MAX_LANE_COUNT] & DP_MAX_LANE_COUNT_MASK; ++ if (dp->link.cap.lanes > HIBMC_DP_LANE_NUM_MAX) ++ dp->link.cap.lanes = HIBMC_DP_LANE_NUM_MAX; ++} ++ + int hibmc_dp_link_training(struct hibmc_dp_dev *dp) + { + struct hibmc_dp_link *link = &dp->link; +@@ -334,8 +345,7 @@ int hibmc_dp_link_training(struct hibmc_dp_dev *dp) + if (ret) + drm_err(dp->dev, "dp aux read dpcd failed, ret: %d\n", ret); + +- dp->link.cap.link_rate = dp->dpcd[DP_MAX_LINK_RATE]; +- 
dp->link.cap.lanes = 0x2; ++ hibmc_dp_update_caps(dp); + + ret = hibmc_dp_get_serdes_rate_cfg(dp); + if (ret < 0) +diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c +index 768b97f9e74afe..289304500ab097 100644 +--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c ++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c +@@ -32,7 +32,7 @@ + + DEFINE_DRM_GEM_FOPS(hibmc_fops); + +-static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "vblank", "hpd" }; ++static const char *g_irqs_names_map[HIBMC_MAX_VECTORS] = { "hibmc-vblank", "hibmc-hpd" }; + + static irqreturn_t hibmc_interrupt(int irq, void *arg) + { +@@ -115,6 +115,8 @@ static const struct drm_mode_config_funcs hibmc_mode_funcs = { + static int hibmc_kms_init(struct hibmc_drm_private *priv) + { + struct drm_device *dev = &priv->dev; ++ struct drm_encoder *encoder; ++ u32 clone_mask = 0; + int ret; + + ret = drmm_mode_config_init(dev); +@@ -154,6 +156,12 @@ static int hibmc_kms_init(struct hibmc_drm_private *priv) + return ret; + } + ++ drm_for_each_encoder(encoder, dev) ++ clone_mask |= drm_encoder_mask(encoder); ++ ++ drm_for_each_encoder(encoder, dev) ++ encoder->possible_clones = clone_mask; ++ + return 0; + } + +@@ -277,7 +285,6 @@ static void hibmc_unload(struct drm_device *dev) + static int hibmc_msi_init(struct drm_device *dev) + { + struct pci_dev *pdev = to_pci_dev(dev->dev); +- char name[32] = {0}; + int valid_irq_num; + int irq; + int ret; +@@ -292,9 +299,6 @@ static int hibmc_msi_init(struct drm_device *dev) + valid_irq_num = ret; + + for (int i = 0; i < valid_irq_num; i++) { +- snprintf(name, ARRAY_SIZE(name) - 1, "%s-%s-%s", +- dev->driver->name, pci_name(pdev), g_irqs_names_map[i]); +- + irq = pci_irq_vector(pdev, i); + + if (i) +@@ -302,10 +306,10 @@ static int hibmc_msi_init(struct drm_device *dev) + ret = devm_request_threaded_irq(&pdev->dev, irq, + hibmc_dp_interrupt, + hibmc_dp_hpd_isr, +- IRQF_SHARED, name, dev); ++ IRQF_SHARED, g_irqs_names_map[i], dev); + else + ret = devm_request_irq(&pdev->dev, irq, hibmc_interrupt, +- IRQF_SHARED, name, dev); ++ IRQF_SHARED, g_irqs_names_map[i], dev); + if (ret) { + drm_err(dev, "install irq failed: %d\n", ret); + return ret; +@@ -323,13 +327,13 @@ static int hibmc_load(struct drm_device *dev) + + ret = hibmc_hw_init(priv); + if (ret) +- goto err; ++ return ret; + + ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0), + pci_resource_len(pdev, 0)); + if (ret) { + drm_err(dev, "Error initializing VRAM MM; %d\n", ret); +- goto err; ++ return ret; + } + + ret = hibmc_kms_init(priv); +diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h +index 274feabe7df007..ca8502e2760c12 100644 +--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h ++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h +@@ -69,6 +69,7 @@ int hibmc_de_init(struct hibmc_drm_private *priv); + int hibmc_vdac_init(struct hibmc_drm_private *priv); + + int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *connector); ++void hibmc_ddc_del(struct hibmc_vdac *vdac); + + int hibmc_dp_init(struct hibmc_drm_private *priv); + +diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c +index 99b3b77b5445f6..44860011855eb6 100644 +--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c ++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c +@@ -95,3 +95,8 @@ int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_vdac *vdac) + + 
return i2c_bit_add_bus(&vdac->adapter); + } ++ ++void hibmc_ddc_del(struct hibmc_vdac *vdac) ++{ ++ i2c_del_adapter(&vdac->adapter); ++} +diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c +index e8a527ede85438..841e81f47b6862 100644 +--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c ++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c +@@ -53,7 +53,7 @@ static void hibmc_connector_destroy(struct drm_connector *connector) + { + struct hibmc_vdac *vdac = to_hibmc_vdac(connector); + +- i2c_del_adapter(&vdac->adapter); ++ hibmc_ddc_del(vdac); + drm_connector_cleanup(connector); + } + +@@ -110,7 +110,7 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv) + ret = drmm_encoder_init(dev, encoder, NULL, DRM_MODE_ENCODER_DAC, NULL); + if (ret) { + drm_err(dev, "failed to init encoder: %d\n", ret); +- return ret; ++ goto err; + } + + drm_encoder_helper_add(encoder, &hibmc_encoder_helper_funcs); +@@ -121,7 +121,7 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv) + &vdac->adapter); + if (ret) { + drm_err(dev, "failed to init connector: %d\n", ret); +- return ret; ++ goto err; + } + + drm_connector_helper_add(connector, &hibmc_connector_helper_funcs); +@@ -131,4 +131,9 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv) + connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT; + + return 0; ++ ++err: ++ hibmc_ddc_del(vdac); ++ ++ return ret; + } +diff --git a/drivers/gpu/drm/i915/display/intel_display_irq.c b/drivers/gpu/drm/i915/display/intel_display_irq.c +index 3e73832e5e8132..9e785318a6e251 100644 +--- a/drivers/gpu/drm/i915/display/intel_display_irq.c ++++ b/drivers/gpu/drm/i915/display/intel_display_irq.c +@@ -1492,10 +1492,14 @@ u32 gen11_gu_misc_irq_ack(struct intel_display *display, const u32 master_ctl) + if (!(master_ctl & GEN11_GU_MISC_IRQ)) + return 0; + ++ intel_display_rpm_assert_block(display); ++ + iir = intel_de_read(display, GEN11_GU_MISC_IIR); + if (likely(iir)) + intel_de_write(display, GEN11_GU_MISC_IIR, iir); + ++ intel_display_rpm_assert_unblock(display); ++ + return iir; + } + +diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c +index c1014e74791faa..2df5145fd9286a 100644 +--- a/drivers/gpu/drm/i915/display/intel_tc.c ++++ b/drivers/gpu/drm/i915/display/intel_tc.c +@@ -22,6 +22,7 @@ + #include "intel_modeset_lock.h" + #include "intel_tc.h" + ++#define DP_PIN_ASSIGNMENT_NONE 0x0 + #define DP_PIN_ASSIGNMENT_C 0x3 + #define DP_PIN_ASSIGNMENT_D 0x4 + #define DP_PIN_ASSIGNMENT_E 0x5 +@@ -65,6 +66,7 @@ struct intel_tc_port { + enum tc_port_mode init_mode; + enum phy_fia phy_fia; + u8 phy_fia_idx; ++ u8 max_lane_count; + }; + + static enum intel_display_power_domain +@@ -306,6 +308,8 @@ static int lnl_tc_port_get_max_lane_count(struct intel_digital_port *dig_port) + REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val); + + switch (pin_assignment) { ++ case DP_PIN_ASSIGNMENT_NONE: ++ return 0; + default: + MISSING_CASE(pin_assignment); + fallthrough; +@@ -364,12 +368,12 @@ static int intel_tc_port_get_max_lane_count(struct intel_digital_port *dig_port) + } + } + +-int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) ++static int get_max_lane_count(struct intel_tc_port *tc) + { +- struct intel_display *display = to_intel_display(dig_port); +- struct intel_tc_port *tc = to_tc_port(dig_port); ++ struct intel_display *display = to_intel_display(tc->dig_port); ++ struct intel_digital_port *dig_port = tc->dig_port; + +- if 
(!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT) ++ if (tc->mode != TC_PORT_DP_ALT) + return 4; + + assert_tc_cold_blocked(tc); +@@ -383,6 +387,25 @@ int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) + return intel_tc_port_get_max_lane_count(dig_port); + } + ++static void read_pin_configuration(struct intel_tc_port *tc) ++{ ++ tc->max_lane_count = get_max_lane_count(tc); ++} ++ ++int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port) ++{ ++ struct intel_display *display = to_intel_display(dig_port); ++ struct intel_tc_port *tc = to_tc_port(dig_port); ++ ++ if (!intel_encoder_is_tc(&dig_port->base)) ++ return 4; ++ ++ if (DISPLAY_VER(display) < 20) ++ return get_max_lane_count(tc); ++ ++ return tc->max_lane_count; ++} ++ + void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port, + int required_lanes) + { +@@ -595,9 +618,12 @@ static void icl_tc_phy_get_hw_state(struct intel_tc_port *tc) + tc_cold_wref = __tc_cold_block(tc, &domain); + + tc->mode = tc_phy_get_current_mode(tc); +- if (tc->mode != TC_PORT_DISCONNECTED) ++ if (tc->mode != TC_PORT_DISCONNECTED) { + tc->lock_wakeref = tc_cold_block(tc); + ++ read_pin_configuration(tc); ++ } ++ + __tc_cold_unblock(tc, domain, tc_cold_wref); + } + +@@ -655,8 +681,11 @@ static bool icl_tc_phy_connect(struct intel_tc_port *tc, + + tc->lock_wakeref = tc_cold_block(tc); + +- if (tc->mode == TC_PORT_TBT_ALT) ++ if (tc->mode == TC_PORT_TBT_ALT) { ++ read_pin_configuration(tc); ++ + return true; ++ } + + if ((!tc_phy_is_ready(tc) || + !icl_tc_phy_take_ownership(tc, true)) && +@@ -667,6 +696,7 @@ static bool icl_tc_phy_connect(struct intel_tc_port *tc, + goto out_unblock_tc_cold; + } + ++ read_pin_configuration(tc); + + if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) + goto out_release_phy; +@@ -857,9 +887,12 @@ static void adlp_tc_phy_get_hw_state(struct intel_tc_port *tc) + port_wakeref = intel_display_power_get(display, port_power_domain); + + tc->mode = tc_phy_get_current_mode(tc); +- if (tc->mode != TC_PORT_DISCONNECTED) ++ if (tc->mode != TC_PORT_DISCONNECTED) { + tc->lock_wakeref = tc_cold_block(tc); + ++ read_pin_configuration(tc); ++ } ++ + intel_display_power_put(display, port_power_domain, port_wakeref); + } + +@@ -872,6 +905,9 @@ static bool adlp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes) + + if (tc->mode == TC_PORT_TBT_ALT) { + tc->lock_wakeref = tc_cold_block(tc); ++ ++ read_pin_configuration(tc); ++ + return true; + } + +@@ -893,6 +929,8 @@ static bool adlp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes) + + tc->lock_wakeref = tc_cold_block(tc); + ++ read_pin_configuration(tc); ++ + if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) + goto out_unblock_tc_cold; + +@@ -1123,9 +1161,18 @@ static void xelpdp_tc_phy_get_hw_state(struct intel_tc_port *tc) + tc_cold_wref = __tc_cold_block(tc, &domain); + + tc->mode = tc_phy_get_current_mode(tc); +- if (tc->mode != TC_PORT_DISCONNECTED) ++ if (tc->mode != TC_PORT_DISCONNECTED) { + tc->lock_wakeref = tc_cold_block(tc); + ++ read_pin_configuration(tc); ++ /* ++ * Set a valid lane count value for a DP-alt sink which got ++ * disconnected. The driver can only disable the output on this PHY. 
++ */ ++ if (tc->max_lane_count == 0) ++ tc->max_lane_count = 4; ++ } ++ + drm_WARN_ON(display->drm, + (tc->mode == TC_PORT_DP_ALT || tc->mode == TC_PORT_LEGACY) && + !xelpdp_tc_phy_tcss_power_is_enabled(tc)); +@@ -1137,14 +1184,19 @@ static bool xelpdp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes) + { + tc->lock_wakeref = tc_cold_block(tc); + +- if (tc->mode == TC_PORT_TBT_ALT) ++ if (tc->mode == TC_PORT_TBT_ALT) { ++ read_pin_configuration(tc); ++ + return true; ++ } + + if (!xelpdp_tc_phy_enable_tcss_power(tc, true)) + goto out_unblock_tccold; + + xelpdp_tc_phy_take_ownership(tc, true); + ++ read_pin_configuration(tc); ++ + if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes)) + goto out_release_phy; + +@@ -1225,14 +1277,19 @@ static void tc_phy_get_hw_state(struct intel_tc_port *tc) + tc->phy_ops->get_hw_state(tc); + } + +-static bool tc_phy_is_ready_and_owned(struct intel_tc_port *tc, +- bool phy_is_ready, bool phy_is_owned) ++/* Is the PHY owned by display i.e. is it in legacy or DP-alt mode? */ ++static bool tc_phy_owned_by_display(struct intel_tc_port *tc, ++ bool phy_is_ready, bool phy_is_owned) + { + struct intel_display *display = to_intel_display(tc->dig_port); + +- drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready); ++ if (DISPLAY_VER(display) < 20) { ++ drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready); + +- return phy_is_ready && phy_is_owned; ++ return phy_is_ready && phy_is_owned; ++ } else { ++ return phy_is_owned; ++ } + } + + static bool tc_phy_is_connected(struct intel_tc_port *tc, +@@ -1243,7 +1300,7 @@ static bool tc_phy_is_connected(struct intel_tc_port *tc, + bool phy_is_owned = tc_phy_is_owned(tc); + bool is_connected; + +- if (tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) ++ if (tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) + is_connected = port_pll_type == ICL_PORT_DPLL_MG_PHY; + else + is_connected = port_pll_type == ICL_PORT_DPLL_DEFAULT; +@@ -1351,7 +1408,7 @@ tc_phy_get_current_mode(struct intel_tc_port *tc) + phy_is_ready = tc_phy_is_ready(tc); + phy_is_owned = tc_phy_is_owned(tc); + +- if (!tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) { ++ if (!tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) { + mode = get_tc_mode_in_phy_not_owned_state(tc, live_mode); + } else { + drm_WARN_ON(display->drm, live_mode == TC_PORT_TBT_ALT); +@@ -1440,11 +1497,11 @@ static void intel_tc_port_reset_mode(struct intel_tc_port *tc, + intel_display_power_flush_work(display); + if (!intel_tc_cold_requires_aux_pw(dig_port)) { + enum intel_display_power_domain aux_domain; +- bool aux_powered; + + aux_domain = intel_aux_power_domain(dig_port); +- aux_powered = intel_display_power_is_enabled(display, aux_domain); +- drm_WARN_ON(display->drm, aux_powered); ++ if (intel_display_power_is_enabled(display, aux_domain)) ++ drm_dbg_kms(display->drm, "Port %s: AUX unexpectedly powered\n", ++ tc->port_name); + } + + tc_phy_disconnect(tc); +diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c +index b37e400f74e536..5a95f06900b5d3 100644 +--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c ++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c +@@ -634,6 +634,8 @@ static void cfl_ctx_workarounds_init(struct intel_engine_cs *engine, + static void icl_ctx_workarounds_init(struct intel_engine_cs *engine, + struct i915_wa_list *wal) + { ++ struct drm_i915_private *i915 = engine->i915; ++ + /* Wa_1406697149 (WaDisableBankHangMode:icl) */ + wa_write(wal, GEN8_L3CNTLREG, 
GEN8_ERRDETBCTRL); + +@@ -669,6 +671,15 @@ static void icl_ctx_workarounds_init(struct intel_engine_cs *engine, + + /* Wa_1406306137:icl,ehl */ + wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU); ++ ++ if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) { ++ /* ++ * Disable Repacking for Compression (masked R/W access) ++ * before rendering compressed surfaces for display. ++ */ ++ wa_masked_en(wal, CACHE_MODE_0_GEN7, ++ DISABLE_REPACKING_FOR_COMPRESSION); ++ } + } + + /* +@@ -2306,15 +2317,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal) + GEN8_RC_SEMA_IDLE_MSG_DISABLE); + } + +- if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) { +- /* +- * "Disable Repacking for Compression (masked R/W access) +- * before rendering compressed surfaces for display." +- */ +- wa_masked_en(wal, CACHE_MODE_0_GEN7, +- DISABLE_REPACKING_FOR_COMPRESSION); +- } +- + if (GRAPHICS_VER(i915) == 11) { + /* This is not a Wa. Enable for better image quality */ + wa_masked_en(wal, +diff --git a/drivers/gpu/drm/nouveau/nvif/vmm.c b/drivers/gpu/drm/nouveau/nvif/vmm.c +index 99296f03371ae0..07c1ebc2a94141 100644 +--- a/drivers/gpu/drm/nouveau/nvif/vmm.c ++++ b/drivers/gpu/drm/nouveau/nvif/vmm.c +@@ -219,7 +219,8 @@ nvif_vmm_ctor(struct nvif_mmu *mmu, const char *name, s32 oclass, + case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break; + default: + WARN_ON(1); +- return -EINVAL; ++ ret = -EINVAL; ++ goto done; + } + + memcpy(args->data, argv, argc); +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c +index 9d06ff722fea7c..0dc4782df8c0c1 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c +@@ -325,7 +325,7 @@ r535_gsp_msgq_recv(struct nvkm_gsp *gsp, u32 gsp_rpc_len, int *retries) + + rpc = r535_gsp_msgq_peek(gsp, sizeof(*rpc), info.retries); + if (IS_ERR_OR_NULL(rpc)) { +- kfree(buf); ++ kvfree(buf); + return rpc; + } + +@@ -334,7 +334,7 @@ + + rpc = r535_gsp_msgq_recv_one_elem(gsp, &info); + if (IS_ERR_OR_NULL(rpc)) { +- kfree(buf); ++ kvfree(buf); + return rpc; + } + +diff --git a/drivers/gpu/drm/nova/file.rs b/drivers/gpu/drm/nova/file.rs +index 7e59a34b830da3..4fe62cf98a23e9 100644 +--- a/drivers/gpu/drm/nova/file.rs ++++ b/drivers/gpu/drm/nova/file.rs +@@ -39,7 +39,8 @@ pub(crate) fn get_param( + _ => return Err(EINVAL), + }; + +- getparam.set_value(value); ++ #[allow(clippy::useless_conversion)] ++ getparam.set_value(value.into()); + + Ok(0) + } +diff --git a/drivers/gpu/drm/tests/drm_format_helper_test.c b/drivers/gpu/drm/tests/drm_format_helper_test.c +index 35cd3405d0450c..e17643c408bf4b 100644 +--- a/drivers/gpu/drm/tests/drm_format_helper_test.c ++++ b/drivers/gpu/drm/tests/drm_format_helper_test.c +@@ -748,14 +748,9 @@ static void drm_test_fb_xrgb8888_to_rgb565(struct kunit *test) + buf = dst.vaddr; + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGB565, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_rgb565(&dst, dst_pitch, &src, &fb, &params->clip, ++ &fmtcnv_state, false); + buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -795,14 +790,8 @@ static void drm_test_fb_xrgb8888_to_xrgb1555(struct kunit *test) + buf =
dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB1555, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_xrgb1555(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -842,14 +831,8 @@ static void drm_test_fb_xrgb8888_to_argb1555(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB1555, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_argb1555(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -889,14 +872,8 @@ static void drm_test_fb_xrgb8888_to_rgba5551(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGBA5551, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_rgba5551(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -939,12 +916,7 @@ static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_RGB888, &src, &fb, &params->clip, +- &fmtcnv_state); +- +- KUNIT_EXPECT_FALSE(test, blit_result); ++ drm_fb_xrgb8888_to_rgb888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -985,12 +957,8 @@ static void drm_test_fb_xrgb8888_to_bgr888(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, &result->dst_pitch, DRM_FORMAT_BGR888, &src, &fb, &params->clip, ++ drm_fb_xrgb8888_to_bgr888(&dst, &result->dst_pitch, &src, &fb, &params->clip, + &fmtcnv_state); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1030,14 +998,8 @@ static void drm_test_fb_xrgb8888_to_argb8888(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB8888, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_argb8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1071,18 +1033,14 @@ static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test) + NULL : &result->dst_pitch; + + drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); +- buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32)); ++
buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB2101010, &src, &fb, +- &params->clip, &fmtcnv_state); +- +- KUNIT_EXPECT_FALSE(test, blit_result); ++ drm_fb_xrgb8888_to_xrgb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); ++ buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1122,14 +1080,8 @@ static void drm_test_fb_xrgb8888_to_argb2101010(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ARGB2101010, &src, &fb, +- &params->clip, &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_argb2101010(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1202,23 +1154,15 @@ static void drm_test_fb_swab(struct kunit *test) + buf = dst.vaddr; /* restore original value of buf */ + memset(buf, 0, dst_size); + +- int blit_result; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN, +- &src, &fb, &params->clip, &fmtcnv_state); ++ drm_fb_swab(&dst, dst_pitch, &src, &fb, &params->clip, false, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + + buf = dst.vaddr; + memset(buf, 0, dst_size); + +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_BGRX8888, &src, &fb, &params->clip, +- &fmtcnv_state); ++ drm_fb_xrgb8888_to_bgrx8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + + buf = dst.vaddr; +@@ -1229,11 +1173,8 @@ static void drm_test_fb_swab(struct kunit *test) + mock_format.format |= DRM_FORMAT_BIG_ENDIAN; + fb.format = &mock_format; + +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XRGB8888, &src, &fb, &params->clip, +- &fmtcnv_state); ++ drm_fb_swab(&dst, dst_pitch, &src, &fb, &params->clip, false, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1266,14 +1207,8 @@ static void drm_test_fb_xrgb8888_to_abgr8888(struct kunit *test) + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ?
+ NULL : &result->dst_pitch; + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_ABGR8888, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_abgr8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1306,14 +1241,8 @@ static void drm_test_fb_xrgb8888_to_xbgr8888(struct kunit *test) + const unsigned int *dst_pitch = (result->dst_pitch == TEST_USE_DEFAULT_PITCH) ? + NULL : &result->dst_pitch; + +- int blit_result = 0; +- +- blit_result = drm_fb_blit(&dst, dst_pitch, DRM_FORMAT_XBGR8888, &src, &fb, &params->clip, +- &fmtcnv_state); +- ++ drm_fb_xrgb8888_to_xbgr8888(&dst, dst_pitch, &src, &fb, &params->clip, &fmtcnv_state); + buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32)); +- +- KUNIT_EXPECT_FALSE(test, blit_result); + KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size); + } + +@@ -1910,12 +1839,8 @@ static void drm_test_fb_memcpy(struct kunit *test) + memset(buf[i], 0, dst_size[i]); + } + +- int blit_result; +- +- blit_result = drm_fb_blit(dst, dst_pitches, params->format, src, &fb, &params->clip, +- &fmtcnv_state); ++ drm_fb_memcpy(dst, dst_pitches, src, &fb, &params->clip); + +- KUNIT_EXPECT_FALSE(test, blit_result); + for (size_t i = 0; i < fb.format->num_planes; i++) { + expected[i] = cpubuf_to_le32(test, params->expected[i], TEST_BUF_SIZE); + KUNIT_EXPECT_MEMEQ_MSG(test, buf[i], expected[i], dst_size[i], +diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig +index 99a91355842ec3..785d2917f6ed20 100644 +--- a/drivers/gpu/drm/xe/Kconfig ++++ b/drivers/gpu/drm/xe/Kconfig +@@ -5,6 +5,7 @@ config DRM_XE + depends on KUNIT || !KUNIT + depends on INTEL_VSEC || !INTEL_VSEC + depends on X86_PLATFORM_DEVICES || !(X86 && ACPI) ++ depends on PAGE_SIZE_4KB || COMPILE_TEST || BROKEN + select INTERVAL_TREE + # we need shmfs for the swappable backing store, and in particular + # the shmem_readpage() which depends upon tmpfs +diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c +index 1e3fd139dfcbca..0a481190f3e6ae 100644 +--- a/drivers/gpu/drm/xe/xe_migrate.c ++++ b/drivers/gpu/drm/xe/xe_migrate.c +@@ -408,7 +408,7 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile) + + /* Special layout, prepared below..
*/ + vm = xe_vm_create(xe, XE_VM_FLAG_MIGRATION | +- XE_VM_FLAG_SET_TILE_ID(tile)); ++ XE_VM_FLAG_SET_TILE_ID(tile), NULL); + if (IS_ERR(vm)) + return ERR_CAST(vm); + +diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c +index d92ec0f515b034..ca95f2a4d4ef5d 100644 +--- a/drivers/gpu/drm/xe/xe_pxp_submit.c ++++ b/drivers/gpu/drm/xe/xe_pxp_submit.c +@@ -101,7 +101,7 @@ static int allocate_gsc_client_resources(struct xe_gt *gt, + xe_assert(xe, hwe); + + /* PXP instructions must be issued from PPGTT */ +- vm = xe_vm_create(xe, XE_VM_FLAG_GSC); ++ vm = xe_vm_create(xe, XE_VM_FLAG_GSC, NULL); + if (IS_ERR(vm)) + return PTR_ERR(vm); + +diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c +index 86d47aaf035892..5e761f543ac30f 100644 +--- a/drivers/gpu/drm/xe/xe_shrinker.c ++++ b/drivers/gpu/drm/xe/xe_shrinker.c +@@ -53,10 +53,10 @@ xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgea + write_unlock(&shrinker->lock); + } + +-static s64 xe_shrinker_walk(struct xe_device *xe, +- struct ttm_operation_ctx *ctx, +- const struct xe_bo_shrink_flags flags, +- unsigned long to_scan, unsigned long *scanned) ++static s64 __xe_shrinker_walk(struct xe_device *xe, ++ struct ttm_operation_ctx *ctx, ++ const struct xe_bo_shrink_flags flags, ++ unsigned long to_scan, unsigned long *scanned) + { + unsigned int mem_type; + s64 freed = 0, lret; +@@ -86,6 +86,48 @@ static s64 xe_shrinker_walk(struct xe_device *xe, + return freed; + } + ++/* ++ * Try shrinking idle objects without writeback first, then if not sufficient, ++ * try also non-idle objects and finally if that's not sufficient either, ++ * add writeback. This avoids stalls and explicit writebacks with light or ++ * moderate memory pressure. 
++ */ ++static s64 xe_shrinker_walk(struct xe_device *xe, ++ struct ttm_operation_ctx *ctx, ++ const struct xe_bo_shrink_flags flags, ++ unsigned long to_scan, unsigned long *scanned) ++{ ++ bool no_wait_gpu = true; ++ struct xe_bo_shrink_flags save_flags = flags; ++ s64 lret, freed; ++ ++ swap(no_wait_gpu, ctx->no_wait_gpu); ++ save_flags.writeback = false; ++ lret = __xe_shrinker_walk(xe, ctx, save_flags, to_scan, scanned); ++ swap(no_wait_gpu, ctx->no_wait_gpu); ++ if (lret < 0 || *scanned >= to_scan) ++ return lret; ++ ++ freed = lret; ++ if (!ctx->no_wait_gpu) { ++ lret = __xe_shrinker_walk(xe, ctx, save_flags, to_scan, scanned); ++ if (lret < 0) ++ return lret; ++ freed += lret; ++ if (*scanned >= to_scan) ++ return freed; ++ } ++ ++ if (flags.writeback) { ++ lret = __xe_shrinker_walk(xe, ctx, flags, to_scan, scanned); ++ if (lret < 0) ++ return lret; ++ freed += lret; ++ } ++ ++ return freed; ++} ++ + static unsigned long + xe_shrinker_count(struct shrinker *shrink, struct shrink_control *sc) + { +@@ -192,6 +234,7 @@ static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_con + runtime_pm = xe_shrinker_runtime_pm_get(shrinker, true, 0, can_backup); + + shrink_flags.purge = false; ++ + lret = xe_shrinker_walk(shrinker->xe, &ctx, shrink_flags, + nr_to_scan, &nr_scanned); + if (lret >= 0) +diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c +index 8615777469293b..3b11b1d52bee9b 100644 +--- a/drivers/gpu/drm/xe/xe_vm.c ++++ b/drivers/gpu/drm/xe/xe_vm.c +@@ -1612,7 +1612,7 @@ static void xe_vm_free_scratch(struct xe_vm *vm) + } + } + +-struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) ++struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef) + { + struct drm_gem_object *vm_resv_obj; + struct xe_vm *vm; +@@ -1633,9 +1633,10 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) + vm->xe = xe; + + vm->size = 1ull << xe->info.va_bits; +- + vm->flags = flags; + ++ if (xef) ++ vm->xef = xe_file_get(xef); + /** + * GSC VMs are kernel-owned, only used for PXP ops and can sometimes be + * manipulated under the PXP mutex. 
However, the PXP mutex can be taken +@@ -1766,6 +1767,20 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) + if (number_tiles > 1) + vm->composite_fence_ctx = dma_fence_context_alloc(1); + ++ if (xef && xe->info.has_asid) { ++ u32 asid; ++ ++ down_write(&xe->usm.lock); ++ err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm, ++ XA_LIMIT(1, XE_MAX_ASID - 1), ++ &xe->usm.next_asid, GFP_KERNEL); ++ up_write(&xe->usm.lock); ++ if (err < 0) ++ goto err_unlock_close; ++ ++ vm->usm.asid = asid; ++ } ++ + trace_xe_vm_create(vm); + + return vm; +@@ -1786,6 +1801,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags) + for_each_tile(tile, xe, id) + xe_range_fence_tree_fini(&vm->rftree[id]); + ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); ++ if (vm->xef) ++ xe_file_put(vm->xef); + kfree(vm); + if (flags & XE_VM_FLAG_LR_MODE) + xe_pm_runtime_put(xe); +@@ -2031,9 +2048,8 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data, + struct xe_device *xe = to_xe_device(dev); + struct xe_file *xef = to_xe_file(file); + struct drm_xe_vm_create *args = data; +- struct xe_tile *tile; + struct xe_vm *vm; +- u32 id, asid; ++ u32 id; + int err; + u32 flags = 0; + +@@ -2069,29 +2085,10 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data, + if (args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE) + flags |= XE_VM_FLAG_FAULT_MODE; + +- vm = xe_vm_create(xe, flags); ++ vm = xe_vm_create(xe, flags, xef); + if (IS_ERR(vm)) + return PTR_ERR(vm); + +- if (xe->info.has_asid) { +- down_write(&xe->usm.lock); +- err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm, +- XA_LIMIT(1, XE_MAX_ASID - 1), +- &xe->usm.next_asid, GFP_KERNEL); +- up_write(&xe->usm.lock); +- if (err < 0) +- goto err_close_and_put; +- +- vm->usm.asid = asid; +- } +- +- vm->xef = xe_file_get(xef); +- +- /* Record BO memory for VM pagetable created against client */ +- for_each_tile(tile, xe, id) +- if (vm->pt_root[id]) +- xe_drm_client_add_bo(vm->xef->client, vm->pt_root[id]->bo); +- + #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_MEM) + /* Warning: Security issue - never enable by default */ + args->reserved[0] = xe_bo_main_addr(vm->pt_root[0]->bo, XE_PAGE_SIZE); +@@ -3203,6 +3200,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm, + free_bind_ops: + if (args->num_binds > 1) + kvfree(*bind_ops); ++ *bind_ops = NULL; + return err; + } + +@@ -3308,7 +3306,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file) + struct xe_exec_queue *q = NULL; + u32 num_syncs, num_ufence = 0; + struct xe_sync_entry *syncs = NULL; +- struct drm_xe_vm_bind_op *bind_ops; ++ struct drm_xe_vm_bind_op *bind_ops = NULL; + struct xe_vma_ops vops; + struct dma_fence *fence; + int err; +diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h +index 494af6bdc646b4..0158ec0ae3b230 100644 +--- a/drivers/gpu/drm/xe/xe_vm.h ++++ b/drivers/gpu/drm/xe/xe_vm.h +@@ -26,7 +26,7 @@ struct xe_sync_entry; + struct xe_svm_range; + struct drm_exec; + +-struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags); ++struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef); + + struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id); + int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node); +diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c +index 0f9af82cebec2c..105b9f9dbec3d7 100644 +--- a/drivers/hwmon/gsc-hwmon.c ++++ b/drivers/hwmon/gsc-hwmon.c +@@ -64,7 +64,7 @@ static ssize_t pwm_auto_point_temp_show(struct device *dev, + return ret; + + ret = 
regs[0] | regs[1] << 8; +- return sprintf(buf, "%d\n", ret * 10); ++ return sprintf(buf, "%d\n", ret * 100); + } + + static ssize_t pwm_auto_point_temp_store(struct device *dev, +@@ -99,7 +99,7 @@ static ssize_t pwm_auto_point_pwm_show(struct device *dev, + { + struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr); + +- return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10))); ++ return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)) / 100); + } + + static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm_auto_point_pwm, 0); +diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c +index 13889f52b6f78a..ff2289b52c84cc 100644 +--- a/drivers/i2c/busses/i2c-qcom-geni.c ++++ b/drivers/i2c/busses/i2c-qcom-geni.c +@@ -155,9 +155,9 @@ static const struct geni_i2c_clk_fld geni_i2c_clk_map_19p2mhz[] = { + + /* source_clock = 32 MHz */ + static const struct geni_i2c_clk_fld geni_i2c_clk_map_32mhz[] = { +- { I2C_MAX_STANDARD_MODE_FREQ, 8, 14, 18, 40 }, +- { I2C_MAX_FAST_MODE_FREQ, 4, 3, 11, 20 }, +- { I2C_MAX_FAST_MODE_PLUS_FREQ, 2, 3, 6, 15 }, ++ { I2C_MAX_STANDARD_MODE_FREQ, 8, 14, 18, 38 }, ++ { I2C_MAX_FAST_MODE_FREQ, 4, 3, 9, 19 }, ++ { I2C_MAX_FAST_MODE_PLUS_FREQ, 2, 3, 5, 15 }, + {} + }; + +diff --git a/drivers/i2c/busses/i2c-rtl9300.c b/drivers/i2c/busses/i2c-rtl9300.c +index e064e8a4a1f082..cfafe089102aa2 100644 +--- a/drivers/i2c/busses/i2c-rtl9300.c ++++ b/drivers/i2c/busses/i2c-rtl9300.c +@@ -143,10 +143,10 @@ static int rtl9300_i2c_write(struct rtl9300_i2c *i2c, u8 *buf, int len) + return -EIO; + + for (i = 0; i < len; i++) { +- if (i % 4 == 0) +- vals[i/4] = 0; +- vals[i/4] <<= 8; +- vals[i/4] |= buf[i]; ++ unsigned int shift = (i % 4) * 8; ++ unsigned int reg = i / 4; ++ ++ vals[reg] |= buf[i] << shift; + } + + return regmap_bulk_write(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_DATA_WORD0, +@@ -175,7 +175,7 @@ static int rtl9300_i2c_execute_xfer(struct rtl9300_i2c *i2c, char read_write, + return ret; + + ret = regmap_read_poll_timeout(i2c->regmap, i2c->reg_base + RTL9300_I2C_MST_CTRL1, +- val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 2000); ++ val, !(val & RTL9300_I2C_MST_CTRL1_I2C_TRIG), 100, 100000); + if (ret) + return ret; + +@@ -281,15 +281,19 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s + ret = rtl9300_i2c_reg_addr_set(i2c, command, 1); + if (ret) + goto out_unlock; +- ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0]); ++ if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX) { ++ ret = -EINVAL; ++ goto out_unlock; ++ } ++ ret = rtl9300_i2c_config_xfer(i2c, chan, addr, data->block[0] + 1); + if (ret) + goto out_unlock; + if (read_write == I2C_SMBUS_WRITE) { +- ret = rtl9300_i2c_write(i2c, &data->block[1], data->block[0]); ++ ret = rtl9300_i2c_write(i2c, &data->block[0], data->block[0] + 1); + if (ret) + goto out_unlock; + } +- len = data->block[0]; ++ len = data->block[0] + 1; + break; + + default: +diff --git a/drivers/iio/accel/sca3300.c b/drivers/iio/accel/sca3300.c +index 67416a406e2f43..d5e9fa58d0ba3a 100644 +--- a/drivers/iio/accel/sca3300.c ++++ b/drivers/iio/accel/sca3300.c +@@ -479,7 +479,7 @@ static irqreturn_t sca3300_trigger_handler(int irq, void *p) + struct iio_dev *indio_dev = pf->indio_dev; + struct sca3300_data *data = iio_priv(indio_dev); + int bit, ret, val, i = 0; +- IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX); ++ IIO_DECLARE_BUFFER_WITH_TS(s16, channels, SCA3300_SCAN_MAX) = { }; + + iio_for_each_active_channel(indio_dev, bit) { 
+ ret = sca3300_read_reg(data, indio_dev->channels[bit].address, &val); +diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig +index ea3ba139739281..a632c74c1fc510 100644 +--- a/drivers/iio/adc/Kconfig ++++ b/drivers/iio/adc/Kconfig +@@ -1257,7 +1257,7 @@ config RN5T618_ADC + + config ROHM_BD79124 + tristate "Rohm BD79124 ADC driver" +- depends on I2C ++ depends on I2C && GPIOLIB + select REGMAP_I2C + select IIO_ADC_HELPER + help +diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c +index 92596f15e79737..bdd2b2b5bac1ae 100644 +--- a/drivers/iio/adc/ad7124.c ++++ b/drivers/iio/adc/ad7124.c +@@ -855,7 +855,7 @@ enum { + static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan_spec *chan) + { + struct device *dev = &st->sd.spi->dev; +- struct ad7124_channel *ch = &st->channels[chan->channel]; ++ struct ad7124_channel *ch = &st->channels[chan->address]; + int ret; + + if (ch->syscalib_mode == AD7124_SYSCALIB_ZERO_SCALE) { +@@ -871,8 +871,8 @@ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan + if (ret < 0) + return ret; + +- dev_dbg(dev, "offset for channel %d after zero-scale calibration: 0x%x\n", +- chan->channel, ch->cfg.calibration_offset); ++ dev_dbg(dev, "offset for channel %lu after zero-scale calibration: 0x%x\n", ++ chan->address, ch->cfg.calibration_offset); + } else { + ch->cfg.calibration_gain = st->gain_default; + +@@ -886,8 +886,8 @@ static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan + if (ret < 0) + return ret; + +- dev_dbg(dev, "gain for channel %d after full-scale calibration: 0x%x\n", +- chan->channel, ch->cfg.calibration_gain); ++ dev_dbg(dev, "gain for channel %lu after full-scale calibration: 0x%x\n", ++ chan->address, ch->cfg.calibration_gain); + } + + return 0; +@@ -930,7 +930,7 @@ static int ad7124_set_syscalib_mode(struct iio_dev *indio_dev, + { + struct ad7124_state *st = iio_priv(indio_dev); + +- st->channels[chan->channel].syscalib_mode = mode; ++ st->channels[chan->address].syscalib_mode = mode; + + return 0; + } +@@ -940,7 +940,7 @@ static int ad7124_get_syscalib_mode(struct iio_dev *indio_dev, + { + struct ad7124_state *st = iio_priv(indio_dev); + +- return st->channels[chan->channel].syscalib_mode; ++ return st->channels[chan->address].syscalib_mode; + } + + static const struct iio_enum ad7124_syscalib_mode_enum = { +diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c +index b3e6bd2a55d717..5b57c29b6b34c9 100644 +--- a/drivers/iio/adc/ad7173.c ++++ b/drivers/iio/adc/ad7173.c +@@ -200,7 +200,7 @@ struct ad7173_channel_config { + /* + * Following fields are used to compare equality. If you + * make adaptations in it, you most likely also have to adapt +- * ad7173_find_live_config(), too. ++ * ad7173_is_setup_equal(), too. 
+ */ + struct_group(config_props, + bool bipolar; +@@ -319,7 +319,7 @@ static int ad7173_set_syscalib_mode(struct iio_dev *indio_dev, + { + struct ad7173_state *st = iio_priv(indio_dev); + +- st->channels[chan->channel].syscalib_mode = mode; ++ st->channels[chan->address].syscalib_mode = mode; + + return 0; + } +@@ -329,7 +329,7 @@ static int ad7173_get_syscalib_mode(struct iio_dev *indio_dev, + { + struct ad7173_state *st = iio_priv(indio_dev); + +- return st->channels[chan->channel].syscalib_mode; ++ return st->channels[chan->address].syscalib_mode; + } + + static ssize_t ad7173_write_syscalib(struct iio_dev *indio_dev, +@@ -348,7 +348,7 @@ static ssize_t ad7173_write_syscalib(struct iio_dev *indio_dev, + if (!iio_device_claim_direct(indio_dev)) + return -EBUSY; + +- mode = st->channels[chan->channel].syscalib_mode; ++ mode = st->channels[chan->address].syscalib_mode; + if (sys_calib) { + if (mode == AD7173_SYSCALIB_ZERO_SCALE) + ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_SYS_ZERO, +@@ -392,13 +392,12 @@ static int ad7173_calibrate_all(struct ad7173_state *st, struct iio_dev *indio_d + if (indio_dev->channels[i].type != IIO_VOLTAGE) + continue; + +- ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_ZERO, st->channels[i].ain); ++ ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_ZERO, i); + if (ret < 0) + return ret; + + if (st->info->has_internal_fs_calibration) { +- ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_FULL, +- st->channels[i].ain); ++ ret = ad_sd_calibrate(&st->sd, AD7173_MODE_CAL_INT_FULL, i); + if (ret < 0) + return ret; + } +@@ -563,12 +562,19 @@ static void ad7173_reset_usage_cnts(struct ad7173_state *st) + st->config_usage_counter = 0; + } + +-static struct ad7173_channel_config * +-ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg) ++/** ++ * ad7173_is_setup_equal - Compare two channel setups ++ * @cfg1: First channel configuration ++ * @cfg2: Second channel configuration ++ * ++ * Compares all configuration options that affect the registers connected to ++ * SETUP_SEL, namely CONFIGx, FILTERx, GAINx and OFFSETx. ++ * ++ * Returns: true if the setups are identical, false otherwise ++ */ ++static bool ad7173_is_setup_equal(const struct ad7173_channel_config *cfg1, ++ const struct ad7173_channel_config *cfg2) + { +- struct ad7173_channel_config *cfg_aux; +- int i; +- + /* + * This is just to make sure that the comparison is adapted after + * struct ad7173_channel_config was changed. 
+@@ -581,14 +587,22 @@ ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *c + u8 ref_sel; + })); + ++ return cfg1->bipolar == cfg2->bipolar && ++ cfg1->input_buf == cfg2->input_buf && ++ cfg1->odr == cfg2->odr && ++ cfg1->ref_sel == cfg2->ref_sel; ++} ++ ++static struct ad7173_channel_config * ++ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg) ++{ ++ struct ad7173_channel_config *cfg_aux; ++ int i; ++ + for (i = 0; i < st->num_channels; i++) { + cfg_aux = &st->channels[i].cfg; + +- if (cfg_aux->live && +- cfg->bipolar == cfg_aux->bipolar && +- cfg->input_buf == cfg_aux->input_buf && +- cfg->odr == cfg_aux->odr && +- cfg->ref_sel == cfg_aux->ref_sel) ++ if (cfg_aux->live && ad7173_is_setup_equal(cfg, cfg_aux)) + return cfg_aux; + } + return NULL; +@@ -772,10 +786,26 @@ static const struct ad_sigma_delta_info ad7173_sigma_delta_info_8_slots = { + .num_slots = 8, + }; + ++static const struct ad_sigma_delta_info ad7173_sigma_delta_info_16_slots = { ++ .set_channel = ad7173_set_channel, ++ .append_status = ad7173_append_status, ++ .disable_all = ad7173_disable_all, ++ .disable_one = ad7173_disable_one, ++ .set_mode = ad7173_set_mode, ++ .has_registers = true, ++ .has_named_irqs = true, ++ .addr_shift = 0, ++ .read_mask = BIT(6), ++ .status_ch_mask = GENMASK(3, 0), ++ .data_reg = AD7173_REG_DATA, ++ .num_resetclks = 64, ++ .num_slots = 16, ++}; ++ + static const struct ad7173_device_info ad4111_device_info = { + .name = "ad4111", + .id = AD4111_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in_div = 8, + .num_channels = 16, + .num_configs = 8, +@@ -797,7 +827,7 @@ static const struct ad7173_device_info ad4111_device_info = { + static const struct ad7173_device_info ad4112_device_info = { + .name = "ad4112", + .id = AD4112_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in_div = 8, + .num_channels = 16, + .num_configs = 8, +@@ -818,7 +848,7 @@ static const struct ad7173_device_info ad4112_device_info = { + static const struct ad7173_device_info ad4113_device_info = { + .name = "ad4113", + .id = AD4113_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in_div = 8, + .num_channels = 16, + .num_configs = 8, +@@ -837,7 +867,7 @@ static const struct ad7173_device_info ad4113_device_info = { + static const struct ad7173_device_info ad4114_device_info = { + .name = "ad4114", + .id = AD4114_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in_div = 16, + .num_channels = 16, + .num_configs = 8, +@@ -856,7 +886,7 @@ static const struct ad7173_device_info ad4114_device_info = { + static const struct ad7173_device_info ad4115_device_info = { + .name = "ad4115", + .id = AD4115_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in_div = 16, + .num_channels = 16, + .num_configs = 8, +@@ -875,7 +905,7 @@ static const struct ad7173_device_info ad4115_device_info = { + static const struct ad7173_device_info ad4116_device_info = { + .name = "ad4116", + .id = AD4116_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in_div = 11, + .num_channels = 16, + .num_configs = 8, +@@ -894,7 +924,7 @@ static const struct ad7173_device_info ad4116_device_info = { + static 
const struct ad7173_device_info ad7172_2_device_info = { + .name = "ad7172-2", + .id = AD7172_2_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_4_slots, + .num_voltage_in = 5, + .num_channels = 4, + .num_configs = 4, +@@ -927,7 +957,7 @@ static const struct ad7173_device_info ad7172_4_device_info = { + static const struct ad7173_device_info ad7173_8_device_info = { + .name = "ad7173-8", + .id = AD7173_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in = 17, + .num_channels = 16, + .num_configs = 8, +@@ -944,7 +974,7 @@ static const struct ad7173_device_info ad7173_8_device_info = { + static const struct ad7173_device_info ad7175_2_device_info = { + .name = "ad7175-2", + .id = AD7175_2_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_4_slots, + .num_voltage_in = 5, + .num_channels = 4, + .num_configs = 4, +@@ -961,7 +991,7 @@ static const struct ad7173_device_info ad7175_2_device_info = { + static const struct ad7173_device_info ad7175_8_device_info = { + .name = "ad7175-8", + .id = AD7175_8_ID, +- .sd_info = &ad7173_sigma_delta_info_8_slots, ++ .sd_info = &ad7173_sigma_delta_info_16_slots, + .num_voltage_in = 17, + .num_channels = 16, + .num_configs = 8, +@@ -1214,7 +1244,7 @@ static int ad7173_update_scan_mode(struct iio_dev *indio_dev, + const unsigned long *scan_mask) + { + struct ad7173_state *st = iio_priv(indio_dev); +- int i, ret; ++ int i, j, k, ret; + + for (i = 0; i < indio_dev->num_channels; i++) { + if (test_bit(i, scan_mask)) +@@ -1225,6 +1255,54 @@ static int ad7173_update_scan_mode(struct iio_dev *indio_dev, + return ret; + } + ++ /* ++ * On some chips, there are more channels than setups, so if there were ++ * more unique setups requested than the number of available slots, ++ * ad7173_set_channel() will have written over some of the slots. We ++ * can detect this by making sure each assigned cfg_slot matches the ++ * requested configuration. If it doesn't, we know that the slot was ++ * overwritten by a different channel. ++ */ ++ for_each_set_bit(i, scan_mask, indio_dev->num_channels) { ++ const struct ad7173_channel_config *cfg1, *cfg2; ++ ++ cfg1 = &st->channels[i].cfg; ++ ++ for_each_set_bit(j, scan_mask, indio_dev->num_channels) { ++ cfg2 = &st->channels[j].cfg; ++ ++ /* ++ * Only compare configs that are assigned to the same ++ * SETUP_SEL slot and don't compare a channel to itself. ++ */ ++ if (i == j || cfg1->cfg_slot != cfg2->cfg_slot) ++ continue; ++ ++ /* ++ * If we find two different configs trying to use the ++ * same SETUP_SEL slot, then we know that we ++ * have too many unique configurations requested for ++ * the available slots and at least one was overwritten. ++ */ ++ if (!ad7173_is_setup_equal(cfg1, cfg2)) { ++ /* ++ * At this point, there isn't a way to tell ++ * which setups are actually programmed in the ++ * ADC anymore, so we could read them back to ++ * see, but it is simpler to just turn off all ++ * of the live flags so that everything gets ++ * reprogrammed on the next attempt to read a sample.
++ */ ++ for (k = 0; k < st->num_channels; k++) ++ st->channels[k].cfg.live = false; ++ ++ dev_err(&st->sd.spi->dev, ++ "Too many unique channel configurations requested for scan\n"); ++ return -EINVAL; ++ } ++ } ++ } ++ + return 0; + } + +@@ -1580,6 +1658,7 @@ static int ad7173_fw_parse_channel_config(struct iio_dev *indio_dev) + chan_st_priv->cfg.bipolar = false; + chan_st_priv->cfg.input_buf = st->info->has_input_buf; + chan_st_priv->cfg.ref_sel = AD7173_SETUP_REF_SEL_INT_REF; ++ chan_st_priv->cfg.odr = st->info->odr_start_value; + chan_st_priv->cfg.openwire_comp_chan = -1; + st->adc_mode |= AD7173_ADC_MODE_REF_EN; + if (st->info->data_reg_only_16bit) +@@ -1646,7 +1725,7 @@ static int ad7173_fw_parse_channel_config(struct iio_dev *indio_dev) + chan->scan_index = chan_index; + chan->channel = ain[0]; + chan_st_priv->cfg.input_buf = st->info->has_input_buf; +- chan_st_priv->cfg.odr = 0; ++ chan_st_priv->cfg.odr = st->info->odr_start_value; + chan_st_priv->cfg.openwire_comp_chan = -1; + + chan_st_priv->cfg.bipolar = fwnode_property_read_bool(child, "bipolar"); +diff --git a/drivers/iio/adc/ad7380.c b/drivers/iio/adc/ad7380.c +index cabf5511d11617..3773d727708089 100644 +--- a/drivers/iio/adc/ad7380.c ++++ b/drivers/iio/adc/ad7380.c +@@ -873,6 +873,7 @@ static const struct ad7380_chip_info adaq4381_4_chip_info = { + .has_hardware_gain = true, + .available_scan_masks = ad7380_4_channel_scan_masks, + .timing_specs = &ad7380_4_timing, ++ .max_conversion_rate_hz = 4 * MEGA, + }; + + static const struct spi_offload_config ad7380_offload_config = { +diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c +index 6b3ef7ef403e00..debd59b3989474 100644 +--- a/drivers/iio/adc/ad_sigma_delta.c ++++ b/drivers/iio/adc/ad_sigma_delta.c +@@ -520,7 +520,7 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev) + return ret; + } + +-static int ad_sd_buffer_postdisable(struct iio_dev *indio_dev) ++static int ad_sd_buffer_predisable(struct iio_dev *indio_dev) + { + struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev); + +@@ -644,7 +644,7 @@ static bool ad_sd_validate_scan_mask(struct iio_dev *indio_dev, const unsigned l + + static const struct iio_buffer_setup_ops ad_sd_buffer_setup_ops = { + .postenable = &ad_sd_buffer_postenable, +- .postdisable = &ad_sd_buffer_postdisable, ++ .predisable = &ad_sd_buffer_predisable, + .validate_scan_mask = &ad_sd_validate_scan_mask, + }; + +diff --git a/drivers/iio/adc/rzg2l_adc.c b/drivers/iio/adc/rzg2l_adc.c +index 9674d48074c9a7..cadb0446bc2956 100644 +--- a/drivers/iio/adc/rzg2l_adc.c ++++ b/drivers/iio/adc/rzg2l_adc.c +@@ -89,7 +89,6 @@ struct rzg2l_adc { + struct completion completion; + struct mutex lock; + u16 last_val[RZG2L_ADC_MAX_CHANNELS]; +- bool was_rpm_active; + }; + + /** +@@ -428,6 +427,8 @@ static int rzg2l_adc_probe(struct platform_device *pdev) + if (!indio_dev) + return -ENOMEM; + ++ platform_set_drvdata(pdev, indio_dev); ++ + adc = iio_priv(indio_dev); + + adc->hw_params = device_get_match_data(dev); +@@ -460,8 +461,6 @@ static int rzg2l_adc_probe(struct platform_device *pdev) + if (ret) + return ret; + +- platform_set_drvdata(pdev, indio_dev); +- + ret = rzg2l_adc_hw_init(dev, adc); + if (ret) + return dev_err_probe(&pdev->dev, ret, +@@ -541,14 +540,9 @@ static int rzg2l_adc_suspend(struct device *dev) + }; + int ret; + +- if (pm_runtime_suspended(dev)) { +- adc->was_rpm_active = false; +- } else { +- ret = pm_runtime_force_suspend(dev); +- if (ret) +- return ret; +- adc->was_rpm_active = true; +- } 
++ ret = pm_runtime_force_suspend(dev); ++ if (ret) ++ return ret; + + ret = reset_control_bulk_assert(ARRAY_SIZE(resets), resets); + if (ret) +@@ -557,9 +551,7 @@ static int rzg2l_adc_suspend(struct device *dev) + return 0; + + rpm_restore: +- if (adc->was_rpm_active) +- pm_runtime_force_resume(dev); +- ++ pm_runtime_force_resume(dev); + return ret; + } + +@@ -577,11 +569,9 @@ static int rzg2l_adc_resume(struct device *dev) + if (ret) + return ret; + +- if (adc->was_rpm_active) { +- ret = pm_runtime_force_resume(dev); +- if (ret) +- goto resets_restore; +- } ++ ret = pm_runtime_force_resume(dev); ++ if (ret) ++ goto resets_restore; + + ret = rzg2l_adc_hw_init(dev, adc); + if (ret) +@@ -590,10 +580,7 @@ static int rzg2l_adc_resume(struct device *dev) + return 0; + + rpm_restore: +- if (adc->was_rpm_active) { +- pm_runtime_mark_last_busy(dev); +- pm_runtime_put_autosuspend(dev); +- } ++ pm_runtime_force_suspend(dev); + resets_restore: + reset_control_bulk_assert(ARRAY_SIZE(resets), resets); + return ret; +diff --git a/drivers/iio/imu/bno055/bno055.c b/drivers/iio/imu/bno055/bno055.c +index 597c402b98dedf..143ccc4f4331e3 100644 +--- a/drivers/iio/imu/bno055/bno055.c ++++ b/drivers/iio/imu/bno055/bno055.c +@@ -118,6 +118,7 @@ struct bno055_sysfs_attr { + int len; + int *fusion_vals; + int *hw_xlate; ++ int hw_xlate_len; + int type; + }; + +@@ -170,20 +171,24 @@ static int bno055_gyr_scale_vals[] = { + 1000, 1877467, 2000, 1877467, + }; + ++static int bno055_gyr_scale_hw_xlate[] = {0, 1, 2, 3, 4}; + static struct bno055_sysfs_attr bno055_gyr_scale = { + .vals = bno055_gyr_scale_vals, + .len = ARRAY_SIZE(bno055_gyr_scale_vals), + .fusion_vals = (int[]){1, 900}, +- .hw_xlate = (int[]){4, 3, 2, 1, 0}, ++ .hw_xlate = bno055_gyr_scale_hw_xlate, ++ .hw_xlate_len = ARRAY_SIZE(bno055_gyr_scale_hw_xlate), + .type = IIO_VAL_FRACTIONAL, + }; + + static int bno055_gyr_lpf_vals[] = {12, 23, 32, 47, 64, 116, 230, 523}; ++static int bno055_gyr_lpf_hw_xlate[] = {5, 4, 7, 3, 6, 2, 1, 0}; + static struct bno055_sysfs_attr bno055_gyr_lpf = { + .vals = bno055_gyr_lpf_vals, + .len = ARRAY_SIZE(bno055_gyr_lpf_vals), + .fusion_vals = (int[]){32}, +- .hw_xlate = (int[]){5, 4, 7, 3, 6, 2, 1, 0}, ++ .hw_xlate = bno055_gyr_lpf_hw_xlate, ++ .hw_xlate_len = ARRAY_SIZE(bno055_gyr_lpf_hw_xlate), + .type = IIO_VAL_INT, + }; + +@@ -561,7 +566,7 @@ static int bno055_get_regmask(struct bno055_priv *priv, int *val, int *val2, + + idx = (hwval & mask) >> shift; + if (attr->hw_xlate) +- for (i = 0; i < attr->len; i++) ++ for (i = 0; i < attr->hw_xlate_len; i++) + if (attr->hw_xlate[i] == idx) { + idx = i; + break; +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h +index f893dbe6996506..55ed1ddaa8cb5d 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h +@@ -164,11 +164,11 @@ struct inv_icm42600_state { + struct inv_icm42600_suspended suspended; + struct iio_dev *indio_gyro; + struct iio_dev *indio_accel; +- uint8_t buffer[2] __aligned(IIO_DMA_MINALIGN); ++ u8 buffer[2] __aligned(IIO_DMA_MINALIGN); + struct inv_icm42600_fifo fifo; + struct { +- int64_t gyro; +- int64_t accel; ++ s64 gyro; ++ s64 accel; + } timestamp; + }; + +@@ -410,7 +410,7 @@ const struct iio_mount_matrix * + inv_icm42600_get_mount_matrix(const struct iio_dev *indio_dev, + const struct iio_chan_spec *chan); + +-uint32_t inv_icm42600_odr_to_period(enum inv_icm42600_odr odr); ++u32 inv_icm42600_odr_to_period(enum inv_icm42600_odr odr); + + int 
inv_icm42600_set_accel_conf(struct inv_icm42600_state *st, + struct inv_icm42600_sensor_conf *conf, +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c +index e6cd9dcb0687d1..8a6f09e68f4934 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c +@@ -177,7 +177,7 @@ static const struct iio_chan_spec inv_icm42600_accel_channels[] = { + */ + struct inv_icm42600_accel_buffer { + struct inv_icm42600_fifo_sensor_data accel; +- int16_t temp; ++ s16 temp; + aligned_s64 timestamp; + }; + +@@ -241,7 +241,7 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev, + + static int inv_icm42600_accel_read_sensor(struct iio_dev *indio_dev, + struct iio_chan_spec const *chan, +- int16_t *val) ++ s16 *val) + { + struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev); + struct inv_icm42600_sensor_state *accel_st = iio_priv(indio_dev); +@@ -284,7 +284,7 @@ static int inv_icm42600_accel_read_sensor(struct iio_dev *indio_dev, + if (ret) + goto exit; + +- *val = (int16_t)be16_to_cpup(data); ++ *val = (s16)be16_to_cpup(data); + if (*val == INV_ICM42600_DATA_INVALID) + ret = -EINVAL; + exit: +@@ -492,11 +492,11 @@ static int inv_icm42600_accel_read_offset(struct inv_icm42600_state *st, + int *val, int *val2) + { + struct device *dev = regmap_get_device(st->map); +- int64_t val64; +- int32_t bias; ++ s64 val64; ++ s32 bias; + unsigned int reg; +- int16_t offset; +- uint8_t data[2]; ++ s16 offset; ++ u8 data[2]; + int ret; + + if (chan->type != IIO_ACCEL) +@@ -550,7 +550,7 @@ static int inv_icm42600_accel_read_offset(struct inv_icm42600_state *st, + * result in micro (1000000) + * (offset * 5 * 9.806650 * 1000000) / 10000 + */ +- val64 = (int64_t)offset * 5LL * 9806650LL; ++ val64 = (s64)offset * 5LL * 9806650LL; + /* for rounding, add + or - divisor (10000) divided by 2 */ + if (val64 >= 0) + val64 += 10000LL / 2LL; +@@ -568,10 +568,10 @@ static int inv_icm42600_accel_write_offset(struct inv_icm42600_state *st, + int val, int val2) + { + struct device *dev = regmap_get_device(st->map); +- int64_t val64; +- int32_t min, max; ++ s64 val64; ++ s32 min, max; + unsigned int reg, regval; +- int16_t offset; ++ s16 offset; + int ret; + + if (chan->type != IIO_ACCEL) +@@ -596,7 +596,7 @@ static int inv_icm42600_accel_write_offset(struct inv_icm42600_state *st, + inv_icm42600_accel_calibbias[1]; + max = inv_icm42600_accel_calibbias[4] * 1000000L + + inv_icm42600_accel_calibbias[5]; +- val64 = (int64_t)val * 1000000LL + (int64_t)val2; ++ val64 = (s64)val * 1000000LL + (s64)val2; + if (val64 < min || val64 > max) + return -EINVAL; + +@@ -671,7 +671,7 @@ static int inv_icm42600_accel_read_raw(struct iio_dev *indio_dev, + int *val, int *val2, long mask) + { + struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev); +- int16_t data; ++ s16 data; + int ret; + + switch (chan->type) { +@@ -902,7 +902,8 @@ int inv_icm42600_accel_parse_fifo(struct iio_dev *indio_dev) + const int8_t *temp; + unsigned int odr; + int64_t ts_val; +- struct inv_icm42600_accel_buffer buffer; ++ /* buffer is copied to userspace, zeroing it to avoid any data leak */ ++ struct inv_icm42600_accel_buffer buffer = { }; + + /* parse all fifo packets */ + for (i = 0, no = 0; i < st->fifo.count; i += size, ++no) { +@@ -921,8 +922,6 @@ int inv_icm42600_accel_parse_fifo(struct iio_dev *indio_dev) + inv_sensors_timestamp_apply_odr(ts, st->fifo.period, + st->fifo.nb.total, no); + +- /* buffer is copied 
to userspace, zeroing it to avoid any data leak */ +- memset(&buffer, 0, sizeof(buffer)); + memcpy(&buffer.accel, accel, sizeof(buffer.accel)); + /* convert 8 bits FIFO temperature in high resolution format */ + buffer.temp = temp ? (*temp * 64) : 0; +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c +index aae7c56481a3fa..00b9db52ca7855 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c +@@ -26,28 +26,28 @@ + #define INV_ICM42600_FIFO_HEADER_ODR_GYRO BIT(0) + + struct inv_icm42600_fifo_1sensor_packet { +- uint8_t header; ++ u8 header; + struct inv_icm42600_fifo_sensor_data data; +- int8_t temp; ++ s8 temp; + } __packed; + #define INV_ICM42600_FIFO_1SENSOR_PACKET_SIZE 8 + + struct inv_icm42600_fifo_2sensors_packet { +- uint8_t header; ++ u8 header; + struct inv_icm42600_fifo_sensor_data accel; + struct inv_icm42600_fifo_sensor_data gyro; +- int8_t temp; ++ s8 temp; + __be16 timestamp; + } __packed; + #define INV_ICM42600_FIFO_2SENSORS_PACKET_SIZE 16 + + ssize_t inv_icm42600_fifo_decode_packet(const void *packet, const void **accel, +- const void **gyro, const int8_t **temp, ++ const void **gyro, const s8 **temp, + const void **timestamp, unsigned int *odr) + { + const struct inv_icm42600_fifo_1sensor_packet *pack1 = packet; + const struct inv_icm42600_fifo_2sensors_packet *pack2 = packet; +- uint8_t header = *((const uint8_t *)packet); ++ u8 header = *((const u8 *)packet); + + /* FIFO empty */ + if (header & INV_ICM42600_FIFO_HEADER_MSG) { +@@ -100,7 +100,7 @@ ssize_t inv_icm42600_fifo_decode_packet(const void *packet, const void **accel, + + void inv_icm42600_buffer_update_fifo_period(struct inv_icm42600_state *st) + { +- uint32_t period_gyro, period_accel, period; ++ u32 period_gyro, period_accel, period; + + if (st->fifo.en & INV_ICM42600_SENSOR_GYRO) + period_gyro = inv_icm42600_odr_to_period(st->conf.gyro.odr); +@@ -204,8 +204,8 @@ int inv_icm42600_buffer_update_watermark(struct inv_icm42600_state *st) + { + size_t packet_size, wm_size; + unsigned int wm_gyro, wm_accel, watermark; +- uint32_t period_gyro, period_accel, period; +- uint32_t latency_gyro, latency_accel, latency; ++ u32 period_gyro, period_accel, period; ++ u32 latency_gyro, latency_accel, latency; + bool restore; + __le16 raw_wm; + int ret; +@@ -459,7 +459,7 @@ int inv_icm42600_buffer_fifo_read(struct inv_icm42600_state *st, + __be16 *raw_fifo_count; + ssize_t i, size; + const void *accel, *gyro, *timestamp; +- const int8_t *temp; ++ const s8 *temp; + unsigned int odr; + int ret; + +@@ -550,7 +550,7 @@ int inv_icm42600_buffer_hwfifo_flush(struct inv_icm42600_state *st, + struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro); + struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel); + struct inv_sensors_timestamp *ts; +- int64_t gyro_ts, accel_ts; ++ s64 gyro_ts, accel_ts; + int ret; + + gyro_ts = iio_get_time_ns(st->indio_gyro); +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h +index f6c85daf42b00b..ffca4da1e24936 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h +@@ -28,7 +28,7 @@ struct inv_icm42600_state; + struct inv_icm42600_fifo { + unsigned int on; + unsigned int en; +- uint32_t period; ++ u32 period; + struct { + unsigned int gyro; + unsigned int accel; +@@ -41,7 +41,7 @@ struct inv_icm42600_fifo { + size_t 
accel; + size_t total; + } nb; +- uint8_t data[2080] __aligned(IIO_DMA_MINALIGN); ++ u8 data[2080] __aligned(IIO_DMA_MINALIGN); + }; + + /* FIFO data packet */ +@@ -52,7 +52,7 @@ struct inv_icm42600_fifo_sensor_data { + } __packed; + #define INV_ICM42600_FIFO_DATA_INVALID -32768 + +-static inline int16_t inv_icm42600_fifo_get_sensor_data(__be16 d) ++static inline s16 inv_icm42600_fifo_get_sensor_data(__be16 d) + { + return be16_to_cpu(d); + } +@@ -60,7 +60,7 @@ static inline int16_t inv_icm42600_fifo_get_sensor_data(__be16 d) + static inline bool + inv_icm42600_fifo_is_data_valid(const struct inv_icm42600_fifo_sensor_data *s) + { +- int16_t x, y, z; ++ s16 x, y, z; + + x = inv_icm42600_fifo_get_sensor_data(s->x); + y = inv_icm42600_fifo_get_sensor_data(s->y); +@@ -75,7 +75,7 @@ inv_icm42600_fifo_is_data_valid(const struct inv_icm42600_fifo_sensor_data *s) + } + + ssize_t inv_icm42600_fifo_decode_packet(const void *packet, const void **accel, +- const void **gyro, const int8_t **temp, ++ const void **gyro, const s8 **temp, + const void **timestamp, unsigned int *odr); + + extern const struct iio_buffer_setup_ops inv_icm42600_buffer_ops; +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c +index 63d46619ebfaa1..0bf696ba35ed6a 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c +@@ -103,7 +103,7 @@ const struct regmap_config inv_icm42600_spi_regmap_config = { + EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, "IIO_ICM42600"); + + struct inv_icm42600_hw { +- uint8_t whoami; ++ u8 whoami; + const char *name; + const struct inv_icm42600_conf *conf; + }; +@@ -188,9 +188,9 @@ inv_icm42600_get_mount_matrix(const struct iio_dev *indio_dev, + return &st->orientation; + } + +-uint32_t inv_icm42600_odr_to_period(enum inv_icm42600_odr odr) ++u32 inv_icm42600_odr_to_period(enum inv_icm42600_odr odr) + { +- static uint32_t odr_periods[INV_ICM42600_ODR_NB] = { ++ static u32 odr_periods[INV_ICM42600_ODR_NB] = { + /* reserved values */ + 0, 0, 0, + /* 8kHz */ +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c +index b4d7ce1432a4f4..9ba6f13628e6af 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c +@@ -77,7 +77,7 @@ static const struct iio_chan_spec inv_icm42600_gyro_channels[] = { + */ + struct inv_icm42600_gyro_buffer { + struct inv_icm42600_fifo_sensor_data gyro; +- int16_t temp; ++ s16 temp; + aligned_s64 timestamp; + }; + +@@ -139,7 +139,7 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev, + + static int inv_icm42600_gyro_read_sensor(struct inv_icm42600_state *st, + struct iio_chan_spec const *chan, +- int16_t *val) ++ s16 *val) + { + struct device *dev = regmap_get_device(st->map); + struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT; +@@ -179,7 +179,7 @@ static int inv_icm42600_gyro_read_sensor(struct inv_icm42600_state *st, + if (ret) + goto exit; + +- *val = (int16_t)be16_to_cpup(data); ++ *val = (s16)be16_to_cpup(data); + if (*val == INV_ICM42600_DATA_INVALID) + ret = -EINVAL; + exit: +@@ -399,11 +399,11 @@ static int inv_icm42600_gyro_read_offset(struct inv_icm42600_state *st, + int *val, int *val2) + { + struct device *dev = regmap_get_device(st->map); +- int64_t val64; +- int32_t bias; ++ s64 val64; ++ s32 bias; + unsigned int reg; +- int16_t offset; +- uint8_t data[2]; ++ s16 offset; ++ u8 
data[2]; + int ret; + + if (chan->type != IIO_ANGL_VEL) +@@ -457,7 +457,7 @@ static int inv_icm42600_gyro_read_offset(struct inv_icm42600_state *st, + * result in nano (1000000000) + * (offset * 64 * Pi * 1000000000) / (2048 * 180) + */ +- val64 = (int64_t)offset * 64LL * 3141592653LL; ++ val64 = (s64)offset * 64LL * 3141592653LL; + /* for rounding, add + or - divisor (2048 * 180) divided by 2 */ + if (val64 >= 0) + val64 += 2048 * 180 / 2; +@@ -475,9 +475,9 @@ static int inv_icm42600_gyro_write_offset(struct inv_icm42600_state *st, + int val, int val2) + { + struct device *dev = regmap_get_device(st->map); +- int64_t val64, min, max; ++ s64 val64, min, max; + unsigned int reg, regval; +- int16_t offset; ++ s16 offset; + int ret; + + if (chan->type != IIO_ANGL_VEL) +@@ -498,11 +498,11 @@ static int inv_icm42600_gyro_write_offset(struct inv_icm42600_state *st, + } + + /* inv_icm42600_gyro_calibbias: min - step - max in nano */ +- min = (int64_t)inv_icm42600_gyro_calibbias[0] * 1000000000LL + +- (int64_t)inv_icm42600_gyro_calibbias[1]; +- max = (int64_t)inv_icm42600_gyro_calibbias[4] * 1000000000LL + +- (int64_t)inv_icm42600_gyro_calibbias[5]; +- val64 = (int64_t)val * 1000000000LL + (int64_t)val2; ++ min = (s64)inv_icm42600_gyro_calibbias[0] * 1000000000LL + ++ (s64)inv_icm42600_gyro_calibbias[1]; ++ max = (s64)inv_icm42600_gyro_calibbias[4] * 1000000000LL + ++ (s64)inv_icm42600_gyro_calibbias[5]; ++ val64 = (s64)val * 1000000000LL + (s64)val2; + if (val64 < min || val64 > max) + return -EINVAL; + +@@ -577,7 +577,7 @@ static int inv_icm42600_gyro_read_raw(struct iio_dev *indio_dev, + int *val, int *val2, long mask) + { + struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev); +- int16_t data; ++ s16 data; + int ret; + + switch (chan->type) { +@@ -803,10 +803,11 @@ int inv_icm42600_gyro_parse_fifo(struct iio_dev *indio_dev) + ssize_t i, size; + unsigned int no; + const void *accel, *gyro, *timestamp; +- const int8_t *temp; ++ const s8 *temp; + unsigned int odr; +- int64_t ts_val; +- struct inv_icm42600_gyro_buffer buffer; ++ s64 ts_val; ++ /* buffer is copied to userspace, zeroing it to avoid any data leak */ ++ struct inv_icm42600_gyro_buffer buffer = { }; + + /* parse all fifo packets */ + for (i = 0, no = 0; i < st->fifo.count; i += size, ++no) { +@@ -825,8 +826,6 @@ int inv_icm42600_gyro_parse_fifo(struct iio_dev *indio_dev) + inv_sensors_timestamp_apply_odr(ts, st->fifo.period, + st->fifo.nb.total, no); + +- /* buffer is copied to userspace, zeroing it to avoid any data leak */ +- memset(&buffer, 0, sizeof(buffer)); + memcpy(&buffer.gyro, gyro, sizeof(buffer.gyro)); + /* convert 8 bits FIFO temperature in high resolution format */ + buffer.temp = temp ? 
(*temp * 64) : 0; +diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c +index 988f227f6563da..271a4788604ad5 100644 +--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c ++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c +@@ -13,7 +13,7 @@ + #include "inv_icm42600.h" + #include "inv_icm42600_temp.h" + +-static int inv_icm42600_temp_read(struct inv_icm42600_state *st, int16_t *temp) ++static int inv_icm42600_temp_read(struct inv_icm42600_state *st, s16 *temp) + { + struct device *dev = regmap_get_device(st->map); + __be16 *raw; +@@ -31,9 +31,13 @@ static int inv_icm42600_temp_read(struct inv_icm42600_state *st, int16_t *temp) + if (ret) + goto exit; + +- *temp = (int16_t)be16_to_cpup(raw); ++ *temp = (s16)be16_to_cpup(raw); ++ /* ++ * Temperature data is invalid if both accel and gyro are off. ++ * Return -EBUSY in this case. ++ */ + if (*temp == INV_ICM42600_DATA_INVALID) +- ret = -EINVAL; ++ ret = -EBUSY; + + exit: + mutex_unlock(&st->lock); +@@ -48,7 +52,7 @@ int inv_icm42600_temp_read_raw(struct iio_dev *indio_dev, + int *val, int *val2, long mask) + { + struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev); +- int16_t temp; ++ s16 temp; + int ret; + + if (chan->type != IIO_TEMP) +diff --git a/drivers/iio/light/as73211.c b/drivers/iio/light/as73211.c +index 68f60dc3c79d53..32719f584c47a0 100644 +--- a/drivers/iio/light/as73211.c ++++ b/drivers/iio/light/as73211.c +@@ -639,7 +639,7 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p) + struct { + __le16 chan[4]; + aligned_s64 ts; +- } scan; ++ } scan = { }; + int data_result, ret; + + mutex_lock(&data->mutex); +diff --git a/drivers/iio/pressure/bmp280-core.c b/drivers/iio/pressure/bmp280-core.c +index f37f20776c8917..0f23ece440004c 100644 +--- a/drivers/iio/pressure/bmp280-core.c ++++ b/drivers/iio/pressure/bmp280-core.c +@@ -3216,11 +3216,12 @@ int bmp280_common_probe(struct device *dev, + + /* Bring chip out of reset if there is an assigned GPIO line */ + gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); ++ if (IS_ERR(gpiod)) ++ return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n"); ++ + /* Deassert the signal */ +- if (gpiod) { +- dev_info(dev, "release reset\n"); +- gpiod_set_value(gpiod, 0); +- } ++ dev_info(dev, "release reset\n"); ++ gpiod_set_value(gpiod, 0); + + data->regmap = regmap; + +diff --git a/drivers/iio/proximity/isl29501.c b/drivers/iio/proximity/isl29501.c +index d1510fe2405088..f69db6f2f38031 100644 +--- a/drivers/iio/proximity/isl29501.c ++++ b/drivers/iio/proximity/isl29501.c +@@ -938,12 +938,18 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p) + struct iio_dev *indio_dev = pf->indio_dev; + struct isl29501_private *isl29501 = iio_priv(indio_dev); + const unsigned long *active_mask = indio_dev->active_scan_mask; +- u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */ +- +- if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) +- isl29501_register_read(isl29501, REG_DISTANCE, buffer); ++ u32 value; ++ struct { ++ u16 data; ++ aligned_s64 ts; ++ } scan = { }; ++ ++ if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask)) { ++ isl29501_register_read(isl29501, REG_DISTANCE, &value); ++ scan.data = value; ++ } + +- iio_push_to_buffers_with_timestamp(indio_dev, buffer, pf->timestamp); ++ iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp); + iio_trigger_notify_done(indio_dev->trig); + + return IRQ_HANDLED; +diff --git 
a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c +index cae8e84821d7fd..205939680fd4fc 100644 +--- a/drivers/iio/temperature/maxim_thermocouple.c ++++ b/drivers/iio/temperature/maxim_thermocouple.c +@@ -11,6 +11,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -121,8 +122,15 @@ struct maxim_thermocouple_data { + struct spi_device *spi; + const struct maxim_thermocouple_chip *chip; + char tc_type; +- +- u8 buffer[16] __aligned(IIO_DMA_MINALIGN); ++ /* Buffer for reading up to 2 hardware channels. */ ++ struct { ++ union { ++ __be16 raw16; ++ __be32 raw32; ++ __be16 raw[2]; ++ }; ++ aligned_s64 timestamp; ++ } buffer __aligned(IIO_DMA_MINALIGN); + }; + + static int maxim_thermocouple_read(struct maxim_thermocouple_data *data, +@@ -130,18 +138,16 @@ static int maxim_thermocouple_read(struct maxim_thermocouple_data *data, + { + unsigned int storage_bytes = data->chip->read_size; + unsigned int shift = chan->scan_type.shift + (chan->address * 8); +- __be16 buf16; +- __be32 buf32; + int ret; + + switch (storage_bytes) { + case 2: +- ret = spi_read(data->spi, (void *)&buf16, storage_bytes); +- *val = be16_to_cpu(buf16); ++ ret = spi_read(data->spi, &data->buffer.raw16, storage_bytes); ++ *val = be16_to_cpu(data->buffer.raw16); + break; + case 4: +- ret = spi_read(data->spi, (void *)&buf32, storage_bytes); +- *val = be32_to_cpu(buf32); ++ ret = spi_read(data->spi, &data->buffer.raw32, storage_bytes); ++ *val = be32_to_cpu(data->buffer.raw32); + break; + default: + ret = -EINVAL; +@@ -166,9 +172,9 @@ static irqreturn_t maxim_thermocouple_trigger_handler(int irq, void *private) + struct maxim_thermocouple_data *data = iio_priv(indio_dev); + int ret; + +- ret = spi_read(data->spi, data->buffer, data->chip->read_size); ++ ret = spi_read(data->spi, data->buffer.raw, data->chip->read_size); + if (!ret) { +- iio_push_to_buffers_with_ts(indio_dev, data->buffer, ++ iio_push_to_buffers_with_ts(indio_dev, &data->buffer, + sizeof(data->buffer), + iio_get_time_ns(indio_dev)); + } +diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c +index b1c44ec1a3f36d..572a91a62a7bea 100644 +--- a/drivers/infiniband/core/umem_odp.c ++++ b/drivers/infiniband/core/umem_odp.c +@@ -115,7 +115,7 @@ static int ib_init_umem_odp(struct ib_umem_odp *umem_odp, + + out_free_map: + if (ib_uses_virt_dma(dev)) +- kfree(map->pfn_list); ++ kvfree(map->pfn_list); + else + hmm_dma_map_free(dev->dma_device, map); + return ret; +@@ -287,7 +287,7 @@ static void ib_umem_odp_free(struct ib_umem_odp *umem_odp) + mutex_unlock(&umem_odp->umem_mutex); + mmu_interval_notifier_remove(&umem_odp->notifier); + if (ib_uses_virt_dma(dev)) +- kfree(umem_odp->map.pfn_list); ++ kvfree(umem_odp->map.pfn_list); + else + hmm_dma_map_free(dev->dma_device, &umem_odp->map); + } +diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c +index 3a627acb82ce13..9b33072f9a0680 100644 +--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c ++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c +@@ -1921,7 +1921,6 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr, + struct bnxt_re_srq *srq = container_of(ib_srq, struct bnxt_re_srq, + ib_srq); + struct bnxt_re_dev *rdev = srq->rdev; +- int rc; + + switch (srq_attr_mask) { + case IB_SRQ_MAX_WR: +@@ -1933,11 +1932,8 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr, + return -EINVAL; + + srq->qplib_srq.threshold = srq_attr->srq_limit; 
+- rc = bnxt_qplib_modify_srq(&rdev->qplib_res, &srq->qplib_srq); +- if (rc) { +- ibdev_err(&rdev->ibdev, "Modify HW SRQ failed!"); +- return rc; +- } ++ bnxt_qplib_srq_arm_db(&srq->qplib_srq.dbinfo, srq->qplib_srq.threshold); ++ + /* On success, update the shadow */ + srq->srq_limit = srq_attr->srq_limit; + /* No need to Build and send response back to udata */ +diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c +index 293b0a96c8e3ec..df7cf8d68e273f 100644 +--- a/drivers/infiniband/hw/bnxt_re/main.c ++++ b/drivers/infiniband/hw/bnxt_re/main.c +@@ -2017,6 +2017,28 @@ static void bnxt_re_free_nqr_mem(struct bnxt_re_dev *rdev) + rdev->nqr = NULL; + } + ++/* When DEL_GID fails, driver is not freeing GID ctx memory. ++ * To avoid the memory leak, free the memory during unload ++ */ ++static void bnxt_re_free_gid_ctx(struct bnxt_re_dev *rdev) ++{ ++ struct bnxt_qplib_sgid_tbl *sgid_tbl = &rdev->qplib_res.sgid_tbl; ++ struct bnxt_re_gid_ctx *ctx, **ctx_tbl; ++ int i; ++ ++ if (!sgid_tbl->active) ++ return; ++ ++ ctx_tbl = sgid_tbl->ctx; ++ for (i = 0; i < sgid_tbl->max; i++) { ++ if (sgid_tbl->hw_id[i] == 0xFFFF) ++ continue; ++ ++ ctx = ctx_tbl[i]; ++ kfree(ctx); ++ } ++} ++ + static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type) + { + u8 type; +@@ -2030,6 +2052,7 @@ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type) + if (test_and_clear_bit(BNXT_RE_FLAG_QOS_WORK_REG, &rdev->flags)) + cancel_delayed_work_sync(&rdev->worker); + ++ bnxt_re_free_gid_ctx(rdev); + if (test_and_clear_bit(BNXT_RE_FLAG_RESOURCES_INITIALIZED, + &rdev->flags)) + bnxt_re_cleanup_res(rdev); +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +index be34c605d51672..c2784561156f63 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +@@ -705,9 +705,7 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res, + srq->dbinfo.db = srq->dpi->dbr; + srq->dbinfo.max_slot = 1; + srq->dbinfo.priv_db = res->dpi_tbl.priv_db; +- if (srq->threshold) +- bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA); +- srq->arm_req = false; ++ bnxt_qplib_armen_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ_ARMENA); + + return 0; + fail: +@@ -717,24 +715,6 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res, + return rc; + } + +-int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res, +- struct bnxt_qplib_srq *srq) +-{ +- struct bnxt_qplib_hwq *srq_hwq = &srq->hwq; +- u32 count; +- +- count = __bnxt_qplib_get_avail(srq_hwq); +- if (count > srq->threshold) { +- srq->arm_req = false; +- bnxt_qplib_srq_arm_db(&srq->dbinfo, srq->threshold); +- } else { +- /* Deferred arming */ +- srq->arm_req = true; +- } +- +- return 0; +-} +- + int bnxt_qplib_query_srq(struct bnxt_qplib_res *res, + struct bnxt_qplib_srq *srq) + { +@@ -776,7 +756,6 @@ int bnxt_qplib_post_srq_recv(struct bnxt_qplib_srq *srq, + struct bnxt_qplib_hwq *srq_hwq = &srq->hwq; + struct rq_wqe *srqe; + struct sq_sge *hw_sge; +- u32 count = 0; + int i, next; + + spin_lock(&srq_hwq->lock); +@@ -808,15 +787,8 @@ int bnxt_qplib_post_srq_recv(struct bnxt_qplib_srq *srq, + + bnxt_qplib_hwq_incr_prod(&srq->dbinfo, srq_hwq, srq->dbinfo.max_slot); + +- spin_lock(&srq_hwq->lock); +- count = __bnxt_qplib_get_avail(srq_hwq); +- spin_unlock(&srq_hwq->lock); + /* Ring DB */ + bnxt_qplib_ring_prod_db(&srq->dbinfo, DBC_DBC_TYPE_SRQ); +- if (srq->arm_req == true && count > srq->threshold) { +- srq->arm_req = false; +- bnxt_qplib_srq_arm_db(&srq->dbinfo, 
srq->threshold); +- } + + return 0; + } +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h +index 0d9487c889ff3e..846501f12227ca 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h +@@ -543,8 +543,6 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq, + srqn_handler_t srq_handler); + int bnxt_qplib_create_srq(struct bnxt_qplib_res *res, + struct bnxt_qplib_srq *srq); +-int bnxt_qplib_modify_srq(struct bnxt_qplib_res *res, +- struct bnxt_qplib_srq *srq); + int bnxt_qplib_query_srq(struct bnxt_qplib_res *res, + struct bnxt_qplib_srq *srq); + void bnxt_qplib_destroy_srq(struct bnxt_qplib_res *res, +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c +index 6cd05207ffeddf..cc5c82d968395a 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c ++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c +@@ -121,6 +121,7 @@ static int __alloc_pbl(struct bnxt_qplib_res *res, + pbl->pg_arr = vmalloc_array(pages, sizeof(void *)); + if (!pbl->pg_arr) + return -ENOMEM; ++ memset(pbl->pg_arr, 0, pages * sizeof(void *)); + + pbl->pg_map_arr = vmalloc_array(pages, sizeof(dma_addr_t)); + if (!pbl->pg_map_arr) { +@@ -128,6 +129,7 @@ static int __alloc_pbl(struct bnxt_qplib_res *res, + pbl->pg_arr = NULL; + return -ENOMEM; + } ++ memset(pbl->pg_map_arr, 0, pages * sizeof(dma_addr_t)); + pbl->pg_count = 0; + pbl->pg_size = sginfo->pgsize; + +diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c +index ec0ad40860668a..8d7596abb82296 100644 +--- a/drivers/infiniband/hw/erdma/erdma_verbs.c ++++ b/drivers/infiniband/hw/erdma/erdma_verbs.c +@@ -994,6 +994,8 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs, + old_entry = xa_store(&dev->qp_xa, 1, qp, GFP_KERNEL); + if (xa_is_err(old_entry)) + ret = xa_err(old_entry); ++ else ++ qp->ibqp.qp_num = 1; + } else { + ret = xa_alloc_cyclic(&dev->qp_xa, &qp->ibqp.qp_num, qp, + XA_LIMIT(1, dev->attrs.max_qp - 1), +@@ -1031,7 +1033,9 @@ int erdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attrs, + if (ret) + goto err_out_cmd; + } else { +- init_kernel_qp(dev, qp, attrs); ++ ret = init_kernel_qp(dev, qp, attrs); ++ if (ret) ++ goto err_out_xa; + } + + qp->attrs.max_send_sge = attrs->cap.max_send_sge; +diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +index b30dce00f2405a..b544ca0244842b 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c ++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +@@ -3043,7 +3043,7 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev) + if (!hr_dev->is_vf) + hns_roce_free_link_table(hr_dev); + +- if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09) ++ if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09) + free_dip_entry(hr_dev); + } + +@@ -5514,7 +5514,7 @@ static int hns_roce_v2_query_srqc(struct hns_roce_dev *hr_dev, u32 srqn, + return ret; + } + +-static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn, ++static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 sccn, + void *buffer) + { + struct hns_roce_v2_scc_context *context; +@@ -5526,7 +5526,7 @@ static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn, + return PTR_ERR(mailbox); + + ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, HNS_ROCE_CMD_QUERY_SCCC, +- qpn); ++ sccn); + if (ret) + goto out; + +diff --git 
a/drivers/infiniband/hw/hns/hns_roce_restrack.c b/drivers/infiniband/hw/hns/hns_roce_restrack.c +index f637b73b946e44..230187dda6a07b 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_restrack.c ++++ b/drivers/infiniband/hw/hns/hns_roce_restrack.c +@@ -100,6 +100,7 @@ int hns_roce_fill_res_qp_entry_raw(struct sk_buff *msg, struct ib_qp *ib_qp) + struct hns_roce_v2_qp_context qpc; + struct hns_roce_v2_scc_context sccc; + } context = {}; ++ u32 sccn = hr_qp->qpn; + int ret; + + if (!hr_dev->hw->query_qpc) +@@ -116,7 +117,13 @@ int hns_roce_fill_res_qp_entry_raw(struct sk_buff *msg, struct ib_qp *ib_qp) + !hr_dev->hw->query_sccc) + goto out; + +- ret = hr_dev->hw->query_sccc(hr_dev, hr_qp->qpn, &context.sccc); ++ if (hr_qp->cong_type == CONG_TYPE_DIP) { ++ if (!hr_qp->dip) ++ goto out; ++ sccn = hr_qp->dip->dip_idx; ++ } ++ ++ ret = hr_dev->hw->query_sccc(hr_dev, sccn, &context.sccc); + if (ret) + ibdev_warn_ratelimited(&hr_dev->ib_dev, + "failed to query SCCC, ret = %d.\n", +diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c +index 132a87e52d5c7e..ac0183a2ff7aa1 100644 +--- a/drivers/infiniband/sw/rxe/rxe_net.c ++++ b/drivers/infiniband/sw/rxe/rxe_net.c +@@ -345,33 +345,15 @@ int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt, + + static void rxe_skb_tx_dtor(struct sk_buff *skb) + { +- struct net_device *ndev = skb->dev; +- struct rxe_dev *rxe; +- unsigned int qp_index; +- struct rxe_qp *qp; ++ struct rxe_qp *qp = skb->sk->sk_user_data; + int skb_out; + +- rxe = rxe_get_dev_from_net(ndev); +- if (!rxe && is_vlan_dev(ndev)) +- rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev)); +- if (WARN_ON(!rxe)) +- return; +- +- qp_index = (int)(uintptr_t)skb->sk->sk_user_data; +- if (!qp_index) +- return; +- +- qp = rxe_pool_get_index(&rxe->qp_pool, qp_index); +- if (!qp) +- goto put_dev; +- + skb_out = atomic_dec_return(&qp->skb_out); +- if (qp->need_req_skb && skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW) ++ if (unlikely(qp->need_req_skb && ++ skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)) + rxe_sched_task(&qp->send_task); + + rxe_put(qp); +-put_dev: +- ib_device_put(&rxe->ib_dev); + sock_put(skb->sk); + } + +@@ -383,6 +365,7 @@ static int rxe_send(struct sk_buff *skb, struct rxe_pkt_info *pkt) + sock_hold(sk); + skb->sk = sk; + skb->destructor = rxe_skb_tx_dtor; ++ rxe_get(pkt->qp); + atomic_inc(&pkt->qp->skb_out); + + if (skb->protocol == htons(ETH_P_IP)) +@@ -405,6 +388,7 @@ static int rxe_loopback(struct sk_buff *skb, struct rxe_pkt_info *pkt) + sock_hold(sk); + skb->sk = sk; + skb->destructor = rxe_skb_tx_dtor; ++ rxe_get(pkt->qp); + atomic_inc(&pkt->qp->skb_out); + + if (skb->protocol == htons(ETH_P_IP)) +@@ -497,6 +481,9 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av, + goto out; + } + ++ /* Add time stamp to skb. */ ++ skb->tstamp = ktime_get(); ++ + skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev)); + + /* FIXME: hold reference to this netdev until life of this skb. 
*/ +diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c +index f2af3e0aef35b5..95f1c1c2949de7 100644 +--- a/drivers/infiniband/sw/rxe/rxe_qp.c ++++ b/drivers/infiniband/sw/rxe/rxe_qp.c +@@ -244,7 +244,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp, + err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk); + if (err < 0) + return err; +- qp->sk->sk->sk_user_data = (void *)(uintptr_t)qp->elem.index; ++ qp->sk->sk->sk_user_data = qp; + + /* pick a source UDP port number for this QP based on + * the source QPN. this spreads traffic for different QPs +diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c +index 9c17dfa7670306..7add9bcf45dc8b 100644 +--- a/drivers/iommu/amd/init.c ++++ b/drivers/iommu/amd/init.c +@@ -3596,7 +3596,7 @@ static int __init parse_ivrs_acpihid(char *str) + { + u32 seg = 0, bus, dev, fn; + char *hid, *uid, *p, *addr; +- char acpiid[ACPIID_LEN] = {0}; ++ char acpiid[ACPIID_LEN + 1] = { }; /* size with NULL terminator */ + int i; + + addr = strchr(str, '@'); +@@ -3622,7 +3622,7 @@ static int __init parse_ivrs_acpihid(char *str) + /* We have the '@', make it the terminator to get just the acpiid */ + *addr++ = 0; + +- if (strlen(str) > ACPIID_LEN + 1) ++ if (strlen(str) > ACPIID_LEN) + goto not_found; + + if (sscanf(str, "=%s", acpiid) != 1) +diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c +index 757d24f67ad45a..190f28d7661515 100644 +--- a/drivers/iommu/apple-dart.c ++++ b/drivers/iommu/apple-dart.c +@@ -991,7 +991,6 @@ static const struct iommu_ops apple_dart_iommu_ops = { + .of_xlate = apple_dart_of_xlate, + .def_domain_type = apple_dart_def_domain_type, + .get_resv_regions = apple_dart_get_resv_regions, +- .pgsize_bitmap = -1UL, /* Restricted during dart probe */ + .owner = THIS_MODULE, + .default_domain_ops = &(const struct iommu_domain_ops) { + .attach_dev = apple_dart_attach_dev_paging, +diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +index dacaa78f69aaa1..43df3dc65e2127 100644 +--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c ++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +@@ -2997,9 +2997,9 @@ void arm_smmu_attach_commit(struct arm_smmu_attach_state *state) + /* ATS is being switched off, invalidate the entire ATC */ + arm_smmu_atc_inv_master(master, IOMMU_NO_PASID); + } +- master->ats_enabled = state->ats_enabled; + + arm_smmu_remove_master_domain(master, state->old_domain, state->ssid); ++ master->ats_enabled = state->ats_enabled; + } + + static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) +diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c +index ab4cd742f0953c..c239e280e43d91 100644 +--- a/drivers/iommu/intel/iommu.c ++++ b/drivers/iommu/intel/iommu.c +@@ -4390,7 +4390,6 @@ const struct iommu_ops intel_iommu_ops = { + .device_group = intel_iommu_device_group, + .is_attach_deferred = intel_iommu_is_attach_deferred, + .def_domain_type = device_def_domain_type, +- .pgsize_bitmap = SZ_4K, + .page_response = intel_iommu_page_response, + .default_domain_ops = &(const struct iommu_domain_ops) { + .attach_dev = intel_iommu_attach_device, +diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c +index 6bd0abf9a641e2..c52bf037a2f01e 100644 +--- a/drivers/iommu/iommufd/selftest.c ++++ b/drivers/iommu/iommufd/selftest.c +@@ -801,7 +801,6 @@ static const struct iommu_ops mock_ops = { + .default_domain = &mock_blocking_domain, + 
.blocked_domain = &mock_blocking_domain, + .owner = THIS_MODULE, +- .pgsize_bitmap = MOCK_IO_PAGE_SIZE, + .hw_info = mock_domain_hw_info, + .domain_alloc_paging_flags = mock_domain_alloc_paging_flags, + .domain_alloc_nested = mock_domain_alloc_nested, +diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c +index bb57092ca90110..0eae2f4bdc5e64 100644 +--- a/drivers/iommu/riscv/iommu.c ++++ b/drivers/iommu/riscv/iommu.c +@@ -1283,7 +1283,7 @@ static phys_addr_t riscv_iommu_iova_to_phys(struct iommu_domain *iommu_domain, + unsigned long *ptr; + + ptr = riscv_iommu_pte_fetch(domain, iova, &pte_size); +- if (_io_pte_none(*ptr) || !_io_pte_present(*ptr)) ++ if (!ptr) + return 0; + + return pfn_to_phys(__page_val_to_pfn(*ptr)) | (iova & (pte_size - 1)); +@@ -1533,7 +1533,6 @@ static void riscv_iommu_release_device(struct device *dev) + } + + static const struct iommu_ops riscv_iommu_ops = { +- .pgsize_bitmap = SZ_4K, + .of_xlate = riscv_iommu_of_xlate, + .identity_domain = &riscv_iommu_identity_domain, + .blocked_domain = &riscv_iommu_blocking_domain, +diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c +index ecd41fb03e5a51..b39d6f134ab28f 100644 +--- a/drivers/iommu/virtio-iommu.c ++++ b/drivers/iommu/virtio-iommu.c +@@ -998,8 +998,7 @@ static void viommu_get_resv_regions(struct device *dev, struct list_head *head) + iommu_dma_get_resv_regions(dev, head); + } + +-static struct iommu_ops viommu_ops; +-static struct virtio_driver virtio_iommu_drv; ++static const struct bus_type *virtio_bus_type; + + static int viommu_match_node(struct device *dev, const void *data) + { +@@ -1008,8 +1007,9 @@ static int viommu_match_node(struct device *dev, const void *data) + + static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode) + { +- struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL, +- fwnode, viommu_match_node); ++ struct device *dev = bus_find_device(virtio_bus_type, NULL, fwnode, ++ viommu_match_node); ++ + put_device(dev); + + return dev ? 
dev_to_virtio(dev)->priv : NULL;
+@@ -1086,7 +1086,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
+ }
+ }
+
+-static struct iommu_ops viommu_ops = {
++static const struct iommu_ops viommu_ops = {
+ .capable = viommu_capable,
+ .domain_alloc_identity = viommu_domain_alloc_identity,
+ .domain_alloc_paging = viommu_domain_alloc_paging,
+@@ -1160,6 +1160,9 @@ static int viommu_probe(struct virtio_device *vdev)
+ if (!viommu)
+ return -ENOMEM;
+
++ /* Borrow this for easy lookups later */
++ virtio_bus_type = dev->bus;
++
+ spin_lock_init(&viommu->request_lock);
+ ida_init(&viommu->domain_ids);
+ viommu->dev = dev;
+@@ -1217,8 +1220,6 @@ static int viommu_probe(struct virtio_device *vdev)
+ viommu->first_domain++;
+ }
+
+- viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+-
+ virtio_device_ready(vdev);
+
+ /* Populate the event queue with buffers */
+@@ -1231,10 +1232,10 @@ static int viommu_probe(struct virtio_device *vdev)
+ if (ret)
+ goto err_free_vqs;
+
+- iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev);
+-
+ vdev->priv = viommu;
+
++ iommu_device_register(&viommu->iommu, &viommu_ops, parent_dev);
++
+ dev_info(dev, "input address: %u bits\n",
+ order_base_2(viommu->geometry.aperture_end));
+ dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 17157c4216a5b5..4e80784d17343e 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -253,17 +253,35 @@ MODULE_PARM_DESC(max_read_size, "Maximum size of a read request");
+ static unsigned int max_write_size = 0;
+ module_param(max_write_size, uint, 0644);
+ MODULE_PARM_DESC(max_write_size, "Maximum size of a write request");
+-static unsigned get_max_request_size(struct crypt_config *cc, bool wrt)
++
++static unsigned get_max_request_sectors(struct dm_target *ti, struct bio *bio)
+ {
++ struct crypt_config *cc = ti->private;
+ unsigned val, sector_align;
+- val = !wrt ? READ_ONCE(max_read_size) : READ_ONCE(max_write_size);
+- if (likely(!val))
+- val = !wrt ? DM_CRYPT_DEFAULT_MAX_READ_SIZE : DM_CRYPT_DEFAULT_MAX_WRITE_SIZE;
+- if (wrt || cc->used_tag_size) {
+- if (unlikely(val > BIO_MAX_VECS << PAGE_SHIFT))
+- val = BIO_MAX_VECS << PAGE_SHIFT;
+- }
+- sector_align = max(bdev_logical_block_size(cc->dev->bdev), (unsigned)cc->sector_size);
++ bool wrt = op_is_write(bio_op(bio));
++
++ if (wrt) {
++ /*
++ * For zoned devices, splitting write operations creates the
++ * risk of deadlocking queue freeze operations with zone write
++ * plugging BIO work when the remainder of a split BIO is
++ * issued. So always allow the entire BIO to proceed.
++ */
++ if (ti->emulate_zone_append)
++ return bio_sectors(bio);
++
++ val = min_not_zero(READ_ONCE(max_write_size),
++ DM_CRYPT_DEFAULT_MAX_WRITE_SIZE);
++ } else {
++ val = min_not_zero(READ_ONCE(max_read_size),
++ DM_CRYPT_DEFAULT_MAX_READ_SIZE);
++ }
++
++ if (wrt || cc->used_tag_size)
++ val = min(val, BIO_MAX_VECS << PAGE_SHIFT);
++
++ sector_align = max(bdev_logical_block_size(cc->dev->bdev),
++ (unsigned)cc->sector_size);
+ val = round_down(val, sector_align);
+ if (unlikely(!val))
+ val = sector_align;
+@@ -3496,7 +3514,7 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
+ /*
+ * Check if bio is too large, split as needed.
+ */ +- max_sectors = get_max_request_size(cc, bio_data_dir(bio) == WRITE); ++ max_sectors = get_max_request_sectors(ti, bio); + if (unlikely(bio_sectors(bio) > max_sectors)) + dm_accept_partial_bio(bio, max_sectors); + +@@ -3733,6 +3751,17 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits) + max_t(unsigned int, limits->physical_block_size, cc->sector_size); + limits->io_min = max_t(unsigned int, limits->io_min, cc->sector_size); + limits->dma_alignment = limits->logical_block_size - 1; ++ ++ /* ++ * For zoned dm-crypt targets, there will be no internal splitting of ++ * write BIOs to avoid exceeding BIO_MAX_VECS vectors per BIO. But ++ * without respecting this limit, crypt_alloc_buffer() will trigger a ++ * BUG(). Avoid this by forcing DM core to split write BIOs to this ++ * limit. ++ */ ++ if (ti->emulate_zone_append) ++ limits->max_hw_sectors = min(limits->max_hw_sectors, ++ BIO_MAX_VECS << PAGE_SECTORS_SHIFT); + } + + static struct target_type crypt_target = { +diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c +index e8c0a8c6fb5117..9835f2fe26e99f 100644 +--- a/drivers/md/dm-raid.c ++++ b/drivers/md/dm-raid.c +@@ -439,7 +439,7 @@ static bool rs_is_reshapable(struct raid_set *rs) + /* Return true, if raid set in @rs is recovering */ + static bool rs_is_recovering(struct raid_set *rs) + { +- return rs->md.recovery_cp < rs->md.dev_sectors; ++ return rs->md.resync_offset < rs->md.dev_sectors; + } + + /* Return true, if raid set in @rs is reshaping */ +@@ -769,7 +769,7 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r + rs->md.layout = raid_type->algorithm; + rs->md.new_layout = rs->md.layout; + rs->md.delta_disks = 0; +- rs->md.recovery_cp = MaxSector; ++ rs->md.resync_offset = MaxSector; + + for (i = 0; i < raid_devs; i++) + md_rdev_init(&rs->dev[i].rdev); +@@ -913,7 +913,7 @@ static int parse_dev_params(struct raid_set *rs, struct dm_arg_set *as) + rs->md.external = 0; + rs->md.persistent = 1; + rs->md.major_version = 2; +- } else if (rebuild && !rs->md.recovery_cp) { ++ } else if (rebuild && !rs->md.resync_offset) { + /* + * Without metadata, we will not be able to tell if the array + * is in-sync or not - we must assume it is not. Therefore, +@@ -1696,20 +1696,20 @@ static void rs_setup_recovery(struct raid_set *rs, sector_t dev_sectors) + { + /* raid0 does not recover */ + if (rs_is_raid0(rs)) +- rs->md.recovery_cp = MaxSector; ++ rs->md.resync_offset = MaxSector; + /* + * A raid6 set has to be recovered either + * completely or for the grown part to + * ensure proper parity and Q-Syndrome + */ + else if (rs_is_raid6(rs)) +- rs->md.recovery_cp = dev_sectors; ++ rs->md.resync_offset = dev_sectors; + /* + * Other raid set types may skip recovery + * depending on the 'nosync' flag. + */ + else +- rs->md.recovery_cp = test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags) ++ rs->md.resync_offset = test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags) + ? 
MaxSector : dev_sectors; + } + +@@ -2144,7 +2144,7 @@ static void super_sync(struct mddev *mddev, struct md_rdev *rdev) + sb->events = cpu_to_le64(mddev->events); + + sb->disk_recovery_offset = cpu_to_le64(rdev->recovery_offset); +- sb->array_resync_offset = cpu_to_le64(mddev->recovery_cp); ++ sb->array_resync_offset = cpu_to_le64(mddev->resync_offset); + + sb->level = cpu_to_le32(mddev->level); + sb->layout = cpu_to_le32(mddev->layout); +@@ -2335,18 +2335,18 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev) + } + + if (!test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags)) +- mddev->recovery_cp = le64_to_cpu(sb->array_resync_offset); ++ mddev->resync_offset = le64_to_cpu(sb->array_resync_offset); + + /* + * During load, we set FirstUse if a new superblock was written. + * There are two reasons we might not have a superblock: + * 1) The raid set is brand new - in which case, all of the + * devices must have their In_sync bit set. Also, +- * recovery_cp must be 0, unless forced. ++ * resync_offset must be 0, unless forced. + * 2) This is a new device being added to an old raid set + * and the new device needs to be rebuilt - in which + * case the In_sync bit will /not/ be set and +- * recovery_cp must be MaxSector. ++ * resync_offset must be MaxSector. + * 3) This is/are a new device(s) being added to an old + * raid set during takeover to a higher raid level + * to provide capacity for redundancy or during reshape +@@ -2391,8 +2391,8 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev) + new_devs > 1 ? "s" : ""); + return -EINVAL; + } else if (!test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags) && rs_is_recovering(rs)) { +- DMERR("'rebuild' specified while raid set is not in-sync (recovery_cp=%llu)", +- (unsigned long long) mddev->recovery_cp); ++ DMERR("'rebuild' specified while raid set is not in-sync (resync_offset=%llu)", ++ (unsigned long long) mddev->resync_offset); + return -EINVAL; + } else if (rs_is_reshaping(rs)) { + DMERR("'rebuild' specified while raid set is being reshaped (reshape_position=%llu)", +@@ -2697,11 +2697,11 @@ static int rs_adjust_data_offsets(struct raid_set *rs) + } + out: + /* +- * Raise recovery_cp in case data_offset != 0 to ++ * Raise resync_offset in case data_offset != 0 to + * avoid false recovery positives in the constructor. + */ +- if (rs->md.recovery_cp < rs->md.dev_sectors) +- rs->md.recovery_cp += rs->dev[0].rdev.data_offset; ++ if (rs->md.resync_offset < rs->md.dev_sectors) ++ rs->md.resync_offset += rs->dev[0].rdev.data_offset; + + /* Adjust data offsets on all rdevs but on any raid4/5/6 journal device */ + rdev_for_each(rdev, &rs->md) { +@@ -2756,7 +2756,7 @@ static int rs_setup_takeover(struct raid_set *rs) + } + + clear_bit(MD_ARRAY_FIRST_USE, &mddev->flags); +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + + while (d--) { + rdev = &rs->dev[d].rdev; +@@ -2764,7 +2764,7 @@ static int rs_setup_takeover(struct raid_set *rs) + if (test_bit(d, (void *) rs->rebuild_disks)) { + clear_bit(In_sync, &rdev->flags); + clear_bit(Faulty, &rdev->flags); +- mddev->recovery_cp = rdev->recovery_offset = 0; ++ mddev->resync_offset = rdev->recovery_offset = 0; + /* Bitmap has to be created when we do an "up" takeover */ + set_bit(MD_ARRAY_FIRST_USE, &mddev->flags); + } +@@ -3222,7 +3222,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv) + if (r) + goto bad; + +- rs_setup_recovery(rs, rs->md.recovery_cp < rs->md.dev_sectors ? 
rs->md.recovery_cp : rs->md.dev_sectors); ++ rs_setup_recovery(rs, rs->md.resync_offset < rs->md.dev_sectors ? rs->md.resync_offset : rs->md.dev_sectors); + } else { + /* This is no size change or it is shrinking, update size and record in superblocks */ + r = rs_set_dev_and_array_sectors(rs, rs->ti->len, false); +@@ -3446,7 +3446,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery, + + } else { + if (state == st_idle && !test_bit(MD_RECOVERY_INTR, &recovery)) +- r = mddev->recovery_cp; ++ r = mddev->resync_offset; + else + r = mddev->curr_resync_completed; + +@@ -4074,9 +4074,9 @@ static int raid_preresume(struct dm_target *ti) + } + + /* Check for any resize/reshape on @rs and adjust/initiate */ +- if (mddev->recovery_cp && mddev->recovery_cp < MaxSector) { ++ if (mddev->resync_offset && mddev->resync_offset < MaxSector) { + set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery); +- mddev->resync_min = mddev->recovery_cp; ++ mddev->resync_min = mddev->resync_offset; + if (test_bit(RT_FLAG_RS_GROW, &rs->runtime_flags)) + mddev->resync_max_sectors = mddev->dev_sectors; + } +diff --git a/drivers/md/dm.c b/drivers/md/dm.c +index 9f6d88ea60e67f..abfe0392b5a47e 100644 +--- a/drivers/md/dm.c ++++ b/drivers/md/dm.c +@@ -1293,8 +1293,9 @@ static size_t dm_dax_recovery_write(struct dax_device *dax_dev, pgoff_t pgoff, + /* + * A target may call dm_accept_partial_bio only from the map routine. It is + * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_* zone management +- * operations, REQ_OP_ZONE_APPEND (zone append writes) and any bio serviced by +- * __send_duplicate_bios(). ++ * operations, zone append writes (native with REQ_OP_ZONE_APPEND or emulated ++ * with write BIOs flagged with BIO_EMULATES_ZONE_APPEND) and any bio serviced ++ * by __send_duplicate_bios(). 
+ * + * dm_accept_partial_bio informs the dm that the target only wants to process + * additional n_sectors sectors of the bio and the rest of the data should be +@@ -1327,11 +1328,19 @@ void dm_accept_partial_bio(struct bio *bio, unsigned int n_sectors) + unsigned int bio_sectors = bio_sectors(bio); + + BUG_ON(dm_tio_flagged(tio, DM_TIO_IS_DUPLICATE_BIO)); +- BUG_ON(op_is_zone_mgmt(bio_op(bio))); +- BUG_ON(bio_op(bio) == REQ_OP_ZONE_APPEND); + BUG_ON(bio_sectors > *tio->len_ptr); + BUG_ON(n_sectors > bio_sectors); + ++ if (static_branch_unlikely(&zoned_enabled) && ++ unlikely(bdev_is_zoned(bio->bi_bdev))) { ++ enum req_op op = bio_op(bio); ++ ++ BUG_ON(op_is_zone_mgmt(op)); ++ BUG_ON(op == REQ_OP_WRITE); ++ BUG_ON(op == REQ_OP_WRITE_ZEROES); ++ BUG_ON(op == REQ_OP_ZONE_APPEND); ++ } ++ + *tio->len_ptr -= bio_sectors - n_sectors; + bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT; + +diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c +index 7f524a26cebcaa..334b7140493004 100644 +--- a/drivers/md/md-bitmap.c ++++ b/drivers/md/md-bitmap.c +@@ -1987,12 +1987,12 @@ static void bitmap_dirty_bits(struct mddev *mddev, unsigned long s, + + md_bitmap_set_memory_bits(bitmap, sec, 1); + md_bitmap_file_set_bit(bitmap, sec); +- if (sec < bitmap->mddev->recovery_cp) ++ if (sec < bitmap->mddev->resync_offset) + /* We are asserting that the array is dirty, +- * so move the recovery_cp address back so ++ * so move the resync_offset address back so + * that it is obvious that it is dirty + */ +- bitmap->mddev->recovery_cp = sec; ++ bitmap->mddev->resync_offset = sec; + } + } + +@@ -2258,7 +2258,7 @@ static int bitmap_load(struct mddev *mddev) + || bitmap->events_cleared == mddev->events) + /* no need to keep dirty bits to optimise a + * re-add of a missing device */ +- start = mddev->recovery_cp; ++ start = mddev->resync_offset; + + mutex_lock(&mddev->bitmap_info.mutex); + err = md_bitmap_init_from_disk(bitmap, start); +diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c +index 94221d964d4fd6..5497eaee96e7d3 100644 +--- a/drivers/md/md-cluster.c ++++ b/drivers/md/md-cluster.c +@@ -337,11 +337,11 @@ static void recover_bitmaps(struct md_thread *thread) + md_wakeup_thread(mddev->sync_thread); + + if (hi > 0) { +- if (lo < mddev->recovery_cp) +- mddev->recovery_cp = lo; ++ if (lo < mddev->resync_offset) ++ mddev->resync_offset = lo; + /* wake up thread to continue resync in case resync + * is not finished */ +- if (mddev->recovery_cp != MaxSector) { ++ if (mddev->resync_offset != MaxSector) { + /* + * clear the REMOTE flag since we will launch + * resync thread in current node. +@@ -863,9 +863,9 @@ static int gather_all_resync_info(struct mddev *mddev, int total_slots) + lockres_free(bm_lockres); + continue; + } +- if ((hi > 0) && (lo < mddev->recovery_cp)) { ++ if ((hi > 0) && (lo < mddev->resync_offset)) { + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); +- mddev->recovery_cp = lo; ++ mddev->resync_offset = lo; + md_check_recovery(mddev); + } + +@@ -1027,7 +1027,7 @@ static int leave(struct mddev *mddev) + * Also, we should send BITMAP_NEEDS_SYNC message in + * case reshaping is interrupted. 
+ */ +- if ((cinfo->slot_number > 0 && mddev->recovery_cp != MaxSector) || ++ if ((cinfo->slot_number > 0 && mddev->resync_offset != MaxSector) || + (mddev->reshape_position != MaxSector && + test_bit(MD_CLOSING, &mddev->flags))) + resync_bitmap(mddev); +@@ -1605,8 +1605,8 @@ static int gather_bitmaps(struct md_rdev *rdev) + pr_warn("md-cluster: Could not gather bitmaps from slot %d", sn); + goto out; + } +- if ((hi > 0) && (lo < mddev->recovery_cp)) +- mddev->recovery_cp = lo; ++ if ((hi > 0) && (lo < mddev->resync_offset)) ++ mddev->resync_offset = lo; + } + out: + return err; +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 10670c62b09e50..8746b22060a7c2 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -1402,13 +1402,13 @@ static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, stru + mddev->layout = -1; + + if (sb->state & (1<<MD_SB_CLEAN)) +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + else { + if (sb->events_hi == sb->cp_events_hi && + sb->events_lo == sb->cp_events_lo) { +- mddev->recovery_cp = sb->recovery_cp; ++ mddev->resync_offset = sb->resync_offset; + } else +- mddev->recovery_cp = 0; ++ mddev->resync_offset = 0; + } + + memcpy(mddev->uuid+0, &sb->set_uuid0, 4); +@@ -1534,13 +1534,13 @@ static void super_90_sync(struct mddev *mddev, struct md_rdev *rdev) + mddev->minor_version = sb->minor_version; + if (mddev->in_sync) + { +- sb->recovery_cp = mddev->recovery_cp; ++ sb->resync_offset = mddev->resync_offset; + sb->cp_events_hi = (mddev->events>>32); + sb->cp_events_lo = (u32)mddev->events; +- if (mddev->recovery_cp == MaxSector) ++ if (mddev->resync_offset == MaxSector) + sb->state = (1<< MD_SB_CLEAN); + } else +- sb->recovery_cp = 0; ++ sb->resync_offset = 0; + + sb->layout = mddev->layout; + sb->chunk_size = mddev->chunk_sectors << 9; +@@ -1888,7 +1888,7 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *freshest, struc + mddev->bitmap_info.default_space = (4096-1024) >> 9; + mddev->reshape_backwards = 0; + +- mddev->recovery_cp = le64_to_cpu(sb->resync_offset); ++ mddev->resync_offset = le64_to_cpu(sb->resync_offset); + memcpy(mddev->uuid, sb->set_uuid, 16); + + mddev->max_disks = (4096-256)/2; +@@ -2074,7 +2074,7 @@ static void super_1_sync(struct mddev *mddev, struct md_rdev *rdev) + sb->utime = cpu_to_le64((__u64)mddev->utime); + sb->events = cpu_to_le64(mddev->events); + if (mddev->in_sync) +- sb->resync_offset = cpu_to_le64(mddev->recovery_cp); ++ sb->resync_offset = cpu_to_le64(mddev->resync_offset); + else if (test_bit(MD_JOURNAL_CLEAN, &mddev->flags)) + sb->resync_offset = cpu_to_le64(MaxSector); + else +@@ -2754,7 +2754,7 @@ void md_update_sb(struct mddev *mddev, int force_change) + /* If this is just a dirty<->clean transition, and the array is clean + * and 'events' is odd, we can roll back to the previous clean state */ + if (nospares +- && (mddev->in_sync && mddev->recovery_cp == MaxSector) ++ && (mddev->in_sync && mddev->resync_offset == MaxSector) + && mddev->can_decrease_events + && mddev->events != 1) { + mddev->events--; +@@ -4290,9 +4290,9 @@ __ATTR(chunk_size, S_IRUGO|S_IWUSR, chunk_size_show, chunk_size_store); + static ssize_t + resync_start_show(struct mddev *mddev, char *page) + { +- if (mddev->recovery_cp == MaxSector) ++ if (mddev->resync_offset == MaxSector) + return sprintf(page, "none\n"); +- return sprintf(page, "%llu\n", (unsigned long long)mddev->recovery_cp); ++ return sprintf(page, "%llu\n", (unsigned long long)mddev->resync_offset); + } + + static ssize_t +@@ -4318,7 +4318,7 @@
resync_start_store(struct mddev *mddev, const char *buf, size_t len) + err = -EBUSY; + + if (!err) { +- mddev->recovery_cp = n; ++ mddev->resync_offset = n; + if (mddev->pers) + set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags); + } +@@ -4822,9 +4822,42 @@ metadata_store(struct mddev *mddev, const char *buf, size_t len) + static struct md_sysfs_entry md_metadata = + __ATTR_PREALLOC(metadata_version, S_IRUGO|S_IWUSR, metadata_show, metadata_store); + ++static bool rdev_needs_recovery(struct md_rdev *rdev, sector_t sectors) ++{ ++ return rdev->raid_disk >= 0 && ++ !test_bit(Journal, &rdev->flags) && ++ !test_bit(Faulty, &rdev->flags) && ++ !test_bit(In_sync, &rdev->flags) && ++ rdev->recovery_offset < sectors; ++} ++ ++static enum sync_action md_get_active_sync_action(struct mddev *mddev) ++{ ++ struct md_rdev *rdev; ++ bool is_recover = false; ++ ++ if (mddev->resync_offset < MaxSector) ++ return ACTION_RESYNC; ++ ++ if (mddev->reshape_position != MaxSector) ++ return ACTION_RESHAPE; ++ ++ rcu_read_lock(); ++ rdev_for_each_rcu(rdev, mddev) { ++ if (rdev_needs_recovery(rdev, MaxSector)) { ++ is_recover = true; ++ break; ++ } ++ } ++ rcu_read_unlock(); ++ ++ return is_recover ? ACTION_RECOVER : ACTION_IDLE; ++} ++ + enum sync_action md_sync_action(struct mddev *mddev) + { + unsigned long recovery = mddev->recovery; ++ enum sync_action active_action; + + /* + * frozen has the highest priority, means running sync_thread will be +@@ -4848,8 +4881,17 @@ enum sync_action md_sync_action(struct mddev *mddev) + !test_bit(MD_RECOVERY_NEEDED, &recovery)) + return ACTION_IDLE; + +- if (test_bit(MD_RECOVERY_RESHAPE, &recovery) || +- mddev->reshape_position != MaxSector) ++ /* ++ * Check if any sync operation (resync/recover/reshape) is ++ * currently active. This ensures that only one sync operation ++ * can run at a time. Returns the type of active operation, or ++ * ACTION_IDLE if none are active. ++ */ ++ active_action = md_get_active_sync_action(mddev); ++ if (active_action != ACTION_IDLE) ++ return active_action; ++ ++ if (test_bit(MD_RECOVERY_RESHAPE, &recovery)) + return ACTION_RESHAPE; + + if (test_bit(MD_RECOVERY_RECOVER, &recovery)) +@@ -6405,7 +6447,7 @@ static void md_clean(struct mddev *mddev) + mddev->external_size = 0; + mddev->dev_sectors = 0; + mddev->raid_disks = 0; +- mddev->recovery_cp = 0; ++ mddev->resync_offset = 0; + mddev->resync_min = 0; + mddev->resync_max = MaxSector; + mddev->reshape_position = MaxSector; +@@ -7359,9 +7401,9 @@ int md_set_array_info(struct mddev *mddev, struct mdu_array_info_s *info) + * openned + */ + if (info->state & (1<<MD_SB_CLEAN)) +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + else +- mddev->recovery_cp = 0; ++ mddev->resync_offset = 0; + mddev->persistent = !
info->not_persistent; + mddev->external = 0; + +@@ -8300,7 +8342,7 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev) + seq_printf(seq, "\tresync=REMOTE"); + return 1; + } +- if (mddev->recovery_cp < MaxSector) { ++ if (mddev->resync_offset < MaxSector) { + seq_printf(seq, "\tresync=PENDING"); + return 1; + } +@@ -8943,7 +8985,7 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action) + return mddev->resync_min; + case ACTION_RESYNC: + if (!mddev->bitmap) +- return mddev->recovery_cp; ++ return mddev->resync_offset; + return 0; + case ACTION_RESHAPE: + /* +@@ -8959,11 +9001,7 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action) + start = MaxSector; + rcu_read_lock(); + rdev_for_each_rcu(rdev, mddev) +- if (rdev->raid_disk >= 0 && +- !test_bit(Journal, &rdev->flags) && +- !test_bit(Faulty, &rdev->flags) && +- !test_bit(In_sync, &rdev->flags) && +- rdev->recovery_offset < start) ++ if (rdev_needs_recovery(rdev, start)) + start = rdev->recovery_offset; + rcu_read_unlock(); + +@@ -9181,8 +9219,8 @@ void md_do_sync(struct md_thread *thread) + atomic_read(&mddev->recovery_active) == 0); + mddev->curr_resync_completed = j; + if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) && +- j > mddev->recovery_cp) +- mddev->recovery_cp = j; ++ j > mddev->resync_offset) ++ mddev->resync_offset = j; + update_time = jiffies; + set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags); + sysfs_notify_dirent_safe(mddev->sysfs_completed); +@@ -9302,19 +9340,19 @@ void md_do_sync(struct md_thread *thread) + mddev->curr_resync > MD_RESYNC_ACTIVE) { + if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) { + if (test_bit(MD_RECOVERY_INTR, &mddev->recovery)) { +- if (mddev->curr_resync >= mddev->recovery_cp) { ++ if (mddev->curr_resync >= mddev->resync_offset) { + pr_debug("md: checkpointing %s of %s.\n", + desc, mdname(mddev)); + if (test_bit(MD_RECOVERY_ERROR, + &mddev->recovery)) +- mddev->recovery_cp = ++ mddev->resync_offset = + mddev->curr_resync_completed; + else +- mddev->recovery_cp = ++ mddev->resync_offset = + mddev->curr_resync; + } + } else +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + } else { + if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery)) + mddev->curr_resync = MaxSector; +@@ -9322,12 +9360,8 @@ void md_do_sync(struct md_thread *thread) + test_bit(MD_RECOVERY_RECOVER, &mddev->recovery)) { + rcu_read_lock(); + rdev_for_each_rcu(rdev, mddev) +- if (rdev->raid_disk >= 0 && +- mddev->delta_disks >= 0 && +- !test_bit(Journal, &rdev->flags) && +- !test_bit(Faulty, &rdev->flags) && +- !test_bit(In_sync, &rdev->flags) && +- rdev->recovery_offset < mddev->curr_resync) ++ if (mddev->delta_disks >= 0 && ++ rdev_needs_recovery(rdev, mddev->curr_resync)) + rdev->recovery_offset = mddev->curr_resync; + rcu_read_unlock(); + } +@@ -9536,7 +9570,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares) + } + + /* Check if resync is in progress. 
*/ +- if (mddev->recovery_cp < MaxSector) { ++ if (mddev->resync_offset < MaxSector) { + remove_spares(mddev, NULL); + set_bit(MD_RECOVERY_SYNC, &mddev->recovery); + clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery); +@@ -9717,7 +9751,7 @@ void md_check_recovery(struct mddev *mddev) + test_bit(MD_RECOVERY_DONE, &mddev->recovery) || + (mddev->external == 0 && mddev->safemode == 1) || + (mddev->safemode == 2 +- && !mddev->in_sync && mddev->recovery_cp == MaxSector) ++ && !mddev->in_sync && mddev->resync_offset == MaxSector) + )) + return; + +diff --git a/drivers/md/md.h b/drivers/md/md.h +index d45a9e6ead80c5..43ae2d03faa1e6 100644 +--- a/drivers/md/md.h ++++ b/drivers/md/md.h +@@ -523,7 +523,7 @@ struct mddev { + unsigned long normal_io_events; /* IO event timestamp */ + atomic_t recovery_active; /* blocks scheduled, but not written */ + wait_queue_head_t recovery_wait; +- sector_t recovery_cp; ++ sector_t resync_offset; + sector_t resync_min; /* user requested sync + * starts here */ + sector_t resync_max; /* resync should pause +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c +index d8f639f4ae1235..613f4fab83b22c 100644 +--- a/drivers/md/raid0.c ++++ b/drivers/md/raid0.c +@@ -673,7 +673,7 @@ static void *raid0_takeover_raid45(struct mddev *mddev) + mddev->raid_disks--; + mddev->delta_disks = -1; + /* make sure it will be not marked as dirty */ +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS); + + create_strip_zones(mddev, &priv_conf); +@@ -716,7 +716,7 @@ static void *raid0_takeover_raid10(struct mddev *mddev) + mddev->raid_disks += mddev->delta_disks; + mddev->degraded = 0; + /* make sure it will be not marked as dirty */ +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS); + + create_strip_zones(mddev, &priv_conf); +@@ -759,7 +759,7 @@ static void *raid0_takeover_raid1(struct mddev *mddev) + mddev->delta_disks = 1 - mddev->raid_disks; + mddev->raid_disks = 1; + /* make sure it will be not marked as dirty */ +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + mddev_clear_unsupported_flags(mddev, UNSUPPORTED_MDDEV_FLAGS); + + create_strip_zones(mddev, &priv_conf); +diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c +index b8b3a90697012c..52881e6032daee 100644 +--- a/drivers/md/raid1-10.c ++++ b/drivers/md/raid1-10.c +@@ -283,7 +283,7 @@ static inline int raid1_check_read_range(struct md_rdev *rdev, + static inline bool raid1_should_read_first(struct mddev *mddev, + sector_t this_sector, int len) + { +- if ((mddev->recovery_cp < this_sector + len)) ++ if ((mddev->resync_offset < this_sector + len)) + return true; + + if (mddev_is_clustered(mddev) && +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c +index 64b8176907a9b8..6cee738a645ff2 100644 +--- a/drivers/md/raid1.c ++++ b/drivers/md/raid1.c +@@ -2822,7 +2822,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr, + } + + if (mddev->bitmap == NULL && +- mddev->recovery_cp == MaxSector && ++ mddev->resync_offset == MaxSector && + !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) && + conf->fullsync == 0) { + *skipped = 1; +@@ -3282,9 +3282,9 @@ static int raid1_run(struct mddev *mddev) + } + + if (conf->raid_disks - mddev->degraded == 1) +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + +- if (mddev->recovery_cp != MaxSector) ++ if (mddev->resync_offset != MaxSector) + 
pr_info("md/raid1:%s: not clean -- starting background reconstruction\n", + mdname(mddev)); + pr_info("md/raid1:%s: active with %d out of %d mirrors\n", +@@ -3345,8 +3345,8 @@ static int raid1_resize(struct mddev *mddev, sector_t sectors) + + md_set_array_sectors(mddev, newsize); + if (sectors > mddev->dev_sectors && +- mddev->recovery_cp > mddev->dev_sectors) { +- mddev->recovery_cp = mddev->dev_sectors; ++ mddev->resync_offset > mddev->dev_sectors) { ++ mddev->resync_offset = mddev->dev_sectors; + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); + } + mddev->dev_sectors = sectors; +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index 95dc354a86a081..b60c30bfb6c794 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -2117,7 +2117,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev) + int last = conf->geo.raid_disks - 1; + struct raid10_info *p; + +- if (mddev->recovery_cp < MaxSector) ++ if (mddev->resync_offset < MaxSector) + /* only hot-add to in-sync arrays, as recovery is + * very different from resync + */ +@@ -3185,7 +3185,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr, + * of a clean array, like RAID1 does. + */ + if (mddev->bitmap == NULL && +- mddev->recovery_cp == MaxSector && ++ mddev->resync_offset == MaxSector && + mddev->reshape_position == MaxSector && + !test_bit(MD_RECOVERY_SYNC, &mddev->recovery) && + !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) && +@@ -4145,7 +4145,7 @@ static int raid10_run(struct mddev *mddev) + disk->recovery_disabled = mddev->recovery_disabled - 1; + } + +- if (mddev->recovery_cp != MaxSector) ++ if (mddev->resync_offset != MaxSector) + pr_notice("md/raid10:%s: not clean -- starting background reconstruction\n", + mdname(mddev)); + pr_info("md/raid10:%s: active with %d out of %d devices\n", +@@ -4245,8 +4245,8 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors) + + md_set_array_sectors(mddev, size); + if (sectors > mddev->dev_sectors && +- mddev->recovery_cp > oldsize) { +- mddev->recovery_cp = oldsize; ++ mddev->resync_offset > oldsize) { ++ mddev->resync_offset = oldsize; + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); + } + calc_sectors(conf, sectors); +@@ -4275,7 +4275,7 @@ static void *raid10_takeover_raid0(struct mddev *mddev, sector_t size, int devs) + mddev->delta_disks = mddev->raid_disks; + mddev->raid_disks *= 2; + /* make sure it will be not marked as dirty */ +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + mddev->dev_sectors = size; + + conf = setup_conf(mddev); +@@ -5087,8 +5087,8 @@ static void raid10_finish_reshape(struct mddev *mddev) + return; + + if (mddev->delta_disks > 0) { +- if (mddev->recovery_cp > mddev->resync_max_sectors) { +- mddev->recovery_cp = mddev->resync_max_sectors; ++ if (mddev->resync_offset > mddev->resync_max_sectors) { ++ mddev->resync_offset = mddev->resync_max_sectors; + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); + } + mddev->resync_max_sectors = mddev->array_sectors; +diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c +index c0fb335311aa6c..56b234683ee6be 100644 +--- a/drivers/md/raid5-ppl.c ++++ b/drivers/md/raid5-ppl.c +@@ -1163,7 +1163,7 @@ static int ppl_load_distributed(struct ppl_log *log) + le64_to_cpu(pplhdr->generation)); + + /* attempt to recover from log if we are starting a dirty array */ +- if (pplhdr && !mddev->pers && mddev->recovery_cp != MaxSector) ++ if (pplhdr && !mddev->pers && mddev->resync_offset != MaxSector) + ret = ppl_recover(log, pplhdr, 
pplhdr_offset); + + /* write empty header if we are starting the array */ +@@ -1422,14 +1422,14 @@ int ppl_init_log(struct r5conf *conf) + + if (ret) { + goto err; +- } else if (!mddev->pers && mddev->recovery_cp == 0 && ++ } else if (!mddev->pers && mddev->resync_offset == 0 && + ppl_conf->recovered_entries > 0 && + ppl_conf->mismatch_count == 0) { + /* + * If we are starting a dirty array and the recovery succeeds + * without any issues, set the array as clean. + */ +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags); + } else if (mddev->pers && ppl_conf->mismatch_count > 0) { + /* no mismatch allowed when enabling PPL for a running array */ +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index ca5b0e8ba707f0..38a193c0fdae7e 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -3740,7 +3740,7 @@ static int want_replace(struct stripe_head *sh, int disk_idx) + && !test_bit(Faulty, &rdev->flags) + && !test_bit(In_sync, &rdev->flags) + && (rdev->recovery_offset <= sh->sector +- || rdev->mddev->recovery_cp <= sh->sector)) ++ || rdev->mddev->resync_offset <= sh->sector)) + rv = 1; + return rv; + } +@@ -3832,7 +3832,7 @@ static int need_this_block(struct stripe_head *sh, struct stripe_head_state *s, + * is missing/faulty, then we need to read everything we can. + */ + if (!force_rcw && +- sh->sector < sh->raid_conf->mddev->recovery_cp) ++ sh->sector < sh->raid_conf->mddev->resync_offset) + /* reconstruct-write isn't being forced */ + return 0; + for (i = 0; i < s->failed && i < 2; i++) { +@@ -4097,7 +4097,7 @@ static int handle_stripe_dirtying(struct r5conf *conf, + int disks) + { + int rmw = 0, rcw = 0, i; +- sector_t recovery_cp = conf->mddev->recovery_cp; ++ sector_t resync_offset = conf->mddev->resync_offset; + + /* Check whether resync is now happening or should start. + * If yes, then the array is dirty (after unclean shutdown or +@@ -4107,14 +4107,14 @@ static int handle_stripe_dirtying(struct r5conf *conf, + * generate correct data from the parity. + */ + if (conf->rmw_level == PARITY_DISABLE_RMW || +- (recovery_cp < MaxSector && sh->sector >= recovery_cp && ++ (resync_offset < MaxSector && sh->sector >= resync_offset && + s->failed == 0)) { + /* Calculate the real rcw later - for now make it + * look like rcw is cheaper + */ + rcw = 1; rmw = 2; +- pr_debug("force RCW rmw_level=%u, recovery_cp=%llu sh->sector=%llu\n", +- conf->rmw_level, (unsigned long long)recovery_cp, ++ pr_debug("force RCW rmw_level=%u, resync_offset=%llu sh->sector=%llu\n", ++ conf->rmw_level, (unsigned long long)resync_offset, + (unsigned long long)sh->sector); + } else for (i = disks; i--; ) { + /* would I have to read this buffer for read_modify_write */ +@@ -4770,14 +4770,14 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s) + if (test_bit(STRIPE_SYNCING, &sh->state)) { + /* If there is a failed device being replaced, + * we must be recovering. +- * else if we are after recovery_cp, we must be syncing ++ * else if we are after resync_offset, we must be syncing + * else if MD_RECOVERY_REQUESTED is set, we also are syncing. + * else we can only be replacing + * sync and recovery both need to read all devices, and so + * use the same flag. 
+ */ + if (do_recovery || +- sh->sector >= conf->mddev->recovery_cp || ++ sh->sector >= conf->mddev->resync_offset || + test_bit(MD_RECOVERY_REQUESTED, &(conf->mddev->recovery))) + s->syncing = 1; + else +@@ -7780,7 +7780,7 @@ static int raid5_run(struct mddev *mddev) + int first = 1; + int ret = -EIO; + +- if (mddev->recovery_cp != MaxSector) ++ if (mddev->resync_offset != MaxSector) + pr_notice("md/raid:%s: not clean -- starting background reconstruction\n", + mdname(mddev)); + +@@ -7921,7 +7921,7 @@ static int raid5_run(struct mddev *mddev) + mdname(mddev)); + mddev->ro = 1; + set_disk_ro(mddev->gendisk, 1); +- } else if (mddev->recovery_cp == MaxSector) ++ } else if (mddev->resync_offset == MaxSector) + set_bit(MD_JOURNAL_CLEAN, &mddev->flags); + } + +@@ -7988,7 +7988,7 @@ static int raid5_run(struct mddev *mddev) + mddev->resync_max_sectors = mddev->dev_sectors; + + if (mddev->degraded > dirty_parity_disks && +- mddev->recovery_cp != MaxSector) { ++ mddev->resync_offset != MaxSector) { + if (test_bit(MD_HAS_PPL, &mddev->flags)) + pr_crit("md/raid:%s: starting dirty degraded array with PPL.\n", + mdname(mddev)); +@@ -8328,8 +8328,8 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors) + + md_set_array_sectors(mddev, newsize); + if (sectors > mddev->dev_sectors && +- mddev->recovery_cp > mddev->dev_sectors) { +- mddev->recovery_cp = mddev->dev_sectors; ++ mddev->resync_offset > mddev->dev_sectors) { ++ mddev->resync_offset = mddev->dev_sectors; + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); + } + mddev->dev_sectors = sectors; +@@ -8423,7 +8423,7 @@ static int raid5_start_reshape(struct mddev *mddev) + return -EINVAL; + + /* raid5 can't handle concurrent reshape and recovery */ +- if (mddev->recovery_cp < MaxSector) ++ if (mddev->resync_offset < MaxSector) + return -EBUSY; + for (i = 0; i < conf->raid_disks; i++) + if (conf->disks[i].replacement) +@@ -8648,7 +8648,7 @@ static void *raid45_takeover_raid0(struct mddev *mddev, int level) + mddev->raid_disks += 1; + mddev->delta_disks = 1; + /* make sure it will be not marked as dirty */ +- mddev->recovery_cp = MaxSector; ++ mddev->resync_offset = MaxSector; + + return setup_conf(mddev); + } +diff --git a/drivers/media/cec/usb/rainshadow/rainshadow-cec.c b/drivers/media/cec/usb/rainshadow/rainshadow-cec.c +index ee870ea1a88601..6f8d6797c61459 100644 +--- a/drivers/media/cec/usb/rainshadow/rainshadow-cec.c ++++ b/drivers/media/cec/usb/rainshadow/rainshadow-cec.c +@@ -171,11 +171,12 @@ static irqreturn_t rain_interrupt(struct serio *serio, unsigned char data, + { + struct rain *rain = serio_get_drvdata(serio); + ++ spin_lock(&rain->buf_lock); + if (rain->buf_len == DATA_SIZE) { ++ spin_unlock(&rain->buf_lock); + dev_warn_once(rain->dev, "buffer overflow\n"); + return IRQ_HANDLED; + } +- spin_lock(&rain->buf_lock); + rain->buf_len++; + rain->buf[rain->buf_wr_idx] = data; + rain->buf_wr_idx = (rain->buf_wr_idx + 1) & 0xff; +diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c +index d3cc65b67855c8..9aec11ee369bf1 100644 +--- a/drivers/media/i2c/hi556.c ++++ b/drivers/media/i2c/hi556.c +@@ -756,21 +756,23 @@ static int hi556_test_pattern(struct hi556 *hi556, u32 pattern) + int ret; + u32 val; + +- if (pattern) { +- ret = hi556_read_reg(hi556, HI556_REG_ISP, +- HI556_REG_VALUE_08BIT, &val); +- if (ret) +- return ret; ++ ret = hi556_read_reg(hi556, HI556_REG_ISP, ++ HI556_REG_VALUE_08BIT, &val); ++ if (ret) ++ return ret; + +- ret = hi556_write_reg(hi556, HI556_REG_ISP, +- HI556_REG_VALUE_08BIT, +- val | 
HI556_REG_ISP_TPG_EN); +- if (ret) +- return ret; +- } ++ val = pattern ? (val | HI556_REG_ISP_TPG_EN) : ++ (val & ~HI556_REG_ISP_TPG_EN); ++ ++ ret = hi556_write_reg(hi556, HI556_REG_ISP, ++ HI556_REG_VALUE_08BIT, val); ++ if (ret) ++ return ret; ++ ++ val = pattern ? BIT(pattern - 1) : 0; + + return hi556_write_reg(hi556, HI556_REG_TEST_PATTERN, +- HI556_REG_VALUE_08BIT, pattern); ++ HI556_REG_VALUE_08BIT, val); + } + + static int hi556_set_ctrl(struct v4l2_ctrl *ctrl) +diff --git a/drivers/media/i2c/mt9m114.c b/drivers/media/i2c/mt9m114.c +index 5f0b0ad8f885f1..c00f9412d08eba 100644 +--- a/drivers/media/i2c/mt9m114.c ++++ b/drivers/media/i2c/mt9m114.c +@@ -1599,13 +1599,9 @@ static int mt9m114_ifp_get_frame_interval(struct v4l2_subdev *sd, + if (interval->which != V4L2_SUBDEV_FORMAT_ACTIVE) + return -EINVAL; + +- mutex_lock(sensor->ifp.hdl.lock); +- + ival->numerator = 1; + ival->denominator = sensor->ifp.frame_rate; + +- mutex_unlock(sensor->ifp.hdl.lock); +- + return 0; + } + +@@ -1624,8 +1620,6 @@ static int mt9m114_ifp_set_frame_interval(struct v4l2_subdev *sd, + if (interval->which != V4L2_SUBDEV_FORMAT_ACTIVE) + return -EINVAL; + +- mutex_lock(sensor->ifp.hdl.lock); +- + if (ival->numerator != 0 && ival->denominator != 0) + sensor->ifp.frame_rate = min_t(unsigned int, + ival->denominator / ival->numerator, +@@ -1639,8 +1633,6 @@ static int mt9m114_ifp_set_frame_interval(struct v4l2_subdev *sd, + if (sensor->streaming) + ret = mt9m114_set_frame_rate(sensor); + +- mutex_unlock(sensor->ifp.hdl.lock); +- + return ret; + } + +diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c +index 06b7896c3eaf14..586b31ba076b60 100644 +--- a/drivers/media/i2c/ov2659.c ++++ b/drivers/media/i2c/ov2659.c +@@ -1469,14 +1469,15 @@ static int ov2659_probe(struct i2c_client *client) + V4L2_CID_TEST_PATTERN, + ARRAY_SIZE(ov2659_test_pattern_menu) - 1, + 0, 0, ov2659_test_pattern_menu); +- ov2659->sd.ctrl_handler = &ov2659->ctrls; + + if (ov2659->ctrls.error) { + dev_err(&client->dev, "%s: control initialization error %d\n", + __func__, ov2659->ctrls.error); ++ v4l2_ctrl_handler_free(&ov2659->ctrls); + return ov2659->ctrls.error; + } + ++ ov2659->sd.ctrl_handler = &ov2659->ctrls; + sd = &ov2659->sd; + client->flags |= I2C_CLIENT_SCCB; + +diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c +index da8581a37e2204..6030bd23b4b944 100644 +--- a/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c ++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-csi2.c +@@ -354,9 +354,9 @@ static int ipu6_isys_csi2_enable_streams(struct v4l2_subdev *sd, + remote_pad = media_pad_remote_pad_first(&sd->entity.pads[CSI2_PAD_SINK]); + remote_sd = media_entity_to_v4l2_subdev(remote_pad->entity); + +- sink_streams = v4l2_subdev_state_xlate_streams(state, CSI2_PAD_SRC, +- CSI2_PAD_SINK, +- &streams_mask); ++ sink_streams = ++ v4l2_subdev_state_xlate_streams(state, pad, CSI2_PAD_SINK, ++ &streams_mask); + + ret = ipu6_isys_csi2_calc_timing(csi2, &timing, CSI2_ACCINV); + if (ret) +@@ -384,9 +384,9 @@ static int ipu6_isys_csi2_disable_streams(struct v4l2_subdev *sd, + struct media_pad *remote_pad; + u64 sink_streams; + +- sink_streams = v4l2_subdev_state_xlate_streams(state, CSI2_PAD_SRC, +- CSI2_PAD_SINK, +- &streams_mask); ++ sink_streams = ++ v4l2_subdev_state_xlate_streams(state, pad, CSI2_PAD_SINK, ++ &streams_mask); + + remote_pad = media_pad_remote_pad_first(&sd->entity.pads[CSI2_PAD_SINK]); + remote_sd = media_entity_to_v4l2_subdev(remote_pad->entity); +diff --git 
a/drivers/media/pci/intel/ivsc/mei_ace.c b/drivers/media/pci/intel/ivsc/mei_ace.c +index 3622271c71c883..50d18b627e152e 100644 +--- a/drivers/media/pci/intel/ivsc/mei_ace.c ++++ b/drivers/media/pci/intel/ivsc/mei_ace.c +@@ -529,6 +529,8 @@ static void mei_ace_remove(struct mei_cl_device *cldev) + + ace_set_camera_owner(ace, ACE_CAMERA_IVSC); + ++ mei_cldev_disable(cldev); ++ + mutex_destroy(&ace->lock); + } + +diff --git a/drivers/media/pci/intel/ivsc/mei_csi.c b/drivers/media/pci/intel/ivsc/mei_csi.c +index 92d871a378ba24..955f687e5d59c2 100644 +--- a/drivers/media/pci/intel/ivsc/mei_csi.c ++++ b/drivers/media/pci/intel/ivsc/mei_csi.c +@@ -760,6 +760,8 @@ static void mei_csi_remove(struct mei_cl_device *cldev) + + pm_runtime_disable(&cldev->dev); + ++ mei_cldev_disable(cldev); ++ + mutex_destroy(&csi->lock); + } + +diff --git a/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c b/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c +index f732a76de93e3e..88c0ba495c3271 100644 +--- a/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c ++++ b/drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c +@@ -849,8 +849,7 @@ static int csiphy_init(struct csiphy_device *csiphy) + regs->offset = 0x1000; + break; + default: +- WARN(1, "unknown csiphy version\n"); +- return -ENODEV; ++ break; + } + + return 0; +diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c +index 06f42875702f02..1507c79913bd45 100644 +--- a/drivers/media/platform/qcom/camss/camss.c ++++ b/drivers/media/platform/qcom/camss/camss.c +@@ -2486,8 +2486,8 @@ static const struct resources_icc icc_res_sm8550[] = { + static const struct camss_subdev_resources csiphy_res_x1e80100[] = { + /* CSIPHY0 */ + { +- .regulators = { "vdd-csiphy-0p8-supply", +- "vdd-csiphy-1p2-supply" }, ++ .regulators = { "vdd-csiphy-0p8", ++ "vdd-csiphy-1p2" }, + .clock = { "csiphy0", "csiphy0_timer" }, + .clock_rate = { { 300000000, 400000000, 480000000 }, + { 266666667, 400000000 } }, +@@ -2501,8 +2501,8 @@ static const struct camss_subdev_resources csiphy_res_x1e80100[] = { + }, + /* CSIPHY1 */ + { +- .regulators = { "vdd-csiphy-0p8-supply", +- "vdd-csiphy-1p2-supply" }, ++ .regulators = { "vdd-csiphy-0p8", ++ "vdd-csiphy-1p2" }, + .clock = { "csiphy1", "csiphy1_timer" }, + .clock_rate = { { 300000000, 400000000, 480000000 }, + { 266666667, 400000000 } }, +@@ -2516,8 +2516,8 @@ static const struct camss_subdev_resources csiphy_res_x1e80100[] = { + }, + /* CSIPHY2 */ + { +- .regulators = { "vdd-csiphy-0p8-supply", +- "vdd-csiphy-1p2-supply" }, ++ .regulators = { "vdd-csiphy-0p8", ++ "vdd-csiphy-1p2" }, + .clock = { "csiphy2", "csiphy2_timer" }, + .clock_rate = { { 300000000, 400000000, 480000000 }, + { 266666667, 400000000 } }, +@@ -2531,8 +2531,8 @@ static const struct camss_subdev_resources csiphy_res_x1e80100[] = { + }, + /* CSIPHY4 */ + { +- .regulators = { "vdd-csiphy-0p8-supply", +- "vdd-csiphy-1p2-supply" }, ++ .regulators = { "vdd-csiphy-0p8", ++ "vdd-csiphy-1p2" }, + .clock = { "csiphy4", "csiphy4_timer" }, + .clock_rate = { { 300000000, 400000000, 480000000 }, + { 266666667, 400000000 } }, +@@ -3625,7 +3625,7 @@ static int camss_probe(struct platform_device *pdev) + ret = v4l2_device_register(camss->dev, &camss->v4l2_dev); + if (ret < 0) { + dev_err(dev, "Failed to register V4L2 device: %d\n", ret); +- goto err_genpd_cleanup; ++ goto err_media_device_cleanup; + } + + v4l2_async_nf_init(&camss->notifier, &camss->v4l2_dev); +@@ -3680,6 +3680,8 @@ static int camss_probe(struct platform_device 
*pdev) + v4l2_device_unregister(&camss->v4l2_dev); + v4l2_async_nf_cleanup(&camss->notifier); + pm_runtime_disable(dev); ++err_media_device_cleanup: ++ media_device_cleanup(&camss->media_dev); + err_genpd_cleanup: + camss_genpd_cleanup(camss); + +diff --git a/drivers/media/platform/qcom/iris/iris_buffer.c b/drivers/media/platform/qcom/iris/iris_buffer.c +index 7dd5730a867af7..018334512baed2 100644 +--- a/drivers/media/platform/qcom/iris/iris_buffer.c ++++ b/drivers/media/platform/qcom/iris/iris_buffer.c +@@ -376,7 +376,7 @@ int iris_destroy_internal_buffer(struct iris_inst *inst, struct iris_buffer *buf + return 0; + } + +-int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane) ++static int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane, bool force) + { + const struct iris_platform_data *platform_data = inst->core->iris_platform_data; + struct iris_buffer *buf, *next; +@@ -396,6 +396,14 @@ int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane) + for (i = 0; i < len; i++) { + buffers = &inst->buffers[internal_buf_type[i]]; + list_for_each_entry_safe(buf, next, &buffers->list, list) { ++ /* ++ * during stream on, skip destroying internal(DPB) buffer ++ * if firmware did not return it. ++ * during close, destroy all buffers irrespectively. ++ */ ++ if (!force && buf->attr & BUF_ATTR_QUEUED) ++ continue; ++ + ret = iris_destroy_internal_buffer(inst, buf); + if (ret) + return ret; +@@ -405,6 +413,16 @@ int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane) + return 0; + } + ++int iris_destroy_all_internal_buffers(struct iris_inst *inst, u32 plane) ++{ ++ return iris_destroy_internal_buffers(inst, plane, true); ++} ++ ++int iris_destroy_dequeued_internal_buffers(struct iris_inst *inst, u32 plane) ++{ ++ return iris_destroy_internal_buffers(inst, plane, false); ++} ++ + static int iris_release_internal_buffers(struct iris_inst *inst, + enum iris_buffer_type buffer_type) + { +diff --git a/drivers/media/platform/qcom/iris/iris_buffer.h b/drivers/media/platform/qcom/iris/iris_buffer.h +index c36b6347b0770a..00825ad2dc3a4b 100644 +--- a/drivers/media/platform/qcom/iris/iris_buffer.h ++++ b/drivers/media/platform/qcom/iris/iris_buffer.h +@@ -106,7 +106,8 @@ void iris_get_internal_buffers(struct iris_inst *inst, u32 plane); + int iris_create_internal_buffers(struct iris_inst *inst, u32 plane); + int iris_queue_internal_buffers(struct iris_inst *inst, u32 plane); + int iris_destroy_internal_buffer(struct iris_inst *inst, struct iris_buffer *buffer); +-int iris_destroy_internal_buffers(struct iris_inst *inst, u32 plane); ++int iris_destroy_all_internal_buffers(struct iris_inst *inst, u32 plane); ++int iris_destroy_dequeued_internal_buffers(struct iris_inst *inst, u32 plane); + int iris_alloc_and_queue_persist_bufs(struct iris_inst *inst); + int iris_alloc_and_queue_input_int_bufs(struct iris_inst *inst); + int iris_queue_buffer(struct iris_inst *inst, struct iris_buffer *buf); +diff --git a/drivers/media/platform/qcom/iris/iris_ctrls.c b/drivers/media/platform/qcom/iris/iris_ctrls.c +index b690578256d59e..13f5cf0d0e8a44 100644 +--- a/drivers/media/platform/qcom/iris/iris_ctrls.c ++++ b/drivers/media/platform/qcom/iris/iris_ctrls.c +@@ -17,8 +17,6 @@ static inline bool iris_valid_cap_id(enum platform_inst_fw_cap_type cap_id) + static enum platform_inst_fw_cap_type iris_get_cap_id(u32 id) + { + switch (id) { +- case V4L2_CID_MPEG_VIDEO_DECODER_MPEG4_DEBLOCK_FILTER: +- return DEBLOCK; + case V4L2_CID_MPEG_VIDEO_H264_PROFILE: + return PROFILE; + case 
V4L2_CID_MPEG_VIDEO_H264_LEVEL: +@@ -34,8 +32,6 @@ static u32 iris_get_v4l2_id(enum platform_inst_fw_cap_type cap_id) + return 0; + + switch (cap_id) { +- case DEBLOCK: +- return V4L2_CID_MPEG_VIDEO_DECODER_MPEG4_DEBLOCK_FILTER; + case PROFILE: + return V4L2_CID_MPEG_VIDEO_H264_PROFILE; + case LEVEL: +@@ -84,8 +80,6 @@ int iris_ctrls_init(struct iris_inst *inst) + if (iris_get_v4l2_id(cap[idx].cap_id)) + num_ctrls++; + } +- if (!num_ctrls) +- return -EINVAL; + + /* Adding 1 to num_ctrls to include V4L2_CID_MIN_BUFFERS_FOR_CAPTURE */ + +@@ -163,6 +157,7 @@ void iris_session_init_caps(struct iris_core *core) + core->inst_fw_caps[cap_id].value = caps[i].value; + core->inst_fw_caps[cap_id].flags = caps[i].flags; + core->inst_fw_caps[cap_id].hfi_id = caps[i].hfi_id; ++ core->inst_fw_caps[cap_id].set = caps[i].set; + } + } + +diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c b/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c +index 64f887d9a17d73..bd9d86220e611e 100644 +--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c ++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_command.c +@@ -208,8 +208,10 @@ static int iris_hfi_gen1_session_stop(struct iris_inst *inst, u32 plane) + flush_pkt.flush_type = flush_type; + + ret = iris_hfi_queue_cmd_write(core, &flush_pkt, flush_pkt.shdr.hdr.size); +- if (!ret) ++ if (!ret) { ++ inst->flush_responses_pending++; + ret = iris_wait_for_session_response(inst, true); ++ } + } + + return ret; +@@ -490,14 +492,6 @@ iris_hfi_gen1_packet_session_set_property(struct hfi_session_set_property_pkt *p + packet->shdr.hdr.size += sizeof(u32) + sizeof(*wm); + break; + } +- case HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER: { +- struct hfi_enable *en = prop_data; +- u32 *in = pdata; +- +- en->enable = *in; +- packet->shdr.hdr.size += sizeof(u32) + sizeof(*en); +- break; +- } + default: + return -EINVAL; + } +@@ -546,14 +540,15 @@ static int iris_hfi_gen1_set_resolution(struct iris_inst *inst) + struct hfi_framesize fs; + int ret; + +- fs.buffer_type = HFI_BUFFER_INPUT; +- fs.width = inst->fmt_src->fmt.pix_mp.width; +- fs.height = inst->fmt_src->fmt.pix_mp.height; +- +- ret = hfi_gen1_set_property(inst, ptype, &fs, sizeof(fs)); +- if (ret) +- return ret; ++ if (!iris_drc_pending(inst)) { ++ fs.buffer_type = HFI_BUFFER_INPUT; ++ fs.width = inst->fmt_src->fmt.pix_mp.width; ++ fs.height = inst->fmt_src->fmt.pix_mp.height; + ++ ret = hfi_gen1_set_property(inst, ptype, &fs, sizeof(fs)); ++ if (ret) ++ return ret; ++ } + fs.buffer_type = HFI_BUFFER_OUTPUT2; + fs.width = inst->fmt_dst->fmt.pix_mp.width; + fs.height = inst->fmt_dst->fmt.pix_mp.height; +diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h +index 93b5f838c2901c..adffcead58ea77 100644 +--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h ++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_defines.h +@@ -65,7 +65,6 @@ + + #define HFI_PROPERTY_CONFIG_BUFFER_REQUIREMENTS 0x202001 + +-#define HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER 0x1200001 + #define HFI_PROPERTY_PARAM_VDEC_DPB_COUNTS 0x120300e + #define HFI_PROPERTY_CONFIG_VDEC_ENTROPY 0x1204004 + +diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c +index 91d95eed68aa29..14d8bef62b606a 100644 +--- a/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c ++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen1_response.c +@@ -200,14 +200,14 @@ static void 
iris_hfi_gen1_event_seq_changed(struct iris_inst *inst, + + iris_hfi_gen1_read_changed_params(inst, pkt); + +- if (inst->state != IRIS_INST_ERROR) { +- reinit_completion(&inst->flush_completion); ++ if (inst->state != IRIS_INST_ERROR && !(inst->sub_state & IRIS_INST_SUB_FIRST_IPSC)) { + + flush_pkt.shdr.hdr.size = sizeof(struct hfi_session_flush_pkt); + flush_pkt.shdr.hdr.pkt_type = HFI_CMD_SESSION_FLUSH; + flush_pkt.shdr.session_id = inst->session_id; + flush_pkt.flush_type = HFI_FLUSH_OUTPUT; +- iris_hfi_queue_cmd_write(inst->core, &flush_pkt, flush_pkt.shdr.hdr.size); ++ if (!iris_hfi_queue_cmd_write(inst->core, &flush_pkt, flush_pkt.shdr.hdr.size)) ++ inst->flush_responses_pending++; + } + + iris_vdec_src_change(inst); +@@ -408,7 +408,9 @@ static void iris_hfi_gen1_session_ftb_done(struct iris_inst *inst, void *packet) + flush_pkt.shdr.hdr.pkt_type = HFI_CMD_SESSION_FLUSH; + flush_pkt.shdr.session_id = inst->session_id; + flush_pkt.flush_type = HFI_FLUSH_OUTPUT; +- iris_hfi_queue_cmd_write(core, &flush_pkt, flush_pkt.shdr.hdr.size); ++ if (!iris_hfi_queue_cmd_write(core, &flush_pkt, flush_pkt.shdr.hdr.size)) ++ inst->flush_responses_pending++; ++ + iris_inst_sub_state_change_drain_last(inst); + + return; +@@ -564,7 +566,6 @@ static void iris_hfi_gen1_handle_response(struct iris_core *core, void *response + const struct iris_hfi_gen1_response_pkt_info *pkt_info; + struct device *dev = core->dev; + struct hfi_session_pkt *pkt; +- struct completion *done; + struct iris_inst *inst; + bool found = false; + u32 i; +@@ -625,9 +626,12 @@ static void iris_hfi_gen1_handle_response(struct iris_core *core, void *response + if (shdr->error_type != HFI_ERR_NONE) + iris_inst_change_state(inst, IRIS_INST_ERROR); + +- done = pkt_info->pkt == HFI_MSG_SESSION_FLUSH ? 
+- &inst->flush_completion : &inst->completion; +- complete(done); ++ if (pkt_info->pkt == HFI_MSG_SESSION_FLUSH) { ++ if (!(--inst->flush_responses_pending)) ++ complete(&inst->flush_completion); ++ } else { ++ complete(&inst->completion); ++ } + } + mutex_unlock(&inst->lock); + +diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c b/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c +index a908b41e2868fc..802fa62c26ebef 100644 +--- a/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c ++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen2_command.c +@@ -178,7 +178,7 @@ static int iris_hfi_gen2_set_crop_offsets(struct iris_inst *inst) + sizeof(u64)); + } + +-static int iris_hfi_gen2_set_bit_dpeth(struct iris_inst *inst) ++static int iris_hfi_gen2_set_bit_depth(struct iris_inst *inst) + { + struct iris_inst_hfi_gen2 *inst_hfi_gen2 = to_iris_inst_hfi_gen2(inst); + u32 port = iris_hfi_gen2_get_port(V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); +@@ -378,7 +378,7 @@ static int iris_hfi_gen2_session_set_config_params(struct iris_inst *inst, u32 p + {HFI_PROP_BITSTREAM_RESOLUTION, iris_hfi_gen2_set_bitstream_resolution }, + {HFI_PROP_CROP_OFFSETS, iris_hfi_gen2_set_crop_offsets }, + {HFI_PROP_CODED_FRAMES, iris_hfi_gen2_set_coded_frames }, +- {HFI_PROP_LUMA_CHROMA_BIT_DEPTH, iris_hfi_gen2_set_bit_dpeth }, ++ {HFI_PROP_LUMA_CHROMA_BIT_DEPTH, iris_hfi_gen2_set_bit_depth }, + {HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, iris_hfi_gen2_set_min_output_count }, + {HFI_PROP_PIC_ORDER_CNT_TYPE, iris_hfi_gen2_set_picture_order_count }, + {HFI_PROP_SIGNAL_COLOR_INFO, iris_hfi_gen2_set_colorspace }, +diff --git a/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c b/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c +index b75a01641d5d48..d2cede2fe1b5a8 100644 +--- a/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c ++++ b/drivers/media/platform/qcom/iris/iris_hfi_gen2_response.c +@@ -265,7 +265,8 @@ static int iris_hfi_gen2_handle_system_error(struct iris_core *core, + { + struct iris_inst *instance; + +- dev_err(core->dev, "received system error of type %#x\n", pkt->type); ++ if (pkt) ++ dev_err(core->dev, "received system error of type %#x\n", pkt->type); + + core->state = IRIS_CORE_ERROR; + +@@ -377,6 +378,11 @@ static int iris_hfi_gen2_handle_output_buffer(struct iris_inst *inst, + + buf->flags = iris_hfi_gen2_get_driver_buffer_flags(inst, hfi_buffer->flags); + ++ if (!buf->data_size && inst->state == IRIS_INST_STREAMING && ++ !(hfi_buffer->flags & HFI_BUF_FW_FLAG_LAST)) { ++ buf->flags |= V4L2_BUF_FLAG_ERROR; ++ } ++ + return 0; + } + +@@ -636,9 +642,6 @@ static int iris_hfi_gen2_handle_session_property(struct iris_inst *inst, + { + struct iris_inst_hfi_gen2 *inst_hfi_gen2 = to_iris_inst_hfi_gen2(inst); + +- if (pkt->port != HFI_PORT_BITSTREAM) +- return 0; +- + if (pkt->flags & HFI_FW_FLAGS_INFORMATION) + return 0; + +diff --git a/drivers/media/platform/qcom/iris/iris_hfi_queue.c b/drivers/media/platform/qcom/iris/iris_hfi_queue.c +index fac7df0c4d1aec..221dcd09e1e109 100644 +--- a/drivers/media/platform/qcom/iris/iris_hfi_queue.c ++++ b/drivers/media/platform/qcom/iris/iris_hfi_queue.c +@@ -113,7 +113,7 @@ int iris_hfi_queue_cmd_write_locked(struct iris_core *core, void *pkt, u32 pkt_s + { + struct iris_iface_q_info *q_info = &core->command_queue; + +- if (core->state == IRIS_CORE_ERROR) ++ if (core->state == IRIS_CORE_ERROR || core->state == IRIS_CORE_DEINIT) + return -EINVAL; + + if (!iris_hfi_queue_write(q_info, pkt, pkt_size)) { +diff --git 
a/drivers/media/platform/qcom/iris/iris_instance.h b/drivers/media/platform/qcom/iris/iris_instance.h +index caa3c65070061b..06a7f1174ad55e 100644 +--- a/drivers/media/platform/qcom/iris/iris_instance.h ++++ b/drivers/media/platform/qcom/iris/iris_instance.h +@@ -27,6 +27,7 @@ + * @crop: structure of crop info + * @completion: structure of signal completions + * @flush_completion: structure of signal completions for flush cmd ++ * @flush_responses_pending: counter to track number of pending flush responses + * @fw_caps: array of supported instance firmware capabilities + * @buffers: array of different iris buffers + * @fw_min_count: minimnum count of buffers needed by fw +@@ -57,6 +58,7 @@ struct iris_inst { + struct iris_hfi_rect_desc crop; + struct completion completion; + struct completion flush_completion; ++ u32 flush_responses_pending; + struct platform_inst_fw_cap fw_caps[INST_FW_CAP_MAX]; + struct iris_buffers buffers[BUF_TYPE_MAX]; + u32 fw_min_count; +diff --git a/drivers/media/platform/qcom/iris/iris_platform_common.h b/drivers/media/platform/qcom/iris/iris_platform_common.h +index ac76d9e1ef9c14..1dab276431c716 100644 +--- a/drivers/media/platform/qcom/iris/iris_platform_common.h ++++ b/drivers/media/platform/qcom/iris/iris_platform_common.h +@@ -89,7 +89,7 @@ enum platform_inst_fw_cap_type { + CODED_FRAMES, + BIT_DEPTH, + RAP_FRAME, +- DEBLOCK, ++ TIER, + INST_FW_CAP_MAX, + }; + +diff --git a/drivers/media/platform/qcom/iris/iris_platform_sm8250.c b/drivers/media/platform/qcom/iris/iris_platform_sm8250.c +index 5c86fd7b7b6fd3..543fa266153918 100644 +--- a/drivers/media/platform/qcom/iris/iris_platform_sm8250.c ++++ b/drivers/media/platform/qcom/iris/iris_platform_sm8250.c +@@ -30,15 +30,6 @@ static struct platform_inst_fw_cap inst_fw_cap_sm8250[] = { + .hfi_id = HFI_PROPERTY_PARAM_WORK_MODE, + .set = iris_set_stage, + }, +- { +- .cap_id = DEBLOCK, +- .min = 0, +- .max = 1, +- .step_or_mask = 1, +- .value = 0, +- .hfi_id = HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER, +- .set = iris_set_u32, +- }, + }; + + static struct platform_inst_caps platform_inst_cap_sm8250 = { +diff --git a/drivers/media/platform/qcom/iris/iris_state.c b/drivers/media/platform/qcom/iris/iris_state.c +index 5976e926c83d13..104e1687ad39da 100644 +--- a/drivers/media/platform/qcom/iris/iris_state.c ++++ b/drivers/media/platform/qcom/iris/iris_state.c +@@ -245,7 +245,7 @@ int iris_inst_sub_state_change_pause(struct iris_inst *inst, u32 plane) + return iris_inst_change_sub_state(inst, 0, set_sub_state); + } + +-static inline bool iris_drc_pending(struct iris_inst *inst) ++bool iris_drc_pending(struct iris_inst *inst) + { + return inst->sub_state & IRIS_INST_SUB_DRC && + inst->sub_state & IRIS_INST_SUB_DRC_LAST; +diff --git a/drivers/media/platform/qcom/iris/iris_state.h b/drivers/media/platform/qcom/iris/iris_state.h +index 78c61aac5e7e0e..e718386dbe0402 100644 +--- a/drivers/media/platform/qcom/iris/iris_state.h ++++ b/drivers/media/platform/qcom/iris/iris_state.h +@@ -140,5 +140,6 @@ int iris_inst_sub_state_change_drain_last(struct iris_inst *inst); + int iris_inst_sub_state_change_drc_last(struct iris_inst *inst); + int iris_inst_sub_state_change_pause(struct iris_inst *inst, u32 plane); + bool iris_allow_cmd(struct iris_inst *inst, u32 cmd); ++bool iris_drc_pending(struct iris_inst *inst); + + #endif +diff --git a/drivers/media/platform/qcom/iris/iris_vb2.c b/drivers/media/platform/qcom/iris/iris_vb2.c +index cdf11feb590b5c..b3bde10eb6d2f0 100644 +--- a/drivers/media/platform/qcom/iris/iris_vb2.c ++++ 
b/drivers/media/platform/qcom/iris/iris_vb2.c +@@ -259,13 +259,14 @@ int iris_vb2_buf_prepare(struct vb2_buffer *vb) + return -EINVAL; + } + +- if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE && +- vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_OUTPUT)) +- return -EINVAL; +- if (vb->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE && +- vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_INPUT)) +- return -EINVAL; +- ++ if (!(inst->sub_state & IRIS_INST_SUB_DRC)) { ++ if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE && ++ vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_OUTPUT)) ++ return -EINVAL; ++ if (vb->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE && ++ vb2_plane_size(vb, 0) < iris_get_buffer_size(inst, BUF_INPUT)) ++ return -EINVAL; ++ } + return 0; + } + +diff --git a/drivers/media/platform/qcom/iris/iris_vdec.c b/drivers/media/platform/qcom/iris/iris_vdec.c +index 4143acedfc5744..d342f733feb995 100644 +--- a/drivers/media/platform/qcom/iris/iris_vdec.c ++++ b/drivers/media/platform/qcom/iris/iris_vdec.c +@@ -171,6 +171,11 @@ int iris_vdec_s_fmt(struct iris_inst *inst, struct v4l2_format *f) + output_fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc; + output_fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization; + ++ /* Update capture format based on new ip w/h */ ++ output_fmt->fmt.pix_mp.width = ALIGN(f->fmt.pix_mp.width, 128); ++ output_fmt->fmt.pix_mp.height = ALIGN(f->fmt.pix_mp.height, 32); ++ inst->buffers[BUF_OUTPUT].size = iris_get_buffer_size(inst, BUF_OUTPUT); ++ + inst->crop.left = 0; + inst->crop.top = 0; + inst->crop.width = f->fmt.pix_mp.width; +@@ -408,7 +413,7 @@ int iris_vdec_streamon_input(struct iris_inst *inst) + + iris_get_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); + +- ret = iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); ++ ret = iris_destroy_dequeued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); + if (ret) + return ret; + +@@ -496,7 +501,7 @@ int iris_vdec_streamon_output(struct iris_inst *inst) + + iris_get_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); + +- ret = iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); ++ ret = iris_destroy_dequeued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); + if (ret) + return ret; + +diff --git a/drivers/media/platform/qcom/iris/iris_vidc.c b/drivers/media/platform/qcom/iris/iris_vidc.c +index ca0f4e310f77f9..a8144595cc78e8 100644 +--- a/drivers/media/platform/qcom/iris/iris_vidc.c ++++ b/drivers/media/platform/qcom/iris/iris_vidc.c +@@ -221,6 +221,33 @@ static void iris_session_close(struct iris_inst *inst) + iris_wait_for_session_response(inst, false); + } + ++static void iris_check_num_queued_internal_buffers(struct iris_inst *inst, u32 plane) ++{ ++ const struct iris_platform_data *platform_data = inst->core->iris_platform_data; ++ struct iris_buffer *buf, *next; ++ struct iris_buffers *buffers; ++ const u32 *internal_buf_type; ++ u32 internal_buffer_count, i; ++ u32 count = 0; ++ ++ if (V4L2_TYPE_IS_OUTPUT(plane)) { ++ internal_buf_type = platform_data->dec_ip_int_buf_tbl; ++ internal_buffer_count = platform_data->dec_ip_int_buf_tbl_size; ++ } else { ++ internal_buf_type = platform_data->dec_op_int_buf_tbl; ++ internal_buffer_count = platform_data->dec_op_int_buf_tbl_size; ++ } ++ ++ for (i = 0; i < internal_buffer_count; i++) { ++ buffers = &inst->buffers[internal_buf_type[i]]; ++ list_for_each_entry_safe(buf, next, &buffers->list, list) ++ count++; ++ if (count) ++ dev_err(inst->core->dev, "%d 
buffer of type %d not released", ++ count, internal_buf_type[i]); ++ } ++} ++ + int iris_close(struct file *filp) + { + struct iris_inst *inst = iris_get_inst(filp, NULL); +@@ -233,8 +260,10 @@ int iris_close(struct file *filp) + iris_session_close(inst); + iris_inst_change_state(inst, IRIS_INST_DEINIT); + iris_v4l2_fh_deinit(inst); +- iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); +- iris_destroy_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); ++ iris_destroy_all_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); ++ iris_destroy_all_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); ++ iris_check_num_queued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE); ++ iris_check_num_queued_internal_buffers(inst, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); + iris_remove_session(inst); + mutex_unlock(&inst->lock); + mutex_destroy(&inst->ctx_q_lock); +diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c +index d305d74bb152d2..4c049c694d9c43 100644 +--- a/drivers/media/platform/qcom/venus/core.c ++++ b/drivers/media/platform/qcom/venus/core.c +@@ -424,13 +424,13 @@ static int venus_probe(struct platform_device *pdev) + INIT_DELAYED_WORK(&core->work, venus_sys_error_handler); + init_waitqueue_head(&core->sys_err_done); + +- ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, venus_isr_thread, +- IRQF_TRIGGER_HIGH | IRQF_ONESHOT, +- "venus", core); ++ ret = hfi_create(core, &venus_core_ops); + if (ret) + goto err_core_put; + +- ret = hfi_create(core, &venus_core_ops); ++ ret = devm_request_threaded_irq(dev, core->irq, hfi_isr, venus_isr_thread, ++ IRQF_TRIGGER_HIGH | IRQF_ONESHOT, ++ "venus", core); + if (ret) + goto err_core_put; + +@@ -709,11 +709,11 @@ static const struct venus_resources msm8996_res = { + }; + + static const struct freq_tbl msm8998_freq_table[] = { +- { 1944000, 465000000 }, /* 4k UHD @ 60 (decode only) */ +- { 972000, 465000000 }, /* 4k UHD @ 30 */ +- { 489600, 360000000 }, /* 1080p @ 60 */ +- { 244800, 186000000 }, /* 1080p @ 30 */ +- { 108000, 100000000 }, /* 720p @ 30 */ ++ { 1728000, 533000000 }, /* 4k UHD @ 60 (decode only) */ ++ { 1036800, 444000000 }, /* 2k @ 120 */ ++ { 829440, 355200000 }, /* 4k @ 44 */ ++ { 489600, 269330000 },/* 4k @ 30 */ ++ { 108000, 200000000 }, /* 1080p @ 60 */ + }; + + static const struct reg_val msm8998_reg_preset[] = { +diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h +index b412e0c5515a09..5b1ba1c69adba1 100644 +--- a/drivers/media/platform/qcom/venus/core.h ++++ b/drivers/media/platform/qcom/venus/core.h +@@ -28,6 +28,8 @@ + #define VIDC_RESETS_NUM_MAX 2 + #define VIDC_MAX_HIER_CODING_LAYER 6 + ++#define VENUS_MAX_FPS 240 ++ + extern int venus_fw_debug; + + struct freq_tbl { +diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c +index b5f2ea8799507f..cec7f5964d3d80 100644 +--- a/drivers/media/platform/qcom/venus/hfi_venus.c ++++ b/drivers/media/platform/qcom/venus/hfi_venus.c +@@ -239,6 +239,7 @@ static int venus_write_queue(struct venus_hfi_device *hdev, + static int venus_read_queue(struct venus_hfi_device *hdev, + struct iface_queue *queue, void *pkt, u32 *tx_req) + { ++ struct hfi_pkt_hdr *pkt_hdr = NULL; + struct hfi_queue_header *qhdr; + u32 dwords, new_rd_idx; + u32 rd_idx, wr_idx, type, qsize; +@@ -304,6 +305,9 @@ static int venus_read_queue(struct venus_hfi_device *hdev, + memcpy(pkt, rd_ptr, len); + memcpy(pkt + len, queue->qmem.kva, 
new_rd_idx << 2); + } ++ pkt_hdr = (struct hfi_pkt_hdr *)(pkt); ++ if ((pkt_hdr->size >> 2) != dwords) ++ return -EINVAL; + } else { + /* bad packet received, dropping */ + new_rd_idx = qhdr->write_idx; +@@ -1678,6 +1682,7 @@ void venus_hfi_destroy(struct venus_core *core) + venus_interface_queues_release(hdev); + mutex_destroy(&hdev->lock); + kfree(hdev); ++ disable_irq(core->irq); + core->ops = NULL; + } + +diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c +index 99ce5fd4157728..fca27be61f4b86 100644 +--- a/drivers/media/platform/qcom/venus/vdec.c ++++ b/drivers/media/platform/qcom/venus/vdec.c +@@ -481,11 +481,10 @@ static int vdec_s_parm(struct file *file, void *fh, struct v4l2_streamparm *a) + us_per_frame = timeperframe->numerator * (u64)USEC_PER_SEC; + do_div(us_per_frame, timeperframe->denominator); + +- if (!us_per_frame) +- return -EINVAL; +- ++ us_per_frame = clamp(us_per_frame, 1, USEC_PER_SEC); + fps = (u64)USEC_PER_SEC; + do_div(fps, us_per_frame); ++ fps = min(VENUS_MAX_FPS, fps); + + inst->fps = fps; + inst->timeperframe = *timeperframe; +diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c +index c7f8e37dba9b22..b9ccee870c3d12 100644 +--- a/drivers/media/platform/qcom/venus/venc.c ++++ b/drivers/media/platform/qcom/venus/venc.c +@@ -411,11 +411,10 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *a) + us_per_frame = timeperframe->numerator * (u64)USEC_PER_SEC; + do_div(us_per_frame, timeperframe->denominator); + +- if (!us_per_frame) +- return -EINVAL; +- ++ us_per_frame = clamp(us_per_frame, 1, USEC_PER_SEC); + fps = (u64)USEC_PER_SEC; + do_div(fps, us_per_frame); ++ fps = min(VENUS_MAX_FPS, fps); + + inst->timeperframe = *timeperframe; + inst->fps = fps; +diff --git a/drivers/media/platform/raspberrypi/pisp_be/Kconfig b/drivers/media/platform/raspberrypi/pisp_be/Kconfig +index 46765a2e4c4d15..a9e51fd94aadc6 100644 +--- a/drivers/media/platform/raspberrypi/pisp_be/Kconfig ++++ b/drivers/media/platform/raspberrypi/pisp_be/Kconfig +@@ -3,6 +3,7 @@ config VIDEO_RASPBERRYPI_PISP_BE + depends on V4L_PLATFORM_DRIVERS + depends on VIDEO_DEV + depends on ARCH_BCM2835 || COMPILE_TEST ++ depends on PM + select VIDEO_V4L2_SUBDEV_API + select MEDIA_CONTROLLER + select VIDEOBUF2_DMA_CONTIG +diff --git a/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c b/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c +index 7596ae1f7de667..f0a98afefdbd1e 100644 +--- a/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c ++++ b/drivers/media/platform/raspberrypi/pisp_be/pisp_be.c +@@ -1726,7 +1726,7 @@ static int pispbe_probe(struct platform_device *pdev) + pm_runtime_use_autosuspend(pispbe->dev); + pm_runtime_enable(pispbe->dev); + +- ret = pispbe_runtime_resume(pispbe->dev); ++ ret = pm_runtime_resume_and_get(pispbe->dev); + if (ret) + goto pm_runtime_disable_err; + +@@ -1748,7 +1748,7 @@ static int pispbe_probe(struct platform_device *pdev) + disable_devs_err: + pispbe_destroy_devices(pispbe); + pm_runtime_suspend_err: +- pispbe_runtime_suspend(pispbe->dev); ++ pm_runtime_put(pispbe->dev); + pm_runtime_disable_err: + pm_runtime_dont_use_autosuspend(pispbe->dev); + pm_runtime_disable(pispbe->dev); +@@ -1762,7 +1762,6 @@ static void pispbe_remove(struct platform_device *pdev) + + pispbe_destroy_devices(pispbe); + +- pispbe_runtime_suspend(pispbe->dev); + pm_runtime_dont_use_autosuspend(pispbe->dev); + pm_runtime_disable(pispbe->dev); + } +diff --git 
a/drivers/media/platform/verisilicon/rockchip_vpu_hw.c b/drivers/media/platform/verisilicon/rockchip_vpu_hw.c +index acd29fa41d2d10..02673be9878e1e 100644 +--- a/drivers/media/platform/verisilicon/rockchip_vpu_hw.c ++++ b/drivers/media/platform/verisilicon/rockchip_vpu_hw.c +@@ -17,7 +17,6 @@ + + #define RK3066_ACLK_MAX_FREQ (300 * 1000 * 1000) + #define RK3288_ACLK_MAX_FREQ (400 * 1000 * 1000) +-#define RK3588_ACLK_MAX_FREQ (300 * 1000 * 1000) + + #define ROCKCHIP_VPU981_MIN_SIZE 64 + +@@ -454,13 +453,6 @@ static int rk3066_vpu_hw_init(struct hantro_dev *vpu) + return 0; + } + +-static int rk3588_vpu981_hw_init(struct hantro_dev *vpu) +-{ +- /* Bump ACLKs to max. possible freq. to improve performance. */ +- clk_set_rate(vpu->clocks[0].clk, RK3588_ACLK_MAX_FREQ); +- return 0; +-} +- + static int rockchip_vpu_hw_init(struct hantro_dev *vpu) + { + /* Bump ACLK to max. possible freq. to improve performance. */ +@@ -821,7 +813,6 @@ const struct hantro_variant rk3588_vpu981_variant = { + .codec_ops = rk3588_vpu981_codec_ops, + .irqs = rk3588_vpu981_irqs, + .num_irqs = ARRAY_SIZE(rk3588_vpu981_irqs), +- .init = rk3588_vpu981_hw_init, + .clk_names = rk3588_vpu981_vpu_clk_names, + .num_clocks = ARRAY_SIZE(rk3588_vpu981_vpu_clk_names) + }; +diff --git a/drivers/media/test-drivers/vivid/vivid-ctrls.c b/drivers/media/test-drivers/vivid/vivid-ctrls.c +index e340df0b62617a..f94c15ff84f78f 100644 +--- a/drivers/media/test-drivers/vivid/vivid-ctrls.c ++++ b/drivers/media/test-drivers/vivid/vivid-ctrls.c +@@ -244,7 +244,8 @@ static const struct v4l2_ctrl_config vivid_ctrl_u8_pixel_array = { + .min = 0x00, + .max = 0xff, + .step = 1, +- .dims = { 640 / PIXEL_ARRAY_DIV, 360 / PIXEL_ARRAY_DIV }, ++ .dims = { DIV_ROUND_UP(360, PIXEL_ARRAY_DIV), ++ DIV_ROUND_UP(640, PIXEL_ARRAY_DIV) }, + }; + + static const struct v4l2_ctrl_config vivid_ctrl_s32_array = { +diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c +index 84e9155b58155c..2e4c1ed37cd2ab 100644 +--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c ++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c +@@ -454,8 +454,8 @@ void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls) + if (keep_controls) + return; + +- dims[0] = roundup(dev->src_rect.width, PIXEL_ARRAY_DIV); +- dims[1] = roundup(dev->src_rect.height, PIXEL_ARRAY_DIV); ++ dims[0] = DIV_ROUND_UP(dev->src_rect.height, PIXEL_ARRAY_DIV); ++ dims[1] = DIV_ROUND_UP(dev->src_rect.width, PIXEL_ARRAY_DIV); + v4l2_ctrl_modify_dimensions(dev->pixel_array, dims); + } + +diff --git a/drivers/media/usb/gspca/vicam.c b/drivers/media/usb/gspca/vicam.c +index d98343fd33fe34..91e177aa8136fd 100644 +--- a/drivers/media/usb/gspca/vicam.c ++++ b/drivers/media/usb/gspca/vicam.c +@@ -227,6 +227,7 @@ static int sd_init(struct gspca_dev *gspca_dev) + const struct ihex_binrec *rec; + const struct firmware *fw; + u8 *firmware_buf; ++ int len; + + ret = request_ihex_firmware(&fw, VICAM_FIRMWARE, + &gspca_dev->dev->dev); +@@ -241,9 +242,14 @@ static int sd_init(struct gspca_dev *gspca_dev) + goto exit; + } + for (rec = (void *)fw->data; rec; rec = ihex_next_binrec(rec)) { +- memcpy(firmware_buf, rec->data, be16_to_cpu(rec->len)); ++ len = be16_to_cpu(rec->len); ++ if (len > PAGE_SIZE) { ++ ret = -EINVAL; ++ break; ++ } ++ memcpy(firmware_buf, rec->data, len); + ret = vicam_control_msg(gspca_dev, 0xff, 0, 0, firmware_buf, +- be16_to_cpu(rec->len)); ++ len); + if (ret < 0) + break; + } +diff --git a/drivers/media/usb/usbtv/usbtv-video.c 
b/drivers/media/usb/usbtv/usbtv-video.c +index be22a9697197c6..de0328100a60dd 100644 +--- a/drivers/media/usb/usbtv/usbtv-video.c ++++ b/drivers/media/usb/usbtv/usbtv-video.c +@@ -73,6 +73,10 @@ static int usbtv_configure_for_norm(struct usbtv *usbtv, v4l2_std_id norm) + } + + if (params) { ++ if (vb2_is_busy(&usbtv->vb2q) && ++ (usbtv->width != params->cap_width || ++ usbtv->height != params->cap_height)) ++ return -EBUSY; + usbtv->width = params->cap_width; + usbtv->height = params->cap_height; + usbtv->n_chunks = usbtv->width * usbtv->height +diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c +index b45809a82f9a66..d28596c720d8a4 100644 +--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c ++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c +@@ -1661,7 +1661,6 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl) + kvfree(hdl->buckets); + hdl->buckets = NULL; + hdl->cached = NULL; +- hdl->error = 0; + mutex_unlock(hdl->lock); + mutex_destroy(&hdl->_lock); + } +diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c +index 7f3f47db4c98a5..e4275f8ee5db8a 100644 +--- a/drivers/memstick/core/memstick.c ++++ b/drivers/memstick/core/memstick.c +@@ -555,7 +555,6 @@ EXPORT_SYMBOL(memstick_add_host); + */ + void memstick_remove_host(struct memstick_host *host) + { +- host->removing = 1; + flush_workqueue(workqueue); + mutex_lock(&host->lock); + if (host->card) +diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c +index 3878136227e49c..5b5e9354fb2e4f 100644 +--- a/drivers/memstick/host/rtsx_usb_ms.c ++++ b/drivers/memstick/host/rtsx_usb_ms.c +@@ -812,6 +812,7 @@ static void rtsx_usb_ms_drv_remove(struct platform_device *pdev) + int err; + + host->eject = true; ++ msh->removing = true; + cancel_work_sync(&host->handle_req); + cancel_delayed_work_sync(&host->poll_card); + +diff --git a/drivers/mfd/mt6397-core.c b/drivers/mfd/mt6397-core.c +index 5f8ed898890783..3e58d0764c7e0b 100644 +--- a/drivers/mfd/mt6397-core.c ++++ b/drivers/mfd/mt6397-core.c +@@ -136,7 +136,7 @@ static const struct mfd_cell mt6323_devs[] = { + .name = "mt6323-led", + .of_compatible = "mediatek,mt6323-led" + }, { +- .name = "mtk-pmic-keys", ++ .name = "mt6323-keys", + .num_resources = ARRAY_SIZE(mt6323_keys_resources), + .resources = mt6323_keys_resources, + .of_compatible = "mediatek,mt6323-keys" +@@ -153,7 +153,7 @@ static const struct mfd_cell mt6328_devs[] = { + .name = "mt6328-regulator", + .of_compatible = "mediatek,mt6328-regulator" + }, { +- .name = "mtk-pmic-keys", ++ .name = "mt6328-keys", + .num_resources = ARRAY_SIZE(mt6328_keys_resources), + .resources = mt6328_keys_resources, + .of_compatible = "mediatek,mt6328-keys" +@@ -175,7 +175,7 @@ static const struct mfd_cell mt6357_devs[] = { + .name = "mt6357-sound", + .of_compatible = "mediatek,mt6357-sound" + }, { +- .name = "mtk-pmic-keys", ++ .name = "mt6357-keys", + .num_resources = ARRAY_SIZE(mt6357_keys_resources), + .resources = mt6357_keys_resources, + .of_compatible = "mediatek,mt6357-keys" +@@ -196,7 +196,7 @@ static const struct mfd_cell mt6331_mt6332_devs[] = { + .name = "mt6332-regulator", + .of_compatible = "mediatek,mt6332-regulator" + }, { +- .name = "mtk-pmic-keys", ++ .name = "mt6331-keys", + .num_resources = ARRAY_SIZE(mt6331_keys_resources), + .resources = mt6331_keys_resources, + .of_compatible = "mediatek,mt6331-keys" +@@ -240,7 +240,7 @@ static const struct mfd_cell mt6359_devs[] = { + }, + { .name = "mt6359-sound", }, + { +- .name = 
"mtk-pmic-keys", ++ .name = "mt6359-keys", + .num_resources = ARRAY_SIZE(mt6359_keys_resources), + .resources = mt6359_keys_resources, + .of_compatible = "mediatek,mt6359-keys" +@@ -272,7 +272,7 @@ static const struct mfd_cell mt6397_devs[] = { + .name = "mt6397-pinctrl", + .of_compatible = "mediatek,mt6397-pinctrl", + }, { +- .name = "mtk-pmic-keys", ++ .name = "mt6397-keys", + .num_resources = ARRAY_SIZE(mt6397_keys_resources), + .resources = mt6397_keys_resources, + .of_compatible = "mediatek,mt6397-keys" +diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c +index 8c29676ab6628b..33b9282cc80d9f 100644 +--- a/drivers/mmc/host/sdhci-of-arasan.c ++++ b/drivers/mmc/host/sdhci-of-arasan.c +@@ -99,6 +99,9 @@ + #define HIWORD_UPDATE(val, mask, shift) \ + ((val) << (shift) | (mask) << ((shift) + 16)) + ++#define CD_STABLE_TIMEOUT_US 1000000 ++#define CD_STABLE_MAX_SLEEP_US 10 ++ + /** + * struct sdhci_arasan_soc_ctl_field - Field used in sdhci_arasan_soc_ctl_map + * +@@ -206,12 +209,15 @@ struct sdhci_arasan_data { + * 19MHz instead + */ + #define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2) ++/* Enable CD stable check before power-up */ ++#define SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE BIT(3) + }; + + struct sdhci_arasan_of_data { + const struct sdhci_arasan_soc_ctl_map *soc_ctl_map; + const struct sdhci_pltfm_data *pdata; + const struct sdhci_arasan_clk_ops *clk_ops; ++ u32 quirks; + }; + + static const struct sdhci_arasan_soc_ctl_map rk3399_soc_ctl_map = { +@@ -514,6 +520,24 @@ static int sdhci_arasan_voltage_switch(struct mmc_host *mmc, + return -EINVAL; + } + ++static void sdhci_arasan_set_power_and_bus_voltage(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd) ++{ ++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ++ struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); ++ u32 reg; ++ ++ /* ++ * Ensure that the card detect logic has stabilized before powering up, this is ++ * necessary after a host controller reset. 
++ */ ++ if (mode == MMC_POWER_UP && sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE) ++ read_poll_timeout(sdhci_readl, reg, reg & SDHCI_CD_STABLE, CD_STABLE_MAX_SLEEP_US, ++ CD_STABLE_TIMEOUT_US, false, host, SDHCI_PRESENT_STATE); ++ ++ sdhci_set_power_and_bus_voltage(host, mode, vdd); ++} ++ + static const struct sdhci_ops sdhci_arasan_ops = { + .set_clock = sdhci_arasan_set_clock, + .get_max_clock = sdhci_pltfm_clk_get_max_clock, +@@ -521,7 +545,7 @@ static const struct sdhci_ops sdhci_arasan_ops = { + .set_bus_width = sdhci_set_bus_width, + .reset = sdhci_arasan_reset, + .set_uhs_signaling = sdhci_set_uhs_signaling, +- .set_power = sdhci_set_power_and_bus_voltage, ++ .set_power = sdhci_arasan_set_power_and_bus_voltage, + .hw_reset = sdhci_arasan_hw_reset, + }; + +@@ -570,7 +594,7 @@ static const struct sdhci_ops sdhci_arasan_cqe_ops = { + .set_bus_width = sdhci_set_bus_width, + .reset = sdhci_arasan_reset, + .set_uhs_signaling = sdhci_set_uhs_signaling, +- .set_power = sdhci_set_power_and_bus_voltage, ++ .set_power = sdhci_arasan_set_power_and_bus_voltage, + .irq = sdhci_arasan_cqhci_irq, + }; + +@@ -1447,6 +1471,7 @@ static const struct sdhci_arasan_clk_ops zynqmp_clk_ops = { + static struct sdhci_arasan_of_data sdhci_arasan_zynqmp_data = { + .pdata = &sdhci_arasan_zynqmp_pdata, + .clk_ops = &zynqmp_clk_ops, ++ .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE, + }; + + static const struct sdhci_arasan_clk_ops versal_clk_ops = { +@@ -1457,6 +1482,7 @@ static const struct sdhci_arasan_clk_ops versal_clk_ops = { + static struct sdhci_arasan_of_data sdhci_arasan_versal_data = { + .pdata = &sdhci_arasan_zynqmp_pdata, + .clk_ops = &versal_clk_ops, ++ .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE, + }; + + static const struct sdhci_arasan_clk_ops versal_net_clk_ops = { +@@ -1467,6 +1493,7 @@ static const struct sdhci_arasan_clk_ops versal_net_clk_ops = { + static struct sdhci_arasan_of_data sdhci_arasan_versal_net_data = { + .pdata = &sdhci_arasan_versal_net_pdata, + .clk_ops = &versal_net_clk_ops, ++ .quirks = SDHCI_ARASAN_QUIRK_ENSURE_CD_STABLE, + }; + + static struct sdhci_arasan_of_data intel_keembay_emmc_data = { +@@ -1945,6 +1972,8 @@ static int sdhci_arasan_probe(struct platform_device *pdev) + if (of_device_is_compatible(np, "rockchip,rk3399-sdhci-5.1")) + sdhci_arasan_update_clockmultiplier(host, 0x0); + ++ sdhci_arasan->quirks |= data->quirks; ++ + if (of_device_is_compatible(np, "intel,keembay-sdhci-5.1-emmc") || + of_device_is_compatible(np, "intel,keembay-sdhci-5.1-sd") || + of_device_is_compatible(np, "intel,keembay-sdhci-5.1-sdio")) { +diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c +index 4c2ae71770f782..3a1de477e9af8d 100644 +--- a/drivers/mmc/host/sdhci-pci-gli.c ++++ b/drivers/mmc/host/sdhci-pci-gli.c +@@ -287,6 +287,20 @@ + #define GLI_MAX_TUNING_LOOP 40 + + /* Genesys Logic chipset */ ++static void sdhci_gli_mask_replay_timer_timeout(struct pci_dev *pdev) ++{ ++ int aer; ++ u32 value; ++ ++ /* mask the replay timer timeout of AER */ ++ aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); ++ if (aer) { ++ pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value); ++ value |= PCI_ERR_COR_REP_TIMER; ++ pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value); ++ } ++} ++ + static inline void gl9750_wt_on(struct sdhci_host *host) + { + u32 wt_value; +@@ -607,7 +621,6 @@ static void gl9750_hw_setting(struct sdhci_host *host) + { + struct sdhci_pci_slot *slot = sdhci_priv(host); + struct pci_dev *pdev; +- int aer; + u32 value; 
+ + pdev = slot->chip->pdev; +@@ -626,12 +639,7 @@ static void gl9750_hw_setting(struct sdhci_host *host) + pci_set_power_state(pdev, PCI_D0); + + /* mask the replay timer timeout of AER */ +- aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); +- if (aer) { +- pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value); +- value |= PCI_ERR_COR_REP_TIMER; +- pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value); +- } ++ sdhci_gli_mask_replay_timer_timeout(pdev); + + gl9750_wt_off(host); + } +@@ -806,7 +814,6 @@ static void sdhci_gl9755_set_clock(struct sdhci_host *host, unsigned int clock) + static void gl9755_hw_setting(struct sdhci_pci_slot *slot) + { + struct pci_dev *pdev = slot->chip->pdev; +- int aer; + u32 value; + + gl9755_wt_on(pdev); +@@ -841,12 +848,7 @@ static void gl9755_hw_setting(struct sdhci_pci_slot *slot) + pci_set_power_state(pdev, PCI_D0); + + /* mask the replay timer timeout of AER */ +- aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR); +- if (aer) { +- pci_read_config_dword(pdev, aer + PCI_ERR_COR_MASK, &value); +- value |= PCI_ERR_COR_REP_TIMER; +- pci_write_config_dword(pdev, aer + PCI_ERR_COR_MASK, value); +- } ++ sdhci_gli_mask_replay_timer_timeout(pdev); + + gl9755_wt_off(pdev); + } +@@ -1751,7 +1753,7 @@ static int gl9763e_add_host(struct sdhci_pci_slot *slot) + return ret; + } + +-static void gli_set_gl9763e(struct sdhci_pci_slot *slot) ++static void gl9763e_hw_setting(struct sdhci_pci_slot *slot) + { + struct pci_dev *pdev = slot->chip->pdev; + u32 value; +@@ -1780,6 +1782,9 @@ static void gli_set_gl9763e(struct sdhci_pci_slot *slot) + value |= FIELD_PREP(GLI_9763E_HS400_RXDLY, GLI_9763E_HS400_RXDLY_5); + pci_write_config_dword(pdev, PCIE_GLI_9763E_CLKRXDLY, value); + ++ /* mask the replay timer timeout of AER */ ++ sdhci_gli_mask_replay_timer_timeout(pdev); ++ + pci_read_config_dword(pdev, PCIE_GLI_9763E_VHS, &value); + value &= ~GLI_9763E_VHS_REV; + value |= FIELD_PREP(GLI_9763E_VHS_REV, GLI_9763E_VHS_REV_R); +@@ -1923,7 +1928,7 @@ static int gli_probe_slot_gl9763e(struct sdhci_pci_slot *slot) + gli_pcie_enable_msi(slot); + host->mmc_host_ops.hs400_enhanced_strobe = + gl9763e_hs400_enhanced_strobe; +- gli_set_gl9763e(slot); ++ gl9763e_hw_setting(slot); + sdhci_enable_v4_mode(host); + + return 0; +diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c +index 9e94998e8df7d2..21739357273290 100644 +--- a/drivers/mmc/host/sdhci_am654.c ++++ b/drivers/mmc/host/sdhci_am654.c +@@ -156,6 +156,7 @@ struct sdhci_am654_data { + + #define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0) + #define SDHCI_AM654_QUIRK_SUPPRESS_V1P8_ENA BIT(1) ++#define SDHCI_AM654_QUIRK_DISABLE_HS400 BIT(2) + }; + + struct window { +@@ -765,6 +766,7 @@ static int sdhci_am654_init(struct sdhci_host *host) + { + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host); ++ struct device *dev = mmc_dev(host->mmc); + u32 ctl_cfg_2 = 0; + u32 mask; + u32 val; +@@ -820,6 +822,12 @@ static int sdhci_am654_init(struct sdhci_host *host) + if (ret) + goto err_cleanup_host; + ++ if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_DISABLE_HS400 && ++ host->mmc->caps2 & (MMC_CAP2_HS400 | MMC_CAP2_HS400_ES)) { ++ dev_info(dev, "HS400 mode not supported on this silicon revision, disabling it\n"); ++ host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES); ++ } ++ + ret = __sdhci_add_host(host); + if (ret) + goto err_cleanup_host; +@@ -883,6 +891,12 @@ static int sdhci_am654_get_of_property(struct platform_device 
*pdev, + return 0; + } + ++static const struct soc_device_attribute sdhci_am654_descope_hs400[] = { ++ { .family = "AM62PX", .revision = "SR1.0" }, ++ { .family = "AM62PX", .revision = "SR1.1" }, ++ { /* sentinel */ } ++}; ++ + static const struct of_device_id sdhci_am654_of_match[] = { + { + .compatible = "ti,am654-sdhci-5.1", +@@ -975,6 +989,10 @@ static int sdhci_am654_probe(struct platform_device *pdev) + goto err_pltfm_free; + } + ++ soc = soc_device_match(sdhci_am654_descope_hs400); ++ if (soc) ++ sdhci_am654->quirks |= SDHCI_AM654_QUIRK_DISABLE_HS400; ++ + host->mmc_host_ops.start_signal_voltage_switch = sdhci_am654_start_signal_voltage_switch; + host->mmc_host_ops.execute_tuning = sdhci_am654_execute_tuning; + +diff --git a/drivers/most/core.c b/drivers/most/core.c +index a635d5082ebb64..da319d108ea1df 100644 +--- a/drivers/most/core.c ++++ b/drivers/most/core.c +@@ -538,8 +538,8 @@ static struct most_channel *get_channel(char *mdev, char *mdev_ch) + dev = bus_find_device_by_name(&mostbus, NULL, mdev); + if (!dev) + return NULL; +- put_device(dev); + iface = dev_get_drvdata(dev); ++ put_device(dev); + list_for_each_entry_safe(c, tmp, &iface->p->channel_list, list) { + if (!strcmp(dev_name(&c->dev), mdev_ch)) + return c; +diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c +index d579d5dd60d66e..df61db8ce46659 100644 +--- a/drivers/mtd/nand/raw/fsmc_nand.c ++++ b/drivers/mtd/nand/raw/fsmc_nand.c +@@ -503,6 +503,8 @@ static int dma_xfer(struct fsmc_nand_data *host, void *buffer, int len, + + dma_dev = chan->device; + dma_addr = dma_map_single(dma_dev->dev, buffer, len, direction); ++ if (dma_mapping_error(dma_dev->dev, dma_addr)) ++ return -EINVAL; + + if (direction == DMA_TO_DEVICE) { + dma_src = dma_addr; +diff --git a/drivers/mtd/nand/raw/renesas-nand-controller.c b/drivers/mtd/nand/raw/renesas-nand-controller.c +index 44f6603736d19b..ac8c1b80d7be96 100644 +--- a/drivers/mtd/nand/raw/renesas-nand-controller.c ++++ b/drivers/mtd/nand/raw/renesas-nand-controller.c +@@ -426,6 +426,9 @@ static int rnandc_read_page_hw_ecc(struct nand_chip *chip, u8 *buf, + /* Configure DMA */ + dma_addr = dma_map_single(rnandc->dev, rnandc->buf, mtd->writesize, + DMA_FROM_DEVICE); ++ if (dma_mapping_error(rnandc->dev, dma_addr)) ++ return -ENOMEM; ++ + writel(dma_addr, rnandc->regs + DMA_ADDR_LOW_REG); + writel(mtd->writesize, rnandc->regs + DMA_CNT_REG); + writel(DMA_TLVL_MAX, rnandc->regs + DMA_TLVL_REG); +@@ -606,6 +609,9 @@ static int rnandc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf, + /* Configure DMA */ + dma_addr = dma_map_single(rnandc->dev, (void *)rnandc->buf, mtd->writesize, + DMA_TO_DEVICE); ++ if (dma_mapping_error(rnandc->dev, dma_addr)) ++ return -ENOMEM; ++ + writel(dma_addr, rnandc->regs + DMA_ADDR_LOW_REG); + writel(mtd->writesize, rnandc->regs + DMA_CNT_REG); + writel(DMA_TLVL_MAX, rnandc->regs + DMA_TLVL_REG); +diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c +index c411fe9be3ef81..b90f15c986a317 100644 +--- a/drivers/mtd/nand/spi/core.c ++++ b/drivers/mtd/nand/spi/core.c +@@ -688,7 +688,10 @@ int spinand_write_page(struct spinand_device *spinand, + SPINAND_WRITE_INITIAL_DELAY_US, + SPINAND_WRITE_POLL_DELAY_US, + &status); +- if (!ret && (status & STATUS_PROG_FAILED)) ++ if (ret) ++ return ret; ++ ++ if (status & STATUS_PROG_FAILED) + return -EIO; + + return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req); +diff --git a/drivers/mtd/spi-nor/swp.c b/drivers/mtd/spi-nor/swp.c +index 
9c9328478d8a5b..9b07f83aeac76d 100644 +--- a/drivers/mtd/spi-nor/swp.c ++++ b/drivers/mtd/spi-nor/swp.c +@@ -56,7 +56,6 @@ static u64 spi_nor_get_min_prot_length_sr(struct spi_nor *nor) + static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs, + u64 *len) + { +- struct mtd_info *mtd = &nor->mtd; + u64 min_prot_len; + u8 mask = spi_nor_get_sr_bp_mask(nor); + u8 tb_mask = spi_nor_get_sr_tb_mask(nor); +@@ -77,13 +76,13 @@ static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs, + min_prot_len = spi_nor_get_min_prot_length_sr(nor); + *len = min_prot_len << (bp - 1); + +- if (*len > mtd->size) +- *len = mtd->size; ++ if (*len > nor->params->size) ++ *len = nor->params->size; + + if (nor->flags & SNOR_F_HAS_SR_TB && sr & tb_mask) + *ofs = 0; + else +- *ofs = mtd->size - *len; ++ *ofs = nor->params->size - *len; + } + + /* +@@ -158,7 +157,6 @@ static bool spi_nor_is_unlocked_sr(struct spi_nor *nor, loff_t ofs, u64 len, + */ + static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len) + { +- struct mtd_info *mtd = &nor->mtd; + u64 min_prot_len; + int ret, status_old, status_new; + u8 mask = spi_nor_get_sr_bp_mask(nor); +@@ -183,7 +181,7 @@ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len) + can_be_bottom = false; + + /* If anything above us is unlocked, we can't use 'top' protection */ +- if (!spi_nor_is_locked_sr(nor, ofs + len, mtd->size - (ofs + len), ++ if (!spi_nor_is_locked_sr(nor, ofs + len, nor->params->size - (ofs + len), + status_old)) + can_be_top = false; + +@@ -195,11 +193,11 @@ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len) + + /* lock_len: length of region that should end up locked */ + if (use_top) +- lock_len = mtd->size - ofs; ++ lock_len = nor->params->size - ofs; + else + lock_len = ofs + len; + +- if (lock_len == mtd->size) { ++ if (lock_len == nor->params->size) { + val = mask; + } else { + min_prot_len = spi_nor_get_min_prot_length_sr(nor); +@@ -248,7 +246,6 @@ static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len) + */ + static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len) + { +- struct mtd_info *mtd = &nor->mtd; + u64 min_prot_len; + int ret, status_old, status_new; + u8 mask = spi_nor_get_sr_bp_mask(nor); +@@ -273,7 +270,7 @@ static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len) + can_be_top = false; + + /* If anything above us is locked, we can't use 'bottom' protection */ +- if (!spi_nor_is_unlocked_sr(nor, ofs + len, mtd->size - (ofs + len), ++ if (!spi_nor_is_unlocked_sr(nor, ofs + len, nor->params->size - (ofs + len), + status_old)) + can_be_bottom = false; + +@@ -285,7 +282,7 @@ static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len) + + /* lock_len: length of region that should remain locked */ + if (use_top) +- lock_len = mtd->size - (ofs + len); ++ lock_len = nor->params->size - (ofs + len); + else + lock_len = ofs; + +diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c +index c6807e473ab706..4c2560ae8866a1 100644 +--- a/drivers/net/bonding/bond_3ad.c ++++ b/drivers/net/bonding/bond_3ad.c +@@ -95,13 +95,13 @@ static int ad_marker_send(struct port *port, struct bond_marker *marker); + static void ad_mux_machine(struct port *port, bool *update_slave_arr); + static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port); + static void ad_tx_machine(struct port *port); +-static void ad_periodic_machine(struct port *port, struct bond_params *bond_params); ++static void 
ad_periodic_machine(struct port *port); + static void ad_port_selection_logic(struct port *port, bool *update_slave_arr); + static void ad_agg_selection_logic(struct aggregator *aggregator, + bool *update_slave_arr); + static void ad_clear_agg(struct aggregator *aggregator); + static void ad_initialize_agg(struct aggregator *aggregator); +-static void ad_initialize_port(struct port *port, int lacp_fast); ++static void ad_initialize_port(struct port *port, const struct bond_params *bond_params); + static void ad_enable_collecting(struct port *port); + static void ad_disable_distributing(struct port *port, + bool *update_slave_arr); +@@ -1296,10 +1296,16 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port) + * case of EXPIRED even if LINK_DOWN didn't arrive for + * the port. + */ +- port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION; + port->sm_vars &= ~AD_PORT_MATCHED; ++ /* Based on IEEE 802.1AX-2014, Figure 6-18 - Receive ++ * machine state diagram, the state should be ++ * Partner_Oper_Port_State.Synchronization = FALSE; ++ * Partner_Oper_Port_State.LACP_Timeout = Short Timeout; ++ * start current_while_timer(Short Timeout); ++ * Actor_Oper_Port_State.Expired = TRUE; ++ */ ++ port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION; + port->partner_oper.port_state |= LACP_STATE_LACP_TIMEOUT; +- port->partner_oper.port_state |= LACP_STATE_LACP_ACTIVITY; + port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT)); + port->actor_oper_port_state |= LACP_STATE_EXPIRED; + port->sm_vars |= AD_PORT_CHURNED; +@@ -1405,11 +1411,10 @@ static void ad_tx_machine(struct port *port) + /** + * ad_periodic_machine - handle a port's periodic state machine + * @port: the port we're looking at +- * @bond_params: bond parameters we will use + * + * Turn ntt flag on priodically to perform periodic transmission of lacpdu's. + */ +-static void ad_periodic_machine(struct port *port, struct bond_params *bond_params) ++static void ad_periodic_machine(struct port *port) + { + periodic_states_t last_state; + +@@ -1418,8 +1423,7 @@ static void ad_periodic_machine(struct port *port, struct bond_para + + /* check if port was reinitialized */ + if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) || +- (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) || +- !bond_params->lacp_active) { ++ (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY))) { + port->sm_periodic_state = AD_NO_PERIODIC; + } + /* check if state machine should change state */ +@@ -1943,16 +1947,16 @@ static void ad_initialize_agg(struct aggregator *aggregator) + /** + * ad_initialize_port - initialize a given port's parameters + * @port: the port we're looking at +- * @lacp_fast: boolean.
whether fast periodic should be used ++ * @bond_params: bond parameters we will use + */ +-static void ad_initialize_port(struct port *port, int lacp_fast) ++static void ad_initialize_port(struct port *port, const struct bond_params *bond_params) + { + static const struct port_params tmpl = { + .system_priority = 0xffff, + .key = 1, + .port_number = 1, + .port_priority = 0xff, +- .port_state = 1, ++ .port_state = 0, + }; + static const struct lacpdu lacpdu = { + .subtype = 0x01, +@@ -1970,12 +1974,14 @@ static void ad_initialize_port(struct port *port, int lacp_fast) + port->actor_port_priority = 0xff; + port->actor_port_aggregator_identifier = 0; + port->ntt = false; +- port->actor_admin_port_state = LACP_STATE_AGGREGATION | +- LACP_STATE_LACP_ACTIVITY; +- port->actor_oper_port_state = LACP_STATE_AGGREGATION | +- LACP_STATE_LACP_ACTIVITY; ++ port->actor_admin_port_state = LACP_STATE_AGGREGATION; ++ port->actor_oper_port_state = LACP_STATE_AGGREGATION; ++ if (bond_params->lacp_active) { ++ port->actor_admin_port_state |= LACP_STATE_LACP_ACTIVITY; ++ port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY; ++ } + +- if (lacp_fast) ++ if (bond_params->lacp_fast) + port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT; + + memcpy(&port->partner_admin, &tmpl, sizeof(tmpl)); +@@ -2187,7 +2193,7 @@ void bond_3ad_bind_slave(struct slave *slave) + /* port initialization */ + port = &(SLAVE_AD_INFO(slave)->port); + +- ad_initialize_port(port, bond->params.lacp_fast); ++ ad_initialize_port(port, &bond->params); + + port->slave = slave; + port->actor_port_number = SLAVE_AD_INFO(slave)->id; +@@ -2499,7 +2505,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work) + } + + ad_rx_machine(NULL, port); +- ad_periodic_machine(port, &bond->params); ++ ad_periodic_machine(port); + ad_port_selection_logic(port, &update_slave_arr); + ad_mux_machine(port, &update_slave_arr); + ad_tx_machine(port); +@@ -2869,6 +2875,31 @@ void bond_3ad_update_lacp_rate(struct bonding *bond) + spin_unlock_bh(&bond->mode_lock); + } + ++/** ++ * bond_3ad_update_lacp_active - change the lacp active ++ * @bond: bonding struct ++ * ++ * Update actor_oper_port_state when lacp_active is modified. 
++ */ ++void bond_3ad_update_lacp_active(struct bonding *bond) ++{ ++ struct port *port = NULL; ++ struct list_head *iter; ++ struct slave *slave; ++ int lacp_active; ++ ++ lacp_active = bond->params.lacp_active; ++ spin_lock_bh(&bond->mode_lock); ++ bond_for_each_slave(bond, slave, iter) { ++ port = &(SLAVE_AD_INFO(slave)->port); ++ if (lacp_active) ++ port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY; ++ else ++ port->actor_oper_port_state &= ~LACP_STATE_LACP_ACTIVITY; ++ } ++ spin_unlock_bh(&bond->mode_lock); ++} ++ + size_t bond_3ad_stats_size(void) + { + return nla_total_size_64bit(sizeof(u64)) + /* BOND_3AD_STAT_LACPDU_RX */ +diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c +index 91893c29b8995b..28c53f1b13826f 100644 +--- a/drivers/net/bonding/bond_options.c ++++ b/drivers/net/bonding/bond_options.c +@@ -1637,6 +1637,7 @@ static int bond_option_lacp_active_set(struct bonding *bond, + netdev_dbg(bond->dev, "Setting LACP active to %s (%llu)\n", + newval->string, newval->value); + bond->params.lacp_active = newval->value; ++ bond_3ad_update_lacp_active(bond); + + return 0; + } +diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c +index 7c142c17b3f69d..adef7aa327ceb0 100644 +--- a/drivers/net/dsa/microchip/ksz_common.c ++++ b/drivers/net/dsa/microchip/ksz_common.c +@@ -2347,6 +2347,12 @@ static void ksz_update_port_member(struct ksz_device *dev, int port) + dev->dev_ops->cfg_port_member(dev, i, val | cpu_port); + } + ++ /* HSR ports are setup once so need to use the assigned membership ++ * when the port is enabled. ++ */ ++ if (!port_member && p->stp_state == BR_STATE_FORWARDING && ++ (dev->hsr_ports & BIT(port))) ++ port_member = dev->hsr_ports; + dev->dev_ops->cfg_port_member(dev, port, port_member | cpu_port); + } + +diff --git a/drivers/net/ethernet/airoha/airoha_ppe.c b/drivers/net/ethernet/airoha/airoha_ppe.c +index 7832fe8fc2021d..af6e4d4c0ecea3 100644 +--- a/drivers/net/ethernet/airoha/airoha_ppe.c ++++ b/drivers/net/ethernet/airoha/airoha_ppe.c +@@ -726,10 +726,8 @@ static void airoha_ppe_foe_insert_entry(struct airoha_ppe *ppe, + continue; + } + +- if (commit_done || !airoha_ppe_foe_compare_entry(e, hwe)) { +- e->hash = 0xffff; ++ if (!airoha_ppe_foe_compare_entry(e, hwe)) + continue; +- } + + airoha_ppe_foe_commit_entry(ppe, &e->data, hash); + commit_done = true; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index 25681c2343fb46..ec8752c298e693 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -5325,7 +5325,7 @@ static void bnxt_free_ntp_fltrs(struct bnxt *bp, bool all) + { + int i; + +- netdev_assert_locked(bp->dev); ++ netdev_assert_locked_or_invisible(bp->dev); + + /* Under netdev instance lock and all our NAPIs have been disabled. + * It's safe to delete the hash table. 
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c +index d1aeb722d48f30..36a6d766b63863 100644 +--- a/drivers/net/ethernet/google/gve/gve_main.c ++++ b/drivers/net/ethernet/google/gve/gve_main.c +@@ -2726,6 +2726,8 @@ static void gve_shutdown(struct pci_dev *pdev) + struct gve_priv *priv = netdev_priv(netdev); + bool was_up = netif_running(priv->dev); + ++ netif_device_detach(netdev); ++ + rtnl_lock(); + netdev_lock(netdev); + if (was_up && gve_close(priv->dev)) { +diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c +index 031c332f66c471..1b4465d6b2b726 100644 +--- a/drivers/net/ethernet/intel/igc/igc_main.c ++++ b/drivers/net/ethernet/intel/igc/igc_main.c +@@ -7115,6 +7115,13 @@ static int igc_probe(struct pci_dev *pdev, + adapter->port_num = hw->bus.func; + adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); + ++ /* PCI config space info */ ++ hw->vendor_id = pdev->vendor; ++ hw->device_id = pdev->device; ++ hw->revision_id = pdev->revision; ++ hw->subsystem_vendor_id = pdev->subsystem_vendor; ++ hw->subsystem_device_id = pdev->subsystem_device; ++ + /* Disable ASPM L1.2 on I226 devices to avoid packet loss */ + if (igc_is_device_id_i226(hw)) + pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); +@@ -7141,13 +7148,6 @@ static int igc_probe(struct pci_dev *pdev, + netdev->mem_start = pci_resource_start(pdev, 0); + netdev->mem_end = pci_resource_end(pdev, 0); + +- /* PCI config space info */ +- hw->vendor_id = pdev->vendor; +- hw->device_id = pdev->device; +- hw->revision_id = pdev->revision; +- hw->subsystem_vendor_id = pdev->subsystem_vendor; +- hw->subsystem_device_id = pdev->subsystem_device; +- + /* Copy the default MAC and PHY function pointers */ + memcpy(&hw->mac.ops, ei->mac_ops, sizeof(hw->mac.ops)); + memcpy(&hw->phy.ops, ei->phy_ops, sizeof(hw->phy.ops)); +diff --git a/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c b/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c +index 54f1b83dfe42e0..d227f4d2a2d17a 100644 +--- a/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c ++++ b/drivers/net/ethernet/intel/ixgbe/devlink/devlink.c +@@ -543,6 +543,7 @@ int ixgbe_devlink_register_port(struct ixgbe_adapter *adapter) + + attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL; + attrs.phys.port_number = adapter->hw.bus.func; ++ attrs.no_phys_port_name = 1; + ixgbe_devlink_set_switch_id(adapter, &attrs.switch_id); + + devlink_port_attrs_set(devlink_port, &attrs); +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c +index ac58964b2f087e..7b941505a9d024 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c +@@ -398,7 +398,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget) + dma_addr_t dma; + u32 cmd_type; + +- while (budget-- > 0) { ++ while (likely(budget)) { + if (unlikely(!ixgbe_desc_unused(xdp_ring))) { + work_done = false; + break; +@@ -433,6 +433,8 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget) + xdp_ring->next_to_use++; + if (xdp_ring->next_to_use == xdp_ring->count) + xdp_ring->next_to_use = 0; ++ ++ budget--; + } + + if (tx_desc) { +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +index 1b765045aa636b..b56395ac5a7439 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c ++++ 
b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +@@ -606,8 +606,8 @@ static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf) + if (!npc_check_field(rvu, blkaddr, NPC_LB, intf)) + *features &= ~BIT_ULL(NPC_OUTER_VID); + +- /* Set SPI flag only if AH/ESP and IPSEC_SPI are in the key */ +- if (npc_check_field(rvu, blkaddr, NPC_IPSEC_SPI, intf) && ++ /* Allow extracting SPI field from AH and ESP headers at same offset */ ++ if (npc_is_field_present(rvu, NPC_IPSEC_SPI, intf) && + (*features & (BIT_ULL(NPC_IPPROTO_ESP) | BIT_ULL(NPC_IPPROTO_AH)))) + *features |= BIT_ULL(NPC_IPSEC_SPI); + +diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +index c855fb799ce145..e9bd3274198379 100644 +--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c ++++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c +@@ -101,7 +101,9 @@ mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_i + if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED)) + return -1; + ++ rcu_read_lock(); + err = dev_fill_forward_path(dev, addr, &stack); ++ rcu_read_unlock(); + if (err) + return err; + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h b/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h +index b59aee75de94e2..2c98a5299df337 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/dcbnl.h +@@ -26,7 +26,6 @@ struct mlx5e_dcbx { + u8 cap; + + /* Buffer configuration */ +- bool manual_buffer; + u32 cable_len; + u32 xoff; + u16 port_buff_cell_sz; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c +index 5ae787656a7ca0..3efa8bf1d14ef4 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c +@@ -272,8 +272,8 @@ static int port_update_shared_buffer(struct mlx5_core_dev *mdev, + /* Total shared buffer size is split in a ratio of 3:1 between + * lossy and lossless pools respectively. 
+ */ +- lossy_epool_size = (shared_buffer_size / 4) * 3; + lossless_ipool_size = shared_buffer_size / 4; ++ lossy_epool_size = shared_buffer_size - lossless_ipool_size; + + mlx5e_port_set_sbpr(mdev, 0, MLX5_EGRESS_DIR, MLX5_LOSSY_POOL, 0, + lossy_epool_size); +@@ -288,14 +288,12 @@ static int port_set_buffer(struct mlx5e_priv *priv, + u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz; + struct mlx5_core_dev *mdev = priv->mdev; + int sz = MLX5_ST_SZ_BYTES(pbmc_reg); +- u32 new_headroom_size = 0; +- u32 current_headroom_size; ++ u32 current_headroom_cells = 0; ++ u32 new_headroom_cells = 0; + void *in; + int err; + int i; + +- current_headroom_size = port_buffer->headroom_size; +- + in = kzalloc(sz, GFP_KERNEL); + if (!in) + return -ENOMEM; +@@ -306,12 +304,14 @@ static int port_set_buffer(struct mlx5e_priv *priv, + + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { + void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]); ++ current_headroom_cells += MLX5_GET(bufferx_reg, buffer, size); ++ + u64 size = port_buffer->buffer[i].size; + u64 xoff = port_buffer->buffer[i].xoff; + u64 xon = port_buffer->buffer[i].xon; + +- new_headroom_size += size; + do_div(size, port_buff_cell_sz); ++ new_headroom_cells += size; + do_div(xoff, port_buff_cell_sz); + do_div(xon, port_buff_cell_sz); + MLX5_SET(bufferx_reg, buffer, size, size); +@@ -320,10 +320,8 @@ static int port_set_buffer(struct mlx5e_priv *priv, + MLX5_SET(bufferx_reg, buffer, xon_threshold, xon); + } + +- new_headroom_size /= port_buff_cell_sz; +- current_headroom_size /= port_buff_cell_sz; +- err = port_update_shared_buffer(priv->mdev, current_headroom_size, +- new_headroom_size); ++ err = port_update_shared_buffer(priv->mdev, current_headroom_cells, ++ new_headroom_cells); + if (err) + goto out; + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c +index a4263137fef5a0..01d522b0294707 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_hmfs.c +@@ -173,6 +173,8 @@ static void mlx5_ct_fs_hmfs_fill_rule_actions(struct mlx5_ct_fs_hmfs *fs_hmfs, + + memset(rule_actions, 0, NUM_CT_HMFS_RULES * sizeof(*rule_actions)); + rule_actions[0].action = mlx5_fc_get_hws_action(fs_hmfs->ctx, attr->counter); ++ rule_actions[0].counter.offset = ++ attr->counter->id - attr->counter->bulk->base_id; + /* Modify header is special, it may require extra arguments outside the action itself. */ + if (mh_action->mh_data) { + rule_actions[1].modify_header.offset = mh_action->mh_data->offset; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c +index 5fe016e477b37e..d166c0d5189e19 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c +@@ -362,6 +362,7 @@ static int mlx5e_dcbnl_ieee_getpfc(struct net_device *dev, + static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev, + struct ieee_pfc *pfc) + { ++ u8 buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN; + struct mlx5e_priv *priv = netdev_priv(dev); + struct mlx5_core_dev *mdev = priv->mdev; + u32 old_cable_len = priv->dcbx.cable_len; +@@ -389,7 +390,14 @@ static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev, + + if (MLX5_BUFFER_SUPPORTED(mdev)) { + pfc_new.pfc_en = (changed & MLX5E_PORT_BUFFER_PFC) ? 
pfc->pfc_en : curr_pfc_en; +- if (priv->dcbx.manual_buffer) ++ ret = mlx5_query_port_buffer_ownership(mdev, ++ &buffer_ownership); ++ if (ret) ++ netdev_err(dev, ++ "%s, Failed to get buffer ownership: %d\n", ++ __func__, ret); ++ ++ if (buffer_ownership == MLX5_BUF_OWNERSHIP_SW_OWNED) + ret = mlx5e_port_manual_buffer_config(priv, changed, + dev->mtu, &pfc_new, + NULL, NULL); +@@ -982,7 +990,6 @@ static int mlx5e_dcbnl_setbuffer(struct net_device *dev, + if (!changed) + return 0; + +- priv->dcbx.manual_buffer = true; + err = mlx5e_port_manual_buffer_config(priv, changed, dev->mtu, NULL, + buffer_size, prio2buffer); + return err; +@@ -1252,7 +1259,6 @@ void mlx5e_dcbnl_initialize(struct mlx5e_priv *priv) + priv->dcbx.cap |= DCB_CAP_DCBX_HOST; + + priv->dcbx.port_buff_cell_sz = mlx5e_query_port_buffers_cell_size(priv); +- priv->dcbx.manual_buffer = false; + priv->dcbx.cable_len = MLX5E_DEFAULT_CABLE_LEN; + + mlx5e_ets_init(priv); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c +index b7102e14d23d3b..c33accadae0f01 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c +@@ -47,10 +47,12 @@ static void mlx5_esw_offloads_pf_vf_devlink_port_attrs_set(struct mlx5_eswitch * + devlink_port_attrs_pci_vf_set(dl_port, controller_num, pfnum, + vport_num - 1, external); + } else if (mlx5_core_is_ec_vf_vport(esw->dev, vport_num)) { ++ u16 base_vport = mlx5_core_ec_vf_vport_base(dev); ++ + memcpy(dl_port->attrs.switch_id.id, ppid.id, ppid.id_len); + dl_port->attrs.switch_id.id_len = ppid.id_len; + devlink_port_attrs_pci_vf_set(dl_port, 0, pfnum, +- vport_num - 1, false); ++ vport_num - base_vport, false); + } + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +index 2e02bdea8361db..c2f6d205ddb1e8 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h +@@ -358,6 +358,8 @@ int mlx5_query_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *out); + int mlx5_set_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *in); + int mlx5_set_trust_state(struct mlx5_core_dev *mdev, u8 trust_state); + int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state); ++int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev, ++ u8 *buffer_ownership); + int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio); + int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio); + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c +index 549f1066d2a508..2d7adf7444ba29 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c +@@ -968,6 +968,26 @@ int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state) + return err; + } + ++int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev, ++ u8 *buffer_ownership) ++{ ++ u32 out[MLX5_ST_SZ_DW(pfcc_reg)] = {}; ++ int err; ++ ++ if (!MLX5_CAP_PCAM_FEATURE(mdev, buffer_ownership)) { ++ *buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN; ++ return 0; ++ } ++ ++ err = mlx5_query_pfcc_reg(mdev, out, sizeof(out)); ++ if (err) ++ return err; ++ ++ *buffer_ownership = MLX5_GET(pfcc_reg, out, buf_ownership); ++ ++ return 0; ++} ++ + int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio) + { + int sz = 
MLX5_ST_SZ_BYTES(qpdpm_reg); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c +index ca7501c5746886..14e79579c719c2 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c +@@ -1328,11 +1328,11 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher) + { + struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx; + struct mlx5hws_matcher *matcher = bwc_matcher->matcher; +- bool move_error = false, poll_error = false; + u16 bwc_queues = mlx5hws_bwc_queues(ctx); + struct mlx5hws_bwc_rule *tmp_bwc_rule; + struct mlx5hws_rule_attr rule_attr; + struct mlx5hws_table *isolated_tbl; ++ int move_error = 0, poll_error = 0; + struct mlx5hws_rule *tmp_rule; + struct list_head *rules_list; + u32 expected_completions = 1; +@@ -1391,11 +1391,15 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher) + ret = mlx5hws_matcher_resize_rule_move(matcher, + tmp_rule, + &rule_attr); +- if (unlikely(ret && !move_error)) { +- mlx5hws_err(ctx, +- "Moving complex BWC rule failed (%d), attempting to move rest of the rules\n", +- ret); +- move_error = true; ++ if (unlikely(ret)) { ++ if (!move_error) { ++ mlx5hws_err(ctx, ++ "Moving complex BWC rule: move failed (%d), attempting to move rest of the rules\n", ++ ret); ++ move_error = ret; ++ } ++ /* Rule wasn't queued, no need to poll */ ++ continue; + } + + expected_completions = 1; +@@ -1403,11 +1407,19 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher) + rule_attr.queue_id, + &expected_completions, + true); +- if (unlikely(ret && !poll_error)) { +- mlx5hws_err(ctx, +- "Moving complex BWC rule: poll failed (%d), attempting to move rest of the rules\n", +- ret); +- poll_error = true; ++ if (unlikely(ret)) { ++ if (ret == -ETIMEDOUT) { ++ mlx5hws_err(ctx, ++ "Moving complex BWC rule: timeout polling for completions (%d), aborting rehash\n", ++ ret); ++ return ret; ++ } ++ if (!poll_error) { ++ mlx5hws_err(ctx, ++ "Moving complex BWC rule: polling for completions failed (%d), attempting to move rest of the rules\n", ++ ret); ++ poll_error = ret; ++ } + } + + /* Done moving the rule to the new matcher, +@@ -1422,8 +1434,11 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher) + } + } + +- if (move_error || poll_error) +- ret = -EINVAL; ++ /* Return the first error that happened */ ++ if (unlikely(move_error)) ++ return move_error; ++ if (unlikely(poll_error)) ++ return poll_error; + + return ret; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c +index 9c83753e459243..0bdcab2e5cf3a6 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.c +@@ -55,6 +55,7 @@ int mlx5hws_cmd_flow_table_create(struct mlx5_core_dev *mdev, + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); ++ MLX5_SET(create_flow_table_in, in, uid, ft_attr->uid); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h +index 
fa6bff210266cb..122ccc671628de 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/cmd.h +@@ -36,6 +36,7 @@ struct mlx5hws_cmd_set_fte_attr { + struct mlx5hws_cmd_ft_create_attr { + u8 type; + u8 level; ++ u16 uid; + bool rtc_valid; + bool decap_en; + bool reformat_en; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c +index bf4643d0ce1790..47e3947e7b512f 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c +@@ -267,6 +267,7 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns, + + tbl_attr.type = MLX5HWS_TABLE_TYPE_FDB; + tbl_attr.level = ft_attr->level; ++ tbl_attr.uid = ft_attr->uid; + tbl = mlx5hws_table_create(ctx, &tbl_attr); + if (!tbl) { + mlx5_core_err(ns->dev, "Failed creating hws flow_table\n"); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +index ce28ee1c0e41bd..6000f2c641e083 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/matcher.c +@@ -85,6 +85,7 @@ static int hws_matcher_create_end_ft_isolated(struct mlx5hws_matcher *matcher) + + ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, + tbl, ++ 0, + &matcher->end_ft_id); + if (ret) { + mlx5hws_err(tbl->ctx, "Isolated matcher: failed to create end flow table\n"); +@@ -112,7 +113,9 @@ static int hws_matcher_create_end_ft(struct mlx5hws_matcher *matcher) + if (mlx5hws_matcher_is_isolated(matcher)) + ret = hws_matcher_create_end_ft_isolated(matcher); + else +- ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl, ++ ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, ++ tbl, ++ 0, + &matcher->end_ft_id); + + if (ret) { +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h +index d8ac6c196211c9..a2fe2f9e832d26 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h +@@ -75,6 +75,7 @@ struct mlx5hws_context_attr { + struct mlx5hws_table_attr { + enum mlx5hws_table_type type; + u32 level; ++ u16 uid; + }; + + enum mlx5hws_matcher_flow_src { +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c +index c4b22be19a9b10..b0595c9b09e421 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/send.c +@@ -964,7 +964,6 @@ static int hws_send_ring_open_cq(struct mlx5_core_dev *mdev, + return -ENOMEM; + + MLX5_SET(cqc, cqc_data, uar_page, mdev->priv.uar->index); +- MLX5_SET(cqc, cqc_data, cqe_sz, queue->num_entries); + MLX5_SET(cqc, cqc_data, log_cq_size, ilog2(queue->num_entries)); + + err = hws_send_ring_alloc_cq(mdev, numa_node, queue, cqc_data, cq); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c +index 568f691733f349..6113383ae47bbc 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.c +@@ -9,6 +9,7 @@ u32 mlx5hws_table_get_id(struct mlx5hws_table *tbl) + } 
+ + static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl, ++ u16 uid, + struct mlx5hws_cmd_ft_create_attr *ft_attr) + { + ft_attr->type = tbl->fw_ft_type; +@@ -16,7 +17,9 @@ static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl, + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; ++ + ft_attr->rtc_valid = true; ++ ft_attr->uid = uid; + } + + static void hws_table_set_cap_attr(struct mlx5hws_table *tbl, +@@ -119,12 +122,12 @@ static int hws_table_connect_to_default_miss_tbl(struct mlx5hws_table *tbl, u32 + + int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev, + struct mlx5hws_table *tbl, +- u32 *ft_id) ++ u16 uid, u32 *ft_id) + { + struct mlx5hws_cmd_ft_create_attr ft_attr = {0}; + int ret; + +- hws_table_init_next_ft_attr(tbl, &ft_attr); ++ hws_table_init_next_ft_attr(tbl, uid, &ft_attr); + hws_table_set_cap_attr(tbl, &ft_attr); + + ret = mlx5hws_cmd_flow_table_create(mdev, &ft_attr, ft_id); +@@ -189,7 +192,10 @@ static int hws_table_init(struct mlx5hws_table *tbl) + } + + mutex_lock(&ctx->ctrl_lock); +- ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl, &tbl->ft_id); ++ ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, ++ tbl, ++ tbl->uid, ++ &tbl->ft_id); + if (ret) { + mlx5hws_err(tbl->ctx, "Failed to create flow table object\n"); + mutex_unlock(&ctx->ctrl_lock); +@@ -239,6 +245,7 @@ struct mlx5hws_table *mlx5hws_table_create(struct mlx5hws_context *ctx, + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; ++ tbl->uid = attr->uid; + + ret = hws_table_init(tbl); + if (ret) { +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h +index 0400cce0c317f7..1246f9bd84222f 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/table.h +@@ -18,6 +18,7 @@ struct mlx5hws_table { + enum mlx5hws_table_type type; + u32 fw_ft_type; + u32 level; ++ u16 uid; + struct list_head matchers_list; + struct list_head tbl_list_node; + struct mlx5hws_default_miss default_miss; +@@ -47,7 +48,7 @@ u32 mlx5hws_table_get_res_fw_ft_type(enum mlx5hws_table_type tbl_type, + + int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev, + struct mlx5hws_table *tbl, +- u32 *ft_id); ++ u16 uid, u32 *ft_id); + + void mlx5hws_table_destroy_default_ft(struct mlx5hws_table *tbl, + u32 ft_id); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +index 618957d6566364..9a2d64a0a8588a 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +@@ -2375,6 +2375,8 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = { + ROUTER_EXP, false), + MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_DIP_LINK_LOCAL, FORWARD, + ROUTER_EXP, false), ++ MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_SIP_LINK_LOCAL, FORWARD, ++ ROUTER_EXP, false), + /* Multicast Router Traps */ + MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false), + MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false), +diff --git a/drivers/net/ethernet/mellanox/mlxsw/trap.h b/drivers/net/ethernet/mellanox/mlxsw/trap.h +index 80ee5c4825dc96..9962dc1579019b 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/trap.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/trap.h +@@ -94,6 +94,7 @@ enum { + MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_SIP_BC = 0x16A, + 
MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_DIP_LOCAL_NET = 0x16B, + MLXSW_TRAP_ID_DISCARD_ING_ROUTER_DIP_LINK_LOCAL = 0x16C, ++ MLXSW_TRAP_ID_DISCARD_ING_ROUTER_SIP_LINK_LOCAL = 0x16D, + MLXSW_TRAP_ID_DISCARD_ROUTER_IRIF_EN = 0x178, + MLXSW_TRAP_ID_DISCARD_ROUTER_ERIF_EN = 0x179, + MLXSW_TRAP_ID_DISCARD_ROUTER_LPM4 = 0x17B, +diff --git a/drivers/net/ethernet/microchip/lan865x/lan865x.c b/drivers/net/ethernet/microchip/lan865x/lan865x.c +index dd436bdff0f86d..84c41f19356126 100644 +--- a/drivers/net/ethernet/microchip/lan865x/lan865x.c ++++ b/drivers/net/ethernet/microchip/lan865x/lan865x.c +@@ -32,6 +32,10 @@ + /* MAC Specific Addr 1 Top Reg */ + #define LAN865X_REG_MAC_H_SADDR1 0x00010023 + ++/* MAC TSU Timer Increment Register */ ++#define LAN865X_REG_MAC_TSU_TIMER_INCR 0x00010077 ++#define MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS 0x0028 ++ + struct lan865x_priv { + struct work_struct multicast_work; + struct net_device *netdev; +@@ -311,6 +315,8 @@ static int lan865x_net_open(struct net_device *netdev) + + phy_start(netdev->phydev); + ++ netif_start_queue(netdev); ++ + return 0; + } + +@@ -344,6 +350,21 @@ static int lan865x_probe(struct spi_device *spi) + goto free_netdev; + } + ++ /* LAN865x Rev.B0/B1 configuration parameters from AN1760 ++ * As per the Configuration Application Note AN1760 published in the ++ * link, https://www.microchip.com/en-us/application-notes/an1760 ++ * Revision F (DS60001760G - June 2024), configure the MAC to set time ++ * stamping at the end of the Start of Frame Delimiter (SFD) and set the ++ * Timer Increment reg to 40 ns to be used as a 25 MHz internal clock. ++ */ ++ ret = oa_tc6_write_register(priv->tc6, LAN865X_REG_MAC_TSU_TIMER_INCR, ++ MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS); ++ if (ret) { ++ dev_err(&spi->dev, "Failed to config TSU Timer Incr reg: %d\n", ++ ret); ++ goto oa_tc6_exit; ++ } ++ + /* As per the point s3 in the below errata, SPI receive Ethernet frame + * transfer may halt when starting the next frame in the same data block + * (chunk) as the end of a previous frame. 
The RFA field should be +diff --git a/drivers/net/ethernet/realtek/rtase/rtase.h b/drivers/net/ethernet/realtek/rtase/rtase.h +index 498cfe4d0cac3a..5f2e1ab6a10080 100644 +--- a/drivers/net/ethernet/realtek/rtase/rtase.h ++++ b/drivers/net/ethernet/realtek/rtase/rtase.h +@@ -241,7 +241,7 @@ union rtase_rx_desc { + #define RTASE_RX_RES BIT(20) + #define RTASE_RX_RUNT BIT(19) + #define RTASE_RX_RWT BIT(18) +-#define RTASE_RX_CRC BIT(16) ++#define RTASE_RX_CRC BIT(17) + #define RTASE_RX_V6F BIT(31) + #define RTASE_RX_V4F BIT(30) + #define RTASE_RX_UDPT BIT(29) +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c +index f2946bea0bc268..6c6c49e4b66fae 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-thead.c +@@ -152,7 +152,7 @@ static int thead_set_clk_tx_rate(void *bsp_priv, struct clk *clk_tx_i, + static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat) + { + struct thead_dwmac *dwmac = plat->bsp_priv; +- u32 reg; ++ u32 reg, div; + + switch (plat->mac_interface) { + case PHY_INTERFACE_MODE_MII: +@@ -164,6 +164,13 @@ static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat) + case PHY_INTERFACE_MODE_RGMII_RXID: + case PHY_INTERFACE_MODE_RGMII_TXID: + /* use pll */ ++ div = clk_get_rate(plat->stmmac_clk) / rgmii_clock(SPEED_1000); ++ reg = FIELD_PREP(GMAC_PLLCLK_DIV_EN, 1) | ++ FIELD_PREP(GMAC_PLLCLK_DIV_NUM, div); ++ ++ writel(0, dwmac->apb_base + GMAC_PLLCLK_DIV); ++ writel(reg, dwmac->apb_base + GMAC_PLLCLK_DIV); ++ + writel(GMAC_GTXCLK_SEL_PLL, dwmac->apb_base + GMAC_GTXCLK_SEL); + reg = GMAC_TX_CLK_EN | GMAC_TX_CLK_N_EN | GMAC_TX_CLK_OUT_EN | + GMAC_RX_CLK_EN | GMAC_RX_CLK_N_EN; +diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c +index 008d7772740078..f436d7cf565a14 100644 +--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c ++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c +@@ -240,6 +240,44 @@ static void prueth_emac_stop(struct prueth *prueth) + } + } + ++static void icssg_enable_fw_offload(struct prueth *prueth) ++{ ++ struct prueth_emac *emac; ++ int mac; ++ ++ for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) { ++ emac = prueth->emac[mac]; ++ if (prueth->is_hsr_offload_mode) { ++ if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM) ++ icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE); ++ else ++ icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE); ++ } ++ ++ if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) { ++ if (netif_running(emac->ndev)) { ++ icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan, ++ ICSSG_FDB_ENTRY_P0_MEMBERSHIP | ++ ICSSG_FDB_ENTRY_P1_MEMBERSHIP | ++ ICSSG_FDB_ENTRY_P2_MEMBERSHIP | ++ ICSSG_FDB_ENTRY_BLOCK, ++ true); ++ icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID, ++ BIT(emac->port_id) | DEFAULT_PORT_MASK, ++ BIT(emac->port_id) | DEFAULT_UNTAG_MASK, ++ true); ++ if (prueth->is_hsr_offload_mode) ++ icssg_vtbl_modify(emac, DEFAULT_VID, ++ DEFAULT_PORT_MASK, ++ DEFAULT_UNTAG_MASK, true); ++ icssg_set_pvid(prueth, emac->port_vlan, emac->port_id); ++ if (prueth->is_switch_mode) ++ icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE); ++ } ++ } ++ } ++} ++ + static int prueth_emac_common_start(struct prueth *prueth) + { + struct prueth_emac *emac; +@@ -790,6 +828,7 @@ static int emac_ndo_open(struct net_device *ndev) + ret = prueth_emac_common_start(prueth); + if (ret) + goto free_rx_irq; ++ 
icssg_enable_fw_offload(prueth); + } + + flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET; +@@ -1397,8 +1436,7 @@ static int prueth_emac_restart(struct prueth *prueth) + + static void icssg_change_mode(struct prueth *prueth) + { +- struct prueth_emac *emac; +- int mac, ret; ++ int ret; + + ret = prueth_emac_restart(prueth); + if (ret) { +@@ -1406,35 +1444,7 @@ static void icssg_change_mode(struct prueth *prueth) + return; + } + +- for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) { +- emac = prueth->emac[mac]; +- if (prueth->is_hsr_offload_mode) { +- if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM) +- icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE); +- else +- icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE); +- } +- +- if (netif_running(emac->ndev)) { +- icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan, +- ICSSG_FDB_ENTRY_P0_MEMBERSHIP | +- ICSSG_FDB_ENTRY_P1_MEMBERSHIP | +- ICSSG_FDB_ENTRY_P2_MEMBERSHIP | +- ICSSG_FDB_ENTRY_BLOCK, +- true); +- icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID, +- BIT(emac->port_id) | DEFAULT_PORT_MASK, +- BIT(emac->port_id) | DEFAULT_UNTAG_MASK, +- true); +- if (prueth->is_hsr_offload_mode) +- icssg_vtbl_modify(emac, DEFAULT_VID, +- DEFAULT_PORT_MASK, +- DEFAULT_UNTAG_MASK, true); +- icssg_set_pvid(prueth, emac->port_vlan, emac->port_id); +- if (prueth->is_switch_mode) +- icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE); +- } +- } ++ icssg_enable_fw_offload(prueth); + } + + static int prueth_netdevice_port_link(struct net_device *ndev, +diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +index 6011d7eae0c78a..0d8a05fe541afb 100644 +--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c ++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +@@ -1160,6 +1160,7 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result) + struct axienet_local *lp = data; + struct sk_buff *skb; + u32 *app_metadata; ++ int i; + + skbuf_dma = axienet_get_rx_desc(lp, lp->rx_ring_tail++); + skb = skbuf_dma->skb; +@@ -1178,7 +1179,10 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result) + u64_stats_add(&lp->rx_packets, 1); + u64_stats_add(&lp->rx_bytes, rx_len); + u64_stats_update_end(&lp->rx_stat_sync); +- axienet_rx_submit_desc(lp->ndev); ++ ++ for (i = 0; i < CIRC_SPACE(lp->rx_ring_head, lp->rx_ring_tail, ++ RX_BUF_NUM_DEFAULT); i++) ++ axienet_rx_submit_desc(lp->ndev); + dma_async_issue_pending(lp->rx_chan); + } + +@@ -1457,7 +1461,6 @@ static void axienet_rx_submit_desc(struct net_device *ndev) + if (!skbuf_dma) + return; + +- lp->rx_ring_head++; + skb = netdev_alloc_skb(ndev, lp->max_frm_size); + if (!skb) + return; +@@ -1482,6 +1485,7 @@ static void axienet_rx_submit_desc(struct net_device *ndev) + skbuf_dma->desc = dma_rx_desc; + dma_rx_desc->callback_param = lp; + dma_rx_desc->callback_result = axienet_dma_rx_cb; ++ lp->rx_ring_head++; + dmaengine_submit(dma_rx_desc); + + return; +diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h +index 6a3d8a754eb8de..58c6d47fbe046d 100644 +--- a/drivers/net/phy/mscc/mscc.h ++++ b/drivers/net/phy/mscc/mscc.h +@@ -362,6 +362,13 @@ struct vsc85xx_hw_stat { + u16 mask; + }; + ++struct vsc8531_skb_cb { ++ u32 ns; ++}; ++ ++#define VSC8531_SKB_CB(skb) \ ++ ((struct vsc8531_skb_cb *)((skb)->cb)) ++ + struct vsc8531_private { + int rate_magic; + u16 supp_led_modes; +@@ -410,6 +417,11 @@ struct vsc8531_private { + 
*/ + struct mutex ts_lock; + struct mutex phc_lock; ++ ++ /* list of skbs that were received and need timestamp information but it ++ * didn't received it yet ++ */ ++ struct sk_buff_head rx_skbs_list; + }; + + /* Shared structure between the PHYs of the same package. +diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c +index 7ff975efd8e7af..c3209cf00e9607 100644 +--- a/drivers/net/phy/mscc/mscc_main.c ++++ b/drivers/net/phy/mscc/mscc_main.c +@@ -2336,6 +2336,13 @@ static int vsc85xx_probe(struct phy_device *phydev) + return vsc85xx_dt_led_modes_get(phydev, default_mode); + } + ++static void vsc85xx_remove(struct phy_device *phydev) ++{ ++ struct vsc8531_private *priv = phydev->priv; ++ ++ skb_queue_purge(&priv->rx_skbs_list); ++} ++ + /* Microsemi VSC85xx PHYs */ + static struct phy_driver vsc85xx_driver[] = { + { +@@ -2590,6 +2597,7 @@ static struct phy_driver vsc85xx_driver[] = { + .config_intr = &vsc85xx_config_intr, + .suspend = &genphy_suspend, + .resume = &genphy_resume, ++ .remove = &vsc85xx_remove, + .probe = &vsc8574_probe, + .set_wol = &vsc85xx_wol_set, + .get_wol = &vsc85xx_wol_get, +@@ -2615,6 +2623,7 @@ static struct phy_driver vsc85xx_driver[] = { + .config_intr = &vsc85xx_config_intr, + .suspend = &genphy_suspend, + .resume = &genphy_resume, ++ .remove = &vsc85xx_remove, + .probe = &vsc8574_probe, + .set_wol = &vsc85xx_wol_set, + .get_wol = &vsc85xx_wol_get, +@@ -2640,6 +2649,7 @@ static struct phy_driver vsc85xx_driver[] = { + .config_intr = &vsc85xx_config_intr, + .suspend = &genphy_suspend, + .resume = &genphy_resume, ++ .remove = &vsc85xx_remove, + .probe = &vsc8584_probe, + .get_tunable = &vsc85xx_get_tunable, + .set_tunable = &vsc85xx_set_tunable, +@@ -2663,6 +2673,7 @@ static struct phy_driver vsc85xx_driver[] = { + .config_intr = &vsc85xx_config_intr, + .suspend = &genphy_suspend, + .resume = &genphy_resume, ++ .remove = &vsc85xx_remove, + .probe = &vsc8584_probe, + .get_tunable = &vsc85xx_get_tunable, + .set_tunable = &vsc85xx_set_tunable, +@@ -2686,6 +2697,7 @@ static struct phy_driver vsc85xx_driver[] = { + .config_intr = &vsc85xx_config_intr, + .suspend = &genphy_suspend, + .resume = &genphy_resume, ++ .remove = &vsc85xx_remove, + .probe = &vsc8584_probe, + .get_tunable = &vsc85xx_get_tunable, + .set_tunable = &vsc85xx_set_tunable, +diff --git a/drivers/net/phy/mscc/mscc_ptp.c b/drivers/net/phy/mscc/mscc_ptp.c +index 275706de5847cd..de6c7312e8f290 100644 +--- a/drivers/net/phy/mscc/mscc_ptp.c ++++ b/drivers/net/phy/mscc/mscc_ptp.c +@@ -1194,9 +1194,7 @@ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts, + { + struct vsc8531_private *vsc8531 = + container_of(mii_ts, struct vsc8531_private, mii_ts); +- struct skb_shared_hwtstamps *shhwtstamps = NULL; + struct vsc85xx_ptphdr *ptphdr; +- struct timespec64 ts; + unsigned long ns; + + if (!vsc8531->ptp->configured) +@@ -1206,27 +1204,52 @@ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts, + type == PTP_CLASS_NONE) + return false; + +- vsc85xx_gettime(&vsc8531->ptp->caps, &ts); +- + ptphdr = get_ptp_header_rx(skb, vsc8531->ptp->rx_filter); + if (!ptphdr) + return false; + +- shhwtstamps = skb_hwtstamps(skb); +- memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps)); +- + ns = ntohl(ptphdr->rsrvd2); + +- /* nsec is in reserved field */ +- if (ts.tv_nsec < ns) +- ts.tv_sec--; ++ VSC8531_SKB_CB(skb)->ns = ns; ++ skb_queue_tail(&vsc8531->rx_skbs_list, skb); + +- shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ns); +- netif_rx(skb); ++ 
ptp_schedule_worker(vsc8531->ptp->ptp_clock, 0); + + return true; + } + ++static long vsc85xx_do_aux_work(struct ptp_clock_info *info) ++{ ++ struct vsc85xx_ptp *ptp = container_of(info, struct vsc85xx_ptp, caps); ++ struct skb_shared_hwtstamps *shhwtstamps = NULL; ++ struct phy_device *phydev = ptp->phydev; ++ struct vsc8531_private *priv = phydev->priv; ++ struct sk_buff_head received; ++ struct sk_buff *rx_skb; ++ struct timespec64 ts; ++ unsigned long flags; ++ ++ __skb_queue_head_init(&received); ++ spin_lock_irqsave(&priv->rx_skbs_list.lock, flags); ++ skb_queue_splice_tail_init(&priv->rx_skbs_list, &received); ++ spin_unlock_irqrestore(&priv->rx_skbs_list.lock, flags); ++ ++ vsc85xx_gettime(info, &ts); ++ while ((rx_skb = __skb_dequeue(&received)) != NULL) { ++ shhwtstamps = skb_hwtstamps(rx_skb); ++ memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps)); ++ ++ if (ts.tv_nsec < VSC8531_SKB_CB(rx_skb)->ns) ++ ts.tv_sec--; ++ ++ shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ++ VSC8531_SKB_CB(rx_skb)->ns); ++ netif_rx(rx_skb); ++ } ++ ++ return -1; ++} ++ + static const struct ptp_clock_info vsc85xx_clk_caps = { + .owner = THIS_MODULE, + .name = "VSC85xx timer", +@@ -1240,6 +1263,7 @@ static const struct ptp_clock_info vsc85xx_clk_caps = { + .adjfine = &vsc85xx_adjfine, + .gettime64 = &vsc85xx_gettime, + .settime64 = &vsc85xx_settime, ++ .do_aux_work = &vsc85xx_do_aux_work, + }; + + static struct vsc8531_private *vsc8584_base_priv(struct phy_device *phydev) +@@ -1567,6 +1591,7 @@ int vsc8584_ptp_probe(struct phy_device *phydev) + + mutex_init(&vsc8531->phc_lock); + mutex_init(&vsc8531->ts_lock); ++ skb_queue_head_init(&vsc8531->rx_skbs_list); + + /* Retrieve the shared load/save GPIO. Request it as non exclusive as + * the same GPIO can be requested by all the PHYs of the same package. 
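The mscc PTP hunks above restructure RX timestamping: vsc85xx_rxtstamp() no longer reads the PHC inline, it only stashes the 32-bit nanoseconds field from the PTP header in skb->cb and queues the skb, and vsc85xx_do_aux_work() later pairs that value with the PHC seconds. A minimal standalone sketch of that pairing step follows; it is an illustration, not part of the patch, and the ts64 and reconstruct_rx_stamp names are invented for the example.

#include <stdint.h>

struct ts64 {
	int64_t tv_sec;
	int64_t tv_nsec;
};

/*
 * Rebuild a full RX timestamp from the current PHC time "now" and the
 * nanoseconds value "ns" the PHY captured when the frame arrived. This
 * mirrors the correction in vsc85xx_do_aux_work(): if the PHC nanoseconds
 * have already wrapped below the captured value, the seconds counter
 * incremented after the frame was stamped, so the frame belongs to the
 * previous second.
 */
static struct ts64 reconstruct_rx_stamp(struct ts64 now, uint32_t ns)
{
	struct ts64 stamp = { .tv_sec = now.tv_sec, .tv_nsec = ns };

	if (now.tv_nsec < (int64_t)ns)
		stamp.tv_sec--;

	return stamp;
}

The deferral appears to be the point of the change: vsc85xx_gettime() takes mutexes around MDIO access, so it cannot run safely from the RX path, while the PTP aux worker scheduled via ptp_schedule_worker() provides a sleepable context for draining rx_skbs_list.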
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c +index def84e87e05b2e..5e7672d2022c92 100644 +--- a/drivers/net/ppp/ppp_generic.c ++++ b/drivers/net/ppp/ppp_generic.c +@@ -33,6 +33,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -1612,11 +1613,14 @@ static int ppp_fill_forward_path(struct net_device_path_ctx *ctx, + if (ppp->flags & SC_MULTILINK) + return -EOPNOTSUPP; + +- if (list_empty(&ppp->channels)) ++ pch = list_first_or_null_rcu(&ppp->channels, struct channel, clist); ++ if (!pch) ++ return -ENODEV; ++ ++ chan = READ_ONCE(pch->chan); ++ if (!chan) + return -ENODEV; + +- pch = list_first_entry(&ppp->channels, struct channel, clist); +- chan = pch->chan; + if (!chan->ops->fill_forward_path) + return -EOPNOTSUPP; + +@@ -2999,7 +3003,7 @@ ppp_unregister_channel(struct ppp_channel *chan) + */ + down_write(&pch->chan_sem); + spin_lock_bh(&pch->downl); +- pch->chan = NULL; ++ WRITE_ONCE(pch->chan, NULL); + spin_unlock_bh(&pch->downl); + up_write(&pch->chan_sem); + ppp_disconnect_channel(pch); +@@ -3509,7 +3513,7 @@ ppp_connect_channel(struct channel *pch, int unit) + hdrlen = pch->file.hdrlen + 2; /* for protocol bytes */ + if (hdrlen > ppp->dev->hard_header_len) + ppp->dev->hard_header_len = hdrlen; +- list_add_tail(&pch->clist, &ppp->channels); ++ list_add_tail_rcu(&pch->clist, &ppp->channels); + ++ppp->n_channels; + pch->ppp = ppp; + refcount_inc(&ppp->file.refcnt); +@@ -3539,10 +3543,11 @@ ppp_disconnect_channel(struct channel *pch) + if (ppp) { + /* remove it from the ppp unit's list */ + ppp_lock(ppp); +- list_del(&pch->clist); ++ list_del_rcu(&pch->clist); + if (--ppp->n_channels == 0) + wake_up_interruptible(&ppp->file.rwait); + ppp_unlock(ppp); ++ synchronize_net(); + if (refcount_dec_and_test(&ppp->file.refcnt)) + ppp_destroy_interface(ppp); + err = 0; +diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c +index d9f5942ccc447b..792ddda1ad493d 100644 +--- a/drivers/net/usb/asix_devices.c ++++ b/drivers/net/usb/asix_devices.c +@@ -676,7 +676,7 @@ static int ax88772_init_mdio(struct usbnet *dev) + priv->mdio->read = &asix_mdio_bus_read; + priv->mdio->write = &asix_mdio_bus_write; + priv->mdio->name = "Asix MDIO Bus"; +- priv->mdio->phy_mask = ~(BIT(priv->phy_addr) | BIT(AX_EMBD_PHY_ADDR)); ++ priv->mdio->phy_mask = ~(BIT(priv->phy_addr & 0x1f) | BIT(AX_EMBD_PHY_ADDR)); + /* mii bus name is usb-- */ + snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d", + dev->udev->bus->busnum, dev->udev->devnum); +diff --git a/drivers/net/wireless/ath/ath11k/ce.c b/drivers/net/wireless/ath/ath11k/ce.c +index 746038006eb465..6a4895310159dd 100644 +--- a/drivers/net/wireless/ath/ath11k/ce.c ++++ b/drivers/net/wireless/ath/ath11k/ce.c +@@ -393,9 +393,6 @@ static int ath11k_ce_completed_recv_next(struct ath11k_ce_pipe *pipe, + goto err; + } + +- /* Make sure descriptor is read after the head pointer. */ +- dma_rmb(); +- + *nbytes = ath11k_hal_ce_dst_status_get_length(desc); + + *skb = pipe->dest_ring->skb[sw_index]; +diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c +index 9230a965f6f0eb..065fc40e254166 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_rx.c ++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c +@@ -2650,9 +2650,6 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id, + try_again: + ath11k_hal_srng_access_begin(ab, srng); + +- /* Make sure descriptor is read after the head pointer. 
*/ +- dma_rmb(); +- + while (likely(desc = + (struct hal_reo_dest_ring *)ath11k_hal_srng_dst_get_next_entry(ab, + srng))) { +diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c +index cab11a35f9115d..d8b066946a090f 100644 +--- a/drivers/net/wireless/ath/ath11k/hal.c ++++ b/drivers/net/wireless/ath/ath11k/hal.c +@@ -823,13 +823,23 @@ u32 *ath11k_hal_srng_src_peek(struct ath11k_base *ab, struct hal_srng *srng) + + void ath11k_hal_srng_access_begin(struct ath11k_base *ab, struct hal_srng *srng) + { ++ u32 hp; ++ + lockdep_assert_held(&srng->lock); + + if (srng->ring_dir == HAL_SRNG_DIR_SRC) { + srng->u.src_ring.cached_tp = + *(volatile u32 *)srng->u.src_ring.tp_addr; + } else { +- srng->u.dst_ring.cached_hp = READ_ONCE(*srng->u.dst_ring.hp_addr); ++ hp = READ_ONCE(*srng->u.dst_ring.hp_addr); ++ ++ if (hp != srng->u.dst_ring.cached_hp) { ++ srng->u.dst_ring.cached_hp = hp; ++ /* Make sure descriptor is read after the head ++ * pointer. ++ */ ++ dma_rmb(); ++ } + + /* Try to prefetch the next descriptor in the ring */ + if (srng->flags & HAL_SRNG_FLAGS_CACHED) +@@ -844,7 +854,6 @@ void ath11k_hal_srng_access_end(struct ath11k_base *ab, struct hal_srng *srng) + { + lockdep_assert_held(&srng->lock); + +- /* TODO: See if we need a write memory barrier here */ + if (srng->flags & HAL_SRNG_FLAGS_LMAC_RING) { + /* For LMAC rings, ring pointer updates are done through FW and + * hence written to a shared memory location that is read by FW +@@ -852,21 +861,37 @@ void ath11k_hal_srng_access_end(struct ath11k_base *ab, struct hal_srng *srng) + if (srng->ring_dir == HAL_SRNG_DIR_SRC) { + srng->u.src_ring.last_tp = + *(volatile u32 *)srng->u.src_ring.tp_addr; +- *srng->u.src_ring.hp_addr = srng->u.src_ring.hp; ++ /* Make sure descriptor is written before updating the ++ * head pointer. ++ */ ++ dma_wmb(); ++ WRITE_ONCE(*srng->u.src_ring.hp_addr, srng->u.src_ring.hp); + } else { + srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr; +- *srng->u.dst_ring.tp_addr = srng->u.dst_ring.tp; ++ /* Make sure descriptor is read before updating the ++ * tail pointer. ++ */ ++ dma_mb(); ++ WRITE_ONCE(*srng->u.dst_ring.tp_addr, srng->u.dst_ring.tp); + } + } else { + if (srng->ring_dir == HAL_SRNG_DIR_SRC) { + srng->u.src_ring.last_tp = + *(volatile u32 *)srng->u.src_ring.tp_addr; ++ /* Assume implementation use an MMIO write accessor ++ * which has the required wmb() so that the descriptor ++ * is written before the updating the head pointer. ++ */ + ath11k_hif_write32(ab, + (unsigned long)srng->u.src_ring.hp_addr - + (unsigned long)ab->mem, + srng->u.src_ring.hp); + } else { + srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr; ++ /* Make sure descriptor is read before updating the ++ * tail pointer. ++ */ ++ mb(); + ath11k_hif_write32(ab, + (unsigned long)srng->u.dst_ring.tp_addr - + (unsigned long)ab->mem, +diff --git a/drivers/net/wireless/ath/ath12k/ce.c b/drivers/net/wireless/ath/ath12k/ce.c +index 3f3439262cf47e..f7c15b547504d5 100644 +--- a/drivers/net/wireless/ath/ath12k/ce.c ++++ b/drivers/net/wireless/ath/ath12k/ce.c +@@ -433,9 +433,6 @@ static int ath12k_ce_completed_recv_next(struct ath12k_ce_pipe *pipe, + goto err; + } + +- /* Make sure descriptor is read after the head pointer. 
*/ +- dma_rmb(); +- + *nbytes = ath12k_hal_ce_dst_status_get_length(desc); + + *skb = pipe->dest_ring->skb[sw_index]; +diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c +index a301898e5849ad..4ef8b4e99c25f7 100644 +--- a/drivers/net/wireless/ath/ath12k/hal.c ++++ b/drivers/net/wireless/ath/ath12k/hal.c +@@ -2143,13 +2143,24 @@ void *ath12k_hal_srng_src_get_next_reaped(struct ath12k_base *ab, + + void ath12k_hal_srng_access_begin(struct ath12k_base *ab, struct hal_srng *srng) + { ++ u32 hp; ++ + lockdep_assert_held(&srng->lock); + +- if (srng->ring_dir == HAL_SRNG_DIR_SRC) ++ if (srng->ring_dir == HAL_SRNG_DIR_SRC) { + srng->u.src_ring.cached_tp = + *(volatile u32 *)srng->u.src_ring.tp_addr; +- else +- srng->u.dst_ring.cached_hp = READ_ONCE(*srng->u.dst_ring.hp_addr); ++ } else { ++ hp = READ_ONCE(*srng->u.dst_ring.hp_addr); ++ ++ if (hp != srng->u.dst_ring.cached_hp) { ++ srng->u.dst_ring.cached_hp = hp; ++ /* Make sure descriptor is read after the head ++ * pointer. ++ */ ++ dma_rmb(); ++ } ++ } + } + + /* Update cached ring head/tail pointers to HW. ath12k_hal_srng_access_begin() +@@ -2159,7 +2170,6 @@ void ath12k_hal_srng_access_end(struct ath12k_base *ab, struct hal_srng *srng) + { + lockdep_assert_held(&srng->lock); + +- /* TODO: See if we need a write memory barrier here */ + if (srng->flags & HAL_SRNG_FLAGS_LMAC_RING) { + /* For LMAC rings, ring pointer updates are done through FW and + * hence written to a shared memory location that is read by FW +@@ -2167,21 +2177,37 @@ void ath12k_hal_srng_access_end(struct ath12k_base *ab, struct hal_srng *srng) + if (srng->ring_dir == HAL_SRNG_DIR_SRC) { + srng->u.src_ring.last_tp = + *(volatile u32 *)srng->u.src_ring.tp_addr; +- *srng->u.src_ring.hp_addr = srng->u.src_ring.hp; ++ /* Make sure descriptor is written before updating the ++ * head pointer. ++ */ ++ dma_wmb(); ++ WRITE_ONCE(*srng->u.src_ring.hp_addr, srng->u.src_ring.hp); + } else { + srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr; +- *srng->u.dst_ring.tp_addr = srng->u.dst_ring.tp; ++ /* Make sure descriptor is read before updating the ++ * tail pointer. ++ */ ++ dma_mb(); ++ WRITE_ONCE(*srng->u.dst_ring.tp_addr, srng->u.dst_ring.tp); + } + } else { + if (srng->ring_dir == HAL_SRNG_DIR_SRC) { + srng->u.src_ring.last_tp = + *(volatile u32 *)srng->u.src_ring.tp_addr; ++ /* Assume implementation use an MMIO write accessor ++ * which has the required wmb() so that the descriptor ++ * is written before the updating the head pointer. ++ */ + ath12k_hif_write32(ab, + (unsigned long)srng->u.src_ring.hp_addr - + (unsigned long)ab->mem, + srng->u.src_ring.hp); + } else { + srng->u.dst_ring.last_hp = *srng->u.dst_ring.hp_addr; ++ /* Make sure descriptor is read before updating the ++ * tail pointer. 
++ */ ++ mb(); + ath12k_hif_write32(ab, + (unsigned long)srng->u.dst_ring.tp_addr - + (unsigned long)ab->mem, +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c +index d0faba24056105..b4bba67a45ec36 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c +@@ -919,7 +919,7 @@ void wlc_lcnphy_read_table(struct brcms_phy *pi, struct phytbl_info *pti) + + static void + wlc_lcnphy_common_read_table(struct brcms_phy *pi, u32 tbl_id, +- const u16 *tbl_ptr, u32 tbl_len, ++ u16 *tbl_ptr, u32 tbl_len, + u32 tbl_width, u32 tbl_offset) + { + struct phytbl_info tab; +diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c +index 5a38cfaf989b1c..fda03512944d52 100644 +--- a/drivers/pci/controller/dwc/pci-imx6.c ++++ b/drivers/pci/controller/dwc/pci-imx6.c +@@ -860,7 +860,6 @@ static int imx95_pcie_core_reset(struct imx_pcie *imx_pcie, bool assert) + static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie) + { + reset_control_assert(imx_pcie->pciephy_reset); +- reset_control_assert(imx_pcie->apps_reset); + + if (imx_pcie->drvdata->core_reset) + imx_pcie->drvdata->core_reset(imx_pcie, true); +@@ -872,7 +871,6 @@ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie) + static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie) + { + reset_control_deassert(imx_pcie->pciephy_reset); +- reset_control_deassert(imx_pcie->apps_reset); + + if (imx_pcie->drvdata->core_reset) + imx_pcie->drvdata->core_reset(imx_pcie, false); +@@ -1247,6 +1245,9 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp) + } + } + ++ /* Make sure that PCIe LTSSM is cleared */ ++ imx_pcie_ltssm_disable(dev); ++ + ret = imx_pcie_deassert_core_reset(imx_pcie); + if (ret < 0) { + dev_err(dev, "pcie deassert core reset failed: %d\n", ret); +@@ -1385,6 +1386,8 @@ static const struct pci_epc_features imx8m_pcie_epc_features = { + .msix_capable = false, + .bar[BAR_1] = { .type = BAR_RESERVED, }, + .bar[BAR_3] = { .type = BAR_RESERVED, }, ++ .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, }, ++ .bar[BAR_5] = { .type = BAR_RESERVED, }, + .align = SZ_64K, + }; + +@@ -1465,9 +1468,6 @@ static int imx_add_pcie_ep(struct imx_pcie *imx_pcie, + + pci_epc_init_notify(ep->epc); + +- /* Start LTSSM. */ +- imx_pcie_ltssm_enable(dev); +- + return 0; + } + +@@ -1912,7 +1912,7 @@ static const struct imx_pcie_drvdata drvdata[] = { + .mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE, + .mode_off[1] = IOMUXC_GPR12, + .mode_mask[1] = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE, +- .epc_features = &imx8m_pcie_epc_features, ++ .epc_features = &imx8q_pcie_epc_features, + .init_phy = imx8mq_pcie_init_phy, + .enable_ref_clk = imx8mm_pcie_enable_ref_clk, + }, +diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c +index 4d794964fa0fd3..053e9c54043958 100644 +--- a/drivers/pci/controller/dwc/pcie-designware.c ++++ b/drivers/pci/controller/dwc/pcie-designware.c +@@ -714,6 +714,14 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci) + return -ETIMEDOUT; + } + ++ /* ++ * As per PCIe r6.0, sec 6.6.1, a Downstream Port that supports Link ++ * speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms ++ * after Link training completes before sending a Configuration Request. 
++ */ ++ if (pci->max_link_speed > 2) ++ msleep(PCIE_RESET_CONFIG_WAIT_MS); ++ + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); + val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA); + +diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c +index 55416b8311dda2..300cd85fa035cd 100644 +--- a/drivers/pci/controller/pcie-rockchip-ep.c ++++ b/drivers/pci/controller/pcie-rockchip-ep.c +@@ -518,9 +518,9 @@ static void rockchip_pcie_ep_retrain_link(struct rockchip_pcie *rockchip) + { + u32 status; + +- status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_LCS); ++ status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE + PCI_EXP_LNKCTL); + status |= PCI_EXP_LNKCTL_RL; +- rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_LCS); ++ rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_BASE + PCI_EXP_LNKCTL); + } + + static bool rockchip_pcie_ep_link_up(struct rockchip_pcie *rockchip) +diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c +index 648b6fcb93b0b7..77f065284fa3b1 100644 +--- a/drivers/pci/controller/pcie-rockchip-host.c ++++ b/drivers/pci/controller/pcie-rockchip-host.c +@@ -11,6 +11,7 @@ + * ARM PCI Host generic driver. + */ + ++#include + #include + #include + #include +@@ -40,18 +41,18 @@ static void rockchip_pcie_enable_bw_int(struct rockchip_pcie *rockchip) + { + u32 status; + +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE); +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + } + + static void rockchip_pcie_clr_bw_int(struct rockchip_pcie *rockchip) + { + u32 status; + +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16; +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + } + + static void rockchip_pcie_update_txcredit_mui(struct rockchip_pcie *rockchip) +@@ -269,7 +270,7 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip) + scale = 3; /* 0.001x */ + curr = curr / 1000; /* convert to mA */ + power = (curr * 3300) / 1000; /* milliwatt */ +- while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) { ++ while (power > FIELD_MAX(PCI_EXP_DEVCAP_PWR_VAL)) { + if (!scale) { + dev_warn(rockchip->dev, "invalid power supply\n"); + return; +@@ -278,10 +279,10 @@ static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip) + power = power / 10; + } + +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR); +- status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) | +- (scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT); +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCAP); ++ status |= FIELD_PREP(PCI_EXP_DEVCAP_PWR_VAL, power); ++ status |= FIELD_PREP(PCI_EXP_DEVCAP_PWR_SCL, scale); ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCAP); + } + + /** +@@ -309,14 +310,14 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip) + rockchip_pcie_set_power_limit(rockchip); + + /* Set RC's clock architecture as common clock */ +- status = rockchip_pcie_read(rockchip, 
PCIE_RC_CONFIG_LCS); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + status |= PCI_EXP_LNKSTA_SLC << 16; +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + + /* Set RC's RCB to 128 */ +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + status |= PCI_EXP_LNKCTL_RCB; +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + + /* Enable Gen1 training */ + rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE, +@@ -341,9 +342,13 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip) + * Enable retrain for gen2. This should be configured only after + * gen1 finished. + */ +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL2); ++ status &= ~PCI_EXP_LNKCTL2_TLS; ++ status |= PCI_EXP_LNKCTL2_TLS_5_0GT; ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL2); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + status |= PCI_EXP_LNKCTL_RL; +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCTL); + + err = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL, + status, PCIE_LINK_IS_GEN2(status), 20, +@@ -380,15 +385,15 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip) + + /* Clear L0s from RC's link cap */ + if (of_property_read_bool(dev->of_node, "aspm-no-l0s")) { +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LINK_CAP); +- status &= ~PCIE_RC_CONFIG_LINK_CAP_L0S; +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCAP); ++ status &= ~PCI_EXP_LNKCAP_ASPM_L0S; ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_LNKCAP); + } + +- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCSR); +- status &= ~PCIE_RC_CONFIG_DCSR_MPS_MASK; +- status |= PCIE_RC_CONFIG_DCSR_MPS_256; +- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR); ++ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCTL); ++ status &= ~PCI_EXP_DEVCTL_PAYLOAD; ++ status |= PCI_EXP_DEVCTL_PAYLOAD_256B; ++ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_CR + PCI_EXP_DEVCTL); + + return 0; + err_power_off_phy: +diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h +index 5864a20323f21a..f5cbf3c9d2d989 100644 +--- a/drivers/pci/controller/pcie-rockchip.h ++++ b/drivers/pci/controller/pcie-rockchip.h +@@ -155,17 +155,7 @@ + #define PCIE_EP_CONFIG_DID_VID (PCIE_EP_CONFIG_BASE + 0x00) + #define PCIE_EP_CONFIG_LCS (PCIE_EP_CONFIG_BASE + 0xd0) + #define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08) +-#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4) +-#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18 +-#define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff +-#define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26 +-#define PCIE_RC_CONFIG_DCSR (PCIE_RC_CONFIG_BASE + 0xc8) +-#define PCIE_RC_CONFIG_DCSR_MPS_MASK GENMASK(7, 5) +-#define PCIE_RC_CONFIG_DCSR_MPS_256 (0x1 << 5) +-#define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc) +-#define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10) +-#define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0) 
+-#define PCIE_EP_CONFIG_LCS (PCIE_EP_CONFIG_BASE + 0xd0) ++#define PCIE_RC_CONFIG_CR (PCIE_RC_CONFIG_BASE + 0xc0) + #define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c) + #define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274) + #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20) +diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c +index d712c7a866d261..ef50c82e647f4d 100644 +--- a/drivers/pci/endpoint/pci-ep-cfs.c ++++ b/drivers/pci/endpoint/pci-ep-cfs.c +@@ -691,6 +691,7 @@ void pci_ep_cfs_remove_epf_group(struct config_group *group) + if (IS_ERR_OR_NULL(group)) + return; + ++ list_del(&group->group_entry); + configfs_unregister_default_group(group); + } + EXPORT_SYMBOL(pci_ep_cfs_remove_epf_group); +diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c +index 577a9e490115c9..167dc6ee63f7af 100644 +--- a/drivers/pci/endpoint/pci-epf-core.c ++++ b/drivers/pci/endpoint/pci-epf-core.c +@@ -338,7 +338,7 @@ static void pci_epf_remove_cfs(struct pci_epf_driver *driver) + mutex_lock(&pci_epf_mutex); + list_for_each_entry_safe(group, tmp, &driver->epf_group, group_entry) + pci_ep_cfs_remove_epf_group(group); +- list_del(&driver->epf_group); ++ WARN_ON(!list_empty(&driver->epf_group)); + mutex_unlock(&pci_epf_mutex); + } + +diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h +index 98d6fccb383e55..6ff1990c482f4e 100644 +--- a/drivers/pci/pci.h ++++ b/drivers/pci/pci.h +@@ -391,12 +391,14 @@ void pci_bus_put(struct pci_bus *bus); + + #define PCIE_LNKCAP_SLS2SPEED(lnkcap) \ + ({ \ +- ((lnkcap) == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT : \ +- (lnkcap) == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT : \ +- (lnkcap) == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT : \ +- (lnkcap) == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT : \ +- (lnkcap) == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT : \ +- (lnkcap) == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT : \ ++ u32 lnkcap_sls = (lnkcap) & PCI_EXP_LNKCAP_SLS; \ ++ \ ++ (lnkcap_sls == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT : \ ++ lnkcap_sls == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT : \ ++ lnkcap_sls == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT : \ ++ lnkcap_sls == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT : \ ++ lnkcap_sls == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT : \ ++ lnkcap_sls == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT : \ + PCI_SPEED_UNKNOWN); \ + }) + +@@ -411,13 +413,17 @@ void pci_bus_put(struct pci_bus *bus); + PCI_SPEED_UNKNOWN) + + #define PCIE_LNKCTL2_TLS2SPEED(lnkctl2) \ +- ((lnkctl2) == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT : \ +- (lnkctl2) == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT : \ +- (lnkctl2) == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT : \ +- (lnkctl2) == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT : \ +- (lnkctl2) == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT : \ +- (lnkctl2) == PCI_EXP_LNKCTL2_TLS_2_5GT ? PCIE_SPEED_2_5GT : \ +- PCI_SPEED_UNKNOWN) ++({ \ ++ u16 lnkctl2_tls = (lnkctl2) & PCI_EXP_LNKCTL2_TLS; \ ++ \ ++ (lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT : \ ++ lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT : \ ++ lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT : \ ++ lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT : \ ++ lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT : \ ++ lnkctl2_tls == PCI_EXP_LNKCTL2_TLS_2_5GT ? 
PCIE_SPEED_2_5GT : \ ++ PCI_SPEED_UNKNOWN); \ ++}) + + /* PCIe speed to Mb/s reduced by encoding overhead */ + #define PCIE_SPEED2MBS_ENC(speed) \ +diff --git a/drivers/pci/pcie/portdrv.c b/drivers/pci/pcie/portdrv.c +index e8318fd5f6ed53..d1b68c18444f80 100644 +--- a/drivers/pci/pcie/portdrv.c ++++ b/drivers/pci/pcie/portdrv.c +@@ -220,7 +220,7 @@ static int get_port_device_capability(struct pci_dev *dev) + struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); + int services = 0; + +- if (dev->is_hotplug_bridge && ++ if (dev->is_pciehp && + (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || + pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM) && + (pcie_ports_native || host->native_pcie_hotplug)) { +diff --git a/drivers/phy/qualcomm/phy-qcom-m31.c b/drivers/phy/qualcomm/phy-qcom-m31.c +index 20d4c020a83c1f..8b0f8a3a059c21 100644 +--- a/drivers/phy/qualcomm/phy-qcom-m31.c ++++ b/drivers/phy/qualcomm/phy-qcom-m31.c +@@ -58,14 +58,16 @@ + #define USB2_0_TX_ENABLE BIT(2) + + #define USB2PHY_USB_PHY_M31_XCFGI_4 0xc8 +- #define HSTX_SLEW_RATE_565PS GENMASK(1, 0) ++ #define HSTX_SLEW_RATE_400PS GENMASK(2, 0) + #define PLL_CHARGING_PUMP_CURRENT_35UA GENMASK(4, 3) + #define ODT_VALUE_38_02_OHM GENMASK(7, 6) + + #define USB2PHY_USB_PHY_M31_XCFGI_5 0xcc +- #define ODT_VALUE_45_02_OHM BIT(2) + #define HSTX_PRE_EMPHASIS_LEVEL_0_55MA BIT(0) + ++#define USB2PHY_USB_PHY_M31_XCFGI_9 0xdc ++ #define HSTX_CURRENT_17_1MA_385MV BIT(1) ++ + #define USB2PHY_USB_PHY_M31_XCFGI_11 0xe4 + #define XCFG_COARSE_TUNE_NUM BIT(1) + #define XCFG_FINE_TUNE_NUM BIT(3) +@@ -164,7 +166,7 @@ static struct m31_phy_regs m31_ipq5332_regs[] = { + }, + { + USB2PHY_USB_PHY_M31_XCFGI_4, +- HSTX_SLEW_RATE_565PS | PLL_CHARGING_PUMP_CURRENT_35UA | ODT_VALUE_38_02_OHM, ++ HSTX_SLEW_RATE_400PS | PLL_CHARGING_PUMP_CURRENT_35UA | ODT_VALUE_38_02_OHM, + 0 + }, + { +@@ -174,9 +176,13 @@ static struct m31_phy_regs m31_ipq5332_regs[] = { + }, + { + USB2PHY_USB_PHY_M31_XCFGI_5, +- ODT_VALUE_45_02_OHM | HSTX_PRE_EMPHASIS_LEVEL_0_55MA, ++ HSTX_PRE_EMPHASIS_LEVEL_0_55MA, + 4 + }, ++ { ++ USB2PHY_USB_PHY_M31_XCFGI_9, ++ HSTX_CURRENT_17_1MA_385MV, ++ }, + { + USB_PHY_UTMI_CTRL5, + 0x0, +diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c +index 110771a8645ed7..fd58781a2fb7da 100644 +--- a/drivers/platform/chrome/cros_ec.c ++++ b/drivers/platform/chrome/cros_ec.c +@@ -318,6 +318,9 @@ EXPORT_SYMBOL(cros_ec_register); + */ + void cros_ec_unregister(struct cros_ec_device *ec_dev) + { ++ if (ec_dev->mkbp_event_supported) ++ blocking_notifier_chain_unregister(&ec_dev->event_notifier, ++ &ec_dev->notifier_ready); + platform_device_unregister(ec_dev->pd); + platform_device_unregister(ec_dev->ec); + mutex_destroy(&ec_dev->lock); +diff --git a/drivers/platform/x86/amd/hsmp/hsmp.c b/drivers/platform/x86/amd/hsmp/hsmp.c +index 885e2f8136fd44..19f82c1d309059 100644 +--- a/drivers/platform/x86/amd/hsmp/hsmp.c ++++ b/drivers/platform/x86/amd/hsmp/hsmp.c +@@ -356,6 +356,11 @@ ssize_t hsmp_metric_tbl_read(struct hsmp_socket *sock, char *buf, size_t size) + if (!sock || !buf) + return -EINVAL; + ++ if (!sock->metric_tbl_addr) { ++ dev_err(sock->dev, "Metrics table address not available\n"); ++ return -ENOMEM; ++ } ++ + /* Do not support lseek(), also don't allow more than the size of metric table */ + if (size != sizeof(struct hsmp_metric_table)) { + dev_err(sock->dev, "Wrong buffer size\n"); +diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c 
b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c +index 44d9948ed2241b..dd7cdae5bb8662 100644 +--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c ++++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c +@@ -191,9 +191,14 @@ static int uncore_read_control_freq(struct uncore_data *data, unsigned int *valu + static int write_eff_lat_ctrl(struct uncore_data *data, unsigned int val, enum uncore_index index) + { + struct tpmi_uncore_cluster_info *cluster_info; ++ struct tpmi_uncore_struct *uncore_root; + u64 control; + + cluster_info = container_of(data, struct tpmi_uncore_cluster_info, uncore_data); ++ uncore_root = cluster_info->uncore_root; ++ ++ if (uncore_root->write_blocked) ++ return -EPERM; + + if (cluster_info->root_domain) + return -ENODATA; +diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c +index 7ee7b65b9b90c5..5b399de16d6040 100644 +--- a/drivers/pwm/pwm-imx-tpm.c ++++ b/drivers/pwm/pwm-imx-tpm.c +@@ -204,6 +204,15 @@ static int pwm_imx_tpm_apply_hw(struct pwm_chip *chip, + val |= FIELD_PREP(PWM_IMX_TPM_SC_PS, p->prescale); + writel(val, tpm->base + PWM_IMX_TPM_SC); + ++ /* ++ * if the counter is disabled (CMOD == 0), programming the new ++ * period length (MOD) will not reset the counter (CNT). If ++ * CNT.COUNT happens to be bigger than the new MOD value then ++ * the counter will end up being reset way too late. Therefore, ++ * manually reset it to 0. ++ */ ++ if (!cmod) ++ writel(0x0, tpm->base + PWM_IMX_TPM_CNT); + /* + * set period count: + * if the PWM is disabled (CMOD[1:0] = 2b00), then MOD register +diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c +index 33d3554b9197ab..bfbfe7f2917b1d 100644 +--- a/drivers/pwm/pwm-mediatek.c ++++ b/drivers/pwm/pwm-mediatek.c +@@ -115,6 +115,26 @@ static inline void pwm_mediatek_writel(struct pwm_mediatek_chip *chip, + writel(value, chip->regs + chip->soc->reg_offset[num] + offset); + } + ++static void pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm) ++{ ++ struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip); ++ u32 value; ++ ++ value = readl(pc->regs); ++ value |= BIT(pwm->hwpwm); ++ writel(value, pc->regs); ++} ++ ++static void pwm_mediatek_disable(struct pwm_chip *chip, struct pwm_device *pwm) ++{ ++ struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip); ++ u32 value; ++ ++ value = readl(pc->regs); ++ value &= ~BIT(pwm->hwpwm); ++ writel(value, pc->regs); ++} ++ + static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm, + int duty_ns, int period_ns) + { +@@ -144,7 +164,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm, + do_div(resolution, clk_rate); + + cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000, resolution); +- while (cnt_period > 8191) { ++ if (!cnt_period) ++ return -EINVAL; ++ ++ while (cnt_period > 8192) { + resolution *= 2; + clkdiv++; + cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000, +@@ -167,9 +190,16 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm, + } + + cnt_duty = DIV_ROUND_CLOSEST_ULL((u64)duty_ns * 1000, resolution); ++ + pwm_mediatek_writel(pc, pwm->hwpwm, PWMCON, BIT(15) | clkdiv); +- pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period); +- pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty); ++ pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period - 1); ++ ++ if (cnt_duty) { ++ pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty - 1); ++ pwm_mediatek_enable(chip, 
pwm); ++ } else { ++ pwm_mediatek_disable(chip, pwm); ++ } + + out: + pwm_mediatek_clk_disable(chip, pwm); +@@ -177,35 +207,6 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm, + return ret; + } + +-static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm) +-{ +- struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip); +- u32 value; +- int ret; +- +- ret = pwm_mediatek_clk_enable(chip, pwm); +- if (ret < 0) +- return ret; +- +- value = readl(pc->regs); +- value |= BIT(pwm->hwpwm); +- writel(value, pc->regs); +- +- return 0; +-} +- +-static void pwm_mediatek_disable(struct pwm_chip *chip, struct pwm_device *pwm) +-{ +- struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip); +- u32 value; +- +- value = readl(pc->regs); +- value &= ~BIT(pwm->hwpwm); +- writel(value, pc->regs); +- +- pwm_mediatek_clk_disable(chip, pwm); +-} +- + static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm, + const struct pwm_state *state) + { +@@ -215,8 +216,10 @@ static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm, + return -EINVAL; + + if (!state->enabled) { +- if (pwm->state.enabled) ++ if (pwm->state.enabled) { + pwm_mediatek_disable(chip, pwm); ++ pwm_mediatek_clk_disable(chip, pwm); ++ } + + return 0; + } +@@ -226,7 +229,7 @@ static int pwm_mediatek_apply(struct pwm_chip *chip, struct pwm_device *pwm, + return err; + + if (!pwm->state.enabled) +- err = pwm_mediatek_enable(chip, pwm); ++ err = pwm_mediatek_clk_enable(chip, pwm); + + return err; + } +diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c +index 14d19a6d665573..49ff762eb33e1f 100644 +--- a/drivers/regulator/pca9450-regulator.c ++++ b/drivers/regulator/pca9450-regulator.c +@@ -34,7 +34,6 @@ struct pca9450 { + struct device *dev; + struct regmap *regmap; + struct gpio_desc *sd_vsel_gpio; +- struct notifier_block restart_nb; + enum pca9450_chip_type type; + unsigned int rcnt; + int irq; +@@ -967,10 +966,9 @@ static irqreturn_t pca9450_irq_handler(int irq, void *data) + return IRQ_HANDLED; + } + +-static int pca9450_i2c_restart_handler(struct notifier_block *nb, +- unsigned long action, void *data) ++static int pca9450_i2c_restart_handler(struct sys_off_data *data) + { +- struct pca9450 *pca9450 = container_of(nb, struct pca9450, restart_nb); ++ struct pca9450 *pca9450 = data->cb_data; + struct i2c_client *i2c = container_of(pca9450->dev, struct i2c_client, dev); + + dev_dbg(&i2c->dev, "Restarting device..\n"); +@@ -1128,10 +1126,9 @@ static int pca9450_i2c_probe(struct i2c_client *i2c) + pca9450->sd_vsel_fixed_low = + of_property_read_bool(ldo5->dev.of_node, "nxp,sd-vsel-fixed-low"); + +- pca9450->restart_nb.notifier_call = pca9450_i2c_restart_handler; +- pca9450->restart_nb.priority = PCA9450_RESTART_HANDLER_PRIORITY; +- +- if (register_restart_handler(&pca9450->restart_nb)) ++ if (devm_register_sys_off_handler(&i2c->dev, SYS_OFF_MODE_RESTART, ++ PCA9450_RESTART_HANDLER_PRIORITY, ++ pca9450_i2c_restart_handler, pca9450)) + dev_warn(&i2c->dev, "Failed to register restart handler\n"); + + dev_info(&i2c->dev, "%s probed.\n", +diff --git a/drivers/regulator/tps65219-regulator.c b/drivers/regulator/tps65219-regulator.c +index 5e67fdc88f49e6..d77ca486879fd6 100644 +--- a/drivers/regulator/tps65219-regulator.c ++++ b/drivers/regulator/tps65219-regulator.c +@@ -454,9 +454,9 @@ static int tps65219_regulator_probe(struct platform_device *pdev) + irq_type->irq_name, + irq_data); + if (error) +- return dev_err_probe(tps->dev, 
PTR_ERR(rdev), +- "Failed to request %s IRQ %d: %d\n", +- irq_type->irq_name, irq, error); ++ return dev_err_probe(tps->dev, error, ++ "Failed to request %s IRQ %d\n", ++ irq_type->irq_name, irq); + } + + for (i = 0; i < pmic->dev_irq_size; ++i) { +@@ -477,9 +477,9 @@ static int tps65219_regulator_probe(struct platform_device *pdev) + irq_type->irq_name, + irq_data); + if (error) +- return dev_err_probe(tps->dev, PTR_ERR(rdev), +- "Failed to request %s IRQ %d: %d\n", +- irq_type->irq_name, irq, error); ++ return dev_err_probe(tps->dev, error, ++ "Failed to request %s IRQ %d\n", ++ irq_type->irq_name, irq); + } + + return 0; +diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c +index 9a55e2d04e633f..2ff1658bc93ccb 100644 +--- a/drivers/s390/char/sclp.c ++++ b/drivers/s390/char/sclp.c +@@ -76,6 +76,13 @@ unsigned long sclp_console_full; + /* The currently active SCLP command word. */ + static sclp_cmdw_t active_cmd; + ++static inline struct sccb_header *sclpint_to_sccb(u32 sccb_int) ++{ ++ if (sccb_int) ++ return __va(sccb_int); ++ return NULL; ++} ++ + static inline void sclp_trace(int prio, char *id, u32 a, u64 b, bool err) + { + struct sclp_trace_entry e; +@@ -619,7 +626,7 @@ __sclp_find_req(u32 sccb) + + static bool ok_response(u32 sccb_int, sclp_cmdw_t cmd) + { +- struct sccb_header *sccb = (struct sccb_header *)__va(sccb_int); ++ struct sccb_header *sccb = sclpint_to_sccb(sccb_int); + struct evbuf_header *evbuf; + u16 response; + +@@ -658,7 +665,7 @@ static void sclp_interrupt_handler(struct ext_code ext_code, + + /* INT: Interrupt received (a=intparm, b=cmd) */ + sclp_trace_sccb(0, "INT", param32, active_cmd, active_cmd, +- (struct sccb_header *)__va(finished_sccb), ++ sclpint_to_sccb(finished_sccb), + !ok_response(finished_sccb, active_cmd)); + + if (finished_sccb) { +diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h +index 9bbc7cb98ca324..f2201108ea9116 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr.h ++++ b/drivers/scsi/mpi3mr/mpi3mr.h +@@ -1137,6 +1137,8 @@ struct scmd_priv { + * @logdata_buf: Circular buffer to store log data entries + * @logdata_buf_idx: Index of entry in buffer to store + * @logdata_entry_sz: log data entry size ++ * @adm_req_q_bar_writeq_lock: Admin request queue lock ++ * @adm_reply_q_bar_writeq_lock: Admin reply queue lock + * @pend_large_data_sz: Counter to track pending large data + * @io_throttle_data_length: I/O size to track in 512b blocks + * @io_throttle_high: I/O size to start throttle in 512b blocks +@@ -1185,7 +1187,7 @@ struct mpi3mr_ioc { + char name[MPI3MR_NAME_LENGTH]; + char driver_name[MPI3MR_NAME_LENGTH]; + +- volatile struct mpi3_sysif_registers __iomem *sysif_regs; ++ struct mpi3_sysif_registers __iomem *sysif_regs; + resource_size_t sysif_regs_phys; + int bars; + u64 dma_mask; +@@ -1339,6 +1341,8 @@ struct mpi3mr_ioc { + u8 *logdata_buf; + u16 logdata_buf_idx; + u16 logdata_entry_sz; ++ spinlock_t adm_req_q_bar_writeq_lock; ++ spinlock_t adm_reply_q_bar_writeq_lock; + + atomic_t pend_large_data_sz; + u32 io_throttle_data_length; +diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c +index 1d7901a8f0e406..0152d31d430abd 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c ++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c +@@ -23,17 +23,22 @@ module_param(poll_queues, int, 0444); + MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. 
(Range 1 - 126)"); + + #if defined(writeq) && defined(CONFIG_64BIT) +-static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr) ++static inline void mpi3mr_writeq(__u64 b, void __iomem *addr, ++ spinlock_t *write_queue_lock) + { + writeq(b, addr); + } + #else +-static inline void mpi3mr_writeq(__u64 b, volatile void __iomem *addr) ++static inline void mpi3mr_writeq(__u64 b, void __iomem *addr, ++ spinlock_t *write_queue_lock) + { + __u64 data_out = b; ++ unsigned long flags; + ++ spin_lock_irqsave(write_queue_lock, flags); + writel((u32)(data_out), addr); + writel((u32)(data_out >> 32), (addr + 4)); ++ spin_unlock_irqrestore(write_queue_lock, flags); + } + #endif + +@@ -428,8 +433,8 @@ static void mpi3mr_process_admin_reply_desc(struct mpi3mr_ioc *mrioc, + MPI3MR_SENSE_BUF_SZ); + } + if (cmdptr->is_waiting) { +- complete(&cmdptr->done); + cmdptr->is_waiting = 0; ++ complete(&cmdptr->done); + } else if (cmdptr->callback) + cmdptr->callback(mrioc, cmdptr); + } +@@ -2954,9 +2959,11 @@ static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc) + (mrioc->num_admin_req); + writel(num_admin_entries, &mrioc->sysif_regs->admin_queue_num_entries); + mpi3mr_writeq(mrioc->admin_req_dma, +- &mrioc->sysif_regs->admin_request_queue_address); ++ &mrioc->sysif_regs->admin_request_queue_address, ++ &mrioc->adm_req_q_bar_writeq_lock); + mpi3mr_writeq(mrioc->admin_reply_dma, +- &mrioc->sysif_regs->admin_reply_queue_address); ++ &mrioc->sysif_regs->admin_reply_queue_address, ++ &mrioc->adm_reply_q_bar_writeq_lock); + writel(mrioc->admin_req_pi, &mrioc->sysif_regs->admin_request_queue_pi); + writel(mrioc->admin_reply_ci, &mrioc->sysif_regs->admin_reply_queue_ci); + return retval; +diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c +index 87983ea4e06e08..e467b56949e989 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr_os.c ++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c +@@ -5383,6 +5383,8 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id) + spin_lock_init(&mrioc->tgtdev_lock); + spin_lock_init(&mrioc->watchdog_lock); + spin_lock_init(&mrioc->chain_buf_lock); ++ spin_lock_init(&mrioc->adm_req_q_bar_writeq_lock); ++ spin_lock_init(&mrioc->adm_reply_q_bar_writeq_lock); + spin_lock_init(&mrioc->sas_node_lock); + spin_lock_init(&mrioc->trigger_lock); + +diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c +index a39f1da4ce474b..a761c0aa5127fe 100644 +--- a/drivers/scsi/qla4xxx/ql4_os.c ++++ b/drivers/scsi/qla4xxx/ql4_os.c +@@ -6606,6 +6606,8 @@ static struct iscsi_endpoint *qla4xxx_get_ep_fwdb(struct scsi_qla_host *ha, + + ep = qla4xxx_ep_connect(ha->host, (struct sockaddr *)dst_addr, 0); + vfree(dst_addr); ++ if (IS_ERR(ep)) ++ return NULL; + return ep; + } + +diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c +index 44589d10b15b50..64e0facc392e5d 100644 +--- a/drivers/soc/qcom/mdt_loader.c ++++ b/drivers/soc/qcom/mdt_loader.c +@@ -18,6 +18,37 @@ + #include + #include + ++static bool mdt_header_valid(const struct firmware *fw) ++{ ++ const struct elf32_hdr *ehdr; ++ size_t phend; ++ size_t shend; ++ ++ if (fw->size < sizeof(*ehdr)) ++ return false; ++ ++ ehdr = (struct elf32_hdr *)fw->data; ++ ++ if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG)) ++ return false; ++ ++ if (ehdr->e_phentsize != sizeof(struct elf32_phdr)) ++ return false; ++ ++ phend = size_add(size_mul(sizeof(struct elf32_phdr), ehdr->e_phnum), ehdr->e_phoff); ++ if (phend > fw->size) ++ return false; ++ ++ if (ehdr->e_shentsize != sizeof(struct elf32_shdr)) ++ 
return false; ++ ++ shend = size_add(size_mul(sizeof(struct elf32_shdr), ehdr->e_shnum), ehdr->e_shoff); ++ if (shend > fw->size) ++ return false; ++ ++ return true; ++} ++ + static bool mdt_phdr_valid(const struct elf32_phdr *phdr) + { + if (phdr->p_type != PT_LOAD) +@@ -82,6 +113,9 @@ ssize_t qcom_mdt_get_size(const struct firmware *fw) + phys_addr_t max_addr = 0; + int i; + ++ if (!mdt_header_valid(fw)) ++ return -EINVAL; ++ + ehdr = (struct elf32_hdr *)fw->data; + phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff); + +@@ -134,6 +168,9 @@ void *qcom_mdt_read_metadata(const struct firmware *fw, size_t *data_len, + ssize_t ret; + void *data; + ++ if (!mdt_header_valid(fw)) ++ return ERR_PTR(-EINVAL); ++ + ehdr = (struct elf32_hdr *)fw->data; + phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff); + +@@ -214,6 +251,9 @@ int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw, + int ret; + int i; + ++ if (!mdt_header_valid(fw)) ++ return -EINVAL; ++ + ehdr = (struct elf32_hdr *)fw->data; + phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff); + +@@ -310,6 +350,9 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw, + if (!fw || !mem_region || !mem_phys || !mem_size) + return -EINVAL; + ++ if (!mdt_header_valid(fw)) ++ return -EINVAL; ++ + is_split = qcom_mdt_bins_are_split(fw, fw_name); + ehdr = (struct elf32_hdr *)fw->data; + phdrs = (struct elf32_phdr *)(fw->data + ehdr->e_phoff); +diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c +index e0d67bfe955cde..fda41b25a77144 100644 +--- a/drivers/soc/tegra/pmc.c ++++ b/drivers/soc/tegra/pmc.c +@@ -1234,7 +1234,7 @@ static int tegra_powergate_of_get_clks(struct tegra_powergate *pg, + } + + static int tegra_powergate_of_get_resets(struct tegra_powergate *pg, +- struct device_node *np, bool off) ++ struct device_node *np) + { + struct device *dev = pg->pmc->dev; + int err; +@@ -1249,22 +1249,6 @@ static int tegra_powergate_of_get_resets(struct tegra_powergate *pg, + err = reset_control_acquire(pg->reset); + if (err < 0) { + pr_err("failed to acquire resets: %d\n", err); +- goto out; +- } +- +- if (off) { +- err = reset_control_assert(pg->reset); +- } else { +- err = reset_control_deassert(pg->reset); +- if (err < 0) +- goto out; +- +- reset_control_release(pg->reset); +- } +- +-out: +- if (err) { +- reset_control_release(pg->reset); + reset_control_put(pg->reset); + } + +@@ -1309,20 +1293,43 @@ static int tegra_powergate_add(struct tegra_pmc *pmc, struct device_node *np) + goto set_available; + } + +- err = tegra_powergate_of_get_resets(pg, np, off); ++ err = tegra_powergate_of_get_resets(pg, np); + if (err < 0) { + dev_err(dev, "failed to get resets for %pOFn: %d\n", np, err); + goto remove_clks; + } + +- if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) { +- if (off) +- WARN_ON(tegra_powergate_power_up(pg, true)); ++ /* ++ * If the power-domain is off, then ensure the resets are asserted. ++ * If the power-domain is on, then power down to ensure that when is ++ * it turned on the power-domain, clocks and resets are all in the ++ * expected state. ++ */ ++ if (off) { ++ err = reset_control_assert(pg->reset); ++ if (err) { ++ pr_err("failed to assert resets: %d\n", err); ++ goto remove_resets; ++ } ++ } else { ++ err = tegra_powergate_power_down(pg); ++ if (err) { ++ dev_err(dev, "failed to turn off PM domain %s: %d\n", ++ pg->genpd.name, err); ++ goto remove_resets; ++ } ++ } + ++ /* ++ * If PM_GENERIC_DOMAINS is not enabled, power-on ++ * the domain and skip the genpd registration. 
++ */ ++ if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) { ++ WARN_ON(tegra_powergate_power_up(pg, true)); + goto remove_resets; + } + +- err = pm_genpd_init(&pg->genpd, NULL, off); ++ err = pm_genpd_init(&pg->genpd, NULL, true); + if (err < 0) { + dev_err(dev, "failed to initialise PM domain %pOFn: %d\n", np, + err); +diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c +index 5e381844523440..1a22d356a73d95 100644 +--- a/drivers/spi/spi-fsl-lpspi.c ++++ b/drivers/spi/spi-fsl-lpspi.c +@@ -331,13 +331,11 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi) + } + + if (config.speed_hz > perclk_rate / 2) { +- dev_err(fsl_lpspi->dev, +- "per-clk should be at least two times of transfer speed"); +- return -EINVAL; ++ div = 2; ++ } else { ++ div = DIV_ROUND_UP(perclk_rate, config.speed_hz); + } + +- div = DIV_ROUND_UP(perclk_rate, config.speed_hz); +- + for (prescale = 0; prescale <= prescale_max; prescale++) { + scldiv = div / (1 << prescale) - 2; + if (scldiv >= 0 && scldiv < 256) { +diff --git a/drivers/spi/spi-qpic-snand.c b/drivers/spi/spi-qpic-snand.c +index 3b757e3d00c01d..e98e997680c754 100644 +--- a/drivers/spi/spi-qpic-snand.c ++++ b/drivers/spi/spi-qpic-snand.c +@@ -216,13 +216,21 @@ static int qcom_spi_ooblayout_ecc(struct mtd_info *mtd, int section, + struct qcom_nand_controller *snandc = nand_to_qcom_snand(nand); + struct qpic_ecc *qecc = snandc->qspi->ecc; + +- if (section > 1) +- return -ERANGE; +- +- oobregion->length = qecc->ecc_bytes_hw + qecc->spare_bytes; +- oobregion->offset = mtd->oobsize - oobregion->length; ++ switch (section) { ++ case 0: ++ oobregion->offset = 0; ++ oobregion->length = qecc->bytes * (qecc->steps - 1) + ++ qecc->bbm_size; ++ return 0; ++ case 1: ++ oobregion->offset = qecc->bytes * (qecc->steps - 1) + ++ qecc->bbm_size + ++ qecc->steps * 4; ++ oobregion->length = mtd->oobsize - oobregion->offset; ++ return 0; ++ } + +- return 0; ++ return -ERANGE; + } + + static int qcom_spi_ooblayout_free(struct mtd_info *mtd, int section, +@@ -1185,7 +1193,7 @@ static int qcom_spi_program_oob(struct qcom_nand_controller *snandc, + u32 cfg0, cfg1, ecc_bch_cfg, ecc_buf_cfg; + + cfg0 = (ecc_cfg->cfg0 & ~CW_PER_PAGE_MASK) | +- FIELD_PREP(CW_PER_PAGE_MASK, num_cw - 1); ++ FIELD_PREP(CW_PER_PAGE_MASK, 0); + cfg1 = ecc_cfg->cfg1; + ecc_bch_cfg = ecc_cfg->ecc_bch_cfg; + ecc_buf_cfg = ecc_cfg->ecc_buf_cfg; +diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c +index e5e08c6f79f222..19fd31cb9bb035 100644 +--- a/drivers/staging/media/imx/imx-media-csc-scaler.c ++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c +@@ -912,7 +912,7 @@ imx_media_csc_scaler_device_init(struct imx_media_dev *md) + return &priv->vdev; + + err_m2m: +- video_set_drvdata(vfd, NULL); ++ video_device_release(vfd); + err_vfd: + kfree(priv); + return ERR_PTR(ret); +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index 6d7b8c4667c9cd..db26cf5a5c0264 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -2376,9 +2376,8 @@ int serial8250_do_startup(struct uart_port *port) + /* + * Now, initialize the UART + */ +- serial_port_out(port, UART_LCR, UART_LCR_WLEN8); +- + uart_port_lock_irqsave(port, &flags); ++ serial_port_out(port, UART_LCR, UART_LCR_WLEN8); + if (up->port.flags & UPF_FOURPORT) { + if (!up->port.irq) + up->port.mctrl |= TIOCM_OUT1; +diff --git a/drivers/tty/vt/defkeymap.c_shipped b/drivers/tty/vt/defkeymap.c_shipped +index 
0c043e4f292e8a..6af7bf8d5460c5 100644 +--- a/drivers/tty/vt/defkeymap.c_shipped ++++ b/drivers/tty/vt/defkeymap.c_shipped +@@ -23,6 +23,22 @@ unsigned short plain_map[NR_KEYS] = { + 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + static unsigned short shift_map[NR_KEYS] = { +@@ -42,6 +58,22 @@ static unsigned short shift_map[NR_KEYS] = { + 0xf20b, 0xf601, 0xf602, 0xf117, 0xf600, 0xf20a, 0xf115, 0xf116, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + static unsigned short altgr_map[NR_KEYS] = { +@@ -61,6 +93,22 @@ static unsigned short altgr_map[NR_KEYS] = { + 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 
0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + static unsigned short ctrl_map[NR_KEYS] = { +@@ -80,6 +128,22 @@ static unsigned short ctrl_map[NR_KEYS] = { + 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + static unsigned short shift_ctrl_map[NR_KEYS] = { +@@ -99,6 +163,22 @@ static unsigned short shift_ctrl_map[NR_KEYS] = { + 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + static unsigned short alt_map[NR_KEYS] = { +@@ -118,6 +198,22 @@ static 
unsigned short alt_map[NR_KEYS] = { + 0xf118, 0xf210, 0xf211, 0xf117, 0xf600, 0xf119, 0xf115, 0xf116, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + static unsigned short ctrl_alt_map[NR_KEYS] = { +@@ -137,6 +233,22 @@ static unsigned short ctrl_alt_map[NR_KEYS] = { + 0xf118, 0xf601, 0xf602, 0xf117, 0xf600, 0xf119, 0xf115, 0xf20c, + 0xf11a, 0xf10c, 0xf10d, 0xf11b, 0xf11c, 0xf110, 0xf311, 0xf11d, + 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, ++ 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, + }; + + unsigned short *key_maps[MAX_NR_KEYMAPS] = { +diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c +index dc585079c2fb8c..ee1d9c448c7ebf 100644 +--- a/drivers/tty/vt/keyboard.c ++++ b/drivers/tty/vt/keyboard.c +@@ -1487,7 +1487,7 @@ static void kbd_keycode(unsigned int keycode, int down, bool hw_raw) + rc = atomic_notifier_call_chain(&keyboard_notifier_list, + KBD_UNICODE, ¶m); + if (rc != NOTIFY_STOP) +- if (down && !raw_mode) ++ if (down && !(raw_mode || kbd->kbdmode == VC_OFF)) + k_unicode(vc, keysym, !down); + return; + } +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index 3cc566e8bd1d26..5224a214540212 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -5531,9 +5531,9 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status) + irqreturn_t 
retval = IRQ_NONE; + struct uic_command *cmd; + +- spin_lock(hba->host->host_lock); ++ guard(spinlock_irqsave)(hba->host->host_lock); + cmd = hba->active_uic_cmd; +- if (WARN_ON_ONCE(!cmd)) ++ if (!cmd) + goto unlock; + + if (ufshcd_is_auto_hibern8_error(hba, intr_status)) +@@ -5558,8 +5558,6 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status) + ufshcd_add_uic_command_trace(hba, cmd, UFS_CMD_COMP); + + unlock: +- spin_unlock(hba->host->host_lock); +- + return retval; + } + +@@ -6892,7 +6890,7 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status) + bool queue_eh_work = false; + irqreturn_t retval = IRQ_NONE; + +- spin_lock(hba->host->host_lock); ++ guard(spinlock_irqsave)(hba->host->host_lock); + hba->errors |= UFSHCD_ERROR_MASK & intr_status; + + if (hba->errors & INT_FATAL_ERRORS) { +@@ -6951,7 +6949,7 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba, u32 intr_status) + */ + hba->errors = 0; + hba->uic_error = 0; +- spin_unlock(hba->host->host_lock); ++ + return retval; + } + +diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c +index 3e545af536e53e..f0adcd9dd553d2 100644 +--- a/drivers/ufs/host/ufs-exynos.c ++++ b/drivers/ufs/host/ufs-exynos.c +@@ -1110,8 +1110,8 @@ static int exynos_ufs_post_link(struct ufs_hba *hba) + hci_writel(ufs, val, HCI_TXPRDT_ENTRY_SIZE); + + hci_writel(ufs, ilog2(DATA_UNIT_SIZE), HCI_RXPRDT_ENTRY_SIZE); +- hci_writel(ufs, (1 << hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE); +- hci_writel(ufs, (1 << hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE); ++ hci_writel(ufs, BIT(hba->nutrs) - 1, HCI_UTRL_NEXUS_TYPE); ++ hci_writel(ufs, BIT(hba->nutmrs) - 1, HCI_UTMRL_NEXUS_TYPE); + hci_writel(ufs, 0xf, HCI_AXIDMA_RWDATA_BURST_LEN); + + if (ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB) +diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c +index 18a9784520013e..2e4edc192e8e9b 100644 +--- a/drivers/ufs/host/ufs-qcom.c ++++ b/drivers/ufs/host/ufs-qcom.c +@@ -2053,17 +2053,6 @@ static irqreturn_t ufs_qcom_mcq_esi_handler(int irq, void *data) + return IRQ_HANDLED; + } + +-static void ufs_qcom_irq_free(struct ufs_qcom_irq *uqi) +-{ +- for (struct ufs_qcom_irq *q = uqi; q->irq; q++) +- devm_free_irq(q->hba->dev, q->irq, q->hba); +- +- platform_device_msi_free_irqs_all(uqi->hba->dev); +- devm_kfree(uqi->hba->dev, uqi); +-} +- +-DEFINE_FREE(ufs_qcom_irq, struct ufs_qcom_irq *, if (_T) ufs_qcom_irq_free(_T)) +- + static int ufs_qcom_config_esi(struct ufs_hba *hba) + { + struct ufs_qcom_host *host = ufshcd_get_variant(hba); +@@ -2078,18 +2067,18 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba) + */ + nr_irqs = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL]; + +- struct ufs_qcom_irq *qi __free(ufs_qcom_irq) = +- devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL); +- if (!qi) +- return -ENOMEM; +- /* Preset so __free() has a pointer to hba in all error paths */ +- qi[0].hba = hba; +- + ret = platform_device_msi_init_and_alloc_irqs(hba->dev, nr_irqs, + ufs_qcom_write_msi_msg); + if (ret) { +- dev_err(hba->dev, "Failed to request Platform MSI %d\n", ret); +- return ret; ++ dev_warn(hba->dev, "Platform MSI not supported or failed, continuing without ESI\n"); ++ return ret; /* Continue without ESI */ ++ } ++ ++ struct ufs_qcom_irq *qi = devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL); ++ ++ if (!qi) { ++ platform_device_msi_free_irqs_all(hba->dev); ++ return -ENOMEM; + } + + for (int idx = 0; idx < nr_irqs; idx++) { +@@ -2100,17 +2089,18 @@ static int 
ufs_qcom_config_esi(struct ufs_hba *hba) + ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler, + IRQF_SHARED, "qcom-mcq-esi", qi + idx); + if (ret) { +- dev_err(hba->dev, "%s: Fail to request IRQ for %d, err = %d\n", ++ dev_err(hba->dev, "%s: Failed to request IRQ for %d, err = %d\n", + __func__, qi[idx].irq, ret); +- qi[idx].irq = 0; ++ /* Free previously allocated IRQs */ ++ for (int j = 0; j < idx; j++) ++ devm_free_irq(hba->dev, qi[j].irq, qi + j); ++ platform_device_msi_free_irqs_all(hba->dev); ++ devm_kfree(hba->dev, qi); + return ret; + } + } + +- retain_and_null_ptr(qi); +- +- if (host->hw_ver.major == 6 && host->hw_ver.minor == 0 && +- host->hw_ver.step == 0) { ++ if (host->hw_ver.major >= 6) { + ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1), + REG_UFS_CFG3); + } +diff --git a/drivers/ufs/host/ufshcd-pci.c b/drivers/ufs/host/ufshcd-pci.c +index 996387906aa14e..8aff32d7057d5e 100644 +--- a/drivers/ufs/host/ufshcd-pci.c ++++ b/drivers/ufs/host/ufshcd-pci.c +@@ -216,6 +216,32 @@ static int ufs_intel_lkf_apply_dev_quirks(struct ufs_hba *hba) + return ret; + } + ++static void ufs_intel_ctrl_uic_compl(struct ufs_hba *hba, bool enable) ++{ ++ u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE); ++ ++ if (enable) ++ set |= UIC_COMMAND_COMPL; ++ else ++ set &= ~UIC_COMMAND_COMPL; ++ ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE); ++} ++ ++static void ufs_intel_mtl_h8_notify(struct ufs_hba *hba, ++ enum uic_cmd_dme cmd, ++ enum ufs_notify_change_status status) ++{ ++ /* ++ * Disable UIC COMPL INTR to prevent access to UFSHCI after ++ * checking HCS.UPMCRS ++ */ ++ if (status == PRE_CHANGE && cmd == UIC_CMD_DME_HIBER_ENTER) ++ ufs_intel_ctrl_uic_compl(hba, false); ++ ++ if (status == POST_CHANGE && cmd == UIC_CMD_DME_HIBER_EXIT) ++ ufs_intel_ctrl_uic_compl(hba, true); ++} ++ + #define INTEL_ACTIVELTR 0x804 + #define INTEL_IDLELTR 0x808 + +@@ -442,10 +468,23 @@ static int ufs_intel_adl_init(struct ufs_hba *hba) + return ufs_intel_common_init(hba); + } + ++static void ufs_intel_mtl_late_init(struct ufs_hba *hba) ++{ ++ hba->rpm_lvl = UFS_PM_LVL_2; ++ hba->spm_lvl = UFS_PM_LVL_2; ++} ++ + static int ufs_intel_mtl_init(struct ufs_hba *hba) + { ++ struct ufs_host *ufs_host; ++ int err; ++ + hba->caps |= UFSHCD_CAP_CRYPTO | UFSHCD_CAP_WB_EN; +- return ufs_intel_common_init(hba); ++ err = ufs_intel_common_init(hba); ++ /* Get variant after it is set in ufs_intel_common_init() */ ++ ufs_host = ufshcd_get_variant(hba); ++ ufs_host->late_init = ufs_intel_mtl_late_init; ++ return err; + } + + static int ufs_qemu_get_hba_mac(struct ufs_hba *hba) +@@ -533,6 +572,7 @@ static struct ufs_hba_variant_ops ufs_intel_mtl_hba_vops = { + .init = ufs_intel_mtl_init, + .exit = ufs_intel_common_exit, + .hce_enable_notify = ufs_intel_hce_enable_notify, ++ .hibern8_notify = ufs_intel_mtl_h8_notify, + .link_startup_notify = ufs_intel_link_startup_notify, + .resume = ufs_intel_resume, + .device_reset = ufs_intel_device_reset, +diff --git a/drivers/usb/atm/cxacru.c b/drivers/usb/atm/cxacru.c +index a12ab90b3db75b..68a8e9de8b4fe9 100644 +--- a/drivers/usb/atm/cxacru.c ++++ b/drivers/usb/atm/cxacru.c +@@ -980,25 +980,60 @@ static int cxacru_fw(struct usb_device *usb_dev, enum cxacru_fw_request fw, + return ret; + } + +-static void cxacru_upload_firmware(struct cxacru_data *instance, +- const struct firmware *fw, +- const struct firmware *bp) ++ ++static int cxacru_find_firmware(struct cxacru_data *instance, ++ char *phase, const struct firmware **fw_p) + { +- int ret; ++ struct 
usbatm_data *usbatm = instance->usbatm; ++ struct device *dev = &usbatm->usb_intf->dev; ++ char buf[16]; ++ ++ sprintf(buf, "cxacru-%s.bin", phase); ++ usb_dbg(usbatm, "cxacru_find_firmware: looking for %s\n", buf); ++ ++ if (request_firmware(fw_p, buf, dev)) { ++ usb_dbg(usbatm, "no stage %s firmware found\n", phase); ++ return -ENOENT; ++ } ++ ++ usb_info(usbatm, "found firmware %s\n", buf); ++ ++ return 0; ++} ++ ++static int cxacru_heavy_init(struct usbatm_data *usbatm_instance, ++ struct usb_interface *usb_intf) ++{ ++ const struct firmware *fw, *bp; ++ struct cxacru_data *instance = usbatm_instance->driver_data; + struct usbatm_data *usbatm = instance->usbatm; + struct usb_device *usb_dev = usbatm->usb_dev; + __le16 signature[] = { usb_dev->descriptor.idVendor, + usb_dev->descriptor.idProduct }; + __le32 val; ++ int ret; + +- usb_dbg(usbatm, "%s\n", __func__); ++ ret = cxacru_find_firmware(instance, "fw", &fw); ++ if (ret) { ++ usb_warn(usbatm_instance, "firmware (cxacru-fw.bin) unavailable (system misconfigured?)\n"); ++ return ret; ++ } ++ ++ if (instance->modem_type->boot_rom_patch) { ++ ret = cxacru_find_firmware(instance, "bp", &bp); ++ if (ret) { ++ usb_warn(usbatm_instance, "boot ROM patch (cxacru-bp.bin) unavailable (system misconfigured?)\n"); ++ release_firmware(fw); ++ return ret; ++ } ++ } + + /* FirmwarePllFClkValue */ + val = cpu_to_le32(instance->modem_type->pll_f_clk); + ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, PLLFCLK_ADDR, (u8 *) &val, 4); + if (ret) { + usb_err(usbatm, "FirmwarePllFClkValue failed: %d\n", ret); +- return; ++ goto done; + } + + /* FirmwarePllBClkValue */ +@@ -1006,7 +1041,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, PLLBCLK_ADDR, (u8 *) &val, 4); + if (ret) { + usb_err(usbatm, "FirmwarePllBClkValue failed: %d\n", ret); +- return; ++ goto done; + } + + /* Enable SDRAM */ +@@ -1014,7 +1049,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, SDRAMEN_ADDR, (u8 *) &val, 4); + if (ret) { + usb_err(usbatm, "Enable SDRAM failed: %d\n", ret); +- return; ++ goto done; + } + + /* Firmware */ +@@ -1022,7 +1057,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, FW_ADDR, fw->data, fw->size); + if (ret) { + usb_err(usbatm, "Firmware upload failed: %d\n", ret); +- return; ++ goto done; + } + + /* Boot ROM patch */ +@@ -1031,7 +1066,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, BR_ADDR, bp->data, bp->size); + if (ret) { + usb_err(usbatm, "Boot ROM patching failed: %d\n", ret); +- return; ++ goto done; + } + } + +@@ -1039,7 +1074,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + ret = cxacru_fw(usb_dev, FW_WRITE_MEM, 0x2, 0x0, SIG_ADDR, (u8 *) signature, 4); + if (ret) { + usb_err(usbatm, "Signature storing failed: %d\n", ret); +- return; ++ goto done; + } + + usb_info(usbatm, "starting device\n"); +@@ -1051,7 +1086,7 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + } + if (ret) { + usb_err(usbatm, "Passing control to firmware failed: %d\n", ret); +- return; ++ goto done; + } + + /* Delay to allow firmware to start up. 
*/ +@@ -1065,53 +1100,10 @@ static void cxacru_upload_firmware(struct cxacru_data *instance, + ret = cxacru_cm(instance, CM_REQUEST_CARD_GET_STATUS, NULL, 0, NULL, 0); + if (ret < 0) { + usb_err(usbatm, "modem failed to initialize: %d\n", ret); +- return; +- } +-} +- +-static int cxacru_find_firmware(struct cxacru_data *instance, +- char *phase, const struct firmware **fw_p) +-{ +- struct usbatm_data *usbatm = instance->usbatm; +- struct device *dev = &usbatm->usb_intf->dev; +- char buf[16]; +- +- sprintf(buf, "cxacru-%s.bin", phase); +- usb_dbg(usbatm, "cxacru_find_firmware: looking for %s\n", buf); +- +- if (request_firmware(fw_p, buf, dev)) { +- usb_dbg(usbatm, "no stage %s firmware found\n", phase); +- return -ENOENT; +- } +- +- usb_info(usbatm, "found firmware %s\n", buf); +- +- return 0; +-} +- +-static int cxacru_heavy_init(struct usbatm_data *usbatm_instance, +- struct usb_interface *usb_intf) +-{ +- const struct firmware *fw, *bp; +- struct cxacru_data *instance = usbatm_instance->driver_data; +- int ret = cxacru_find_firmware(instance, "fw", &fw); +- +- if (ret) { +- usb_warn(usbatm_instance, "firmware (cxacru-fw.bin) unavailable (system misconfigured?)\n"); +- return ret; ++ goto done; + } + +- if (instance->modem_type->boot_rom_patch) { +- ret = cxacru_find_firmware(instance, "bp", &bp); +- if (ret) { +- usb_warn(usbatm_instance, "boot ROM patch (cxacru-bp.bin) unavailable (system misconfigured?)\n"); +- release_firmware(fw); +- return ret; +- } +- } +- +- cxacru_upload_firmware(instance, fw, bp); +- ++done: + if (instance->modem_type->boot_rom_patch) + release_firmware(bp); + release_firmware(fw); +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index c22de97432a01b..04bb87e4615a2e 100644 +--- a/drivers/usb/core/hcd.c ++++ b/drivers/usb/core/hcd.c +@@ -1623,7 +1623,6 @@ static void __usb_hcd_giveback_urb(struct urb *urb) + struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus); + struct usb_anchor *anchor = urb->anchor; + int status = urb->unlinked; +- unsigned long flags; + + urb->hcpriv = NULL; + if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) && +@@ -1641,14 +1640,13 @@ static void __usb_hcd_giveback_urb(struct urb *urb) + /* pass ownership to the completion handler */ + urb->status = status; + /* +- * Only collect coverage in the softirq context and disable interrupts +- * to avoid scenarios with nested remote coverage collection sections +- * that KCOV does not support. +- * See the comment next to kcov_remote_start_usb_softirq() for details. ++ * This function can be called in task context inside another remote ++ * coverage collection section, but kcov doesn't support that kind of ++ * recursion yet. Only collect coverage in softirq context for now. 
+ */ +- flags = kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); ++ kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); + urb->complete(urb); +- kcov_remote_stop_softirq(flags); ++ kcov_remote_stop_softirq(); + + usb_anchor_resume_wakeups(anchor); + atomic_dec(&urb->use_count); +@@ -2153,7 +2151,7 @@ static struct urb *request_single_step_set_feature_urb( + urb->complete = usb_ehset_completion; + urb->status = -EINPROGRESS; + urb->actual_length = 0; +- urb->transfer_flags = URB_DIR_IN; ++ urb->transfer_flags = URB_DIR_IN | URB_NO_TRANSFER_DMA_MAP; + usb_get_urb(urb); + atomic_inc(&urb->use_count); + atomic_inc(&urb->dev->urbnum); +@@ -2217,9 +2215,15 @@ int ehset_single_step_set_feature(struct usb_hcd *hcd, int port) + + /* Complete remaining DATA and STATUS stages using the same URB */ + urb->status = -EINPROGRESS; ++ urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP; + usb_get_urb(urb); + atomic_inc(&urb->use_count); + atomic_inc(&urb->dev->urbnum); ++ if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) { ++ usb_put_urb(urb); ++ goto out1; ++ } ++ + retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0); + if (!retval && !wait_for_completion_timeout(&done, + msecs_to_jiffies(2000))) { +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 0cf94c7a2c9ce6..d6daad39491b75 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -371,6 +371,7 @@ static const struct usb_device_id usb_quirk_list[] = { + { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, + + /* SanDisk Corp. SanDisk 3.2Gen1 */ ++ { USB_DEVICE(0x0781, 0x5596), .driver_info = USB_QUIRK_DELAY_INIT }, + { USB_DEVICE(0x0781, 0x55a3), .driver_info = USB_QUIRK_DELAY_INIT }, + + /* SanDisk Extreme 55AE */ +diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c +index 3edc5aca76f97b..bce6af82f54c24 100644 +--- a/drivers/usb/dwc3/dwc3-imx8mp.c ++++ b/drivers/usb/dwc3/dwc3-imx8mp.c +@@ -244,7 +244,7 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev) + IRQF_ONESHOT, dev_name(dev), dwc3_imx); + if (err) { + dev_err(dev, "failed to request IRQ #%d --> %d\n", irq, err); +- goto depopulate; ++ goto put_dwc3; + } + + device_set_wakeup_capable(dev, true); +@@ -252,6 +252,8 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev) + + return 0; + ++put_dwc3: ++ put_device(&dwc3_imx->dwc3->dev); + depopulate: + of_platform_depopulate(dev); + remove_swnode: +@@ -265,8 +267,11 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev) + + static void dwc3_imx8mp_remove(struct platform_device *pdev) + { ++ struct dwc3_imx8mp *dwc3_imx = platform_get_drvdata(pdev); + struct device *dev = &pdev->dev; + ++ put_device(&dwc3_imx->dwc3->dev); ++ + pm_runtime_get_sync(dev); + of_platform_depopulate(dev); + device_remove_software_node(dev); +diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c +index 7d80bf7b18b0d2..55e144ba8cfc6c 100644 +--- a/drivers/usb/dwc3/dwc3-meson-g12a.c ++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c +@@ -837,6 +837,9 @@ static void dwc3_meson_g12a_remove(struct platform_device *pdev) + + usb_role_switch_unregister(priv->role_switch); + ++ put_device(priv->switch_desc.udc); ++ put_device(priv->switch_desc.usb2_port); ++ + of_platform_depopulate(dev); + + for (i = 0 ; i < PHY_COUNT ; ++i) { +diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c +index 54a4ee2b90b7f4..39c72cb52ce76a 100644 +--- a/drivers/usb/dwc3/dwc3-pci.c ++++ b/drivers/usb/dwc3/dwc3-pci.c +@@ -41,6 +41,7 @@ + 
#define PCI_DEVICE_ID_INTEL_TGPLP 0xa0ee + #define PCI_DEVICE_ID_INTEL_TGPH 0x43ee + #define PCI_DEVICE_ID_INTEL_JSP 0x4dee ++#define PCI_DEVICE_ID_INTEL_WCL 0x4d7e + #define PCI_DEVICE_ID_INTEL_ADL 0x460e + #define PCI_DEVICE_ID_INTEL_ADL_PCH 0x51ee + #define PCI_DEVICE_ID_INTEL_ADLN 0x465e +@@ -431,6 +432,7 @@ static const struct pci_device_id dwc3_pci_id_table[] = { + { PCI_DEVICE_DATA(INTEL, TGPLP, &dwc3_pci_intel_swnode) }, + { PCI_DEVICE_DATA(INTEL, TGPH, &dwc3_pci_intel_swnode) }, + { PCI_DEVICE_DATA(INTEL, JSP, &dwc3_pci_intel_swnode) }, ++ { PCI_DEVICE_DATA(INTEL, WCL, &dwc3_pci_intel_swnode) }, + { PCI_DEVICE_DATA(INTEL, ADL, &dwc3_pci_intel_swnode) }, + { PCI_DEVICE_DATA(INTEL, ADL_PCH, &dwc3_pci_intel_swnode) }, + { PCI_DEVICE_DATA(INTEL, ADLN, &dwc3_pci_intel_swnode) }, +diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c +index 666ac432f52d67..b4229aa13f375b 100644 +--- a/drivers/usb/dwc3/ep0.c ++++ b/drivers/usb/dwc3/ep0.c +@@ -288,7 +288,9 @@ void dwc3_ep0_out_start(struct dwc3 *dwc) + dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 8, + DWC3_TRBCTL_CONTROL_SETUP, false); + ret = dwc3_ep0_start_trans(dep); +- WARN_ON(ret < 0); ++ if (ret < 0) ++ dev_err(dwc->dev, "ep0 out start transfer failed: %d\n", ret); ++ + for (i = 2; i < DWC3_ENDPOINTS_NUM; i++) { + struct dwc3_ep *dwc3_ep; + +@@ -1061,7 +1063,9 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc, + ret = dwc3_ep0_start_trans(dep); + } + +- WARN_ON(ret < 0); ++ if (ret < 0) ++ dev_err(dwc->dev, ++ "ep0 data phase start transfer failed: %d\n", ret); + } + + static int dwc3_ep0_start_control_status(struct dwc3_ep *dep) +@@ -1078,7 +1082,12 @@ static int dwc3_ep0_start_control_status(struct dwc3_ep *dep) + + static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep) + { +- WARN_ON(dwc3_ep0_start_control_status(dep)); ++ int ret; ++ ++ ret = dwc3_ep0_start_control_status(dep); ++ if (ret) ++ dev_err(dwc->dev, ++ "ep0 status phase start transfer failed: %d\n", ret); + } + + static void dwc3_ep0_do_control_status(struct dwc3 *dwc, +@@ -1121,7 +1130,10 @@ void dwc3_ep0_end_control_data(struct dwc3 *dwc, struct dwc3_ep *dep) + cmd |= DWC3_DEPCMD_PARAM(dep->resource_index); + memset(¶ms, 0, sizeof(params)); + ret = dwc3_send_gadget_ep_cmd(dep, cmd, ¶ms); +- WARN_ON_ONCE(ret); ++ if (ret) ++ dev_err_ratelimited(dwc->dev, ++ "ep0 data phase end transfer failed: %d\n", ret); ++ + dep->resource_index = 0; + } + +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index 74968f93d4a353..6ab6c9f163b893 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -1774,7 +1774,11 @@ static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool int + dep->flags |= DWC3_EP_DELAY_STOP; + return 0; + } +- WARN_ON_ONCE(ret); ++ ++ if (ret) ++ dev_err_ratelimited(dep->dwc->dev, ++ "end transfer failed: %d\n", ret); ++ + dep->resource_index = 0; + + if (!interrupt) +@@ -3779,6 +3783,15 @@ static void dwc3_gadget_endpoint_transfer_complete(struct dwc3_ep *dep, + static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep, + const struct dwc3_event_depevt *event) + { ++ /* ++ * During a device-initiated disconnect, a late xferNotReady event can ++ * be generated after the End Transfer command resets the event filter, ++ * but before the controller is halted. Ignore it to prevent a new ++ * transfer from starting. 
++ */ ++ if (!dep->dwc->connected) ++ return; ++ + dwc3_gadget_endpoint_frame_from_event(dep, event); + + /* +@@ -4041,7 +4054,9 @@ static void dwc3_clear_stall_all_ep(struct dwc3 *dwc) + dep->flags &= ~DWC3_EP_STALL; + + ret = dwc3_send_clear_stall_ep_cmd(dep); +- WARN_ON_ONCE(ret); ++ if (ret) ++ dev_err_ratelimited(dwc->dev, ++ "failed to clear STALL on %s\n", dep->name); + } + } + +diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c +index 3e4d5645759791..d23b1762e0e45f 100644 +--- a/drivers/usb/gadget/udc/renesas_usb3.c ++++ b/drivers/usb/gadget/udc/renesas_usb3.c +@@ -2657,6 +2657,7 @@ static void renesas_usb3_remove(struct platform_device *pdev) + struct renesas_usb3 *usb3 = platform_get_drvdata(pdev); + + debugfs_remove_recursive(usb3->dentry); ++ put_device(usb3->host_dev); + device_remove_file(&pdev->dev, &dev_attr_role); + + cancel_work_sync(&usb3->role_work); +diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c +index 92bb84f8132a97..b3a59ce1b3f41f 100644 +--- a/drivers/usb/host/xhci-hub.c ++++ b/drivers/usb/host/xhci-hub.c +@@ -704,8 +704,7 @@ static int xhci_enter_test_mode(struct xhci_hcd *xhci, + if (!xhci->devs[i]) + continue; + +- retval = xhci_disable_slot(xhci, i); +- xhci_free_virt_device(xhci, i); ++ retval = xhci_disable_and_free_slot(xhci, i); + if (retval) + xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n", + i, retval); +diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c +index 07289333a1e8f1..81eaad87a3d9d0 100644 +--- a/drivers/usb/host/xhci-mem.c ++++ b/drivers/usb/host/xhci-mem.c +@@ -865,21 +865,20 @@ int xhci_alloc_tt_info(struct xhci_hcd *xhci, + * will be manipulated by the configure endpoint, allocate device, or update + * hub functions while this function is removing the TT entries from the list. 
+ */ +-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id) ++void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, ++ int slot_id) + { +- struct xhci_virt_device *dev; + int i; + int old_active_eps = 0; + + /* Slot ID 0 is reserved */ +- if (slot_id == 0 || !xhci->devs[slot_id]) ++ if (slot_id == 0 || !dev) + return; + +- dev = xhci->devs[slot_id]; +- +- xhci->dcbaa->dev_context_ptrs[slot_id] = 0; +- if (!dev) +- return; ++ /* If device ctx array still points to _this_ device, clear it */ ++ if (dev->out_ctx && ++ xhci->dcbaa->dev_context_ptrs[slot_id] == cpu_to_le64(dev->out_ctx->dma)) ++ xhci->dcbaa->dev_context_ptrs[slot_id] = 0; + + trace_xhci_free_virt_device(dev); + +@@ -920,8 +919,9 @@ void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id) + dev->udev->slot_id = 0; + if (dev->rhub_port && dev->rhub_port->slot_id == slot_id) + dev->rhub_port->slot_id = 0; +- kfree(xhci->devs[slot_id]); +- xhci->devs[slot_id] = NULL; ++ if (xhci->devs[slot_id] == dev) ++ xhci->devs[slot_id] = NULL; ++ kfree(dev); + } + + /* +@@ -962,7 +962,7 @@ static void xhci_free_virt_devices_depth_first(struct xhci_hcd *xhci, int slot_i + out: + /* we are now at a leaf device */ + xhci_debugfs_remove_slot(xhci, slot_id); +- xhci_free_virt_device(xhci, slot_id); ++ xhci_free_virt_device(xhci, vdev, slot_id); + } + + int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, +diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c +index 620f8f0febb84b..86df80399c9fdc 100644 +--- a/drivers/usb/host/xhci-pci-renesas.c ++++ b/drivers/usb/host/xhci-pci-renesas.c +@@ -47,8 +47,9 @@ + #define RENESAS_ROM_ERASE_MAGIC 0x5A65726F + #define RENESAS_ROM_WRITE_MAGIC 0x53524F4D + +-#define RENESAS_RETRY 10000 +-#define RENESAS_DELAY 10 ++#define RENESAS_RETRY 50000 /* 50000 * RENESAS_DELAY ~= 500ms */ ++#define RENESAS_CHIP_ERASE_RETRY 500000 /* 500000 * RENESAS_DELAY ~= 5s */ ++#define RENESAS_DELAY 10 + + #define RENESAS_FW_NAME "renesas_usb_fw.mem" + +@@ -407,7 +408,7 @@ static void renesas_rom_erase(struct pci_dev *pdev) + /* sleep a bit while ROM is erased */ + msleep(20); + +- for (i = 0; i < RENESAS_RETRY; i++) { ++ for (i = 0; i < RENESAS_CHIP_ERASE_RETRY; i++) { + retval = pci_read_config_byte(pdev, RENESAS_ROM_STATUS, + &status); + status &= RENESAS_ROM_STATUS_ERASE; +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c +index ecd757d482c582..4f8f5aab109d0c 100644 +--- a/drivers/usb/host/xhci-ring.c ++++ b/drivers/usb/host/xhci-ring.c +@@ -1592,7 +1592,8 @@ static void xhci_handle_cmd_enable_slot(int slot_id, struct xhci_command *comman + command->slot_id = 0; + } + +-static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id) ++static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id, ++ u32 cmd_comp_code) + { + struct xhci_virt_device *virt_dev; + struct xhci_slot_ctx *slot_ctx; +@@ -1607,6 +1608,10 @@ static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id) + if (xhci->quirks & XHCI_EP_LIMIT_QUIRK) + /* Delete default control endpoint resources */ + xhci_free_device_endpoint_resources(xhci, virt_dev, true); ++ if (cmd_comp_code == COMP_SUCCESS) { ++ xhci->dcbaa->dev_context_ptrs[slot_id] = 0; ++ xhci->devs[slot_id] = NULL; ++ } + } + + static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id) +@@ -1856,7 +1861,7 @@ static void handle_cmd_completion(struct xhci_hcd *xhci, + xhci_handle_cmd_enable_slot(slot_id, cmd, cmd_comp_code); + break; + case 
TRB_DISABLE_SLOT: +- xhci_handle_cmd_disable_slot(xhci, slot_id); ++ xhci_handle_cmd_disable_slot(xhci, slot_id, cmd_comp_code); + break; + case TRB_CONFIG_EP: + if (!cmd->completion) +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index 47151ca527bfaf..742c23826e173a 100644 +--- a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -309,6 +309,7 @@ int xhci_enable_interrupter(struct xhci_interrupter *ir) + return -EINVAL; + + iman = readl(&ir->ir_set->iman); ++ iman &= ~IMAN_IP; + iman |= IMAN_IE; + writel(iman, &ir->ir_set->iman); + +@@ -325,6 +326,7 @@ int xhci_disable_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir) + return -EINVAL; + + iman = readl(&ir->ir_set->iman); ++ iman &= ~IMAN_IP; + iman &= ~IMAN_IE; + writel(iman, &ir->ir_set->iman); + +@@ -3932,8 +3934,7 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd, + * Obtaining a new device slot to inform the xHCI host that + * the USB device has been reset. + */ +- ret = xhci_disable_slot(xhci, udev->slot_id); +- xhci_free_virt_device(xhci, udev->slot_id); ++ ret = xhci_disable_and_free_slot(xhci, udev->slot_id); + if (!ret) { + ret = xhci_alloc_dev(hcd, udev); + if (ret == 1) +@@ -4090,7 +4091,7 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev) + xhci_disable_slot(xhci, udev->slot_id); + + spin_lock_irqsave(&xhci->lock, flags); +- xhci_free_virt_device(xhci, udev->slot_id); ++ xhci_free_virt_device(xhci, virt_dev, udev->slot_id); + spin_unlock_irqrestore(&xhci->lock, flags); + + } +@@ -4139,6 +4140,16 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id) + return 0; + } + ++int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id) ++{ ++ struct xhci_virt_device *vdev = xhci->devs[slot_id]; ++ int ret; ++ ++ ret = xhci_disable_slot(xhci, slot_id); ++ xhci_free_virt_device(xhci, vdev, slot_id); ++ return ret; ++} ++ + /* + * Checks if we have enough host controller resources for the default control + * endpoint. 
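
The xhci.c hunks above touch two independent problems. The IMAN changes mask out the
interrupt-pending bit before a read-modify-write because IP is a write-1-to-clear status
bit: writing back the value just read would acknowledge a pending interrupt as a side
effect of toggling IE. A minimal sketch of the pattern, assuming only the standard kernel
u32 type (the helper name is illustrative, not from the patch):

    #include <linux/types.h>

    /*
     * Read-modify-write a register that mixes control bits with
     * write-1-to-clear status bits: drop the W1C bits from the snapshot
     * so they are never unintentionally acknowledged on write-back.
     */
    static inline u32 rmw_preserve_w1c(u32 snapshot, u32 set, u32 clear,
    				       u32 w1c_mask)
    {
    	snapshot &= ~w1c_mask;	/* never write 1 back to W1C status bits */
    	snapshot &= ~clear;
    	return snapshot | set;
    }

The new xhci_disable_and_free_slot() helper, in turn, pairs the Disable Slot command with
freeing the matching xhci_virt_device instance, so every caller frees exactly the device
it looked up rather than re-reading xhci->devs[slot_id] after the command completes.
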
+@@ -4245,8 +4256,7 @@ int xhci_alloc_dev(struct usb_hcd *hcd, struct usb_device *udev) + return 1; + + disable_slot: +- xhci_disable_slot(xhci, udev->slot_id); +- xhci_free_virt_device(xhci, udev->slot_id); ++ xhci_disable_and_free_slot(xhci, udev->slot_id); + + return 0; + } +@@ -4382,8 +4392,7 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev, + dev_warn(&udev->dev, "Device not responding to setup %s.\n", act); + + mutex_unlock(&xhci->mutex); +- ret = xhci_disable_slot(xhci, udev->slot_id); +- xhci_free_virt_device(xhci, udev->slot_id); ++ ret = xhci_disable_and_free_slot(xhci, udev->slot_id); + if (!ret) { + if (xhci_alloc_dev(hcd, udev) == 1) + xhci_setup_addressable_virt_dev(xhci, udev); +diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h +index a20f4e7cd43a80..85d5b964bf1e9b 100644 +--- a/drivers/usb/host/xhci.h ++++ b/drivers/usb/host/xhci.h +@@ -1791,7 +1791,7 @@ void xhci_dbg_trace(struct xhci_hcd *xhci, void (*trace)(struct va_format *), + /* xHCI memory management */ + void xhci_mem_cleanup(struct xhci_hcd *xhci); + int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags); +-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id); ++void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev, int slot_id); + int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id, struct usb_device *udev, gfp_t flags); + int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *udev); + void xhci_copy_ep0_dequeue_into_input_ctx(struct xhci_hcd *xhci, +@@ -1888,6 +1888,7 @@ void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev); + int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev, + struct usb_tt *tt, gfp_t mem_flags); + int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id); ++int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id); + int xhci_ext_cap_init(struct xhci_hcd *xhci); + + int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup); +diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c +index 2970967a4fd28b..36f756f9b7f603 100644 +--- a/drivers/usb/musb/omap2430.c ++++ b/drivers/usb/musb/omap2430.c +@@ -400,7 +400,7 @@ static int omap2430_probe(struct platform_device *pdev) + ret = platform_device_add_resources(musb, pdev->resource, pdev->num_resources); + if (ret) { + dev_err(&pdev->dev, "failed to add resources\n"); +- goto err2; ++ goto err_put_control_otghs; + } + + if (populate_irqs) { +@@ -413,7 +413,7 @@ static int omap2430_probe(struct platform_device *pdev) + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) { + ret = -EINVAL; +- goto err2; ++ goto err_put_control_otghs; + } + + musb_res[i].start = res->start; +@@ -441,14 +441,14 @@ static int omap2430_probe(struct platform_device *pdev) + ret = platform_device_add_resources(musb, musb_res, i); + if (ret) { + dev_err(&pdev->dev, "failed to add IRQ resources\n"); +- goto err2; ++ goto err_put_control_otghs; + } + } + + ret = platform_device_add_data(musb, pdata, sizeof(*pdata)); + if (ret) { + dev_err(&pdev->dev, "failed to add platform_data\n"); +- goto err2; ++ goto err_put_control_otghs; + } + + pm_runtime_enable(glue->dev); +@@ -463,7 +463,9 @@ static int omap2430_probe(struct platform_device *pdev) + + err3: + pm_runtime_disable(glue->dev); +- ++err_put_control_otghs: ++ if (!IS_ERR(glue->control_otghs)) ++ put_device(glue->control_otghs); + err2: + platform_device_put(musb); + +@@ -477,6 +479,8 @@ static void omap2430_remove(struct platform_device 
*pdev) + + platform_device_unregister(glue->musb); + pm_runtime_disable(glue->dev); ++ if (!IS_ERR(glue->control_otghs)) ++ put_device(glue->control_otghs); + } + + #ifdef CONFIG_PM +diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c +index c18dfa2ca034e7..dc655bd640dc22 100644 +--- a/drivers/usb/storage/realtek_cr.c ++++ b/drivers/usb/storage/realtek_cr.c +@@ -252,7 +252,7 @@ static int rts51x_bulk_transport(struct us_data *us, u8 lun, + return USB_STOR_TRANSPORT_ERROR; + } + +- residue = bcs->Residue; ++ residue = le32_to_cpu(bcs->Residue); + if (bcs->Tag != us->tag) + return USB_STOR_TRANSPORT_ERROR; + +diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h +index 54f0b1c83317cd..dfa5276a5a43e2 100644 +--- a/drivers/usb/storage/unusual_devs.h ++++ b/drivers/usb/storage/unusual_devs.h +@@ -934,6 +934,13 @@ UNUSUAL_DEV( 0x05e3, 0x0723, 0x9451, 0x9451, + USB_SC_DEVICE, USB_PR_DEVICE, NULL, + US_FL_SANE_SENSE ), + ++/* Added by Maël GUERIN */ ++UNUSUAL_DEV( 0x0603, 0x8611, 0x0000, 0xffff, ++ "Novatek", ++ "NTK96550-based camera", ++ USB_SC_SCSI, USB_PR_BULK, NULL, ++ US_FL_BULK_IGNORE_TAG ), ++ + /* + * Reported by Hanno Boeck + * Taken from the Lycoris Kernel +@@ -1494,6 +1501,28 @@ UNUSUAL_DEV( 0x0bc2, 0x3332, 0x0000, 0x9999, + USB_SC_DEVICE, USB_PR_DEVICE, NULL, + US_FL_NO_WP_DETECT ), + ++/* ++ * Reported by Zenm Chen ++ * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch ++ * the device into Wi-Fi mode. ++ */ ++UNUSUAL_DEV( 0x0bda, 0x1a2b, 0x0000, 0xffff, ++ "Realtek", ++ "DISK", ++ USB_SC_DEVICE, USB_PR_DEVICE, NULL, ++ US_FL_IGNORE_DEVICE ), ++ ++/* ++ * Reported by Zenm Chen ++ * Ignore driver CD mode, otherwise usb_modeswitch may fail to switch ++ * the device into Wi-Fi mode. ++ */ ++UNUSUAL_DEV( 0x0bda, 0xa192, 0x0000, 0xffff, ++ "Realtek", ++ "DISK", ++ USB_SC_DEVICE, USB_PR_DEVICE, NULL, ++ US_FL_IGNORE_DEVICE ), ++ + UNUSUAL_DEV( 0x0d49, 0x7310, 0x0000, 0x9999, + "Maxtor", + "USB to SATA", +diff --git a/drivers/usb/typec/tcpm/maxim_contaminant.c b/drivers/usb/typec/tcpm/maxim_contaminant.c +index 0cdda06592fd3c..af8da6dc60ae0b 100644 +--- a/drivers/usb/typec/tcpm/maxim_contaminant.c ++++ b/drivers/usb/typec/tcpm/maxim_contaminant.c +@@ -188,6 +188,11 @@ static int max_contaminant_read_comparators(struct max_tcpci_chip *chip, u8 *ven + if (ret < 0) + return ret; + ++ /* Disable low power mode */ ++ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL, ++ FIELD_PREP(CCLPMODESEL, ++ LOW_POWER_MODE_DISABLE)); ++ + /* Sleep to allow comparators settle */ + usleep_range(5000, 6000); + ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, TCPC_TCPC_CTRL_ORIENTATION, PLUG_ORNT_CC1); +@@ -324,6 +329,39 @@ static int max_contaminant_enable_dry_detection(struct max_tcpci_chip *chip) + return 0; + } + ++static int max_contaminant_enable_toggling(struct max_tcpci_chip *chip) ++{ ++ struct regmap *regmap = chip->data.regmap; ++ int ret; ++ ++ /* Disable dry detection if enabled. 
*/ ++ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCLPMODESEL, ++ FIELD_PREP(CCLPMODESEL, ++ LOW_POWER_MODE_DISABLE)); ++ if (ret) ++ return ret; ++ ++ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL1, CCCONNDRY, 0); ++ if (ret) ++ return ret; ++ ++ ret = max_tcpci_write8(chip, TCPC_ROLE_CTRL, TCPC_ROLE_CTRL_DRP | ++ FIELD_PREP(TCPC_ROLE_CTRL_CC1, ++ TCPC_ROLE_CTRL_CC_RD) | ++ FIELD_PREP(TCPC_ROLE_CTRL_CC2, ++ TCPC_ROLE_CTRL_CC_RD)); ++ if (ret) ++ return ret; ++ ++ ret = regmap_update_bits(regmap, TCPC_TCPC_CTRL, ++ TCPC_TCPC_CTRL_EN_LK4CONN_ALRT, ++ TCPC_TCPC_CTRL_EN_LK4CONN_ALRT); ++ if (ret) ++ return ret; ++ ++ return max_tcpci_write8(chip, TCPC_COMMAND, TCPC_CMD_LOOK4CONNECTION); ++} ++ + bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect_while_debounce, + bool *cc_handled) + { +@@ -340,6 +378,12 @@ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect + if (ret < 0) + return false; + ++ if (cc_status & TCPC_CC_STATUS_TOGGLING) { ++ if (chip->contaminant_state == DETECTED) ++ return true; ++ return false; ++ } ++ + if (chip->contaminant_state == NOT_DETECTED || chip->contaminant_state == SINK) { + if (!disconnect_while_debounce) + msleep(100); +@@ -372,6 +416,12 @@ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect + max_contaminant_enable_dry_detection(chip); + return true; + } ++ ++ ret = max_contaminant_enable_toggling(chip); ++ if (ret) ++ dev_err(chip->dev, ++ "Failed to enable toggling, ret=%d", ++ ret); + } + } else if (chip->contaminant_state == DETECTED) { + if (!(cc_status & TCPC_CC_STATUS_TOGGLING)) { +@@ -379,6 +429,14 @@ bool max_contaminant_is_contaminant(struct max_tcpci_chip *chip, bool disconnect + if (chip->contaminant_state == DETECTED) { + max_contaminant_enable_dry_detection(chip); + return true; ++ } else { ++ ret = max_contaminant_enable_toggling(chip); ++ if (ret) { ++ dev_err(chip->dev, ++ "Failed to enable toggling, ret=%d", ++ ret); ++ return true; ++ } + } + } + } +diff --git a/drivers/usb/typec/tcpm/tcpci_maxim.h b/drivers/usb/typec/tcpm/tcpci_maxim.h +index 76270d5c283880..b33540a42a953d 100644 +--- a/drivers/usb/typec/tcpm/tcpci_maxim.h ++++ b/drivers/usb/typec/tcpm/tcpci_maxim.h +@@ -21,6 +21,7 @@ + #define CCOVPDIS BIT(6) + #define SBURPCTRL BIT(5) + #define CCLPMODESEL GENMASK(4, 3) ++#define LOW_POWER_MODE_DISABLE 0 + #define ULTRA_LOW_POWER_MODE 1 + #define CCRPCTRL GENMASK(2, 0) + #define UA_1_SRC 1 +diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c +index 802153e230730b..66a0f060770ef2 100644 +--- a/drivers/vhost/vsock.c ++++ b/drivers/vhost/vsock.c +@@ -344,6 +344,9 @@ vhost_vsock_alloc_skb(struct vhost_virtqueue *vq, + + len = iov_length(vq->iov, out); + ++ if (len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE + VIRTIO_VSOCK_SKB_HEADROOM) ++ return NULL; ++ + /* len contains both payload and hdr */ + skb = virtio_vsock_alloc_skb(len, GFP_KERNEL); + if (!skb) +@@ -367,8 +370,7 @@ vhost_vsock_alloc_skb(struct vhost_virtqueue *vq, + return skb; + + /* The pkt is too big or the length in the header is invalid */ +- if (payload_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE || +- payload_len + sizeof(*hdr) > len) { ++ if (payload_len + sizeof(*hdr) > len) { + kfree_skb(skb); + return NULL; + } +diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c +index f9cdbf8c53e34b..37bd18730fe0df 100644 +--- a/drivers/video/console/vgacon.c ++++ b/drivers/video/console/vgacon.c +@@ -1168,7 +1168,7 @@ static bool vgacon_scroll(struct vc_data *c, unsigned 
int t, unsigned int b, + c->vc_screenbuf_size - delta); + c->vc_origin = vga_vram_end - c->vc_screenbuf_size; + vga_rolled_over = 0; +- } else if (oldo - delta >= (unsigned long)c->vc_screenbuf) ++ } else + c->vc_origin -= delta; + c->vc_scr_end = c->vc_origin + c->vc_screenbuf_size; + scr_memsetw((u16 *) (c->vc_origin), c->vc_video_erase_char, +diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c +index 8abc066ce51fb4..94fd6f2404cdce 100644 +--- a/fs/btrfs/ctree.c ++++ b/fs/btrfs/ctree.c +@@ -198,7 +198,7 @@ struct extent_buffer *btrfs_root_node(struct btrfs_root *root) + * the inc_not_zero dance and if it doesn't work then + * synchronize_rcu and try again. + */ +- if (atomic_inc_not_zero(&eb->refs)) { ++ if (refcount_inc_not_zero(&eb->refs)) { + rcu_read_unlock(); + break; + } +@@ -283,7 +283,14 @@ int btrfs_copy_root(struct btrfs_trans_handle *trans, + + write_extent_buffer_fsid(cow, fs_info->fs_devices->metadata_uuid); + +- WARN_ON(btrfs_header_generation(buf) > trans->transid); ++ if (unlikely(btrfs_header_generation(buf) > trans->transid)) { ++ btrfs_tree_unlock(cow); ++ free_extent_buffer(cow); ++ ret = -EUCLEAN; ++ btrfs_abort_transaction(trans, ret); ++ return ret; ++ } ++ + if (new_root_objectid == BTRFS_TREE_RELOC_OBJECTID) + ret = btrfs_inc_ref(trans, root, cow, 1); + else +@@ -549,7 +556,7 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans, + btrfs_abort_transaction(trans, ret); + goto error_unlock_cow; + } +- atomic_inc(&cow->refs); ++ refcount_inc(&cow->refs); + rcu_assign_pointer(root->node, cow); + + ret = btrfs_free_tree_block(trans, btrfs_root_id(root), buf, +@@ -1081,7 +1088,7 @@ static noinline int balance_level(struct btrfs_trans_handle *trans, + /* update the path */ + if (left) { + if (btrfs_header_nritems(left) > orig_slot) { +- atomic_inc(&left->refs); ++ refcount_inc(&left->refs); + /* left was locked after cow */ + path->nodes[level] = left; + path->slots[level + 1] -= 1; +@@ -1685,7 +1692,7 @@ static struct extent_buffer *btrfs_search_slot_get_root(struct btrfs_root *root, + + if (p->search_commit_root) { + b = root->commit_root; +- atomic_inc(&b->refs); ++ refcount_inc(&b->refs); + level = btrfs_header_level(b); + /* + * Ensure that all callers have set skip_locking when +@@ -2886,7 +2893,7 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans, + free_extent_buffer(old); + + add_root_to_dirty_list(root); +- atomic_inc(&c->refs); ++ refcount_inc(&c->refs); + path->nodes[level] = c; + path->locks[level] = BTRFS_WRITE_LOCK; + path->slots[level] = 0; +@@ -4443,7 +4450,7 @@ static noinline int btrfs_del_leaf(struct btrfs_trans_handle *trans, + + root_sub_used_bytes(root); + +- atomic_inc(&leaf->refs); ++ refcount_inc(&leaf->refs); + ret = btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1); + free_extent_buffer_stale(leaf); + if (ret < 0) +@@ -4528,7 +4535,7 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root, + * for possible call to btrfs_del_ptr below + */ + slot = path->slots[1]; +- atomic_inc(&leaf->refs); ++ refcount_inc(&leaf->refs); + /* + * We want to be able to at least push one item to the + * left neighbour leaf, and that's the first item. 
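
The fs/btrfs hunks above and below convert the extent buffer reference count from
atomic_t to refcount_t. refcount_t saturates instead of wrapping on overflow and warns on
increment-from-zero, turning refcounting bugs into loud warnings rather than silent
use-after-free. A minimal sketch of the get/put pattern under these APIs, using an
illustrative object type that is not taken from the patch:

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct demo_obj {
    	refcount_t refs;
    };

    static struct demo_obj *demo_tryget(struct demo_obj *obj)
    {
    	/* Fails once refs has already dropped to zero. */
    	if (!refcount_inc_not_zero(&obj->refs))
    		return NULL;
    	return obj;
    }

    static void demo_put(struct demo_obj *obj)
    {
    	/* The last reference frees the object. */
    	if (refcount_dec_and_test(&obj->refs))
    		kfree(obj);
    }

Note that free_extent_buffer() keeps one lockless fast path on the underlying counter,
atomic_try_cmpxchg(&eb->refs.refs, ...), which is why that hunk reaches into the atomic
embedded inside refcount_t instead of using the refcount_* wrappers.
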
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 5f16e6a79d1088..528ac11505de96 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -6342,7 +6342,7 @@ int btrfs_drop_subtree(struct btrfs_trans_handle *trans, + + btrfs_assert_tree_write_locked(parent); + parent_level = btrfs_header_level(parent); +- atomic_inc(&parent->refs); ++ refcount_inc(&parent->refs); + path->nodes[parent_level] = parent; + path->slots[parent_level] = btrfs_header_nritems(parent); + +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 1dc931c4937fc0..3711a5d073423d 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -77,7 +77,7 @@ void btrfs_extent_buffer_leak_debug_check(struct btrfs_fs_info *fs_info) + struct extent_buffer, leak_list); + pr_err( + "BTRFS: buffer leak start %llu len %u refs %d bflags %lu owner %llu\n", +- eb->start, eb->len, atomic_read(&eb->refs), eb->bflags, ++ eb->start, eb->len, refcount_read(&eb->refs), eb->bflags, + btrfs_header_owner(eb)); + list_del(&eb->leak_list); + WARN_ON_ONCE(1); +@@ -782,7 +782,7 @@ static void submit_extent_folio(struct btrfs_bio_ctrl *bio_ctrl, + + static int attach_extent_buffer_folio(struct extent_buffer *eb, + struct folio *folio, +- struct btrfs_subpage *prealloc) ++ struct btrfs_folio_state *prealloc) + { + struct btrfs_fs_info *fs_info = eb->fs_info; + int ret = 0; +@@ -806,7 +806,7 @@ static int attach_extent_buffer_folio(struct extent_buffer *eb, + + /* Already mapped, just free prealloc */ + if (folio_test_private(folio)) { +- btrfs_free_subpage(prealloc); ++ btrfs_free_folio_state(prealloc); + return 0; + } + +@@ -815,7 +815,7 @@ static int attach_extent_buffer_folio(struct extent_buffer *eb, + folio_attach_private(folio, prealloc); + else + /* Do new allocation to attach subpage */ +- ret = btrfs_attach_subpage(fs_info, folio, BTRFS_SUBPAGE_METADATA); ++ ret = btrfs_attach_folio_state(fs_info, folio, BTRFS_SUBPAGE_METADATA); + return ret; + } + +@@ -831,7 +831,7 @@ int set_folio_extent_mapped(struct folio *folio) + fs_info = folio_to_fs_info(folio); + + if (btrfs_is_subpage(fs_info, folio)) +- return btrfs_attach_subpage(fs_info, folio, BTRFS_SUBPAGE_DATA); ++ return btrfs_attach_folio_state(fs_info, folio, BTRFS_SUBPAGE_DATA); + + folio_attach_private(folio, (void *)EXTENT_FOLIO_PRIVATE); + return 0; +@@ -848,7 +848,7 @@ void clear_folio_extent_mapped(struct folio *folio) + + fs_info = folio_to_fs_info(folio); + if (btrfs_is_subpage(fs_info, folio)) +- return btrfs_detach_subpage(fs_info, folio, BTRFS_SUBPAGE_DATA); ++ return btrfs_detach_folio_state(fs_info, folio, BTRFS_SUBPAGE_DATA); + + folio_detach_private(folio); + } +@@ -1961,7 +1961,7 @@ static inline struct extent_buffer *find_get_eb(struct xa_state *xas, unsigned l + if (!eb) + return NULL; + +- if (!atomic_inc_not_zero(&eb->refs)) { ++ if (!refcount_inc_not_zero(&eb->refs)) { + xas_reset(xas); + goto retry; + } +@@ -2012,7 +2012,7 @@ static struct extent_buffer *find_extent_buffer_nolock( + + rcu_read_lock(); + eb = xa_load(&fs_info->buffer_tree, index); +- if (eb && !atomic_inc_not_zero(&eb->refs)) ++ if (eb && !refcount_inc_not_zero(&eb->refs)) + eb = NULL; + rcu_read_unlock(); + return eb; +@@ -2731,13 +2731,13 @@ static int extent_buffer_under_io(const struct extent_buffer *eb) + + static bool folio_range_has_eb(struct folio *folio) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + + lockdep_assert_held(&folio->mapping->i_private_lock); + + if (folio_test_private(folio)) { +- subpage = 
folio_get_private(folio); +- if (atomic_read(&subpage->eb_refs)) ++ bfs = folio_get_private(folio); ++ if (atomic_read(&bfs->eb_refs)) + return true; + } + return false; +@@ -2787,7 +2787,7 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo + * attached to one dummy eb, no sharing. + */ + if (!mapped) { +- btrfs_detach_subpage(fs_info, folio, BTRFS_SUBPAGE_METADATA); ++ btrfs_detach_folio_state(fs_info, folio, BTRFS_SUBPAGE_METADATA); + return; + } + +@@ -2798,7 +2798,7 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo + * page range and no unfinished IO. + */ + if (!folio_range_has_eb(folio)) +- btrfs_detach_subpage(fs_info, folio, BTRFS_SUBPAGE_METADATA); ++ btrfs_detach_folio_state(fs_info, folio, BTRFS_SUBPAGE_METADATA); + + spin_unlock(&mapping->i_private_lock); + } +@@ -2842,7 +2842,7 @@ static struct extent_buffer *__alloc_extent_buffer(struct btrfs_fs_info *fs_info + btrfs_leak_debug_add_eb(eb); + + spin_lock_init(&eb->refs_lock); +- atomic_set(&eb->refs, 1); ++ refcount_set(&eb->refs, 1); + + ASSERT(eb->len <= BTRFS_MAX_METADATA_BLOCKSIZE); + +@@ -2975,13 +2975,13 @@ static void check_buffer_tree_ref(struct extent_buffer *eb) + * once io is initiated, TREE_REF can no longer be cleared, so that is + * the moment at which any such race is best fixed. + */ +- refs = atomic_read(&eb->refs); ++ refs = refcount_read(&eb->refs); + if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) + return; + + spin_lock(&eb->refs_lock); + if (!test_and_set_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) +- atomic_inc(&eb->refs); ++ refcount_inc(&eb->refs); + spin_unlock(&eb->refs_lock); + } + +@@ -3047,7 +3047,7 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info, + return ERR_PTR(ret); + } + if (exists) { +- if (!atomic_inc_not_zero(&exists->refs)) { ++ if (!refcount_inc_not_zero(&exists->refs)) { + /* The extent buffer is being freed, retry. */ + xa_unlock_irq(&fs_info->buffer_tree); + goto again; +@@ -3092,7 +3092,7 @@ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info, + * just overwrite folio private. + */ + exists = folio_get_private(folio); +- if (atomic_inc_not_zero(&exists->refs)) ++ if (refcount_inc_not_zero(&exists->refs)) + return exists; + + WARN_ON(folio_test_dirty(folio)); +@@ -3141,7 +3141,7 @@ static bool check_eb_alignment(struct btrfs_fs_info *fs_info, u64 start) + * The caller needs to free the existing folios and retry using the same order. + */ + static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, +- struct btrfs_subpage *prealloc, ++ struct btrfs_folio_state *prealloc, + struct extent_buffer **found_eb_ret) + { + +@@ -3224,7 +3224,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + int attached = 0; + struct extent_buffer *eb; + struct extent_buffer *existing_eb = NULL; +- struct btrfs_subpage *prealloc = NULL; ++ struct btrfs_folio_state *prealloc = NULL; + u64 lockdep_owner = owner_root; + bool page_contig = true; + int uptodate = 1; +@@ -3269,7 +3269,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + * manually if we exit earlier. 
+ */ + if (btrfs_meta_is_subpage(fs_info)) { +- prealloc = btrfs_alloc_subpage(fs_info, PAGE_SIZE, BTRFS_SUBPAGE_METADATA); ++ prealloc = btrfs_alloc_folio_state(fs_info, PAGE_SIZE, BTRFS_SUBPAGE_METADATA); + if (IS_ERR(prealloc)) { + ret = PTR_ERR(prealloc); + goto out; +@@ -3280,7 +3280,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + /* Allocate all pages first. */ + ret = alloc_eb_folio_array(eb, true); + if (ret < 0) { +- btrfs_free_subpage(prealloc); ++ btrfs_free_folio_state(prealloc); + goto out; + } + +@@ -3362,7 +3362,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + goto out; + } + if (existing_eb) { +- if (!atomic_inc_not_zero(&existing_eb->refs)) { ++ if (!refcount_inc_not_zero(&existing_eb->refs)) { + xa_unlock_irq(&fs_info->buffer_tree); + goto again; + } +@@ -3391,7 +3391,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + return eb; + + out: +- WARN_ON(!atomic_dec_and_test(&eb->refs)); ++ WARN_ON(!refcount_dec_and_test(&eb->refs)); + + /* + * Any attached folios need to be detached before we unlock them. This +@@ -3437,8 +3437,7 @@ static int release_extent_buffer(struct extent_buffer *eb) + { + lockdep_assert_held(&eb->refs_lock); + +- WARN_ON(atomic_read(&eb->refs) == 0); +- if (atomic_dec_and_test(&eb->refs)) { ++ if (refcount_dec_and_test(&eb->refs)) { + struct btrfs_fs_info *fs_info = eb->fs_info; + + spin_unlock(&eb->refs_lock); +@@ -3484,22 +3483,26 @@ void free_extent_buffer(struct extent_buffer *eb) + if (!eb) + return; + +- refs = atomic_read(&eb->refs); ++ refs = refcount_read(&eb->refs); + while (1) { +- if ((!test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) && refs <= 3) +- || (test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags) && +- refs == 1)) ++ if (test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags)) { ++ if (refs == 1) ++ break; ++ } else if (refs <= 3) { + break; +- if (atomic_try_cmpxchg(&eb->refs, &refs, refs - 1)) ++ } ++ ++ /* Optimization to avoid locking eb->refs_lock. 
*/ ++ if (atomic_try_cmpxchg(&eb->refs.refs, &refs, refs - 1)) + return; + } + + spin_lock(&eb->refs_lock); +- if (atomic_read(&eb->refs) == 2 && ++ if (refcount_read(&eb->refs) == 2 && + test_bit(EXTENT_BUFFER_STALE, &eb->bflags) && + !extent_buffer_under_io(eb) && + test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) +- atomic_dec(&eb->refs); ++ refcount_dec(&eb->refs); + + /* + * I know this is terrible, but it's temporary until we stop tracking +@@ -3516,9 +3519,9 @@ void free_extent_buffer_stale(struct extent_buffer *eb) + spin_lock(&eb->refs_lock); + set_bit(EXTENT_BUFFER_STALE, &eb->bflags); + +- if (atomic_read(&eb->refs) == 2 && !extent_buffer_under_io(eb) && ++ if (refcount_read(&eb->refs) == 2 && !extent_buffer_under_io(eb) && + test_and_clear_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)) +- atomic_dec(&eb->refs); ++ refcount_dec(&eb->refs); + release_extent_buffer(eb); + } + +@@ -3576,7 +3579,7 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans, + btree_clear_folio_dirty_tag(folio); + folio_unlock(folio); + } +- WARN_ON(atomic_read(&eb->refs) == 0); ++ WARN_ON(refcount_read(&eb->refs) == 0); + } + + void set_extent_buffer_dirty(struct extent_buffer *eb) +@@ -3587,7 +3590,7 @@ void set_extent_buffer_dirty(struct extent_buffer *eb) + + was_dirty = test_and_set_bit(EXTENT_BUFFER_DIRTY, &eb->bflags); + +- WARN_ON(atomic_read(&eb->refs) == 0); ++ WARN_ON(refcount_read(&eb->refs) == 0); + WARN_ON(!test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags)); + WARN_ON(test_bit(EXTENT_BUFFER_ZONED_ZEROOUT, &eb->bflags)); + +@@ -3713,7 +3716,7 @@ int read_extent_buffer_pages_nowait(struct extent_buffer *eb, int mirror_num, + + eb->read_mirror = 0; + check_buffer_tree_ref(eb); +- atomic_inc(&eb->refs); ++ refcount_inc(&eb->refs); + + bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES, + REQ_OP_READ | REQ_META, eb->fs_info, +@@ -4301,15 +4304,18 @@ static int try_release_subpage_extent_buffer(struct folio *folio) + unsigned long end = index + (PAGE_SIZE >> fs_info->sectorsize_bits) - 1; + int ret; + +- xa_lock_irq(&fs_info->buffer_tree); ++ rcu_read_lock(); + xa_for_each_range(&fs_info->buffer_tree, index, eb, start, end) { + /* + * The same as try_release_extent_buffer(), to ensure the eb + * won't disappear out from under us. + */ + spin_lock(&eb->refs_lock); +- if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { ++ rcu_read_unlock(); ++ ++ if (refcount_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { + spin_unlock(&eb->refs_lock); ++ rcu_read_lock(); + continue; + } + +@@ -4328,11 +4334,10 @@ static int try_release_subpage_extent_buffer(struct folio *folio) + * check the folio private at the end. And + * release_extent_buffer() will release the refs_lock. + */ +- xa_unlock_irq(&fs_info->buffer_tree); + release_extent_buffer(eb); +- xa_lock_irq(&fs_info->buffer_tree); ++ rcu_read_lock(); + } +- xa_unlock_irq(&fs_info->buffer_tree); ++ rcu_read_unlock(); + + /* + * Finally to check if we have cleared folio private, as if we have +@@ -4345,7 +4350,6 @@ static int try_release_subpage_extent_buffer(struct folio *folio) + ret = 0; + spin_unlock(&folio->mapping->i_private_lock); + return ret; +- + } + + int try_release_extent_buffer(struct folio *folio) +@@ -4374,7 +4378,7 @@ int try_release_extent_buffer(struct folio *folio) + * this page. 
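+ * (A refcount of exactly 1 at this point means the buffer tree holds
+ * the only remaining reference, so no new user can appear while we
+ * hold eb->refs_lock and it is safe to attempt the release.)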
+ */ + spin_lock(&eb->refs_lock); +- if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { ++ if (refcount_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) { + spin_unlock(&eb->refs_lock); + spin_unlock(&folio->mapping->i_private_lock); + return 0; +diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h +index e36e8d6a00bc50..65bb87f1dce61e 100644 +--- a/fs/btrfs/extent_io.h ++++ b/fs/btrfs/extent_io.h +@@ -98,7 +98,7 @@ struct extent_buffer { + void *addr; + + spinlock_t refs_lock; +- atomic_t refs; ++ refcount_t refs; + int read_mirror; + /* >= 0 if eb belongs to a log tree, -1 otherwise */ + s8 log_index; +diff --git a/fs/btrfs/fiemap.c b/fs/btrfs/fiemap.c +index 43bf0979fd5394..7935586a9dbd0f 100644 +--- a/fs/btrfs/fiemap.c ++++ b/fs/btrfs/fiemap.c +@@ -320,7 +320,7 @@ static int fiemap_next_leaf_item(struct btrfs_inode *inode, struct btrfs_path *p + * the cost of allocating a new one. + */ + ASSERT(test_bit(EXTENT_BUFFER_UNMAPPED, &clone->bflags)); +- atomic_inc(&clone->refs); ++ refcount_inc(&clone->refs); + + ret = btrfs_next_leaf(inode->root, path); + if (ret != 0) +diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c +index a83c268f7f87ca..d37ce8200a1026 100644 +--- a/fs/btrfs/free-space-tree.c ++++ b/fs/btrfs/free-space-tree.c +@@ -1431,12 +1431,17 @@ static int __add_block_group_free_space(struct btrfs_trans_handle *trans, + set_bit(BLOCK_GROUP_FLAG_FREE_SPACE_ADDED, &block_group->runtime_flags); + + ret = add_new_free_space_info(trans, block_group, path); +- if (ret) ++ if (ret) { ++ btrfs_abort_transaction(trans, ret); + return ret; ++ } ++ ++ ret = __add_to_free_space_tree(trans, block_group, path, ++ block_group->start, block_group->length); ++ if (ret) ++ btrfs_abort_transaction(trans, ret); + +- return __add_to_free_space_tree(trans, block_group, path, +- block_group->start, +- block_group->length); ++ return 0; + } + + int add_block_group_free_space(struct btrfs_trans_handle *trans, +@@ -1456,16 +1461,14 @@ int add_block_group_free_space(struct btrfs_trans_handle *trans, + path = btrfs_alloc_path(); + if (!path) { + ret = -ENOMEM; ++ btrfs_abort_transaction(trans, ret); + goto out; + } + + ret = __add_block_group_free_space(trans, block_group, path); +- + out: + btrfs_free_path(path); + mutex_unlock(&block_group->free_space_lock); +- if (ret) +- btrfs_abort_transaction(trans, ret); + return ret; + } + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 4ae34c22ff1de1..df4c8312aae39d 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -7364,13 +7364,13 @@ struct extent_map *btrfs_create_io_em(struct btrfs_inode *inode, u64 start, + static void wait_subpage_spinlock(struct folio *folio) + { + struct btrfs_fs_info *fs_info = folio_to_fs_info(folio); +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + + if (!btrfs_is_subpage(fs_info, folio)) + return; + + ASSERT(folio_test_private(folio) && folio_get_private(folio)); +- subpage = folio_get_private(folio); ++ bfs = folio_get_private(folio); + + /* + * This may look insane as we just acquire the spinlock and release it, +@@ -7383,8 +7383,8 @@ static void wait_subpage_spinlock(struct folio *folio) + * Here we just acquire the spinlock so that all existing callers + * should exit and we're safe to release/invalidate the page. 
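+ * (The empty lock/unlock pair below is purely a barrier: it waits out
+ * any concurrent holder of bfs->lock without modifying any state.)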
+ */ +- spin_lock_irq(&subpage->lock); +- spin_unlock_irq(&subpage->lock); ++ spin_lock_irq(&bfs->lock); ++ spin_unlock_irq(&bfs->lock); + } + + static int btrfs_launder_folio(struct folio *folio) +diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c +index fc821aa446f02f..21605b03f51188 100644 +--- a/fs/btrfs/print-tree.c ++++ b/fs/btrfs/print-tree.c +@@ -223,7 +223,7 @@ static void print_eb_refs_lock(const struct extent_buffer *eb) + { + #ifdef CONFIG_BTRFS_DEBUG + btrfs_info(eb->fs_info, "refs %u lock_owner %u current %u", +- atomic_read(&eb->refs), eb->lock_owner, current->pid); ++ refcount_read(&eb->refs), eb->lock_owner, current->pid); + #endif + } + +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c +index e1a34e69927dd3..68cbb2b1e3df8e 100644 +--- a/fs/btrfs/qgroup.c ++++ b/fs/btrfs/qgroup.c +@@ -2348,7 +2348,7 @@ static int qgroup_trace_extent_swap(struct btrfs_trans_handle* trans, + btrfs_item_key_to_cpu(dst_path->nodes[dst_level], &key, 0); + + /* For src_path */ +- atomic_inc(&src_eb->refs); ++ refcount_inc(&src_eb->refs); + src_path->nodes[root_level] = src_eb; + src_path->slots[root_level] = dst_path->slots[root_level]; + src_path->locks[root_level] = 0; +@@ -2581,7 +2581,7 @@ static int qgroup_trace_subtree_swap(struct btrfs_trans_handle *trans, + goto out; + } + /* For dst_path */ +- atomic_inc(&dst_eb->refs); ++ refcount_inc(&dst_eb->refs); + dst_path->nodes[level] = dst_eb; + dst_path->slots[level] = 0; + dst_path->locks[level] = 0; +@@ -2673,7 +2673,7 @@ int btrfs_qgroup_trace_subtree(struct btrfs_trans_handle *trans, + * walk back up the tree (adjusting slot pointers as we go) + * and restart the search process. + */ +- atomic_inc(&root_eb->refs); /* For path */ ++ refcount_inc(&root_eb->refs); /* For path */ + path->nodes[root_level] = root_eb; + path->slots[root_level] = 0; + path->locks[root_level] = 0; /* so release_path doesn't try to unlock */ +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c +index 068c7a1ad73173..1e139d23bd75fb 100644 +--- a/fs/btrfs/relocation.c ++++ b/fs/btrfs/relocation.c +@@ -1535,7 +1535,7 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc, + + if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) { + level = btrfs_root_level(root_item); +- atomic_inc(&reloc_root->node->refs); ++ refcount_inc(&reloc_root->node->refs); + path->nodes[level] = reloc_root->node; + path->slots[level] = 0; + } else { +@@ -4358,7 +4358,7 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans, + } + + btrfs_backref_drop_node_buffer(node); +- atomic_inc(&cow->refs); ++ refcount_inc(&cow->refs); + node->eb = cow; + node->new_bytenr = cow->start; + +diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c +index d4f0192334936c..2951fdc5db4e39 100644 +--- a/fs/btrfs/subpage.c ++++ b/fs/btrfs/subpage.c +@@ -49,7 +49,7 @@ + * Implementation: + * + * - Common +- * Both metadata and data will use a new structure, btrfs_subpage, to ++ * Both metadata and data will use a new structure, btrfs_folio_state, to + * record the status of each sector inside a page. This provides the extra + * granularity needed. + * +@@ -63,10 +63,10 @@ + * This means a slightly higher tree locking latency. 
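+ *
+ * (E.g. on a 64K-page system with 4K sectors, one btrfs_folio_state
+ * tracks 16 blocks per folio in each of its bitmaps.)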
+ */ + +-int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info, +- struct folio *folio, enum btrfs_subpage_type type) ++int btrfs_attach_folio_state(const struct btrfs_fs_info *fs_info, ++ struct folio *folio, enum btrfs_folio_type type) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + + /* For metadata we don't support large folio yet. */ + if (type == BTRFS_SUBPAGE_METADATA) +@@ -87,18 +87,18 @@ int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info, + if (type == BTRFS_SUBPAGE_DATA && !btrfs_is_subpage(fs_info, folio)) + return 0; + +- subpage = btrfs_alloc_subpage(fs_info, folio_size(folio), type); +- if (IS_ERR(subpage)) +- return PTR_ERR(subpage); ++ bfs = btrfs_alloc_folio_state(fs_info, folio_size(folio), type); ++ if (IS_ERR(bfs)) ++ return PTR_ERR(bfs); + +- folio_attach_private(folio, subpage); ++ folio_attach_private(folio, bfs); + return 0; + } + +-void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *folio, +- enum btrfs_subpage_type type) ++void btrfs_detach_folio_state(const struct btrfs_fs_info *fs_info, struct folio *folio, ++ enum btrfs_folio_type type) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + + /* Either not subpage, or the folio already has private attached. */ + if (!folio_test_private(folio)) +@@ -108,15 +108,15 @@ void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *fol + if (type == BTRFS_SUBPAGE_DATA && !btrfs_is_subpage(fs_info, folio)) + return; + +- subpage = folio_detach_private(folio); +- ASSERT(subpage); +- btrfs_free_subpage(subpage); ++ bfs = folio_detach_private(folio); ++ ASSERT(bfs); ++ btrfs_free_folio_state(bfs); + } + +-struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info, +- size_t fsize, enum btrfs_subpage_type type) ++struct btrfs_folio_state *btrfs_alloc_folio_state(const struct btrfs_fs_info *fs_info, ++ size_t fsize, enum btrfs_folio_type type) + { +- struct btrfs_subpage *ret; ++ struct btrfs_folio_state *ret; + unsigned int real_size; + + ASSERT(fs_info->sectorsize < fsize); +@@ -136,11 +136,6 @@ struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info, + return ret; + } + +-void btrfs_free_subpage(struct btrfs_subpage *subpage) +-{ +- kfree(subpage); +-} +- + /* + * Increase the eb_refs of current subpage. 
+ * +@@ -152,7 +147,7 @@ void btrfs_free_subpage(struct btrfs_subpage *subpage) + */ + void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + + if (!btrfs_meta_is_subpage(fs_info)) + return; +@@ -160,13 +155,13 @@ void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio * + ASSERT(folio_test_private(folio) && folio->mapping); + lockdep_assert_held(&folio->mapping->i_private_lock); + +- subpage = folio_get_private(folio); +- atomic_inc(&subpage->eb_refs); ++ bfs = folio_get_private(folio); ++ atomic_inc(&bfs->eb_refs); + } + + void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + + if (!btrfs_meta_is_subpage(fs_info)) + return; +@@ -174,9 +169,9 @@ void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio * + ASSERT(folio_test_private(folio) && folio->mapping); + lockdep_assert_held(&folio->mapping->i_private_lock); + +- subpage = folio_get_private(folio); +- ASSERT(atomic_read(&subpage->eb_refs)); +- atomic_dec(&subpage->eb_refs); ++ bfs = folio_get_private(folio); ++ ASSERT(atomic_read(&bfs->eb_refs)); ++ atomic_dec(&bfs->eb_refs); + } + + static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info, +@@ -228,7 +223,7 @@ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len) + static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len); + const int nbits = (len >> fs_info->sectorsize_bits); + unsigned long flags; +@@ -238,7 +233,7 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info, + + btrfs_subpage_assert(fs_info, folio, start, len); + +- spin_lock_irqsave(&subpage->lock, flags); ++ spin_lock_irqsave(&bfs->lock, flags); + /* + * We have call sites passing @lock_page into + * extent_clear_unlock_delalloc() for compression path. +@@ -246,18 +241,18 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info, + * This @locked_page is locked by plain lock_page(), thus its + * subpage::locked is 0. Handle them in a special way. 
+ */ +- if (atomic_read(&subpage->nr_locked) == 0) { +- spin_unlock_irqrestore(&subpage->lock, flags); ++ if (atomic_read(&bfs->nr_locked) == 0) { ++ spin_unlock_irqrestore(&bfs->lock, flags); + return true; + } + +- for_each_set_bit_from(bit, subpage->bitmaps, start_bit + nbits) { +- clear_bit(bit, subpage->bitmaps); ++ for_each_set_bit_from(bit, bfs->bitmaps, start_bit + nbits) { ++ clear_bit(bit, bfs->bitmaps); + cleared++; + } +- ASSERT(atomic_read(&subpage->nr_locked) >= cleared); +- last = atomic_sub_and_test(cleared, &subpage->nr_locked); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ ASSERT(atomic_read(&bfs->nr_locked) >= cleared); ++ last = atomic_sub_and_test(cleared, &bfs->nr_locked); ++ spin_unlock_irqrestore(&bfs->lock, flags); + return last; + } + +@@ -280,7 +275,7 @@ static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info, + void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + + ASSERT(folio_test_locked(folio)); + +@@ -296,7 +291,7 @@ void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info, + * Since we own the page lock, no one else could touch subpage::locked + * and we are safe to do several atomic operations without spinlock. + */ +- if (atomic_read(&subpage->nr_locked) == 0) { ++ if (atomic_read(&bfs->nr_locked) == 0) { + /* No subpage lock, locked by plain lock_page(). */ + folio_unlock(folio); + return; +@@ -310,7 +305,7 @@ void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info, + void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info, + struct folio *folio, unsigned long bitmap) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio); + const int start_bit = blocks_per_folio * btrfs_bitmap_nr_locked; + unsigned long flags; +@@ -323,42 +318,42 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info, + return; + } + +- if (atomic_read(&subpage->nr_locked) == 0) { ++ if (atomic_read(&bfs->nr_locked) == 0) { + /* No subpage lock, locked by plain lock_page(). 
*/ + folio_unlock(folio); + return; + } + +- spin_lock_irqsave(&subpage->lock, flags); ++ spin_lock_irqsave(&bfs->lock, flags); + for_each_set_bit(bit, &bitmap, blocks_per_folio) { +- if (test_and_clear_bit(bit + start_bit, subpage->bitmaps)) ++ if (test_and_clear_bit(bit + start_bit, bfs->bitmaps)) + cleared++; + } +- ASSERT(atomic_read(&subpage->nr_locked) >= cleared); +- last = atomic_sub_and_test(cleared, &subpage->nr_locked); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ ASSERT(atomic_read(&bfs->nr_locked) >= cleared); ++ last = atomic_sub_and_test(cleared, &bfs->nr_locked); ++ spin_unlock_irqrestore(&bfs->lock, flags); + if (last) + folio_unlock(folio); + } + + #define subpage_test_bitmap_all_set(fs_info, folio, name) \ + ({ \ +- struct btrfs_subpage *subpage = folio_get_private(folio); \ ++ struct btrfs_folio_state *bfs = folio_get_private(folio); \ + const unsigned int blocks_per_folio = \ + btrfs_blocks_per_folio(fs_info, folio); \ + \ +- bitmap_test_range_all_set(subpage->bitmaps, \ ++ bitmap_test_range_all_set(bfs->bitmaps, \ + blocks_per_folio * btrfs_bitmap_nr_##name, \ + blocks_per_folio); \ + }) + + #define subpage_test_bitmap_all_zero(fs_info, folio, name) \ + ({ \ +- struct btrfs_subpage *subpage = folio_get_private(folio); \ ++ struct btrfs_folio_state *bfs = folio_get_private(folio); \ + const unsigned int blocks_per_folio = \ + btrfs_blocks_per_folio(fs_info, folio); \ + \ +- bitmap_test_range_all_zero(subpage->bitmaps, \ ++ bitmap_test_range_all_zero(bfs->bitmaps, \ + blocks_per_folio * btrfs_bitmap_nr_##name, \ + blocks_per_folio); \ + }) +@@ -366,43 +361,43 @@ void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info, + void btrfs_subpage_set_uptodate(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + uptodate, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + if (subpage_test_bitmap_all_set(fs_info, folio, uptodate)) + folio_mark_uptodate(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_clear_uptodate(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + uptodate, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + folio_clear_uptodate(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + dirty, start, len); + unsigned long flags; + 
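++	/*
++	 * Set the per-block dirty bits under bfs->lock first, then mark the
++	 * folio dirty as a whole so writeback can find it.
++	 */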
+- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_unlock_irqrestore(&bfs->lock, flags); + folio_mark_dirty(folio); + } + +@@ -419,17 +414,17 @@ void btrfs_subpage_set_dirty(const struct btrfs_fs_info *fs_info, + bool btrfs_subpage_clear_and_test_dirty(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + dirty, start, len); + unsigned long flags; + bool last = false; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + if (subpage_test_bitmap_all_zero(fs_info, folio, dirty)) + last = true; +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + return last; + } + +@@ -446,91 +441,108 @@ void btrfs_subpage_clear_dirty(const struct btrfs_fs_info *fs_info, + void btrfs_subpage_set_writeback(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + writeback, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ ++ /* ++ * Don't clear the TOWRITE tag when starting writeback on a still-dirty ++ * folio. Doing so can cause WB_SYNC_ALL writepages() to overlook it, ++ * assume writeback is complete, and exit too early — violating sync ++ * ordering guarantees. 
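++ * __folio_start_writeback(folio, true) keeps the tag in place; it is
++ * only dropped by hand further below once the folio has no dirty
++ * blocks left.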
++ */ + if (!folio_test_writeback(folio)) +- folio_start_writeback(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ __folio_start_writeback(folio, true); ++ if (!folio_test_dirty(folio)) { ++ struct address_space *mapping = folio_mapping(folio); ++ XA_STATE(xas, &mapping->i_pages, folio->index); ++ unsigned long flags; ++ ++ xas_lock_irqsave(&xas, flags); ++ xas_load(&xas); ++ xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE); ++ xas_unlock_irqrestore(&xas, flags); ++ } ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + writeback, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + if (subpage_test_bitmap_all_zero(fs_info, folio, writeback)) { + ASSERT(folio_test_writeback(folio)); + folio_end_writeback(folio); + } +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + ordered, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + folio_set_ordered(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_clear_ordered(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + ordered, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + if (subpage_test_bitmap_all_zero(fs_info, folio, ordered)) + folio_clear_ordered(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_set_checked(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + checked, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_set(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + if (subpage_test_bitmap_all_set(fs_info, folio, checked)) + 
folio_set_checked(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage = folio_get_private(folio); ++ struct btrfs_folio_state *bfs = folio_get_private(folio); + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, + checked, start, len); + unsigned long flags; + +- spin_lock_irqsave(&subpage->lock, flags); +- bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits); ++ spin_lock_irqsave(&bfs->lock, flags); ++ bitmap_clear(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits); + folio_clear_checked(folio); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + /* +@@ -541,16 +553,16 @@ void btrfs_subpage_clear_checked(const struct btrfs_fs_info *fs_info, + bool btrfs_subpage_test_##name(const struct btrfs_fs_info *fs_info, \ + struct folio *folio, u64 start, u32 len) \ + { \ +- struct btrfs_subpage *subpage = folio_get_private(folio); \ ++ struct btrfs_folio_state *bfs = folio_get_private(folio); \ + unsigned int start_bit = subpage_calc_start_bit(fs_info, folio, \ + name, start, len); \ + unsigned long flags; \ + bool ret; \ + \ +- spin_lock_irqsave(&subpage->lock, flags); \ +- ret = bitmap_test_range_all_set(subpage->bitmaps, start_bit, \ ++ spin_lock_irqsave(&bfs->lock, flags); \ ++ ret = bitmap_test_range_all_set(bfs->bitmaps, start_bit, \ + len >> fs_info->sectorsize_bits); \ +- spin_unlock_irqrestore(&subpage->lock, flags); \ ++ spin_unlock_irqrestore(&bfs->lock, flags); \ + return ret; \ + } + IMPLEMENT_BTRFS_SUBPAGE_TEST_OP(uptodate); +@@ -662,10 +674,10 @@ IMPLEMENT_BTRFS_PAGE_OPS(checked, folio_set_checked, folio_clear_checked, + { \ + const unsigned int blocks_per_folio = \ + btrfs_blocks_per_folio(fs_info, folio); \ +- const struct btrfs_subpage *subpage = folio_get_private(folio); \ ++ const struct btrfs_folio_state *bfs = folio_get_private(folio); \ + \ + ASSERT(blocks_per_folio <= BITS_PER_LONG); \ +- *dst = bitmap_read(subpage->bitmaps, \ ++ *dst = bitmap_read(bfs->bitmaps, \ + blocks_per_folio * btrfs_bitmap_nr_##name, \ + blocks_per_folio); \ + } +@@ -690,7 +702,7 @@ IMPLEMENT_BTRFS_PAGE_OPS(checked, folio_set_checked, folio_clear_checked, + void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + unsigned int start_bit; + unsigned int nbits; + unsigned long flags; +@@ -705,15 +717,15 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info, + + start_bit = subpage_calc_start_bit(fs_info, folio, dirty, start, len); + nbits = len >> fs_info->sectorsize_bits; +- subpage = folio_get_private(folio); +- ASSERT(subpage); +- spin_lock_irqsave(&subpage->lock, flags); +- if (unlikely(!bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits))) { ++ bfs = folio_get_private(folio); ++ ASSERT(bfs); ++ spin_lock_irqsave(&bfs->lock, flags); ++ if (unlikely(!bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits))) { + SUBPAGE_DUMP_BITMAP(fs_info, folio, dirty, start, len); +- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits)); ++ ASSERT(bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits)); + } +- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits)); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ 
ASSERT(bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits)); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + /* +@@ -726,7 +738,7 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info, + void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + unsigned long flags; + unsigned int start_bit; + unsigned int nbits; +@@ -736,19 +748,19 @@ void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info, + if (unlikely(!fs_info) || !btrfs_is_subpage(fs_info, folio)) + return; + +- subpage = folio_get_private(folio); ++ bfs = folio_get_private(folio); + start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len); + nbits = len >> fs_info->sectorsize_bits; +- spin_lock_irqsave(&subpage->lock, flags); ++ spin_lock_irqsave(&bfs->lock, flags); + /* Target range should not yet be locked. */ +- if (unlikely(!bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits))) { ++ if (unlikely(!bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits))) { + SUBPAGE_DUMP_BITMAP(fs_info, folio, locked, start, len); +- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits)); ++ ASSERT(bitmap_test_range_all_zero(bfs->bitmaps, start_bit, nbits)); + } +- bitmap_set(subpage->bitmaps, start_bit, nbits); +- ret = atomic_add_return(nbits, &subpage->nr_locked); ++ bitmap_set(bfs->bitmaps, start_bit, nbits); ++ ret = atomic_add_return(nbits, &bfs->nr_locked); + ASSERT(ret <= btrfs_blocks_per_folio(fs_info, folio)); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } + + /* +@@ -776,7 +788,7 @@ bool btrfs_meta_folio_clear_and_test_dirty(struct folio *folio, const struct ext + void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info, + struct folio *folio, u64 start, u32 len) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio); + unsigned long uptodate_bitmap; + unsigned long dirty_bitmap; +@@ -788,18 +800,18 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info, + + ASSERT(folio_test_private(folio) && folio_get_private(folio)); + ASSERT(blocks_per_folio > 1); +- subpage = folio_get_private(folio); ++ bfs = folio_get_private(folio); + +- spin_lock_irqsave(&subpage->lock, flags); ++ spin_lock_irqsave(&bfs->lock, flags); + GET_SUBPAGE_BITMAP(fs_info, folio, uptodate, &uptodate_bitmap); + GET_SUBPAGE_BITMAP(fs_info, folio, dirty, &dirty_bitmap); + GET_SUBPAGE_BITMAP(fs_info, folio, writeback, &writeback_bitmap); + GET_SUBPAGE_BITMAP(fs_info, folio, ordered, &ordered_bitmap); + GET_SUBPAGE_BITMAP(fs_info, folio, checked, &checked_bitmap); + GET_SUBPAGE_BITMAP(fs_info, folio, locked, &locked_bitmap); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + +- dump_page(folio_page(folio, 0), "btrfs subpage dump"); ++ dump_page(folio_page(folio, 0), "btrfs folio state dump"); + btrfs_warn(fs_info, + "start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl locked=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl", + start, len, folio_pos(folio), +@@ -815,14 +827,14 @@ void btrfs_get_subpage_dirty_bitmap(struct btrfs_fs_info *fs_info, + struct folio *folio, + unsigned long *ret_bitmap) + { +- struct btrfs_subpage *subpage; ++ struct btrfs_folio_state *bfs; + unsigned long flags; + + ASSERT(folio_test_private(folio) && 
folio_get_private(folio)); + ASSERT(btrfs_blocks_per_folio(fs_info, folio) > 1); +- subpage = folio_get_private(folio); ++ bfs = folio_get_private(folio); + +- spin_lock_irqsave(&subpage->lock, flags); ++ spin_lock_irqsave(&bfs->lock, flags); + GET_SUBPAGE_BITMAP(fs_info, folio, dirty, ret_bitmap); +- spin_unlock_irqrestore(&subpage->lock, flags); ++ spin_unlock_irqrestore(&bfs->lock, flags); + } +diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h +index 3042c5ea840aec..b6e40a678d7387 100644 +--- a/fs/btrfs/subpage.h ++++ b/fs/btrfs/subpage.h +@@ -32,9 +32,31 @@ struct folio; + enum { + btrfs_bitmap_nr_uptodate = 0, + btrfs_bitmap_nr_dirty, ++ ++ /* ++ * This can be changed to atomic eventually. But this change will rely ++ * on the async delalloc range rework for locked bitmap. As async ++ * delalloc can unlock its range and mark blocks writeback at random ++ * timing. ++ */ + btrfs_bitmap_nr_writeback, ++ ++ /* ++ * The ordered and checked flags are for COW fixup, already marked ++ * deprecated, and will be removed eventually. ++ */ + btrfs_bitmap_nr_ordered, + btrfs_bitmap_nr_checked, ++ ++ /* ++ * The locked bit is for async delalloc range (compression), currently ++ * async extent is queued with the range locked, until the compression ++ * is done. ++ * So an async extent can unlock the range at any random timing. ++ * ++ * This will need a rework on the async extent lifespan (mark writeback ++ * and do compression) before deprecating this flag. ++ */ + btrfs_bitmap_nr_locked, + btrfs_bitmap_nr_max + }; +@@ -43,7 +65,7 @@ enum { + * Structure to trace status of each sector inside a page, attached to + * page::private for both data and metadata inodes. + */ +-struct btrfs_subpage { ++struct btrfs_folio_state { + /* Common members for both data and metadata pages */ + spinlock_t lock; + union { +@@ -51,7 +73,7 @@ struct btrfs_subpage { + * Structures only used by metadata + * + * @eb_refs should only be operated under private_lock, as it +- * manages whether the subpage can be detached. ++ * manages whether the btrfs_folio_state can be detached. 
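++ * (folio_range_has_eb() reads it to decide whether the state can
++ * finally be detached in detach_extent_buffer_folio().)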
+ */ + atomic_t eb_refs; + +@@ -65,7 +87,7 @@ struct btrfs_subpage { + unsigned long bitmaps[]; + }; + +-enum btrfs_subpage_type { ++enum btrfs_folio_type { + BTRFS_SUBPAGE_METADATA, + BTRFS_SUBPAGE_DATA, + }; +@@ -105,15 +127,18 @@ static inline bool btrfs_is_subpage(const struct btrfs_fs_info *fs_info, + } + #endif + +-int btrfs_attach_subpage(const struct btrfs_fs_info *fs_info, +- struct folio *folio, enum btrfs_subpage_type type); +-void btrfs_detach_subpage(const struct btrfs_fs_info *fs_info, struct folio *folio, +- enum btrfs_subpage_type type); ++int btrfs_attach_folio_state(const struct btrfs_fs_info *fs_info, ++ struct folio *folio, enum btrfs_folio_type type); ++void btrfs_detach_folio_state(const struct btrfs_fs_info *fs_info, struct folio *folio, ++ enum btrfs_folio_type type); + + /* Allocate additional data where page represents more than one sector */ +-struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info, +- size_t fsize, enum btrfs_subpage_type type); +-void btrfs_free_subpage(struct btrfs_subpage *subpage); ++struct btrfs_folio_state *btrfs_alloc_folio_state(const struct btrfs_fs_info *fs_info, ++ size_t fsize, enum btrfs_folio_type type); ++static inline void btrfs_free_folio_state(struct btrfs_folio_state *bfs) ++{ ++ kfree(bfs); ++} + + void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio); + void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio); +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index a0c65adce1abd2..3213815ed76571 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -88,6 +88,9 @@ struct btrfs_fs_context { + refcount_t refs; + }; + ++static void btrfs_emit_options(struct btrfs_fs_info *info, ++ struct btrfs_fs_context *old); ++ + enum { + Opt_acl, + Opt_clear_cache, +@@ -689,12 +692,9 @@ bool btrfs_check_options(const struct btrfs_fs_info *info, + + if (!test_bit(BTRFS_FS_STATE_REMOUNTING, &info->fs_state)) { + if (btrfs_raw_test_opt(*mount_opt, SPACE_CACHE)) { +- btrfs_info(info, "disk space caching is enabled"); + btrfs_warn(info, + "space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2"); + } +- if (btrfs_raw_test_opt(*mount_opt, FREE_SPACE_TREE)) +- btrfs_info(info, "using free-space-tree"); + } + + return ret; +@@ -971,6 +971,8 @@ static int btrfs_fill_super(struct super_block *sb, + return err; + } + ++ btrfs_emit_options(fs_info, NULL); ++ + inode = btrfs_iget(BTRFS_FIRST_FREE_OBJECTID, fs_info->fs_root); + if (IS_ERR(inode)) { + err = PTR_ERR(inode); +@@ -1428,7 +1430,7 @@ static void btrfs_emit_options(struct btrfs_fs_info *info, + { + btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum"); + btrfs_info_if_set(info, old, DEGRADED, "allowing degraded mounts"); +- btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum"); ++ btrfs_info_if_set(info, old, NODATACOW, "setting nodatacow"); + btrfs_info_if_set(info, old, SSD, "enabling ssd optimizations"); + btrfs_info_if_set(info, old, SSD_SPREAD, "using spread ssd allocation scheme"); + btrfs_info_if_set(info, old, NOBARRIER, "turning off barriers"); +@@ -1450,10 +1452,11 @@ static void btrfs_emit_options(struct btrfs_fs_info *info, + btrfs_info_if_set(info, old, IGNOREMETACSUMS, "ignoring meta csums"); + btrfs_info_if_set(info, old, IGNORESUPERFLAGS, "ignoring unknown super block flags"); + ++ btrfs_info_if_unset(info, old, NODATASUM, "setting datasum"); + btrfs_info_if_unset(info, old, NODATACOW, "setting datacow"); + 
btrfs_info_if_unset(info, old, SSD, "not using ssd optimizations"); + btrfs_info_if_unset(info, old, SSD_SPREAD, "not using spread ssd allocation scheme"); +- btrfs_info_if_unset(info, old, NOBARRIER, "turning off barriers"); ++ btrfs_info_if_unset(info, old, NOBARRIER, "turning on barriers"); + btrfs_info_if_unset(info, old, NOTREELOG, "enabling tree log"); + btrfs_info_if_unset(info, old, SPACE_CACHE, "disabling disk space caching"); + btrfs_info_if_unset(info, old, FREE_SPACE_TREE, "disabling free space tree"); +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 7241229d218b3a..afc05e406689ae 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -2747,7 +2747,7 @@ static int walk_log_tree(struct btrfs_trans_handle *trans, + level = btrfs_header_level(log->node); + orig_level = level; + path->nodes[level] = log->node; +- atomic_inc(&log->node->refs); ++ refcount_inc(&log->node->refs); + path->slots[level] = 0; + + while (1) { +@@ -3711,7 +3711,7 @@ static int clone_leaf(struct btrfs_path *path, struct btrfs_log_ctx *ctx) + * Add extra ref to scratch eb so that it is not freed when callers + * release the path, so we can reuse it later if needed. + */ +- atomic_inc(&ctx->scratch_eb->refs); ++ refcount_inc(&ctx->scratch_eb->refs); + + return 0; + } +diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c +index 5439d837471617..af5ba3ad2eb833 100644 +--- a/fs/btrfs/zoned.c ++++ b/fs/btrfs/zoned.c +@@ -18,6 +18,7 @@ + #include "accessors.h" + #include "bio.h" + #include "transaction.h" ++#include "sysfs.h" + + /* Maximum number of zones to report per blkdev_report_zones() call */ + #define BTRFS_REPORT_NR_ZONES 4096 +@@ -2169,10 +2170,15 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group) + goto out_unlock; + } + +- /* No space left */ +- if (btrfs_zoned_bg_is_full(block_group)) { +- ret = false; +- goto out_unlock; ++ if (block_group->flags & BTRFS_BLOCK_GROUP_DATA) { ++ /* The caller should check if the block group is full. */ ++ if (WARN_ON_ONCE(btrfs_zoned_bg_is_full(block_group))) { ++ ret = false; ++ goto out_unlock; ++ } ++ } else { ++ /* Since it is already written, it should have been active. 
*/ ++ WARN_ON_ONCE(block_group->meta_write_pointer != block_group->start); + } + + for (i = 0; i < map->num_stripes; i++) { +@@ -2486,7 +2492,7 @@ void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg, + + /* For the work */ + btrfs_get_block_group(bg); +- atomic_inc(&eb->refs); ++ refcount_inc(&eb->refs); + bg->last_eb = eb; + INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn); + queue_work(system_unbound_wq, &bg->zone_finish_work); +@@ -2505,12 +2511,12 @@ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) + void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info) + { + struct btrfs_space_info *data_sinfo = fs_info->data_sinfo; +- struct btrfs_space_info *space_info = data_sinfo->sub_group[0]; ++ struct btrfs_space_info *space_info = data_sinfo; + struct btrfs_trans_handle *trans; + struct btrfs_block_group *bg; + struct list_head *bg_list; + u64 alloc_flags; +- bool initial = false; ++ bool first = true; + bool did_chunk_alloc = false; + int index; + int ret; +@@ -2524,21 +2530,52 @@ void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info) + if (sb_rdonly(fs_info->sb)) + return; + +- ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC); + alloc_flags = btrfs_get_alloc_profile(fs_info, space_info->flags); + index = btrfs_bg_flags_to_raid_index(alloc_flags); + +- bg_list = &data_sinfo->block_groups[index]; ++ /* Scan the data space_info to find empty block groups. Take the second one. */ + again: ++ bg_list = &space_info->block_groups[index]; + list_for_each_entry(bg, bg_list, list) { +- if (bg->used > 0) ++ if (bg->alloc_offset != 0) + continue; + +- if (!initial) { +- initial = true; ++ if (first) { ++ first = false; + continue; + } + ++ if (space_info == data_sinfo) { ++ /* Migrate the block group to the data relocation space_info. */ ++ struct btrfs_space_info *reloc_sinfo = data_sinfo->sub_group[0]; ++ int factor; ++ ++ ASSERT(reloc_sinfo->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC); ++ factor = btrfs_bg_type_to_factor(bg->flags); ++ ++ down_write(&space_info->groups_sem); ++ list_del_init(&bg->list); ++ /* We can assume this as we choose the second empty one. */ ++ ASSERT(!list_empty(&space_info->block_groups[index])); ++ up_write(&space_info->groups_sem); ++ ++ spin_lock(&space_info->lock); ++ space_info->total_bytes -= bg->length; ++ space_info->disk_total -= bg->length * factor; ++ /* There is no allocation ever happened. */ ++ ASSERT(bg->used == 0); ++ ASSERT(bg->zone_unusable == 0); ++ /* No super block in a block group on the zoned setup. */ ++ ASSERT(bg->bytes_super == 0); ++ spin_unlock(&space_info->lock); ++ ++ bg->space_info = reloc_sinfo; ++ if (reloc_sinfo->block_group_kobjs[index] == NULL) ++ btrfs_sysfs_add_block_group_type(bg); ++ ++ btrfs_add_bg_to_space_info(fs_info, bg); ++ } ++ + fs_info->data_reloc_bg = bg->start; + set_bit(BLOCK_GROUP_FLAG_ZONED_DATA_RELOC, &bg->runtime_flags); + btrfs_zone_activate(bg); +@@ -2553,11 +2590,18 @@ void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info) + if (IS_ERR(trans)) + return; + ++ /* Allocate new BG in the data relocation space_info. */ ++ space_info = data_sinfo->sub_group[0]; ++ ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC); + ret = btrfs_chunk_alloc(trans, space_info, alloc_flags, CHUNK_ALLOC_FORCE); + btrfs_end_transaction(trans); + if (ret == 1) { ++ /* ++ * We allocated a new block group in the data relocation space_info. We ++ * can take that one. 
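++ * Clearing "first" makes the rescan below take the first empty block
++ * group it finds, i.e. the newly allocated one, instead of skipping
++ * it the way the initial pass skips its first match.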
++ */ ++ first = false; + did_chunk_alloc = true; +- bg_list = &space_info->block_groups[index]; + goto again; + } + } +diff --git a/fs/buffer.c b/fs/buffer.c +index 8cf4a1dc481eb1..eb6d85edc37a12 100644 +--- a/fs/buffer.c ++++ b/fs/buffer.c +@@ -157,8 +157,8 @@ static void __end_buffer_read_notouch(struct buffer_head *bh, int uptodate) + */ + void end_buffer_read_sync(struct buffer_head *bh, int uptodate) + { +- __end_buffer_read_notouch(bh, uptodate); + put_bh(bh); ++ __end_buffer_read_notouch(bh, uptodate); + } + EXPORT_SYMBOL(end_buffer_read_sync); + +diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c +index 30c4944e18622d..644e90ee865458 100644 +--- a/fs/debugfs/inode.c ++++ b/fs/debugfs/inode.c +@@ -183,6 +183,9 @@ static int debugfs_reconfigure(struct fs_context *fc) + struct debugfs_fs_info *sb_opts = sb->s_fs_info; + struct debugfs_fs_info *new_opts = fc->s_fs_info; + ++ if (!new_opts) ++ return 0; ++ + sync_filesystem(sb); + + /* structure copy of new mount options to sb */ +@@ -282,10 +285,16 @@ static int debugfs_fill_super(struct super_block *sb, struct fs_context *fc) + + static int debugfs_get_tree(struct fs_context *fc) + { ++ int err; ++ + if (!(debugfs_allow & DEBUGFS_ALLOW_API)) + return -EPERM; + +- return get_tree_single(fc, debugfs_fill_super); ++ err = get_tree_single(fc, debugfs_fill_super); ++ if (err) ++ return err; ++ ++ return debugfs_reconfigure(fc); + } + + static void debugfs_free_fc(struct fs_context *fc) +diff --git a/fs/erofs/Kconfig b/fs/erofs/Kconfig +index 6beeb7063871cc..d81f3318417dff 100644 +--- a/fs/erofs/Kconfig ++++ b/fs/erofs/Kconfig +@@ -3,8 +3,18 @@ + config EROFS_FS + tristate "EROFS filesystem support" + depends on BLOCK ++ select CACHEFILES if EROFS_FS_ONDEMAND + select CRC32 ++ select CRYPTO if EROFS_FS_ZIP_ACCEL ++ select CRYPTO_DEFLATE if EROFS_FS_ZIP_ACCEL + select FS_IOMAP ++ select LZ4_DECOMPRESS if EROFS_FS_ZIP ++ select NETFS_SUPPORT if EROFS_FS_ONDEMAND ++ select XXHASH if EROFS_FS_XATTR ++ select XZ_DEC if EROFS_FS_ZIP_LZMA ++ select XZ_DEC_MICROLZMA if EROFS_FS_ZIP_LZMA ++ select ZLIB_INFLATE if EROFS_FS_ZIP_DEFLATE ++ select ZSTD_DECOMPRESS if EROFS_FS_ZIP_ZSTD + help + EROFS (Enhanced Read-Only File System) is a lightweight read-only + file system with modern designs (e.g. no buffer heads, inline +@@ -38,7 +48,6 @@ config EROFS_FS_DEBUG + config EROFS_FS_XATTR + bool "EROFS extended attributes" + depends on EROFS_FS +- select XXHASH + default y + help + Extended attributes are name:value pairs associated with inodes by +@@ -94,7 +103,6 @@ config EROFS_FS_BACKED_BY_FILE + config EROFS_FS_ZIP + bool "EROFS Data Compression Support" + depends on EROFS_FS +- select LZ4_DECOMPRESS + default y + help + Enable transparent compression support for EROFS file systems. +@@ -104,8 +112,6 @@ config EROFS_FS_ZIP + config EROFS_FS_ZIP_LZMA + bool "EROFS LZMA compressed data support" + depends on EROFS_FS_ZIP +- select XZ_DEC +- select XZ_DEC_MICROLZMA + help + Saying Y here includes support for reading EROFS file systems + containing LZMA compressed data, specifically called microLZMA. It +@@ -117,7 +123,6 @@ config EROFS_FS_ZIP_LZMA + config EROFS_FS_ZIP_DEFLATE + bool "EROFS DEFLATE compressed data support" + depends on EROFS_FS_ZIP +- select ZLIB_INFLATE + help + Saying Y here includes support for reading EROFS file systems + containing DEFLATE compressed data. 
It gives better compression +@@ -132,7 +137,6 @@ config EROFS_FS_ZIP_DEFLATE + config EROFS_FS_ZIP_ZSTD + bool "EROFS Zstandard compressed data support" + depends on EROFS_FS_ZIP +- select ZSTD_DECOMPRESS + help + Saying Y here includes support for reading EROFS file systems + containing Zstandard compressed data. It gives better compression +@@ -161,9 +165,7 @@ config EROFS_FS_ZIP_ACCEL + config EROFS_FS_ONDEMAND + bool "EROFS fscache-based on-demand read support (deprecated)" + depends on EROFS_FS +- select NETFS_SUPPORT + select FSCACHE +- select CACHEFILES + select CACHEFILES_ONDEMAND + help + This permits EROFS to use fscache-backed data blobs with on-demand +diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c +index 383c6edea6dd31..91185c40f755a5 100644 +--- a/fs/ext4/fsmap.c ++++ b/fs/ext4/fsmap.c +@@ -393,6 +393,14 @@ static unsigned int ext4_getfsmap_find_sb(struct super_block *sb, + /* Reserved GDT blocks */ + if (!ext4_has_feature_meta_bg(sb) || metagroup < first_meta_bg) { + len = le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks); ++ ++ /* ++ * mkfs.ext4 can set s_reserved_gdt_blocks as 0 in some cases, ++ * check for that. ++ */ ++ if (!len) ++ return 0; ++ + error = ext4_getfsmap_fill(meta_list, fsb, len, + EXT4_FMR_OWN_RESV_GDT); + if (error) +@@ -526,6 +534,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb, + ext4_group_t end_ag; + ext4_grpblk_t first_cluster; + ext4_grpblk_t last_cluster; ++ struct ext4_fsmap irec; + int error = 0; + + bofs = le32_to_cpu(sbi->s_es->s_first_data_block); +@@ -609,10 +618,18 @@ static int ext4_getfsmap_datadev(struct super_block *sb, + goto err; + } + +- /* Report any gaps at the end of the bg */ ++ /* ++ * The dummy record below will cause ext4_getfsmap_helper() to report ++ * any allocated blocks at the end of the range. ++ */ ++ irec.fmr_device = 0; ++ irec.fmr_physical = end_fsb + 1; ++ irec.fmr_length = 0; ++ irec.fmr_owner = EXT4_FMR_OWN_FREE; ++ irec.fmr_flags = 0; ++ + info->gfi_last = true; +- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1, +- 0, info); ++ error = ext4_getfsmap_helper(sb, info, &irec); + if (error) + goto err; + +diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c +index 7de327fa7b1c51..d45124318200d8 100644 +--- a/fs/ext4/indirect.c ++++ b/fs/ext4/indirect.c +@@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode, + int indirect_blks; + int blocks_to_boundary = 0; + int depth; +- int count = 0; ++ u64 count = 0; + ext4_fsblk_t first_block = 0; + + trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags); +@@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode, + count++; + /* Fill in size of a hole we found */ + map->m_pblk = 0; +- map->m_len = min_t(unsigned int, map->m_len, count); ++ map->m_len = umin(map->m_len, count); + goto cleanup; + } + +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c +index 3df0796a30104e..c0a85b7548535a 100644 +--- a/fs/ext4/inode.c ++++ b/fs/ext4/inode.c +@@ -146,7 +146,7 @@ static inline int ext4_begin_ordered_truncate(struct inode *inode, + */ + int ext4_inode_is_fast_symlink(struct inode *inode) + { +- if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) { ++ if (!ext4_has_feature_ea_inode(inode->i_sb)) { + int ea_blocks = EXT4_I(inode)->i_file_acl ? 
+ EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0; + +diff --git a/fs/ext4/orphan.c b/fs/ext4/orphan.c +index 7c7f792ad6aba9..524d4658fa408d 100644 +--- a/fs/ext4/orphan.c ++++ b/fs/ext4/orphan.c +@@ -589,8 +589,9 @@ int ext4_init_orphan_info(struct super_block *sb) + } + oi->of_blocks = inode->i_size >> sb->s_blocksize_bits; + oi->of_csum_seed = EXT4_I(inode)->i_csum_seed; +- oi->of_binfo = kmalloc(oi->of_blocks*sizeof(struct ext4_orphan_block), +- GFP_KERNEL); ++ oi->of_binfo = kmalloc_array(oi->of_blocks, ++ sizeof(struct ext4_orphan_block), ++ GFP_KERNEL); + if (!oi->of_binfo) { + ret = -ENOMEM; + goto out_put; +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index c7d39da7e733b1..8f460663d6c457 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -1998,6 +1998,9 @@ int ext4_init_fs_context(struct fs_context *fc) + fc->fs_private = ctx; + fc->ops = &ext4_context_ops; + ++ /* i_version is always enabled now */ ++ fc->sb_flags |= SB_I_VERSION; ++ + return 0; + } + +@@ -5314,9 +5317,6 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb) + sb->s_flags = (sb->s_flags & ~SB_POSIXACL) | + (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0); + +- /* i_version is always enabled now */ +- sb->s_flags |= SB_I_VERSION; +- + /* HSM events are allowed by default. */ + sb->s_iflags |= SB_I_ALLOW_HSM; + +@@ -5414,6 +5414,8 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb) + err = ext4_load_and_init_journal(sb, es, ctx); + if (err) + goto failed_mount3a; ++ if (bdev_read_only(sb->s_bdev)) ++ needs_recovery = 0; + } else if (test_opt(sb, NOLOAD) && !sb_rdonly(sb) && + ext4_has_feature_journal_needs_recovery(sb)) { + ext4_msg(sb, KERN_ERR, "required journal recovery " +diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c +index 2fd287f2bca4ba..e8d1abbb8052e6 100644 +--- a/fs/f2fs/node.c ++++ b/fs/f2fs/node.c +@@ -816,6 +816,16 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode) + for (i = 1; i <= level; i++) { + bool done = false; + ++ if (nids[i] && nids[i] == dn->inode->i_ino) { ++ err = -EFSCORRUPTED; ++ f2fs_err_ratelimited(sbi, ++ "inode mapping table is corrupted, run fsck to fix it, " ++ "ino:%lu, nid:%u, level:%d, offset:%d", ++ dn->inode->i_ino, nids[i], level, offset[level]); ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ goto release_pages; ++ } ++ + if (!nids[i] && mode == ALLOC_NODE) { + /* alloc new node */ + if (!f2fs_alloc_nid(sbi, &(nids[i]))) { +diff --git a/fs/fhandle.c b/fs/fhandle.c +index 66ff60591d17b2..e21ec857f2abcf 100644 +--- a/fs/fhandle.c ++++ b/fs/fhandle.c +@@ -404,7 +404,7 @@ static long do_handle_open(int mountdirfd, struct file_handle __user *ufh, + if (retval) + return retval; + +- CLASS(get_unused_fd, fd)(O_CLOEXEC); ++ CLASS(get_unused_fd, fd)(open_flag); + if (fd < 0) + return fd; + +diff --git a/fs/internal.h b/fs/internal.h +index 393f6c5c24f6b8..22ba066d1dbaf7 100644 +--- a/fs/internal.h ++++ b/fs/internal.h +@@ -322,12 +322,15 @@ struct mnt_idmap *alloc_mnt_idmap(struct user_namespace *mnt_userns); + struct mnt_idmap *mnt_idmap_get(struct mnt_idmap *idmap); + void mnt_idmap_put(struct mnt_idmap *idmap); + struct stashed_operations { ++ struct dentry *(*stash_dentry)(struct dentry **stashed, ++ struct dentry *dentry); + void (*put_data)(void *data); + int (*init_inode)(struct inode *inode, void *data); + }; + int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data, + struct path *path); + void stashed_dentry_prune(struct dentry *dentry); ++struct dentry *stash_dentry(struct dentry 
**stashed, struct dentry *dentry); + struct dentry *stashed_dentry_get(struct dentry **stashed); + /** + * path_mounted - check whether path is mounted +diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c +index 844261a31156c4..b875165b7c27d9 100644 +--- a/fs/iomap/direct-io.c ++++ b/fs/iomap/direct-io.c +@@ -368,14 +368,14 @@ static int iomap_dio_bio_iter(struct iomap_iter *iter, struct iomap_dio *dio) + if (iomap->flags & IOMAP_F_SHARED) + dio->flags |= IOMAP_DIO_COW; + +- if (iomap->flags & IOMAP_F_NEW) { ++ if (iomap->flags & IOMAP_F_NEW) + need_zeroout = true; +- } else if (iomap->type == IOMAP_MAPPED) { +- if (iomap_dio_can_use_fua(iomap, dio)) +- bio_opf |= REQ_FUA; +- else +- dio->flags &= ~IOMAP_DIO_WRITE_THROUGH; +- } ++ else if (iomap->type == IOMAP_MAPPED && ++ iomap_dio_can_use_fua(iomap, dio)) ++ bio_opf |= REQ_FUA; ++ ++ if (!(bio_opf & REQ_FUA)) ++ dio->flags &= ~IOMAP_DIO_WRITE_THROUGH; + + /* + * We can only do deferred completion for pure overwrites that +diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c +index b3971e91e8eb80..38861ca04899f0 100644 +--- a/fs/jbd2/checkpoint.c ++++ b/fs/jbd2/checkpoint.c +@@ -285,6 +285,7 @@ int jbd2_log_do_checkpoint(journal_t *journal) + retry: + if (batch_count) + __flush_batch(journal, &batch_count); ++ cond_resched(); + spin_lock(&journal->j_list_lock); + goto restart; + } +diff --git a/fs/libfs.c b/fs/libfs.c +index 972b95cc743357..5b936ee71892a9 100644 +--- a/fs/libfs.c ++++ b/fs/libfs.c +@@ -2126,6 +2126,8 @@ struct dentry *stashed_dentry_get(struct dentry **stashed) + dentry = rcu_dereference(*stashed); + if (!dentry) + return NULL; ++ if (IS_ERR(dentry)) ++ return dentry; + if (!lockref_get_not_dead(&dentry->d_lockref)) + return NULL; + return dentry; +@@ -2174,8 +2176,7 @@ static struct dentry *prepare_anon_dentry(struct dentry **stashed, + return dentry; + } + +-static struct dentry *stash_dentry(struct dentry **stashed, +- struct dentry *dentry) ++struct dentry *stash_dentry(struct dentry **stashed, struct dentry *dentry) + { + guard(rcu)(); + for (;;) { +@@ -2216,12 +2217,15 @@ static struct dentry *stash_dentry(struct dentry **stashed, + int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data, + struct path *path) + { +- struct dentry *dentry; ++ struct dentry *dentry, *res; + const struct stashed_operations *sops = mnt->mnt_sb->s_fs_info; + + /* See if dentry can be reused. */ +- path->dentry = stashed_dentry_get(stashed); +- if (path->dentry) { ++ res = stashed_dentry_get(stashed); ++ if (IS_ERR(res)) ++ return PTR_ERR(res); ++ if (res) { ++ path->dentry = res; + sops->put_data(data); + goto out_path; + } +@@ -2232,8 +2236,17 @@ int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data, + return PTR_ERR(dentry); + + /* Added a new dentry. @data is now owned by the filesystem. */ +- path->dentry = stash_dentry(stashed, dentry); +- if (path->dentry != dentry) ++ if (sops->stash_dentry) ++ res = sops->stash_dentry(stashed, dentry); ++ else ++ res = stash_dentry(stashed, dentry); ++ if (IS_ERR(res)) { ++ dput(dentry); ++ return PTR_ERR(res); ++ } ++ path->dentry = res; ++ /* A dentry was reused. 
*/ ++ if (res != dentry) + dput(dentry); + + out_path: +diff --git a/fs/namespace.c b/fs/namespace.c +index 54c59e091919bc..6b038bf74a3df2 100644 +--- a/fs/namespace.c ++++ b/fs/namespace.c +@@ -2925,6 +2925,19 @@ static int graft_tree(struct mount *mnt, struct mount *p, struct mountpoint *mp) + return attach_recursive_mnt(mnt, p, mp, 0); + } + ++static int may_change_propagation(const struct mount *m) ++{ ++ struct mnt_namespace *ns = m->mnt_ns; ++ ++ // it must be mounted in some namespace ++ if (IS_ERR_OR_NULL(ns)) // is_mounted() ++ return -EINVAL; ++ // and the caller must be admin in userns of that namespace ++ if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN)) ++ return -EPERM; ++ return 0; ++} ++ + /* + * Sanity check the flags to change_mnt_propagation. + */ +@@ -2961,10 +2974,10 @@ static int do_change_type(struct path *path, int ms_flags) + return -EINVAL; + + namespace_lock(); +- if (!check_mnt(mnt)) { +- err = -EINVAL; ++ err = may_change_propagation(mnt); ++ if (err) + goto out_unlock; +- } ++ + if (type == MS_SHARED) { + err = invent_group_ids(mnt, recurse); + if (err) +@@ -3419,18 +3432,11 @@ static int do_set_group(struct path *from_path, struct path *to_path) + + namespace_lock(); + +- err = -EINVAL; +- /* To and From must be mounted */ +- if (!is_mounted(&from->mnt)) +- goto out; +- if (!is_mounted(&to->mnt)) +- goto out; +- +- err = -EPERM; +- /* We should be allowed to modify mount namespaces of both mounts */ +- if (!ns_capable(from->mnt_ns->user_ns, CAP_SYS_ADMIN)) ++ err = may_change_propagation(from); ++ if (err) + goto out; +- if (!ns_capable(to->mnt_ns->user_ns, CAP_SYS_ADMIN)) ++ err = may_change_propagation(to); ++ if (err) + goto out; + + err = -EINVAL; +@@ -4657,20 +4663,10 @@ SYSCALL_DEFINE5(move_mount, + if (flags & MOVE_MOUNT_SET_GROUP) mflags |= MNT_TREE_PROPAGATION; + if (flags & MOVE_MOUNT_BENEATH) mflags |= MNT_TREE_BENEATH; + +- lflags = 0; +- if (flags & MOVE_MOUNT_F_SYMLINKS) lflags |= LOOKUP_FOLLOW; +- if (flags & MOVE_MOUNT_F_AUTOMOUNTS) lflags |= LOOKUP_AUTOMOUNT; + uflags = 0; +- if (flags & MOVE_MOUNT_F_EMPTY_PATH) uflags = AT_EMPTY_PATH; +- from_name = getname_maybe_null(from_pathname, uflags); +- if (IS_ERR(from_name)) +- return PTR_ERR(from_name); ++ if (flags & MOVE_MOUNT_T_EMPTY_PATH) ++ uflags = AT_EMPTY_PATH; + +- lflags = 0; +- if (flags & MOVE_MOUNT_T_SYMLINKS) lflags |= LOOKUP_FOLLOW; +- if (flags & MOVE_MOUNT_T_AUTOMOUNTS) lflags |= LOOKUP_AUTOMOUNT; +- uflags = 0; +- if (flags & MOVE_MOUNT_T_EMPTY_PATH) uflags = AT_EMPTY_PATH; + to_name = getname_maybe_null(to_pathname, uflags); + if (IS_ERR(to_name)) + return PTR_ERR(to_name); +@@ -4683,11 +4679,24 @@ SYSCALL_DEFINE5(move_mount, + to_path = fd_file(f_to)->f_path; + path_get(&to_path); + } else { ++ lflags = 0; ++ if (flags & MOVE_MOUNT_T_SYMLINKS) ++ lflags |= LOOKUP_FOLLOW; ++ if (flags & MOVE_MOUNT_T_AUTOMOUNTS) ++ lflags |= LOOKUP_AUTOMOUNT; + ret = filename_lookup(to_dfd, to_name, lflags, &to_path, NULL); + if (ret) + return ret; + } + ++ uflags = 0; ++ if (flags & MOVE_MOUNT_F_EMPTY_PATH) ++ uflags = AT_EMPTY_PATH; ++ ++ from_name = getname_maybe_null(from_pathname, uflags); ++ if (IS_ERR(from_name)) ++ return PTR_ERR(from_name); ++ + if (!from_name && from_dfd >= 0) { + CLASS(fd_raw, f_from)(from_dfd); + if (fd_empty(f_from)) +@@ -4696,6 +4705,11 @@ SYSCALL_DEFINE5(move_mount, + return vfs_move_mount(&fd_file(f_from)->f_path, &to_path, mflags); + } + ++ lflags = 0; ++ if (flags & MOVE_MOUNT_F_SYMLINKS) ++ lflags |= LOOKUP_FOLLOW; ++ if (flags & MOVE_MOUNT_F_AUTOMOUNTS) ++ lflags |= 
LOOKUP_AUTOMOUNT; + ret = filename_lookup(from_dfd, from_name, lflags, &from_path, NULL); + if (ret) + return ret; +@@ -5302,7 +5316,8 @@ SYSCALL_DEFINE5(open_tree_attr, int, dfd, const char __user *, filename, + int ret; + struct mount_kattr kattr = {}; + +- kattr.kflags = MOUNT_KATTR_IDMAP_REPLACE; ++ if (flags & OPEN_TREE_CLONE) ++ kattr.kflags = MOUNT_KATTR_IDMAP_REPLACE; + if (flags & AT_RECURSIVE) + kattr.kflags |= MOUNT_KATTR_RECURSE; + +diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c +index 3e804da1e1eb10..a95e7aadafd072 100644 +--- a/fs/netfs/read_collect.c ++++ b/fs/netfs/read_collect.c +@@ -281,8 +281,10 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq) + } else if (test_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags)) { + notes |= MADE_PROGRESS; + } else { +- if (!stream->failed) ++ if (!stream->failed) { + stream->transferred += transferred; ++ stream->transferred_valid = true; ++ } + if (front->transferred < front->len) + set_bit(NETFS_RREQ_SHORT_TRANSFER, &rreq->flags); + notes |= MADE_PROGRESS; +diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c +index 0f3a36852a4dc1..cbf3d9194c7bf6 100644 +--- a/fs/netfs/write_collect.c ++++ b/fs/netfs/write_collect.c +@@ -254,6 +254,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq) + if (front->start + front->transferred > stream->collected_to) { + stream->collected_to = front->start + front->transferred; + stream->transferred = stream->collected_to - wreq->start; ++ stream->transferred_valid = true; + notes |= MADE_PROGRESS; + } + if (test_bit(NETFS_SREQ_FAILED, &front->flags)) { +@@ -356,6 +357,7 @@ bool netfs_write_collection(struct netfs_io_request *wreq) + { + struct netfs_inode *ictx = netfs_inode(wreq->inode); + size_t transferred; ++ bool transferred_valid = false; + int s; + + _enter("R=%x", wreq->debug_id); +@@ -376,12 +378,16 @@ bool netfs_write_collection(struct netfs_io_request *wreq) + continue; + if (!list_empty(&stream->subrequests)) + return false; +- if (stream->transferred < transferred) ++ if (stream->transferred_valid && ++ stream->transferred < transferred) { + transferred = stream->transferred; ++ transferred_valid = true; ++ } + } + + /* Okay, declare that all I/O is complete. 
*/ +- wreq->transferred = transferred; ++ if (transferred_valid) ++ wreq->transferred = transferred; + trace_netfs_rreq(wreq, netfs_rreq_trace_write_done); + + if (wreq->io_streams[1].active && +diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c +index 50bee2c4130d1e..0584cba1a04392 100644 +--- a/fs/netfs/write_issue.c ++++ b/fs/netfs/write_issue.c +@@ -118,12 +118,12 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping, + wreq->io_streams[0].prepare_write = ictx->ops->prepare_write; + wreq->io_streams[0].issue_write = ictx->ops->issue_write; + wreq->io_streams[0].collected_to = start; +- wreq->io_streams[0].transferred = LONG_MAX; ++ wreq->io_streams[0].transferred = 0; + + wreq->io_streams[1].stream_nr = 1; + wreq->io_streams[1].source = NETFS_WRITE_TO_CACHE; + wreq->io_streams[1].collected_to = start; +- wreq->io_streams[1].transferred = LONG_MAX; ++ wreq->io_streams[1].transferred = 0; + if (fscache_resources_valid(&wreq->cache_resources)) { + wreq->io_streams[1].avail = true; + wreq->io_streams[1].active = true; +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c +index 11968dcb724317..6e69ce43a13ff7 100644 +--- a/fs/nfs/pagelist.c ++++ b/fs/nfs/pagelist.c +@@ -253,13 +253,14 @@ nfs_page_group_unlock(struct nfs_page *req) + nfs_page_clear_headlock(req); + } + +-/* +- * nfs_page_group_sync_on_bit_locked ++/** ++ * nfs_page_group_sync_on_bit_locked - Test if all requests have @bit set ++ * @req: request in page group ++ * @bit: PG_* bit that is used to sync page group + * + * must be called with page group lock held + */ +-static bool +-nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit) ++bool nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit) + { + struct nfs_page *head = req->wb_head; + struct nfs_page *tmp; +diff --git a/fs/nfs/write.c b/fs/nfs/write.c +index 374fc6b34c7954..ff29335ed85999 100644 +--- a/fs/nfs/write.c ++++ b/fs/nfs/write.c +@@ -153,20 +153,10 @@ nfs_page_set_inode_ref(struct nfs_page *req, struct inode *inode) + } + } + +-static int +-nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) ++static void nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode) + { +- int ret; +- +- if (!test_bit(PG_REMOVE, &req->wb_flags)) +- return 0; +- ret = nfs_page_group_lock(req); +- if (ret) +- return ret; + if (test_and_clear_bit(PG_REMOVE, &req->wb_flags)) + nfs_page_set_inode_ref(req, inode); +- nfs_page_group_unlock(req); +- return 0; + } + + /** +@@ -585,19 +575,18 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio) + } + } + ++ ret = nfs_page_group_lock(head); ++ if (ret < 0) ++ goto out_unlock; ++ + /* Ensure that nobody removed the request before we locked it */ + if (head != folio->private) { ++ nfs_page_group_unlock(head); + nfs_unlock_and_release_request(head); + goto retry; + } + +- ret = nfs_cancel_remove_inode(head, inode); +- if (ret < 0) +- goto out_unlock; +- +- ret = nfs_page_group_lock(head); +- if (ret < 0) +- goto out_unlock; ++ nfs_cancel_remove_inode(head, inode); + + /* lock each request in the page group */ + for (subreq = head->wb_this_page; +@@ -786,7 +775,8 @@ static void nfs_inode_remove_request(struct nfs_page *req) + { + struct nfs_inode *nfsi = NFS_I(nfs_page_to_inode(req)); + +- if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) { ++ nfs_page_group_lock(req); ++ if (nfs_page_group_sync_on_bit_locked(req, PG_REMOVE)) { + struct folio *folio = nfs_page_to_folio(req->wb_head); + struct address_space *mapping = folio->mapping; 
+ +@@ -798,6 +788,7 @@ static void nfs_inode_remove_request(struct nfs_page *req) + } + spin_unlock(&mapping->i_private_lock); + } ++ nfs_page_group_unlock(req); + + if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) { + atomic_long_dec(&nfsi->nrequests); +diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c +index d7310fcf38881e..c2263148ff20aa 100644 +--- a/fs/overlayfs/copy_up.c ++++ b/fs/overlayfs/copy_up.c +@@ -779,7 +779,7 @@ static int ovl_copy_up_workdir(struct ovl_copy_up_ctx *c) + return err; + + ovl_start_write(c->dentry); +- inode_lock(wdir); ++ inode_lock_nested(wdir, I_MUTEX_PARENT); + temp = ovl_create_temp(ofs, c->workdir, &cattr); + inode_unlock(wdir); + ovl_end_write(c->dentry); +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c +index 0102ab3aaec162..bdc3c2e9334ad5 100644 +--- a/fs/proc/task_mmu.c ++++ b/fs/proc/task_mmu.c +@@ -212,8 +212,8 @@ static int proc_maps_open(struct inode *inode, struct file *file, + + priv->inode = inode; + priv->mm = proc_mem_open(inode, PTRACE_MODE_READ); +- if (IS_ERR_OR_NULL(priv->mm)) { +- int err = priv->mm ? PTR_ERR(priv->mm) : -ESRCH; ++ if (IS_ERR(priv->mm)) { ++ int err = PTR_ERR(priv->mm); + + seq_release_private(inode, file); + return err; +diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c +index 4bb065a6fbaa7d..d3e09b10dea476 100644 +--- a/fs/smb/client/smb2ops.c ++++ b/fs/smb/client/smb2ops.c +@@ -4496,7 +4496,7 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst, + for (int i = 1; i < num_rqst; i++) { + struct smb_rqst *old = &old_rq[i - 1]; + struct smb_rqst *new = &new_rq[i]; +- struct folio_queue *buffer; ++ struct folio_queue *buffer = NULL; + size_t size = iov_iter_count(&old->rq_iter); + + orig_len += smb_rqst_len(server, old); +diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c +index 3f04a2977ba86c..67c4f73398dfee 100644 +--- a/fs/smb/server/connection.c ++++ b/fs/smb/server/connection.c +@@ -504,7 +504,8 @@ void ksmbd_conn_transport_destroy(void) + { + mutex_lock(&init_lock); + ksmbd_tcp_destroy(); +- ksmbd_rdma_destroy(); ++ ksmbd_rdma_stop_listening(); + stop_sessions(); ++ ksmbd_rdma_destroy(); + mutex_unlock(&init_lock); + } +diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h +index 31dd1caac1e8a8..2aa8084bb59302 100644 +--- a/fs/smb/server/connection.h ++++ b/fs/smb/server/connection.h +@@ -46,7 +46,12 @@ struct ksmbd_conn { + struct mutex srv_mutex; + int status; + unsigned int cli_cap; +- __be32 inet_addr; ++ union { ++ __be32 inet_addr; ++#if IS_ENABLED(CONFIG_IPV6) ++ u8 inet6_addr[16]; ++#endif ++ }; + char *request_buf; + struct ksmbd_transport *transport; + struct nls_table *local_nls; +diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c +index d7a8a580d01362..a04d5702820d07 100644 +--- a/fs/smb/server/oplock.c ++++ b/fs/smb/server/oplock.c +@@ -1102,8 +1102,10 @@ void smb_send_parent_lease_break_noti(struct ksmbd_file *fp, + if (!atomic_inc_not_zero(&opinfo->refcount)) + continue; + +- if (ksmbd_conn_releasing(opinfo->conn)) ++ if (ksmbd_conn_releasing(opinfo->conn)) { ++ opinfo_put(opinfo); + continue; ++ } + + oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); + opinfo_put(opinfo); +@@ -1139,8 +1141,11 @@ void smb_lazy_parent_lease_break_close(struct ksmbd_file *fp) + if (!atomic_inc_not_zero(&opinfo->refcount)) + continue; + +- if (ksmbd_conn_releasing(opinfo->conn)) ++ if (ksmbd_conn_releasing(opinfo->conn)) { ++ opinfo_put(opinfo); + continue; ++ } ++ + oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); + 
opinfo_put(opinfo); + } +@@ -1343,8 +1348,10 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp, + if (!atomic_inc_not_zero(&brk_op->refcount)) + continue; + +- if (ksmbd_conn_releasing(brk_op->conn)) ++ if (ksmbd_conn_releasing(brk_op->conn)) { ++ opinfo_put(brk_op); + continue; ++ } + + if (brk_op->is_lease && (brk_op->o_lease->state & + (~(SMB2_LEASE_READ_CACHING_LE | +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 8d366db5f60547..5466aa8c39b1cd 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -2194,7 +2194,7 @@ int ksmbd_rdma_init(void) + return 0; + } + +-void ksmbd_rdma_destroy(void) ++void ksmbd_rdma_stop_listening(void) + { + if (!smb_direct_listener.cm_id) + return; +@@ -2203,7 +2203,10 @@ void ksmbd_rdma_destroy(void) + rdma_destroy_id(smb_direct_listener.cm_id); + + smb_direct_listener.cm_id = NULL; ++} + ++void ksmbd_rdma_destroy(void) ++{ + if (smb_direct_wq) { + destroy_workqueue(smb_direct_wq); + smb_direct_wq = NULL; +diff --git a/fs/smb/server/transport_rdma.h b/fs/smb/server/transport_rdma.h +index 77aee4e5c9dcd8..a2291b77488a15 100644 +--- a/fs/smb/server/transport_rdma.h ++++ b/fs/smb/server/transport_rdma.h +@@ -54,13 +54,15 @@ struct smb_direct_data_transfer { + + #ifdef CONFIG_SMB_SERVER_SMBDIRECT + int ksmbd_rdma_init(void); ++void ksmbd_rdma_stop_listening(void); + void ksmbd_rdma_destroy(void); + bool ksmbd_rdma_capable_netdev(struct net_device *netdev); + void init_smbd_max_io_size(unsigned int sz); + unsigned int get_smbd_max_read_write_size(void); + #else + static inline int ksmbd_rdma_init(void) { return 0; } +-static inline int ksmbd_rdma_destroy(void) { return 0; } ++static inline void ksmbd_rdma_stop_listening(void) { } ++static inline void ksmbd_rdma_destroy(void) { } + static inline bool ksmbd_rdma_capable_netdev(struct net_device *netdev) { return false; } + static inline void init_smbd_max_io_size(unsigned int sz) { } + static inline unsigned int get_smbd_max_read_write_size(void) { return 0; } +diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c +index d72588f33b9cd1..756833c91b140b 100644 +--- a/fs/smb/server/transport_tcp.c ++++ b/fs/smb/server/transport_tcp.c +@@ -87,7 +87,14 @@ static struct tcp_transport *alloc_transport(struct socket *client_sk) + return NULL; + } + ++#if IS_ENABLED(CONFIG_IPV6) ++ if (client_sk->sk->sk_family == AF_INET6) ++ memcpy(&conn->inet6_addr, &client_sk->sk->sk_v6_daddr, 16); ++ else ++ conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr; ++#else + conn->inet_addr = inet_sk(client_sk->sk)->inet_daddr; ++#endif + conn->transport = KSMBD_TRANS(t); + KSMBD_TRANS(t)->conn = conn; + KSMBD_TRANS(t)->ops = &ksmbd_tcp_transport_ops; +@@ -231,7 +238,6 @@ static int ksmbd_kthread_fn(void *p) + { + struct socket *client_sk = NULL; + struct interface *iface = (struct interface *)p; +- struct inet_sock *csk_inet; + struct ksmbd_conn *conn; + int ret; + +@@ -254,13 +260,27 @@ static int ksmbd_kthread_fn(void *p) + /* + * Limits repeated connections from clients with the same IP. 
+ */ +- csk_inet = inet_sk(client_sk->sk); + down_read(&conn_list_lock); + list_for_each_entry(conn, &conn_list, conns_list) +- if (csk_inet->inet_daddr == conn->inet_addr) { ++#if IS_ENABLED(CONFIG_IPV6) ++ if (client_sk->sk->sk_family == AF_INET6) { ++ if (memcmp(&client_sk->sk->sk_v6_daddr, ++ &conn->inet6_addr, 16) == 0) { ++ ret = -EAGAIN; ++ break; ++ } ++ } else if (inet_sk(client_sk->sk)->inet_daddr == ++ conn->inet_addr) { ++ ret = -EAGAIN; ++ break; ++ } ++#else ++ if (inet_sk(client_sk->sk)->inet_daddr == ++ conn->inet_addr) { + ret = -EAGAIN; + break; + } ++#endif + up_read(&conn_list_lock); + if (ret == -EAGAIN) + continue; +diff --git a/fs/splice.c b/fs/splice.c +index 4d6df083e0c06a..f5094b6d00a09f 100644 +--- a/fs/splice.c ++++ b/fs/splice.c +@@ -739,6 +739,9 @@ iter_file_splice_write(struct pipe_inode_info *pipe, struct file *out, + sd.pos = kiocb.ki_pos; + if (ret <= 0) + break; ++ WARN_ONCE(ret > sd.total_len - left, ++ "Splice Exceeded! ret=%zd tot=%zu left=%zu\n", ++ ret, sd.total_len, left); + + sd.num_spliced += ret; + sd.total_len -= ret; +diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c +index 992ea0e372572f..4465cf05603a8f 100644 +--- a/fs/squashfs/super.c ++++ b/fs/squashfs/super.c +@@ -187,10 +187,15 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc) + unsigned short flags; + unsigned int fragments; + u64 lookup_table_start, xattr_id_table_start, next_table; +- int err; ++ int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); + + TRACE("Entered squashfs_fill_superblock\n"); + ++ if (!devblksize) { ++ errorf(fc, "squashfs: unable to set blocksize\n"); ++ return -EINVAL; ++ } ++ + sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL); + if (sb->s_fs_info == NULL) { + ERROR("Failed to allocate squashfs_sb_info\n"); +@@ -201,12 +206,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc) + + msblk->panic_on_errors = (opts->errors == Opt_errors_panic); + +- msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); +- if (!msblk->devblksize) { +- errorf(fc, "squashfs: unable to set blocksize\n"); +- return -EINVAL; +- } +- ++ msblk->devblksize = devblksize; + msblk->devblksize_log2 = ffz(~msblk->devblksize); + + mutex_init(&msblk->meta_index_mutex); +diff --git a/fs/xfs/libxfs/xfs_refcount.c b/fs/xfs/libxfs/xfs_refcount.c +index cebe83f7842a1c..8977840374836e 100644 +--- a/fs/xfs/libxfs/xfs_refcount.c ++++ b/fs/xfs/libxfs/xfs_refcount.c +@@ -2099,9 +2099,7 @@ xfs_refcount_recover_cow_leftovers( + * recording the CoW debris we cancel the (empty) transaction + * and everything goes away cleanly. 
+ */ +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; ++ tp = xfs_trans_alloc_empty(mp); + + if (isrt) { + xfs_rtgroup_lock(to_rtg(xg), XFS_RTGLOCK_REFCOUNT); +diff --git a/fs/xfs/scrub/common.c b/fs/xfs/scrub/common.c +index 28ad341df8eede..d080f4e6e9d8c2 100644 +--- a/fs/xfs/scrub/common.c ++++ b/fs/xfs/scrub/common.c +@@ -870,7 +870,8 @@ int + xchk_trans_alloc_empty( + struct xfs_scrub *sc) + { +- return xfs_trans_alloc_empty(sc->mp, &sc->tp); ++ sc->tp = xfs_trans_alloc_empty(sc->mp); ++ return 0; + } + + /* +diff --git a/fs/xfs/scrub/repair.c b/fs/xfs/scrub/repair.c +index f8f9ed30f56b03..f7f80ff32afc25 100644 +--- a/fs/xfs/scrub/repair.c ++++ b/fs/xfs/scrub/repair.c +@@ -1279,18 +1279,10 @@ xrep_trans_alloc_hook_dummy( + void **cookiep, + struct xfs_trans **tpp) + { +- int error; +- + *cookiep = current->journal_info; + current->journal_info = NULL; +- +- error = xfs_trans_alloc_empty(mp, tpp); +- if (!error) +- return 0; +- +- current->journal_info = *cookiep; +- *cookiep = NULL; +- return error; ++ *tpp = xfs_trans_alloc_empty(mp); ++ return 0; + } + + /* Cancel a dummy transaction used by a live update hook function. */ +diff --git a/fs/xfs/scrub/scrub.c b/fs/xfs/scrub/scrub.c +index 76e24032e99a53..3c3b0d25006ff4 100644 +--- a/fs/xfs/scrub/scrub.c ++++ b/fs/xfs/scrub/scrub.c +@@ -876,10 +876,7 @@ xchk_scrubv_open_by_handle( + struct xfs_inode *ip; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return NULL; +- ++ tp = xfs_trans_alloc_empty(mp); + error = xfs_iget(mp, tp, head->svh_ino, XCHK_IGET_FLAGS, 0, &ip); + xfs_trans_cancel(tp); + if (error) +diff --git a/fs/xfs/xfs_attr_item.c b/fs/xfs/xfs_attr_item.c +index f683b7a9323f16..da1e11f38eb004 100644 +--- a/fs/xfs/xfs_attr_item.c ++++ b/fs/xfs/xfs_attr_item.c +@@ -616,10 +616,7 @@ xfs_attri_iread_extents( + struct xfs_trans *tp; + int error; + +- error = xfs_trans_alloc_empty(ip->i_mount, &tp); +- if (error) +- return error; +- ++ tp = xfs_trans_alloc_empty(ip->i_mount); + xfs_ilock(ip, XFS_ILOCK_EXCL); + error = xfs_iread_extents(tp, ip, XFS_ATTR_FORK); + xfs_iunlock(ip, XFS_ILOCK_EXCL); +diff --git a/fs/xfs/xfs_discard.c b/fs/xfs/xfs_discard.c +index 603d5136564508..ee49f20875afa3 100644 +--- a/fs/xfs/xfs_discard.c ++++ b/fs/xfs/xfs_discard.c +@@ -189,9 +189,7 @@ xfs_trim_gather_extents( + */ + xfs_log_force(mp, XFS_LOG_SYNC); + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; ++ tp = xfs_trans_alloc_empty(mp); + + error = xfs_alloc_read_agf(pag, tp, 0, &agbp); + if (error) +@@ -583,9 +581,7 @@ xfs_trim_rtextents( + struct xfs_trans *tp; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; ++ tp = xfs_trans_alloc_empty(mp); + + /* + * Walk the free ranges between low and high. The query_range function +@@ -701,9 +697,7 @@ xfs_trim_rtgroup_extents( + struct xfs_trans *tp; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; ++ tp = xfs_trans_alloc_empty(mp); + + /* + * Walk the free ranges between low and high. The query_range function +diff --git a/fs/xfs/xfs_fsmap.c b/fs/xfs/xfs_fsmap.c +index 414b27a8645886..af68c7de8ee8ad 100644 +--- a/fs/xfs/xfs_fsmap.c ++++ b/fs/xfs/xfs_fsmap.c +@@ -1270,9 +1270,7 @@ xfs_getfsmap( + * buffer locking abilities to detect cycles in the rmapbt + * without deadlocking. 
+ */ +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- break; ++ tp = xfs_trans_alloc_empty(mp); + + info.dev = handlers[i].dev; + info.last = false; +diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c +index bbc2f2973dcc90..4cf7abe5014371 100644 +--- a/fs/xfs/xfs_icache.c ++++ b/fs/xfs/xfs_icache.c +@@ -893,10 +893,7 @@ xfs_metafile_iget( + struct xfs_trans *tp; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; +- ++ tp = xfs_trans_alloc_empty(mp); + error = xfs_trans_metafile_iget(tp, ino, metafile_type, ipp); + xfs_trans_cancel(tp); + return error; +diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c +index 761a996a857cac..9c39251961a32a 100644 +--- a/fs/xfs/xfs_inode.c ++++ b/fs/xfs/xfs_inode.c +@@ -2932,12 +2932,9 @@ xfs_inode_reload_unlinked( + struct xfs_inode *ip) + { + struct xfs_trans *tp; +- int error; +- +- error = xfs_trans_alloc_empty(ip->i_mount, &tp); +- if (error) +- return error; ++ int error = 0; + ++ tp = xfs_trans_alloc_empty(ip->i_mount); + xfs_ilock(ip, XFS_ILOCK_SHARED); + if (xfs_inode_unlinked_incomplete(ip)) + error = xfs_inode_reload_unlinked_bucket(tp, ip); +diff --git a/fs/xfs/xfs_itable.c b/fs/xfs/xfs_itable.c +index 1fa1c0564b0c5a..5116842420b2e6 100644 +--- a/fs/xfs/xfs_itable.c ++++ b/fs/xfs/xfs_itable.c +@@ -239,14 +239,10 @@ xfs_bulkstat_one( + * Grab an empty transaction so that we can use its recursive buffer + * locking abilities to detect cycles in the inobt without deadlocking. + */ +- error = xfs_trans_alloc_empty(breq->mp, &tp); +- if (error) +- goto out; +- ++ tp = xfs_trans_alloc_empty(breq->mp); + error = xfs_bulkstat_one_int(breq->mp, breq->idmap, tp, + breq->startino, &bc); + xfs_trans_cancel(tp); +-out: + kfree(bc.buf); + + /* +@@ -331,17 +327,13 @@ xfs_bulkstat( + * Grab an empty transaction so that we can use its recursive buffer + * locking abilities to detect cycles in the inobt without deadlocking. + */ +- error = xfs_trans_alloc_empty(breq->mp, &tp); +- if (error) +- goto out; +- ++ tp = xfs_trans_alloc_empty(breq->mp); + if (breq->flags & XFS_IBULK_SAME_AG) + iwalk_flags |= XFS_IWALK_SAME_AG; + + error = xfs_iwalk(breq->mp, tp, breq->startino, iwalk_flags, + xfs_bulkstat_iwalk, breq->icount, &bc); + xfs_trans_cancel(tp); +-out: + kfree(bc.buf); + + /* +@@ -455,23 +447,23 @@ xfs_inumbers( + .breq = breq, + }; + struct xfs_trans *tp; ++ unsigned int iwalk_flags = 0; + int error = 0; + + if (xfs_bulkstat_already_done(breq->mp, breq->startino)) + return 0; + ++ if (breq->flags & XFS_IBULK_SAME_AG) ++ iwalk_flags |= XFS_IWALK_SAME_AG; ++ + /* + * Grab an empty transaction so that we can use its recursive buffer + * locking abilities to detect cycles in the inobt without deadlocking. 
+ */ +- error = xfs_trans_alloc_empty(breq->mp, &tp); +- if (error) +- goto out; +- +- error = xfs_inobt_walk(breq->mp, tp, breq->startino, breq->flags, ++ tp = xfs_trans_alloc_empty(breq->mp); ++ error = xfs_inobt_walk(breq->mp, tp, breq->startino, iwalk_flags, + xfs_inumbers_walk, breq->icount, &ic); + xfs_trans_cancel(tp); +-out: + + /* + * We found some inode groups, so clear the error status and return +diff --git a/fs/xfs/xfs_iwalk.c b/fs/xfs/xfs_iwalk.c +index 7db3ece370b100..c1c31d1a8e21b5 100644 +--- a/fs/xfs/xfs_iwalk.c ++++ b/fs/xfs/xfs_iwalk.c +@@ -377,11 +377,8 @@ xfs_iwalk_run_callbacks( + if (!has_more) + return 0; + +- if (iwag->drop_trans) { +- error = xfs_trans_alloc_empty(mp, &iwag->tp); +- if (error) +- return error; +- } ++ if (iwag->drop_trans) ++ iwag->tp = xfs_trans_alloc_empty(mp); + + /* ...and recreate the cursor just past where we left off. */ + error = xfs_ialloc_read_agi(iwag->pag, iwag->tp, 0, agi_bpp); +@@ -617,9 +614,7 @@ xfs_iwalk_ag_work( + * Grab an empty transaction so that we can use its recursive buffer + * locking abilities to detect cycles in the inobt without deadlocking. + */ +- error = xfs_trans_alloc_empty(mp, &iwag->tp); +- if (error) +- goto out; ++ iwag->tp = xfs_trans_alloc_empty(mp); + iwag->drop_trans = 1; + + error = xfs_iwalk_ag(iwag); +diff --git a/fs/xfs/xfs_notify_failure.c b/fs/xfs/xfs_notify_failure.c +index 42e9c72b85c00f..fbeddcac479208 100644 +--- a/fs/xfs/xfs_notify_failure.c ++++ b/fs/xfs/xfs_notify_failure.c +@@ -279,10 +279,7 @@ xfs_dax_notify_dev_failure( + kernel_frozen = xfs_dax_notify_failure_freeze(mp) == 0; + } + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- goto out; +- ++ tp = xfs_trans_alloc_empty(mp); + start_gno = xfs_fsb_to_gno(mp, start_bno, type); + end_gno = xfs_fsb_to_gno(mp, end_bno, type); + while ((xg = xfs_group_next_range(mp, xg, start_gno, end_gno, type))) { +@@ -353,7 +350,6 @@ xfs_dax_notify_dev_failure( + error = -EFSCORRUPTED; + } + +-out: + /* Thaw the fs if it has been frozen before. 
*/ + if (mf_flags & MF_MEM_PRE_REMOVE) + xfs_dax_notify_failure_thaw(mp, kernel_frozen); +diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c +index fa135ac264710a..23ba84ec919a4d 100644 +--- a/fs/xfs/xfs_qm.c ++++ b/fs/xfs/xfs_qm.c +@@ -660,10 +660,7 @@ xfs_qm_load_metadir_qinos( + struct xfs_trans *tp; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; +- ++ tp = xfs_trans_alloc_empty(mp); + error = xfs_dqinode_load_parent(tp, &qi->qi_dirip); + if (error == -ENOENT) { + /* no quota dir directory, but we'll create one later */ +@@ -1755,10 +1752,7 @@ xfs_qm_qino_load( + struct xfs_inode *dp = NULL; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; +- ++ tp = xfs_trans_alloc_empty(mp); + if (xfs_has_metadir(mp)) { + error = xfs_dqinode_load_parent(tp, &dp); + if (error) +diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c +index 736eb0924573d3..6907e871fa1511 100644 +--- a/fs/xfs/xfs_rtalloc.c ++++ b/fs/xfs/xfs_rtalloc.c +@@ -729,9 +729,7 @@ xfs_rtginode_ensure( + if (rtg->rtg_inodes[type]) + return 0; + +- error = xfs_trans_alloc_empty(rtg_mount(rtg), &tp); +- if (error) +- return error; ++ tp = xfs_trans_alloc_empty(rtg_mount(rtg)); + error = xfs_rtginode_load(rtg, type, tp); + xfs_trans_cancel(tp); + +@@ -1305,9 +1303,7 @@ xfs_growfs_rt_prep_groups( + if (!mp->m_rtdirip) { + struct xfs_trans *tp; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; ++ tp = xfs_trans_alloc_empty(mp); + error = xfs_rtginode_load_parent(tp); + xfs_trans_cancel(tp); + +@@ -1674,10 +1670,7 @@ xfs_rtmount_inodes( + struct xfs_rtgroup *rtg = NULL; + int error; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; +- ++ tp = xfs_trans_alloc_empty(mp); + if (xfs_has_rtgroups(mp) && mp->m_sb.sb_rgcount > 0) { + error = xfs_rtginode_load_parent(tp); + if (error) +diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c +index b4a07af513bada..f1135c4cf3c204 100644 +--- a/fs/xfs/xfs_trans.c ++++ b/fs/xfs/xfs_trans.c +@@ -241,6 +241,28 @@ xfs_trans_reserve( + return error; + } + ++static struct xfs_trans * ++__xfs_trans_alloc( ++ struct xfs_mount *mp, ++ uint flags) ++{ ++ struct xfs_trans *tp; ++ ++ ASSERT(!(flags & XFS_TRANS_RES_FDBLKS) || xfs_has_lazysbcount(mp)); ++ ++ tp = kmem_cache_zalloc(xfs_trans_cache, GFP_KERNEL | __GFP_NOFAIL); ++ if (!(flags & XFS_TRANS_NO_WRITECOUNT)) ++ sb_start_intwrite(mp->m_super); ++ xfs_trans_set_context(tp); ++ tp->t_flags = flags; ++ tp->t_mountp = mp; ++ INIT_LIST_HEAD(&tp->t_items); ++ INIT_LIST_HEAD(&tp->t_busy); ++ INIT_LIST_HEAD(&tp->t_dfops); ++ tp->t_highest_agno = NULLAGNUMBER; ++ return tp; ++} ++ + int + xfs_trans_alloc( + struct xfs_mount *mp, +@@ -254,33 +276,16 @@ xfs_trans_alloc( + bool want_retry = true; + int error; + ++ ASSERT(resp->tr_logres > 0); ++ + /* + * Allocate the handle before we do our freeze accounting and setting up + * GFP_NOFS allocation context so that we avoid lockdep false positives + * by doing GFP_KERNEL allocations inside sb_start_intwrite(). + */ + retry: +- tp = kmem_cache_zalloc(xfs_trans_cache, GFP_KERNEL | __GFP_NOFAIL); +- if (!(flags & XFS_TRANS_NO_WRITECOUNT)) +- sb_start_intwrite(mp->m_super); +- xfs_trans_set_context(tp); +- +- /* +- * Zero-reservation ("empty") transactions can't modify anything, so +- * they're allowed to run while we're frozen. 
+- */ +- WARN_ON(resp->tr_logres > 0 && +- mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE); +- ASSERT(!(flags & XFS_TRANS_RES_FDBLKS) || +- xfs_has_lazysbcount(mp)); +- +- tp->t_flags = flags; +- tp->t_mountp = mp; +- INIT_LIST_HEAD(&tp->t_items); +- INIT_LIST_HEAD(&tp->t_busy); +- INIT_LIST_HEAD(&tp->t_dfops); +- tp->t_highest_agno = NULLAGNUMBER; +- ++ tp = __xfs_trans_alloc(mp, flags); ++ WARN_ON(mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE); + error = xfs_trans_reserve(tp, resp, blocks, rtextents); + if (error == -ENOSPC && want_retry) { + xfs_trans_cancel(tp); +@@ -324,14 +329,11 @@ xfs_trans_alloc( + * where we can be grabbing buffers at the same time that freeze is trying to + * drain the buffer LRU list. + */ +-int ++struct xfs_trans * + xfs_trans_alloc_empty( +- struct xfs_mount *mp, +- struct xfs_trans **tpp) ++ struct xfs_mount *mp) + { +- struct xfs_trans_res resv = {0}; +- +- return xfs_trans_alloc(mp, &resv, 0, 0, XFS_TRANS_NO_WRITECOUNT, tpp); ++ return __xfs_trans_alloc(mp, XFS_TRANS_NO_WRITECOUNT); + } + + /* +diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h +index 2b366851e9a4f4..a6b10aaeb1f1e0 100644 +--- a/fs/xfs/xfs_trans.h ++++ b/fs/xfs/xfs_trans.h +@@ -168,8 +168,7 @@ int xfs_trans_alloc(struct xfs_mount *mp, struct xfs_trans_res *resp, + struct xfs_trans **tpp); + int xfs_trans_reserve_more(struct xfs_trans *tp, + unsigned int blocks, unsigned int rtextents); +-int xfs_trans_alloc_empty(struct xfs_mount *mp, +- struct xfs_trans **tpp); ++struct xfs_trans *xfs_trans_alloc_empty(struct xfs_mount *mp); + void xfs_trans_mod_sb(xfs_trans_t *, uint, int64_t); + + int xfs_trans_get_buf_map(struct xfs_trans *tp, struct xfs_buftarg *target, +diff --git a/fs/xfs/xfs_zone_alloc.c b/fs/xfs/xfs_zone_alloc.c +index 01315ed75502dc..417895f8a24c74 100644 +--- a/fs/xfs/xfs_zone_alloc.c ++++ b/fs/xfs/xfs_zone_alloc.c +@@ -654,13 +654,6 @@ static inline bool xfs_zoned_pack_tight(struct xfs_inode *ip) + !(ip->i_diflags & XFS_DIFLAG_APPEND); + } + +-/* +- * Pick a new zone for writes. +- * +- * If we aren't using up our budget of open zones just open a new one from the +- * freelist. Else try to find one that matches the expected data lifetime. If +- * we don't find one that is good pick any zone that is available. +- */ + static struct xfs_open_zone * + xfs_select_zone_nowait( + struct xfs_mount *mp, +@@ -688,7 +681,8 @@ xfs_select_zone_nowait( + goto out_unlock; + + /* +- * See if we can open a new zone and use that. ++ * See if we can open a new zone and use that so that data for different ++ * files is mixed as little as possible. + */ + oz = xfs_try_open_zone(mp, write_hint); + if (oz) +diff --git a/fs/xfs/xfs_zone_gc.c b/fs/xfs/xfs_zone_gc.c +index 9c00fc5baa3073..e1954b0e6021ef 100644 +--- a/fs/xfs/xfs_zone_gc.c ++++ b/fs/xfs/xfs_zone_gc.c +@@ -328,10 +328,7 @@ xfs_zone_gc_query( + iter->rec_idx = 0; + iter->rec_count = 0; + +- error = xfs_trans_alloc_empty(mp, &tp); +- if (error) +- return error; +- ++ tp = xfs_trans_alloc_empty(mp); + xfs_rtgroup_lock(rtg, XFS_RTGLOCK_RMAP); + cur = xfs_rtrmapbt_init_cursor(tp, rtg); + error = xfs_rmap_query_range(cur, &ri_low, &ri_high, +diff --git a/include/crypto/hash.h b/include/crypto/hash.h +index db294d452e8cd9..bbaeae705ef0e5 100644 +--- a/include/crypto/hash.h ++++ b/include/crypto/hash.h +@@ -184,7 +184,7 @@ struct shash_desc { + * Worst case is hmac(sha3-224-s390). Its context is a nested 'shash_desc' + * containing a 'struct s390_sha_ctx'. 
+ */ +-#define HASH_MAX_DESCSIZE (sizeof(struct shash_desc) + 360) ++#define HASH_MAX_DESCSIZE (sizeof(struct shash_desc) + 361) + #define MAX_SYNC_HASH_REQSIZE (sizeof(struct ahash_request) + \ + HASH_MAX_DESCSIZE) + +diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h +index ffffd88bbbad33..2d97440028ffd7 100644 +--- a/include/crypto/internal/acompress.h ++++ b/include/crypto/internal/acompress.h +@@ -63,10 +63,7 @@ struct crypto_acomp_stream { + struct crypto_acomp_streams { + /* These must come first because of struct scomp_alg. */ + void *(*alloc_ctx)(void); +- union { +- void (*free_ctx)(void *); +- void (*cfree_ctx)(const void *); +- }; ++ void (*free_ctx)(void *); + + struct crypto_acomp_stream __percpu *streams; + struct work_struct stream_work; +diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h +index d8539174ca11ba..49a2e09155d1ce 100644 +--- a/include/drm/drm_format_helper.h ++++ b/include/drm/drm_format_helper.h +@@ -102,6 +102,15 @@ void drm_fb_xrgb8888_to_bgr888(struct iosys_map *dst, const unsigned int *dst_pi + void drm_fb_xrgb8888_to_argb8888(struct iosys_map *dst, const unsigned int *dst_pitch, + const struct iosys_map *src, const struct drm_framebuffer *fb, + const struct drm_rect *clip, struct drm_format_conv_state *state); ++void drm_fb_xrgb8888_to_abgr8888(struct iosys_map *dst, const unsigned int *dst_pitch, ++ const struct iosys_map *src, const struct drm_framebuffer *fb, ++ const struct drm_rect *clip, struct drm_format_conv_state *state); ++void drm_fb_xrgb8888_to_xbgr8888(struct iosys_map *dst, const unsigned int *dst_pitch, ++ const struct iosys_map *src, const struct drm_framebuffer *fb, ++ const struct drm_rect *clip, struct drm_format_conv_state *state); ++void drm_fb_xrgb8888_to_bgrx8888(struct iosys_map *dst, const unsigned int *dst_pitch, ++ const struct iosys_map *src, const struct drm_framebuffer *fb, ++ const struct drm_rect *clip, struct drm_format_conv_state *state); + void drm_fb_xrgb8888_to_xrgb2101010(struct iosys_map *dst, const unsigned int *dst_pitch, + const struct iosys_map *src, const struct drm_framebuffer *fb, + const struct drm_rect *clip, +diff --git a/include/drm/intel/pciids.h b/include/drm/intel/pciids.h +index a7ce9523c50d37..9864d2d7d661dc 100644 +--- a/include/drm/intel/pciids.h ++++ b/include/drm/intel/pciids.h +@@ -846,6 +846,7 @@ + /* BMG */ + #define INTEL_BMG_IDS(MACRO__, ...) 
\ + MACRO__(0xE202, ## __VA_ARGS__), \ ++ MACRO__(0xE209, ## __VA_ARGS__), \ + MACRO__(0xE20B, ## __VA_ARGS__), \ + MACRO__(0xE20C, ## __VA_ARGS__), \ + MACRO__(0xE20D, ## __VA_ARGS__), \ +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index 620345ce3aaad9..3921188c9e1397 100644 +--- a/include/linux/blkdev.h ++++ b/include/linux/blkdev.h +@@ -652,6 +652,7 @@ enum { + QUEUE_FLAG_SQ_SCHED, /* single queue style io dispatch */ + QUEUE_FLAG_DISABLE_WBT_DEF, /* for sched to disable/enable wbt */ + QUEUE_FLAG_NO_ELV_SWITCH, /* can't switch elevator any more */ ++ QUEUE_FLAG_QOS_ENABLED, /* qos is enabled */ + QUEUE_FLAG_MAX + }; + +diff --git a/include/linux/compiler.h b/include/linux/compiler.h +index 6f04a1d8c72094..64ff73c533e54e 100644 +--- a/include/linux/compiler.h ++++ b/include/linux/compiler.h +@@ -288,14 +288,6 @@ static inline void *offset_to_ptr(const int *off) + #define __ADDRESSABLE(sym) \ + ___ADDRESSABLE(sym, __section(".discard.addressable")) + +-#define __ADDRESSABLE_ASM(sym) \ +- .pushsection .discard.addressable,"aw"; \ +- .align ARCH_SEL(8,4); \ +- ARCH_SEL(.quad, .long) __stringify(sym); \ +- .popsection; +- +-#define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym)) +- + /* + * This returns a constant expression while determining if an argument is + * a constant expression, most importantly without evaluating the argument. +diff --git a/include/linux/iosys-map.h b/include/linux/iosys-map.h +index 4696abfd311cc1..3e85afe794c0aa 100644 +--- a/include/linux/iosys-map.h ++++ b/include/linux/iosys-map.h +@@ -264,12 +264,7 @@ static inline bool iosys_map_is_set(const struct iosys_map *map) + */ + static inline void iosys_map_clear(struct iosys_map *map) + { +- if (map->is_iomem) { +- map->vaddr_iomem = NULL; +- map->is_iomem = false; +- } else { +- map->vaddr = NULL; +- } ++ memset(map, 0, sizeof(*map)); + } + + /** +diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h +index c4aa58032faf87..f9a17fbbd3980b 100644 +--- a/include/linux/iov_iter.h ++++ b/include/linux/iov_iter.h +@@ -160,7 +160,7 @@ size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2 + + do { + struct folio *folio = folioq_folio(folioq, slot); +- size_t part, remain, consumed; ++ size_t part, remain = 0, consumed; + size_t fsize; + void *base; + +@@ -168,14 +168,16 @@ size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2 + break; + + fsize = folioq_folio_size(folioq, slot); +- base = kmap_local_folio(folio, skip); +- part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); +- remain = step(base, progress, part, priv, priv2); +- kunmap_local(base); +- consumed = part - remain; +- len -= consumed; +- progress += consumed; +- skip += consumed; ++ if (skip < fsize) { ++ base = kmap_local_folio(folio, skip); ++ part = umin(len, PAGE_SIZE - skip % PAGE_SIZE); ++ remain = step(base, progress, part, priv, priv2); ++ kunmap_local(base); ++ consumed = part - remain; ++ len -= consumed; ++ progress += consumed; ++ skip += consumed; ++ } + if (skip >= fsize) { + skip = 0; + slot++; +diff --git a/include/linux/kcov.h b/include/linux/kcov.h +index 75a2fb8b16c329..0143358874b07b 100644 +--- a/include/linux/kcov.h ++++ b/include/linux/kcov.h +@@ -57,47 +57,21 @@ static inline void kcov_remote_start_usb(u64 id) + + /* + * The softirq flavor of kcov_remote_*() functions is introduced as a temporary +- * workaround for KCOV's lack of nested remote coverage sections support. 
+- * +- * Adding support is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=210337. +- * +- * kcov_remote_start_usb_softirq(): +- * +- * 1. Only collects coverage when called in the softirq context. This allows +- * avoiding nested remote coverage collection sections in the task context. +- * For example, USB/IP calls usb_hcd_giveback_urb() in the task context +- * within an existing remote coverage collection section. Thus, KCOV should +- * not attempt to start collecting coverage within the coverage collection +- * section in __usb_hcd_giveback_urb() in this case. +- * +- * 2. Disables interrupts for the duration of the coverage collection section. +- * This allows avoiding nested remote coverage collection sections in the +- * softirq context (a softirq might occur during the execution of a work in +- * the BH workqueue, which runs with in_serving_softirq() > 0). +- * For example, usb_giveback_urb_bh() runs in the BH workqueue with +- * interrupts enabled, so __usb_hcd_giveback_urb() might be interrupted in +- * the middle of its remote coverage collection section, and the interrupt +- * handler might invoke __usb_hcd_giveback_urb() again. ++ * work around for kcov's lack of nested remote coverage sections support in ++ * task context. Adding support for nested sections is tracked in: ++ * https://bugzilla.kernel.org/show_bug.cgi?id=210337 + */ + +-static inline unsigned long kcov_remote_start_usb_softirq(u64 id) ++static inline void kcov_remote_start_usb_softirq(u64 id) + { +- unsigned long flags = 0; +- +- if (in_serving_softirq()) { +- local_irq_save(flags); ++ if (in_serving_softirq() && !in_hardirq()) + kcov_remote_start_usb(id); +- } +- +- return flags; + } + +-static inline void kcov_remote_stop_softirq(unsigned long flags) ++static inline void kcov_remote_stop_softirq(void) + { +- if (in_serving_softirq()) { ++ if (in_serving_softirq() && !in_hardirq()) + kcov_remote_stop(); +- local_irq_restore(flags); +- } + } + + #ifdef CONFIG_64BIT +@@ -131,11 +105,8 @@ static inline u64 kcov_common_handle(void) + } + static inline void kcov_remote_start_common(u64 id) {} + static inline void kcov_remote_start_usb(u64 id) {} +-static inline unsigned long kcov_remote_start_usb_softirq(u64 id) +-{ +- return 0; +-} +-static inline void kcov_remote_stop_softirq(unsigned long flags) {} ++static inline void kcov_remote_start_usb_softirq(u64 id) {} ++static inline void kcov_remote_stop_softirq(void) {} + + #endif /* CONFIG_KCOV */ + #endif /* _LINUX_KCOV_H */ +diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h +index 2c09df4ee57428..83288df7bb4597 100644 +--- a/include/linux/mlx5/mlx5_ifc.h ++++ b/include/linux/mlx5/mlx5_ifc.h +@@ -10460,8 +10460,16 @@ struct mlx5_ifc_pifr_reg_bits { + u8 port_filter_update_en[8][0x20]; + }; + ++enum { ++ MLX5_BUF_OWNERSHIP_UNKNOWN = 0x0, ++ MLX5_BUF_OWNERSHIP_FW_OWNED = 0x1, ++ MLX5_BUF_OWNERSHIP_SW_OWNED = 0x2, ++}; ++ + struct mlx5_ifc_pfcc_reg_bits { +- u8 reserved_at_0[0x8]; ++ u8 reserved_at_0[0x4]; ++ u8 buf_ownership[0x2]; ++ u8 reserved_at_6[0x2]; + u8 local_port[0x8]; + u8 reserved_at_10[0xb]; + u8 ppan_mask_n[0x1]; +@@ -10597,7 +10605,9 @@ struct mlx5_ifc_pcam_enhanced_features_bits { + u8 fec_200G_per_lane_in_pplm[0x1]; + u8 reserved_at_1e[0x2a]; + u8 fec_100G_per_lane_in_pplm[0x1]; +- u8 reserved_at_49[0x1f]; ++ u8 reserved_at_49[0xa]; ++ u8 buffer_ownership[0x1]; ++ u8 resereved_at_54[0x14]; + u8 fec_50G_per_lane_in_pplm[0x1]; + u8 reserved_at_69[0x4]; + u8 rx_icrc_encapsulated_counter[0x1]; +diff --git 
a/include/linux/netfs.h b/include/linux/netfs.h +index f43f075852c06b..31929c84b71822 100644 +--- a/include/linux/netfs.h ++++ b/include/linux/netfs.h +@@ -150,6 +150,7 @@ struct netfs_io_stream { + bool active; /* T if stream is active */ + bool need_retry; /* T if this stream needs retrying */ + bool failed; /* T if this stream failed */ ++ bool transferred_valid; /* T is ->transferred is valid */ + }; + + /* +diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h +index 169b4ae30ff479..9aed39abc94bc3 100644 +--- a/include/linux/nfs_page.h ++++ b/include/linux/nfs_page.h +@@ -160,6 +160,7 @@ extern void nfs_join_page_group(struct nfs_page *head, + extern int nfs_page_group_lock(struct nfs_page *); + extern void nfs_page_group_unlock(struct nfs_page *); + extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int); ++extern bool nfs_page_group_sync_on_bit_locked(struct nfs_page *, unsigned int); + extern int nfs_page_set_headlock(struct nfs_page *req); + extern void nfs_page_clear_headlock(struct nfs_page *req); + extern bool nfs_async_iocounter_wait(struct rpc_task *, struct nfs_lock_context *); +diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h +index 114299bd8b9878..b847cdd2c9d3cb 100644 +--- a/include/net/bluetooth/bluetooth.h ++++ b/include/net/bluetooth/bluetooth.h +@@ -638,7 +638,7 @@ static inline void sco_exit(void) + #if IS_ENABLED(CONFIG_BT_LE) + int iso_init(void); + int iso_exit(void); +-bool iso_enabled(void); ++bool iso_inited(void); + #else + static inline int iso_init(void) + { +@@ -650,7 +650,7 @@ static inline int iso_exit(void) + return 0; + } + +-static inline bool iso_enabled(void) ++static inline bool iso_inited(void) + { + return false; + } +diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h +index 8fa82987313424..7d1ba92b71f656 100644 +--- a/include/net/bluetooth/hci.h ++++ b/include/net/bluetooth/hci.h +@@ -562,6 +562,7 @@ enum { + #define LE_LINK 0x80 + #define CIS_LINK 0x82 + #define BIS_LINK 0x83 ++#define PA_LINK 0x84 + #define INVALID_LINK 0xff + + /* LMP features */ +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h +index f22881bf1b392a..439bc124ce7098 100644 +--- a/include/net/bluetooth/hci_core.h ++++ b/include/net/bluetooth/hci_core.h +@@ -129,7 +129,9 @@ struct hci_conn_hash { + struct list_head list; + unsigned int acl_num; + unsigned int sco_num; +- unsigned int iso_num; ++ unsigned int cis_num; ++ unsigned int bis_num; ++ unsigned int pa_num; + unsigned int le_num; + unsigned int le_num_peripheral; + }; +@@ -1014,8 +1016,13 @@ static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c) + h->sco_num++; + break; + case CIS_LINK: ++ h->cis_num++; ++ break; + case BIS_LINK: +- h->iso_num++; ++ h->bis_num++; ++ break; ++ case PA_LINK: ++ h->pa_num++; + break; + } + } +@@ -1041,8 +1048,13 @@ static inline void hci_conn_hash_del(struct hci_dev *hdev, struct hci_conn *c) + h->sco_num--; + break; + case CIS_LINK: ++ h->cis_num--; ++ break; + case BIS_LINK: +- h->iso_num--; ++ h->bis_num--; ++ break; ++ case PA_LINK: ++ h->pa_num--; + break; + } + } +@@ -1059,8 +1071,11 @@ static inline unsigned int hci_conn_num(struct hci_dev *hdev, __u8 type) + case ESCO_LINK: + return h->sco_num; + case CIS_LINK: ++ return h->cis_num; + case BIS_LINK: +- return h->iso_num; ++ return h->bis_num; ++ case PA_LINK: ++ return h->pa_num; + default: + return 0; + } +@@ -1070,7 +1085,15 @@ static inline unsigned int hci_conn_count(struct hci_dev *hdev) + { + 
struct hci_conn_hash *c = &hdev->conn_hash; + +- return c->acl_num + c->sco_num + c->le_num + c->iso_num; ++ return c->acl_num + c->sco_num + c->le_num + c->cis_num + c->bis_num + ++ c->pa_num; ++} ++ ++static inline unsigned int hci_iso_count(struct hci_dev *hdev) ++{ ++ struct hci_conn_hash *c = &hdev->conn_hash; ++ ++ return c->cis_num + c->bis_num; + } + + static inline bool hci_conn_valid(struct hci_dev *hdev, struct hci_conn *conn) +@@ -1142,7 +1165,7 @@ hci_conn_hash_lookup_create_pa_sync(struct hci_dev *hdev) + rcu_read_lock(); + + list_for_each_entry_rcu(c, &h->list, list) { +- if (c->type != BIS_LINK) ++ if (c->type != PA_LINK) + continue; + + if (!test_bit(HCI_CONN_CREATE_PA_SYNC, &c->flags)) +@@ -1337,7 +1360,7 @@ hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev, + rcu_read_lock(); + + list_for_each_entry_rcu(c, &h->list, list) { +- if (c->type != BIS_LINK) ++ if (c->type != PA_LINK) + continue; + + if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) { +@@ -1407,7 +1430,7 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle) + rcu_read_lock(); + + list_for_each_entry_rcu(c, &h->list, list) { +- if (c->type != BIS_LINK) ++ if (c->type != PA_LINK) + continue; + + /* Ignore the listen hcon, we are looking +@@ -1932,6 +1955,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn); + !hci_dev_test_flag(dev, HCI_RPA_EXPIRED)) + #define adv_rpa_valid(adv) (bacmp(&adv->random_addr, BDADDR_ANY) && \ + !adv->rpa_expired) ++#define le_enabled(dev) (lmp_le_capable(dev) && \ ++ hci_dev_test_flag(dev, HCI_LE_ENABLED)) + + #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \ + ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M)) +@@ -1949,6 +1974,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn); + ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED)) + + #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY) ++#define ll_privacy_enabled(dev) (le_enabled(dev) && ll_privacy_capable(dev)) + + #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \ + ((dev)->commands[39] & 0x04)) +@@ -1998,14 +2024,23 @@ void hci_conn_del_sysfs(struct hci_conn *conn); + + /* CIS Master/Slave and BIS support */ + #define iso_capable(dev) (cis_capable(dev) || bis_capable(dev)) ++#define iso_enabled(dev) (le_enabled(dev) && iso_capable(dev)) + #define cis_capable(dev) \ + (cis_central_capable(dev) || cis_peripheral_capable(dev)) ++#define cis_enabled(dev) (le_enabled(dev) && cis_capable(dev)) + #define cis_central_capable(dev) \ + ((dev)->le_features[3] & HCI_LE_CIS_CENTRAL) ++#define cis_central_enabled(dev) \ ++ (le_enabled(dev) && cis_central_capable(dev)) + #define cis_peripheral_capable(dev) \ + ((dev)->le_features[3] & HCI_LE_CIS_PERIPHERAL) ++#define cis_peripheral_enabled(dev) \ ++ (le_enabled(dev) && cis_peripheral_capable(dev)) + #define bis_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_BROADCASTER) +-#define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER) ++#define bis_enabled(dev) (le_enabled(dev) && bis_capable(dev)) ++#define sync_recv_capable(dev) \ ++ ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER) ++#define sync_recv_enabled(dev) (le_enabled(dev) && sync_recv_capable(dev)) + + #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \ + (!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG))) +@@ -2026,6 +2061,7 @@ static inline int hci_proto_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, + + case CIS_LINK: + case BIS_LINK: ++ case PA_LINK: + return 
iso_connect_ind(hdev, bdaddr, flags); + + default: +diff --git a/include/net/bond_3ad.h b/include/net/bond_3ad.h +index 2053cd8e788a73..dba369a2cf27ef 100644 +--- a/include/net/bond_3ad.h ++++ b/include/net/bond_3ad.h +@@ -307,6 +307,7 @@ int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond, + struct slave *slave); + int bond_3ad_set_carrier(struct bonding *bond); + void bond_3ad_update_lacp_rate(struct bonding *bond); ++void bond_3ad_update_lacp_active(struct bonding *bond); + void bond_3ad_update_ad_actor_settings(struct bonding *bond); + int bond_3ad_stats_fill(struct sk_buff *skb, struct bond_3ad_stats *stats); + size_t bond_3ad_stats_size(void); +diff --git a/include/net/devlink.h b/include/net/devlink.h +index 0091f23a40f7d9..af3fd45155dd6e 100644 +--- a/include/net/devlink.h ++++ b/include/net/devlink.h +@@ -78,6 +78,9 @@ struct devlink_port_pci_sf_attrs { + * @flavour: flavour of the port + * @split: indicates if this is split port + * @splittable: indicates if the port can be split. ++ * @no_phys_port_name: skip automatic phys_port_name generation; for ++ * compatibility only, newly added driver/port instance ++ * should never set this. + * @lanes: maximum number of lanes the port supports. 0 value is not passed to netlink. + * @switch_id: if the port is part of switch, this is buffer with ID, otherwise this is NULL + * @phys: physical port attributes +@@ -87,7 +90,8 @@ struct devlink_port_pci_sf_attrs { + */ + struct devlink_port_attrs { + u8 split:1, +- splittable:1; ++ splittable:1, ++ no_phys_port_name:1; + u32 lanes; + enum devlink_port_flavour flavour; + struct netdev_phys_item_id switch_id; +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index 638948be4c50e2..738cd5b13c62fb 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -1038,12 +1038,17 @@ static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch, bool dir + skb = __skb_dequeue(&sch->gso_skb); + if (skb) { + sch->q.qlen--; ++ qdisc_qstats_backlog_dec(sch, skb); + return skb; + } +- if (direct) +- return __qdisc_dequeue_head(&sch->q); +- else ++ if (direct) { ++ skb = __qdisc_dequeue_head(&sch->q); ++ if (skb) ++ qdisc_qstats_backlog_dec(sch, skb); ++ return skb; ++ } else { + return sch->dequeue(sch); ++ } + } + + static inline struct sk_buff *qdisc_dequeue_head(struct Qdisc *sch) +diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h +index e17c4cadd04d1a..7c8bbe8ad1e2de 100644 +--- a/include/sound/cs35l56.h ++++ b/include/sound/cs35l56.h +@@ -107,8 +107,8 @@ + #define CS35L56_DSP1_PMEM_5114 0x3804FE8 + + #define CS35L63_DSP1_FW_VER CS35L56_DSP1_FW_VER +-#define CS35L63_DSP1_HALO_STATE 0x280396C +-#define CS35L63_DSP1_PM_CUR_STATE 0x28042C8 ++#define CS35L63_DSP1_HALO_STATE 0x2803C04 ++#define CS35L63_DSP1_PM_CUR_STATE 0x2804518 + #define CS35L63_PROTECTION_STATUS 0x340009C + #define CS35L63_TRANSDUCER_ACTUAL_PS 0x34000F4 + #define CS35L63_MAIN_RENDER_USER_MUTE 0x3400020 +@@ -306,6 +306,7 @@ struct cs35l56_base { + struct gpio_desc *reset_gpio; + struct cs35l56_spi_payload *spi_payload_buf; + const struct cs35l56_fw_reg *fw_reg; ++ const struct cirrus_amp_cal_controls *calibration_controls; + }; + + static inline bool cs35l56_is_otp_register(unsigned int reg) +diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h +index bebc252db8654f..a32305044371d9 100644 +--- a/include/trace/events/btrfs.h ++++ b/include/trace/events/btrfs.h +@@ -1095,7 +1095,7 @@ TRACE_EVENT(btrfs_cow_block, + 
TP_fast_assign_btrfs(root->fs_info, + __entry->root_objectid = btrfs_root_id(root); + __entry->buf_start = buf->start; +- __entry->refs = atomic_read(&buf->refs); ++ __entry->refs = refcount_read(&buf->refs); + __entry->cow_start = cow->start; + __entry->buf_level = btrfs_header_level(buf); + __entry->cow_level = btrfs_header_level(cow); +diff --git a/include/uapi/linux/pfrut.h b/include/uapi/linux/pfrut.h +index 42fa15f8310d6b..b77d5c210c2620 100644 +--- a/include/uapi/linux/pfrut.h ++++ b/include/uapi/linux/pfrut.h +@@ -89,6 +89,7 @@ struct pfru_payload_hdr { + __u32 hw_ver; + __u32 rt_ver; + __u8 platform_id[16]; ++ __u32 svn_ver; + }; + + enum pfru_dsm_status { +diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h +index ff47b6f0ba0f5f..b1394628727758 100644 +--- a/include/uapi/linux/raid/md_p.h ++++ b/include/uapi/linux/raid/md_p.h +@@ -173,7 +173,7 @@ typedef struct mdp_superblock_s { + #else + #error unspecified endianness + #endif +- __u32 recovery_cp; /* 11 recovery checkpoint sector count */ ++ __u32 resync_offset; /* 11 resync checkpoint sector count */ + /* There are only valid for minor_version > 90 */ + __u64 reshape_position; /* 12,13 next address in array-space for reshape */ + __u32 new_level; /* 14 new level we are reshaping to */ +diff --git a/io_uring/futex.c b/io_uring/futex.c +index 692462d50c8c0c..9113a44984f3cb 100644 +--- a/io_uring/futex.c ++++ b/io_uring/futex.c +@@ -288,6 +288,7 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags) + goto done_unlock; + } + ++ req->flags |= REQ_F_ASYNC_DATA; + req->async_data = ifd; + ifd->q = futex_q_init; + ifd->q.bitset = iof->futex_mask; +@@ -309,6 +310,8 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags) + if (ret < 0) + req_set_fail(req); + io_req_set_res(req, ret, 0); ++ req->async_data = NULL; ++ req->flags &= ~REQ_F_ASYNC_DATA; + kfree(ifd); + return IOU_COMPLETE; + } +diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec +index 2ee603a98813e2..1224dd937df0c4 100644 +--- a/kernel/Kconfig.kexec ++++ b/kernel/Kconfig.kexec +@@ -97,6 +97,7 @@ config KEXEC_JUMP + config KEXEC_HANDOVER + bool "kexec handover" + depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE ++ depends on !DEFERRED_STRUCT_PAGE_INIT + select MEMBLOCK_KHO_SCRATCH + select KEXEC_FILE + select DEBUG_FS +diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c +index 3bc4301466f334..f9d7799c5c9470 100644 +--- a/kernel/cgroup/cpuset.c ++++ b/kernel/cgroup/cpuset.c +@@ -280,7 +280,7 @@ static inline void check_insane_mems_config(nodemask_t *nodes) + { + if (!cpusets_insane_config() && + movable_only_nodes(nodes)) { +- static_branch_enable(&cpusets_insane_config_key); ++ static_branch_enable_cpuslocked(&cpusets_insane_config_key); + pr_info("Unsupported (movable nodes only) cpuset configuration detected (nmask=%*pbl)!\n" + "Cpuset allocations might fail even with a lot of memory available.\n", + nodemask_pr_args(nodes)); +@@ -1843,7 +1843,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd, + if (is_partition_valid(cs)) + adding = cpumask_and(tmp->addmask, + xcpus, parent->effective_xcpus); +- } else if (is_partition_invalid(cs) && ++ } else if (is_partition_invalid(cs) && !cpumask_empty(xcpus) && + cpumask_subset(xcpus, parent->effective_xcpus)) { + struct cgroup_subsys_state *css; + struct cpuset *child; +@@ -3870,9 +3870,10 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp) + partcmd = partcmd_invalidate; + /* + * On the 
other hand, an invalid partition root may be transitioned +- * back to a regular one. ++ * back to a regular one with a non-empty effective xcpus. + */ +- else if (is_partition_valid(parent) && is_partition_invalid(cs)) ++ else if (is_partition_valid(parent) && is_partition_invalid(cs) && ++ !cpumask_empty(cs->effective_xcpus)) + partcmd = partcmd_update; + + if (partcmd >= 0) { +diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c +index cbeaa499a96af3..408e52d5f7a4e2 100644 +--- a/kernel/cgroup/rstat.c ++++ b/kernel/cgroup/rstat.c +@@ -488,6 +488,9 @@ void css_rstat_exit(struct cgroup_subsys_state *css) + if (!css_uses_rstat(css)) + return; + ++ if (!css->rstat_cpu) ++ return; ++ + css_rstat_flush(css); + + /* sanity check */ +diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c +index 5a21dbe179505a..e640d3eb343098 100644 +--- a/kernel/kexec_handover.c ++++ b/kernel/kexec_handover.c +@@ -144,14 +144,34 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn, + unsigned int order) + { + struct kho_mem_phys_bits *bits; +- struct kho_mem_phys *physxa; ++ struct kho_mem_phys *physxa, *new_physxa; + const unsigned long pfn_high = pfn >> order; + + might_sleep(); + +- physxa = xa_load_or_alloc(&track->orders, order, sizeof(*physxa)); +- if (IS_ERR(physxa)) +- return PTR_ERR(physxa); ++ physxa = xa_load(&track->orders, order); ++ if (!physxa) { ++ int err; ++ ++ new_physxa = kzalloc(sizeof(*physxa), GFP_KERNEL); ++ if (!new_physxa) ++ return -ENOMEM; ++ ++ xa_init(&new_physxa->phys_bits); ++ physxa = xa_cmpxchg(&track->orders, order, NULL, new_physxa, ++ GFP_KERNEL); ++ ++ err = xa_err(physxa); ++ if (err || physxa) { ++ xa_destroy(&new_physxa->phys_bits); ++ kfree(new_physxa); ++ ++ if (err) ++ return err; ++ } else { ++ physxa = new_physxa; ++ } ++ } + + bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS, + sizeof(*bits)); +@@ -544,6 +564,7 @@ static void __init kho_reserve_scratch(void) + err_free_scratch_desc: + memblock_free(kho_scratch, kho_scratch_cnt * sizeof(*kho_scratch)); + err_disable_kho: ++ pr_warn("Failed to reserve scratch area, disabling kexec handover\n"); + kho_enable = false; + } + +diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c +index 7dd5cbcb7a069d..717e3d1d6a2fa2 100644 +--- a/kernel/sched/ext.c ++++ b/kernel/sched/ext.c +@@ -5694,6 +5694,9 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link) + __setscheduler_class(p->policy, p->prio); + struct sched_enq_and_set_ctx ctx; + ++ if (!tryget_task_struct(p)) ++ continue; ++ + if (old_class != new_class && p->se.sched_delayed) + dequeue_task(task_rq(p), p, DEQUEUE_SLEEP | DEQUEUE_DELAYED); + +@@ -5706,6 +5709,7 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link) + sched_enq_and_set_task(&ctx); + + check_class_changed(task_rq(p), p, old_class, p->prio); ++ put_task_struct(p); + } + scx_task_iter_stop(&sti); + percpu_up_write(&scx_fork_rwsem); +diff --git a/kernel/signal.c b/kernel/signal.c +index 148082db9a553d..6b1493558a3dd1 100644 +--- a/kernel/signal.c ++++ b/kernel/signal.c +@@ -4067,6 +4067,7 @@ SYSCALL_DEFINE4(pidfd_send_signal, int, pidfd, int, sig, + { + struct pid *pid; + enum pid_type type; ++ int ret; + + /* Enforce flags be set to 0 until we add an extension. 
*/ + if (flags & ~PIDFD_SEND_SIGNAL_FLAGS) +@@ -4108,7 +4109,10 @@ SYSCALL_DEFINE4(pidfd_send_signal, int, pidfd, int, sig, + } + } + +- return do_pidfd_send_signal(pid, sig, type, info, flags); ++ ret = do_pidfd_send_signal(pid, sig, type, info, flags); ++ put_pid(pid); ++ ++ return ret; + } + + static int +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index 4203fad56b6c58..366efafe9f49c7 100644 +--- a/kernel/trace/ftrace.c ++++ b/kernel/trace/ftrace.c +@@ -4665,13 +4665,17 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag, + } else { + iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash); + } ++ } else { ++ if (hash) ++ iter->hash = alloc_and_copy_ftrace_hash(hash->size_bits, hash); ++ else ++ iter->hash = EMPTY_HASH; ++ } + +- if (!iter->hash) { +- trace_parser_put(&iter->parser); +- goto out_unlock; +- } +- } else +- iter->hash = hash; ++ if (!iter->hash) { ++ trace_parser_put(&iter->parser); ++ goto out_unlock; ++ } + + ret = 0; + +@@ -6547,9 +6551,6 @@ int ftrace_regex_release(struct inode *inode, struct file *file) + ftrace_hash_move_and_update_ops(iter->ops, orig_hash, + iter->hash, filter_hash); + mutex_unlock(&ftrace_lock); +- } else { +- /* For read only, the hash is the ops hash */ +- iter->hash = NULL; + } + + mutex_unlock(&iter->ops->func_hash->regex_lock); +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 7996f26c3f46d2..8ea6ada38c40ec 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -1846,7 +1846,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf, + + ret = get_user(ch, ubuf++); + if (ret) +- goto out; ++ goto fail; + + read++; + cnt--; +@@ -1860,7 +1860,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf, + while (cnt && isspace(ch)) { + ret = get_user(ch, ubuf++); + if (ret) +- goto out; ++ goto fail; + read++; + cnt--; + } +@@ -1870,8 +1870,7 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf, + /* only spaces were written */ + if (isspace(ch) || !ch) { + *ppos += read; +- ret = read; +- goto out; ++ return read; + } + } + +@@ -1881,11 +1880,12 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf, + parser->buffer[parser->idx++] = ch; + else { + ret = -EINVAL; +- goto out; ++ goto fail; + } ++ + ret = get_user(ch, ubuf++); + if (ret) +- goto out; ++ goto fail; + read++; + cnt--; + } +@@ -1901,13 +1901,13 @@ int trace_get_user(struct trace_parser *parser, const char __user *ubuf, + parser->buffer[parser->idx] = 0; + } else { + ret = -EINVAL; +- goto out; ++ goto fail; + } + + *ppos += read; +- ret = read; +- +-out: ++ return read; ++fail: ++ trace_parser_fail(parser); + return ret; + } + +@@ -2410,10 +2410,10 @@ int __init register_tracer(struct tracer *type) + mutex_unlock(&trace_types_lock); + + if (ret || !default_bootup_tracer) +- goto out_unlock; ++ return ret; + + if (strncmp(default_bootup_tracer, type->name, MAX_TRACER_SIZE)) +- goto out_unlock; ++ return 0; + + printk(KERN_INFO "Starting tracer '%s'\n", type->name); + /* Do we want this tracer to start on bootup? */ +@@ -2425,8 +2425,7 @@ int __init register_tracer(struct tracer *type) + /* disable other selftests, since this will break it. 
*/ + disable_tracing_selftest("running a tracer"); + +- out_unlock: +- return ret; ++ return 0; + } + + static void tracing_reset_cpu(struct array_buffer *buf, int cpu) +@@ -8954,12 +8953,12 @@ ftrace_trace_snapshot_callback(struct trace_array *tr, struct ftrace_hash *hash, + out_reg: + ret = tracing_arm_snapshot(tr); + if (ret < 0) +- goto out; ++ return ret; + + ret = register_ftrace_function_probe(glob, tr, ops, count); + if (ret < 0) + tracing_disarm_snapshot(tr); +- out: ++ + return ret < 0 ? ret : 0; + } + +@@ -11057,7 +11056,7 @@ __init static int tracer_alloc_buffers(void) + BUILD_BUG_ON(TRACE_ITER_LAST_BIT > TRACE_FLAGS_MAX_SIZE); + + if (!alloc_cpumask_var(&tracing_buffer_mask, GFP_KERNEL)) +- goto out; ++ return -ENOMEM; + + if (!alloc_cpumask_var(&global_trace.tracing_cpumask, GFP_KERNEL)) + goto out_free_buffer_mask; +@@ -11175,7 +11174,6 @@ __init static int tracer_alloc_buffers(void) + free_cpumask_var(global_trace.tracing_cpumask); + out_free_buffer_mask: + free_cpumask_var(tracing_buffer_mask); +-out: + return ret; + } + +diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h +index bd084953a98bef..dba1a9e4f7385c 100644 +--- a/kernel/trace/trace.h ++++ b/kernel/trace/trace.h +@@ -1292,6 +1292,7 @@ bool ftrace_event_is_function(struct trace_event_call *call); + */ + struct trace_parser { + bool cont; ++ bool fail; + char *buffer; + unsigned idx; + unsigned size; +@@ -1299,7 +1300,7 @@ struct trace_parser { + + static inline bool trace_parser_loaded(struct trace_parser *parser) + { +- return (parser->idx != 0); ++ return !parser->fail && parser->idx != 0; + } + + static inline bool trace_parser_cont(struct trace_parser *parser) +@@ -1313,6 +1314,11 @@ static inline void trace_parser_clear(struct trace_parser *parser) + parser->idx = 0; + } + ++static inline void trace_parser_fail(struct trace_parser *parser) ++{ ++ parser->fail = true; ++} ++ + extern int trace_parser_get_init(struct trace_parser *parser, int size); + extern void trace_parser_put(struct trace_parser *parser); + extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf, +@@ -2204,7 +2210,7 @@ static inline bool is_good_system_name(const char *name) + static inline void sanitize_event_name(char *name) + { + while (*name++ != '\0') +- if (*name == ':' || *name == '.') ++ if (*name == ':' || *name == '.' 
|| *name == '*') + *name = '_'; + } + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index d9c4a93b24509c..8ead13792f0495 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -843,6 +843,18 @@ static struct damos_filter *damos_nth_filter(int n, struct damos *s) + return NULL; + } + ++static struct damos_filter *damos_nth_ops_filter(int n, struct damos *s) ++{ ++ struct damos_filter *filter; ++ int i = 0; ++ ++ damos_for_each_ops_filter(filter, s) { ++ if (i++ == n) ++ return filter; ++ } ++ return NULL; ++} ++ + static void damos_commit_filter_arg( + struct damos_filter *dst, struct damos_filter *src) + { +@@ -869,6 +881,7 @@ static void damos_commit_filter( + { + dst->type = src->type; + dst->matching = src->matching; ++ dst->allow = src->allow; + damos_commit_filter_arg(dst, src); + } + +@@ -906,7 +919,7 @@ static int damos_commit_ops_filters(struct damos *dst, struct damos *src) + int i = 0, j = 0; + + damos_for_each_ops_filter_safe(dst_filter, next, dst) { +- src_filter = damos_nth_filter(i++, src); ++ src_filter = damos_nth_ops_filter(i++, src); + if (src_filter) + damos_commit_filter(dst_filter, src_filter); + else +diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c +index 4102a8c5f9926d..578546ef74a012 100644 +--- a/mm/damon/paddr.c ++++ b/mm/damon/paddr.c +@@ -476,6 +476,10 @@ static unsigned long damon_pa_migrate_pages(struct list_head *folio_list, + if (list_empty(folio_list)) + return nr_migrated; + ++ if (target_nid < 0 || target_nid >= MAX_NUMNODES || ++ !node_state(target_nid, N_MEMORY)) ++ return nr_migrated; ++ + noreclaim_flag = memalloc_noreclaim_save(); + + nid = folio_nid(lru_to_folio(folio_list)); +diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c +index 7731b238b5340f..0f5ddefd128afb 100644 +--- a/mm/debug_vm_pgtable.c ++++ b/mm/debug_vm_pgtable.c +@@ -1041,29 +1041,34 @@ static void __init destroy_args(struct pgtable_debug_args *args) + + /* Free page table entries */ + if (args->start_ptep) { ++ pmd_clear(args->pmdp); + pte_free(args->mm, args->start_ptep); + mm_dec_nr_ptes(args->mm); + } + + if (args->start_pmdp) { ++ pud_clear(args->pudp); + pmd_free(args->mm, args->start_pmdp); + mm_dec_nr_pmds(args->mm); + } + + if (args->start_pudp) { ++ p4d_clear(args->p4dp); + pud_free(args->mm, args->start_pudp); + mm_dec_nr_puds(args->mm); + } + +- if (args->start_p4dp) ++ if (args->start_p4dp) { ++ pgd_clear(args->pgdp); + p4d_free(args->mm, args->start_p4dp); ++ } + + /* Free vma and mm struct */ + if (args->vma) + vm_area_free(args->vma); + + if (args->mm) +- mmdrop(args->mm); ++ mmput(args->mm); + } + + static struct page * __init +diff --git a/mm/filemap.c b/mm/filemap.c +index bada249b9fb762..a6459874bb2aaa 100644 +--- a/mm/filemap.c ++++ b/mm/filemap.c +@@ -1778,8 +1778,9 @@ pgoff_t page_cache_next_miss(struct address_space *mapping, + pgoff_t index, unsigned long max_scan) + { + XA_STATE(xas, &mapping->i_pages, index); ++ unsigned long nr = max_scan; + +- while (max_scan--) { ++ while (nr--) { + void *entry = xas_next(&xas); + if (!entry || xa_is_value(entry)) + return xas.xa_index; +diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c +index 5f922dd38ffa13..c9cdafdde13234 100644 +--- a/mm/kasan/kasan_test_c.c ++++ b/mm/kasan/kasan_test_c.c +@@ -47,7 +47,7 @@ static struct { + * Some tests use these global variables to store return values from function + * calls that could otherwise be eliminated by the compiler as dead code. 
+ */ +-static volatile void *kasan_ptr_result; ++static void *volatile kasan_ptr_result; + static volatile int kasan_int_result; + + /* Probe for console output: obtains test_status lines of interest. */ +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index 225dddff091d71..dd543dd7755fc0 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -847,9 +847,17 @@ static int hwpoison_hugetlb_range(pte_t *ptep, unsigned long hmask, + #define hwpoison_hugetlb_range NULL + #endif + ++static int hwpoison_test_walk(unsigned long start, unsigned long end, ++ struct mm_walk *walk) ++{ ++ /* We also want to consider pages mapped into VM_PFNMAP. */ ++ return 0; ++} ++ + static const struct mm_walk_ops hwpoison_walk_ops = { + .pmd_entry = hwpoison_pte_range, + .hugetlb_entry = hwpoison_hugetlb_range, ++ .test_walk = hwpoison_test_walk, + .walk_lock = PGWALK_RDLOCK, + }; + +diff --git a/mm/mremap.c b/mm/mremap.c +index 60f6b8d0d5f0ba..3211dd47ece679 100644 +--- a/mm/mremap.c ++++ b/mm/mremap.c +@@ -294,6 +294,25 @@ static inline bool arch_supports_page_table_move(void) + } + #endif + ++static inline bool uffd_supports_page_table_move(struct pagetable_move_control *pmc) ++{ ++ /* ++ * If we are moving a VMA that has uffd-wp registered but with ++ * remap events disabled (new VMA will not be registered with uffd), we ++ * need to ensure that the uffd-wp state is cleared from all pgtables. ++ * This means recursing into lower page tables in move_page_tables(). ++ * ++ * We might get called with VMAs reversed when recovering from a ++ * failed page table move. In that case, the ++ * "old"-but-actually-"originally new" VMA during recovery will not have ++ * a uffd context. Recursing into lower page tables during the original ++ * move but not during the recovery move will cause trouble, because we ++ * run into already-existing page tables. So check both VMAs. ++ */ ++ return !vma_has_uffd_without_event_remap(pmc->old) && ++ !vma_has_uffd_without_event_remap(pmc->new); ++} ++ + #ifdef CONFIG_HAVE_MOVE_PMD + static bool move_normal_pmd(struct pagetable_move_control *pmc, + pmd_t *old_pmd, pmd_t *new_pmd) +@@ -306,6 +325,8 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc, + + if (!arch_supports_page_table_move()) + return false; ++ if (!uffd_supports_page_table_move(pmc)) ++ return false; + /* + * The destination pmd shouldn't be established, free_pgtables() + * should have released it. +@@ -332,15 +353,6 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc, + if (WARN_ON_ONCE(!pmd_none(*new_pmd))) + return false; + +- /* If this pmd belongs to a uffd vma with remap events disabled, we need +- * to ensure that the uffd-wp state is cleared from all pgtables. This +- * means recursing into lower page tables in move_page_tables(), and we +- * can reuse the existing code if we simply treat the entry as "not +- * moved". +- */ +- if (vma_has_uffd_without_event_remap(vma)) +- return false; +- + /* + * We don't have to worry about the ordering of src and dst + * ptlocks because exclusive mmap_lock prevents deadlock. +@@ -389,6 +401,8 @@ static bool move_normal_pud(struct pagetable_move_control *pmc, + + if (!arch_supports_page_table_move()) + return false; ++ if (!uffd_supports_page_table_move(pmc)) ++ return false; + /* + * The destination pud shouldn't be established, free_pgtables() + * should have released it. 
+@@ -396,15 +410,6 @@ static bool move_normal_pud(struct pagetable_move_control *pmc, + if (WARN_ON_ONCE(!pud_none(*new_pud))) + return false; + +- /* If this pud belongs to a uffd vma with remap events disabled, we need +- * to ensure that the uffd-wp state is cleared from all pgtables. This +- * means recursing into lower page tables in move_page_tables(), and we +- * can reuse the existing code if we simply treat the entry as "not +- * moved". +- */ +- if (vma_has_uffd_without_event_remap(vma)) +- return false; +- + /* + * We don't have to worry about the ordering of src and dst + * ptlocks because exclusive mmap_lock prevents deadlock. +diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c +index f5cd935490ad97..6a064a6b0e4319 100644 +--- a/net/bluetooth/hci_conn.c ++++ b/net/bluetooth/hci_conn.c +@@ -339,7 +339,8 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data) + case BT_CODEC_TRANSPARENT: + if (!find_next_esco_param(conn, esco_param_msbc, + ARRAY_SIZE(esco_param_msbc))) +- return false; ++ return -EINVAL; ++ + param = &esco_param_msbc[conn->attempt - 1]; + cp.tx_coding_format.id = 0x03; + cp.rx_coding_format.id = 0x03; +@@ -785,7 +786,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c + d->sync_handle = conn->sync_handle; + + if (test_and_clear_bit(HCI_CONN_PA_SYNC, &conn->flags)) { +- hci_conn_hash_list_flag(hdev, find_bis, BIS_LINK, ++ hci_conn_hash_list_flag(hdev, find_bis, PA_LINK, + HCI_CONN_PA_SYNC, d); + + if (!d->count) +@@ -914,6 +915,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t + break; + case CIS_LINK: + case BIS_LINK: ++ case PA_LINK: + if (hdev->iso_mtu) + /* Dedicated ISO Buffer exists */ + break; +@@ -979,6 +981,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t + break; + case CIS_LINK: + case BIS_LINK: ++ case PA_LINK: + /* conn->src should reflect the local identity address */ + hci_copy_identity_address(hdev, &conn->src, &conn->src_type); + +@@ -1033,7 +1036,6 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t + } + + hci_conn_init_sysfs(conn); +- + return conn; + } + +@@ -1077,6 +1079,7 @@ static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason) + break; + case CIS_LINK: + case BIS_LINK: ++ case PA_LINK: + if ((conn->state != BT_CONNECTED && + !test_bit(HCI_CONN_CREATE_CIS, &conn->flags)) || + test_bit(HCI_CONN_BIG_CREATED, &conn->flags)) +@@ -1152,7 +1155,8 @@ void hci_conn_del(struct hci_conn *conn) + } else { + /* Unacked ISO frames */ + if (conn->type == CIS_LINK || +- conn->type == BIS_LINK) { ++ conn->type == BIS_LINK || ++ conn->type == PA_LINK) { + if (hdev->iso_pkts) + hdev->iso_cnt += conn->sent; + else if (hdev->le_pkts) +@@ -2081,7 +2085,7 @@ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, + + bt_dev_dbg(hdev, "dst %pMR type %d sid %d", dst, dst_type, sid); + +- conn = hci_conn_add_unset(hdev, BIS_LINK, dst, HCI_ROLE_SLAVE); ++ conn = hci_conn_add_unset(hdev, PA_LINK, dst, HCI_ROLE_SLAVE); + if (IS_ERR(conn)) + return conn; + +@@ -2246,7 +2250,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, + * the start periodic advertising and create BIG commands have + * been queued + */ +- hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK, ++ hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK, + BT_BOUND, &data); + + /* Queue start periodic advertising and create BIG */ +@@ -2980,6 +2984,7 @@ void 
hci_conn_tx_queue(struct hci_conn *conn, struct sk_buff *skb)
+ switch (conn->type) {
+ case CIS_LINK:
+ case BIS_LINK:
++ case PA_LINK:
+ case ACL_LINK:
+ case LE_LINK:
+ break;
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 441cb1700f9972..0aa8a591ce428c 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2938,12 +2938,14 @@ int hci_recv_frame(struct hci_dev *hdev, struct sk_buff *skb)
+ case HCI_ACLDATA_PKT:
+ /* Detect if ISO packet has been sent as ACL */
+ if (hci_conn_num(hdev, CIS_LINK) ||
+- hci_conn_num(hdev, BIS_LINK)) {
++ hci_conn_num(hdev, BIS_LINK) ||
++ hci_conn_num(hdev, PA_LINK)) {
+ __u16 handle = __le16_to_cpu(hci_acl_hdr(skb)->handle);
+ __u8 type;
+ 
+ type = hci_conn_lookup_type(hdev, hci_handle(handle));
+- if (type == CIS_LINK || type == BIS_LINK)
++ if (type == CIS_LINK || type == BIS_LINK ||
++ type == PA_LINK)
+ hci_skb_pkt_type(skb) = HCI_ISODATA_PKT;
+ }
+ break;
+@@ -3398,6 +3400,7 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ break;
+ case CIS_LINK:
+ case BIS_LINK:
++ case PA_LINK:
+ cnt = hdev->iso_mtu ? hdev->iso_cnt :
+ hdev->le_mtu ? hdev->le_cnt : hdev->acl_cnt;
+ break;
+@@ -3411,7 +3414,7 @@ static inline void hci_quote_sent(struct hci_conn *conn, int num, int *quote)
+ }
+ 
+ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+- __u8 type2, int *quote)
++ int *quote)
+ {
+ struct hci_conn_hash *h = &hdev->conn_hash;
+ struct hci_conn *conn = NULL, *c;
+@@ -3423,7 +3426,7 @@ static struct hci_conn *hci_low_sent(struct hci_dev *hdev, __u8 type,
+ rcu_read_lock();
+ 
+ list_for_each_entry_rcu(c, &h->list, list) {
+- if ((c->type != type && c->type != type2) ||
++ if (c->type != type ||
+ skb_queue_empty(&c->data_q))
+ continue;
+ 
+@@ -3627,7 +3630,7 @@ static void hci_sched_sco(struct hci_dev *hdev, __u8 type)
+ else
+ cnt = &hdev->sco_cnt;
+ 
+- while (*cnt && (conn = hci_low_sent(hdev, type, type, &quote))) {
++ while (*cnt && (conn = hci_low_sent(hdev, type, &quote))) {
+ while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ BT_DBG("skb %p len %d", skb, skb->len);
+ hci_send_conn_frame(hdev, conn, skb);
+@@ -3746,8 +3749,8 @@ static void hci_sched_le(struct hci_dev *hdev)
+ hci_prio_recalculate(hdev, LE_LINK);
+ }
+ 
+-/* Schedule CIS */
+-static void hci_sched_iso(struct hci_dev *hdev)
++/* Schedule iso */
++static void hci_sched_iso(struct hci_dev *hdev, __u8 type)
+ {
+ struct hci_conn *conn;
+ struct sk_buff *skb;
+@@ -3755,14 +3758,12 @@ static void hci_sched_iso(struct hci_dev *hdev)
+ 
+ BT_DBG("%s", hdev->name);
+ 
+- if (!hci_conn_num(hdev, CIS_LINK) &&
+- !hci_conn_num(hdev, BIS_LINK))
++ if (!hci_conn_num(hdev, type))
+ return;
+ 
+ cnt = hdev->iso_pkts ? &hdev->iso_cnt :
+ hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
+- while (*cnt && (conn = hci_low_sent(hdev, CIS_LINK, BIS_LINK,
+- &quote))) {
++ while (*cnt && (conn = hci_low_sent(hdev, type, &quote))) {
+ while (quote-- && (skb = skb_dequeue(&conn->data_q))) {
+ BT_DBG("skb %p len %d", skb, skb->len);
+ hci_send_conn_frame(hdev, conn, skb);
+@@ -3787,7 +3788,9 @@ static void hci_tx_work(struct work_struct *work)
+ /* Schedule queues and send stuff to HCI driver */
+ hci_sched_sco(hdev, SCO_LINK);
+ hci_sched_sco(hdev, ESCO_LINK);
+- hci_sched_iso(hdev);
++ hci_sched_iso(hdev, CIS_LINK);
++ hci_sched_iso(hdev, BIS_LINK);
++ hci_sched_iso(hdev, PA_LINK);
+ hci_sched_acl(hdev);
+ hci_sched_le(hdev);
+ }
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index b83995898da098..5ef54853bc5eb1 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4432,6 +4432,7 @@ static void hci_num_comp_pkts_evt(struct hci_dev *hdev, void *data,
+ 
+ case CIS_LINK:
+ case BIS_LINK:
++ case PA_LINK:
+ if (hdev->iso_pkts) {
+ hdev->iso_cnt += count;
+ if (hdev->iso_cnt > hdev->iso_pkts)
+@@ -6381,7 +6382,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ conn->sync_handle = le16_to_cpu(ev->handle);
+ conn->sid = HCI_SID_INVALID;
+ 
+- mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, BIS_LINK,
++ mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, PA_LINK,
+ &flags);
+ if (!(mask & HCI_LM_ACCEPT)) {
+ hci_le_pa_term_sync(hdev, ev->handle);
+@@ -6392,7 +6393,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ 
+ /* Add connection to indicate PA sync event */
+- pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY,
++ pa_sync = hci_conn_add_unset(hdev, PA_LINK, BDADDR_ANY,
+ HCI_ROLE_SLAVE);
+ 
+ if (IS_ERR(pa_sync))
+@@ -6423,7 +6424,7 @@ static void hci_le_per_adv_report_evt(struct hci_dev *hdev, void *data,
+ 
+ hci_dev_lock(hdev);
+ 
+- mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, BIS_LINK, &flags);
++ mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, PA_LINK, &flags);
+ if (!(mask & HCI_LM_ACCEPT))
+ goto unlock;
+ 
+@@ -6744,8 +6745,8 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ qos->ucast.out.latency =
+ DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
+ 1000);
+- qos->ucast.in.sdu = le16_to_cpu(ev->c_mtu);
+- qos->ucast.out.sdu = le16_to_cpu(ev->p_mtu);
++ qos->ucast.in.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;
++ qos->ucast.out.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;
+ qos->ucast.in.phy = ev->c_phy;
+ qos->ucast.out.phy = ev->p_phy;
+ break;
+@@ -6759,8 +6760,8 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ qos->ucast.in.latency =
+ DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
+ 1000);
+- qos->ucast.out.sdu = le16_to_cpu(ev->c_mtu);
+- qos->ucast.in.sdu = le16_to_cpu(ev->p_mtu);
++ qos->ucast.out.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;
++ qos->ucast.in.sdu = ev->p_bn ? 
le16_to_cpu(ev->p_mtu) : 0; + qos->ucast.out.phy = ev->c_phy; + qos->ucast.in.phy = ev->p_phy; + break; +diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c +index 7938c004071c49..115dc1cd99ce40 100644 +--- a/net/bluetooth/hci_sync.c ++++ b/net/bluetooth/hci_sync.c +@@ -2929,7 +2929,7 @@ static int hci_le_set_ext_scan_param_sync(struct hci_dev *hdev, u8 type, + if (sent) { + struct hci_conn *conn; + +- conn = hci_conn_hash_lookup_ba(hdev, BIS_LINK, ++ conn = hci_conn_hash_lookup_ba(hdev, PA_LINK, + &sent->bdaddr); + if (conn) { + struct bt_iso_qos *qos = &conn->iso_qos; +@@ -4531,14 +4531,14 @@ static int hci_le_set_host_feature_sync(struct hci_dev *hdev) + { + struct hci_cp_le_set_host_feature cp; + +- if (!cis_capable(hdev)) ++ if (!iso_capable(hdev)) + return 0; + + memset(&cp, 0, sizeof(cp)); + + /* Connected Isochronous Channels (Host Support) */ + cp.bit_number = 32; +- cp.bit_value = 1; ++ cp.bit_value = iso_enabled(hdev) ? 0x01 : 0x00; + + return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_HOST_FEATURE, + sizeof(cp), &cp, HCI_CMD_TIMEOUT); +@@ -5493,7 +5493,7 @@ static int hci_disconnect_sync(struct hci_dev *hdev, struct hci_conn *conn, + { + struct hci_cp_disconnect cp; + +- if (conn->type == BIS_LINK) { ++ if (conn->type == BIS_LINK || conn->type == PA_LINK) { + /* This is a BIS connection, hci_conn_del will + * do the necessary cleanup. + */ +@@ -5562,7 +5562,7 @@ static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn, + return HCI_ERROR_LOCAL_HOST_TERM; + } + +- if (conn->type == BIS_LINK) { ++ if (conn->type == BIS_LINK || conn->type == PA_LINK) { + /* There is no way to cancel a BIS without terminating the BIG + * which is done later on connection cleanup. + */ +@@ -5627,7 +5627,7 @@ static int hci_reject_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, + if (conn->type == CIS_LINK) + return hci_le_reject_cis_sync(hdev, conn, reason); + +- if (conn->type == BIS_LINK) ++ if (conn->type == BIS_LINK || conn->type == PA_LINK) + return -EINVAL; + + if (conn->type == SCO_LINK || conn->type == ESCO_LINK) +@@ -6985,8 +6985,6 @@ static void create_pa_complete(struct hci_dev *hdev, void *data, int err) + + hci_dev_lock(hdev); + +- hci_dev_clear_flag(hdev, HCI_PA_SYNC); +- + if (!hci_conn_valid(hdev, conn)) + clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags); + +@@ -6994,7 +6992,7 @@ static void create_pa_complete(struct hci_dev *hdev, void *data, int err) + goto unlock; + + /* Add connection to indicate PA sync error */ +- pa_sync = hci_conn_add_unset(hdev, BIS_LINK, BDADDR_ANY, ++ pa_sync = hci_conn_add_unset(hdev, PA_LINK, BDADDR_ANY, + HCI_ROLE_SLAVE); + + if (IS_ERR(pa_sync)) +@@ -7047,10 +7045,13 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data) + /* SID has not been set listen for HCI_EV_LE_EXT_ADV_REPORT to update + * it. 
+ */ +- if (conn->sid == HCI_SID_INVALID) +- __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL, +- HCI_EV_LE_EXT_ADV_REPORT, +- conn->conn_timeout, NULL); ++ if (conn->sid == HCI_SID_INVALID) { ++ err = __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL, ++ HCI_EV_LE_EXT_ADV_REPORT, ++ conn->conn_timeout, NULL); ++ if (err == -ETIMEDOUT) ++ goto done; ++ } + + memset(&cp, 0, sizeof(cp)); + cp.options = qos->bcast.options; +@@ -7080,6 +7081,12 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data) + __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL, + 0, NULL, HCI_CMD_TIMEOUT); + ++done: ++ hci_dev_clear_flag(hdev, HCI_PA_SYNC); ++ ++ /* Update passive scan since HCI_PA_SYNC flag has been cleared */ ++ hci_update_passive_scan_sync(hdev); ++ + return err; + } + +diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c +index 3c2c98eecc6267..14a4215352d5f1 100644 +--- a/net/bluetooth/iso.c ++++ b/net/bluetooth/iso.c +@@ -2226,7 +2226,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags) + + static void iso_connect_cfm(struct hci_conn *hcon, __u8 status) + { +- if (hcon->type != CIS_LINK && hcon->type != BIS_LINK) { ++ if (hcon->type != CIS_LINK && hcon->type != BIS_LINK && ++ hcon->type != PA_LINK) { + if (hcon->type != LE_LINK) + return; + +@@ -2267,7 +2268,8 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status) + + static void iso_disconn_cfm(struct hci_conn *hcon, __u8 reason) + { +- if (hcon->type != CIS_LINK && hcon->type != BIS_LINK) ++ if (hcon->type != CIS_LINK && hcon->type != BIS_LINK && ++ hcon->type != PA_LINK) + return; + + BT_DBG("hcon %p reason %d", hcon, reason); +@@ -2455,11 +2457,11 @@ static const struct net_proto_family iso_sock_family_ops = { + .create = iso_sock_create, + }; + +-static bool iso_inited; ++static bool inited; + +-bool iso_enabled(void) ++bool iso_inited(void) + { +- return iso_inited; ++ return inited; + } + + int iso_init(void) +@@ -2468,7 +2470,7 @@ int iso_init(void) + + BUILD_BUG_ON(sizeof(struct sockaddr_iso) > sizeof(struct sockaddr)); + +- if (iso_inited) ++ if (inited) + return -EALREADY; + + err = proto_register(&iso_proto, 0); +@@ -2496,7 +2498,7 @@ int iso_init(void) + iso_debugfs = debugfs_create_file("iso", 0444, bt_debugfs, + NULL, &iso_debugfs_fops); + +- iso_inited = true; ++ inited = true; + + return 0; + +@@ -2507,7 +2509,7 @@ int iso_init(void) + + int iso_exit(void) + { +- if (!iso_inited) ++ if (!inited) + return -EALREADY; + + bt_procfs_cleanup(&init_net, "iso"); +@@ -2521,7 +2523,7 @@ int iso_exit(void) + + proto_unregister(&iso_proto); + +- iso_inited = false; ++ inited = false; + + return 0; + } +diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c +index 63dba0503653bd..3166f5fb876b11 100644 +--- a/net/bluetooth/mgmt.c ++++ b/net/bluetooth/mgmt.c +@@ -922,19 +922,19 @@ static u32 get_current_settings(struct hci_dev *hdev) + if (hci_dev_test_flag(hdev, HCI_WIDEBAND_SPEECH_ENABLED)) + settings |= MGMT_SETTING_WIDEBAND_SPEECH; + +- if (cis_central_capable(hdev)) ++ if (cis_central_enabled(hdev)) + settings |= MGMT_SETTING_CIS_CENTRAL; + +- if (cis_peripheral_capable(hdev)) ++ if (cis_peripheral_enabled(hdev)) + settings |= MGMT_SETTING_CIS_PERIPHERAL; + +- if (bis_capable(hdev)) ++ if (bis_enabled(hdev)) + settings |= MGMT_SETTING_ISO_BROADCASTER; + +- if (sync_recv_capable(hdev)) ++ if (sync_recv_enabled(hdev)) + settings |= MGMT_SETTING_ISO_SYNC_RECEIVER; + +- if (ll_privacy_capable(hdev)) ++ if (ll_privacy_enabled(hdev)) + settings |= MGMT_SETTING_LL_PRIVACY; + + 
return settings; +@@ -3237,6 +3237,7 @@ static u8 link_to_bdaddr(u8 link_type, u8 addr_type) + switch (link_type) { + case CIS_LINK: + case BIS_LINK: ++ case PA_LINK: + case LE_LINK: + switch (addr_type) { + case ADDR_LE_DEV_PUBLIC: +@@ -4512,7 +4513,7 @@ static int read_exp_features_info(struct sock *sk, struct hci_dev *hdev, + } + + if (IS_ENABLED(CONFIG_BT_LE)) { +- flags = iso_enabled() ? BIT(0) : 0; ++ flags = iso_inited() ? BIT(0) : 0; + memcpy(rp->features[idx].uuid, iso_socket_uuid, 16); + rp->features[idx].flags = cpu_to_le32(flags); + idx++; +diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c +index 1377f31b719cdd..8ce145938b02d9 100644 +--- a/net/bridge/br_multicast.c ++++ b/net/bridge/br_multicast.c +@@ -4818,6 +4818,14 @@ void br_multicast_set_query_intvl(struct net_bridge_mcast *brmctx, + intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MIN; + } + ++ if (intvl_jiffies > BR_MULTICAST_QUERY_INTVL_MAX) { ++ br_info(brmctx->br, ++ "trying to set multicast query interval above maximum, setting to %lu (%ums)\n", ++ jiffies_to_clock_t(BR_MULTICAST_QUERY_INTVL_MAX), ++ jiffies_to_msecs(BR_MULTICAST_QUERY_INTVL_MAX)); ++ intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MAX; ++ } ++ + brmctx->multicast_query_interval = intvl_jiffies; + } + +@@ -4834,6 +4842,14 @@ void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx, + intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MIN; + } + ++ if (intvl_jiffies > BR_MULTICAST_STARTUP_QUERY_INTVL_MAX) { ++ br_info(brmctx->br, ++ "trying to set multicast startup query interval above maximum, setting to %lu (%ums)\n", ++ jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX), ++ jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX)); ++ intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MAX; ++ } ++ + brmctx->multicast_startup_query_interval = intvl_jiffies; + } + +diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h +index b159aae594c0b0..8de0904b9627f7 100644 +--- a/net/bridge/br_private.h ++++ b/net/bridge/br_private.h +@@ -31,6 +31,8 @@ + #define BR_MULTICAST_DEFAULT_HASH_MAX 4096 + #define BR_MULTICAST_QUERY_INTVL_MIN msecs_to_jiffies(1000) + #define BR_MULTICAST_STARTUP_QUERY_INTVL_MIN BR_MULTICAST_QUERY_INTVL_MIN ++#define BR_MULTICAST_QUERY_INTVL_MAX msecs_to_jiffies(86400000) /* 24 hours */ ++#define BR_MULTICAST_STARTUP_QUERY_INTVL_MAX BR_MULTICAST_QUERY_INTVL_MAX + + #define BR_HWDOM_MAX BITS_PER_LONG + +diff --git a/net/core/dev.c b/net/core/dev.c +index be97c440ecd5f9..b014a5ce9e0ff2 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -3782,6 +3782,18 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb, + features &= ~NETIF_F_TSO_MANGLEID; + } + ++ /* NETIF_F_IPV6_CSUM does not support IPv6 extension headers, ++ * so neither does TSO that depends on it. 
++ */ ++ if (features & NETIF_F_IPV6_CSUM && ++ (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 || ++ (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && ++ vlan_get_protocol(skb) == htons(ETH_P_IPV6))) && ++ skb_transport_header_was_set(skb) && ++ skb_network_header_len(skb) != sizeof(struct ipv6hdr) && ++ !ipv6_has_hopopt_jumbo(skb)) ++ features &= ~(NETIF_F_IPV6_CSUM | NETIF_F_TSO6 | NETIF_F_GSO_UDP_L4); ++ + return features; + } + +diff --git a/net/devlink/port.c b/net/devlink/port.c +index 939081a0e6154a..cb8d4df616199e 100644 +--- a/net/devlink/port.c ++++ b/net/devlink/port.c +@@ -1519,7 +1519,7 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port, + struct devlink_port_attrs *attrs = &devlink_port->attrs; + int n = 0; + +- if (!devlink_port->attrs_set) ++ if (!devlink_port->attrs_set || devlink_port->attrs.no_phys_port_name) + return -EOPNOTSUPP; + + switch (attrs->flavour) { +diff --git a/net/hsr/hsr_slave.c b/net/hsr/hsr_slave.c +index b87b6a6fe07000..102eccf5ead734 100644 +--- a/net/hsr/hsr_slave.c ++++ b/net/hsr/hsr_slave.c +@@ -63,8 +63,14 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb) + skb_push(skb, ETH_HLEN); + skb_reset_mac_header(skb); + if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) || +- protocol == htons(ETH_P_HSR)) ++ protocol == htons(ETH_P_HSR)) { ++ if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) { ++ kfree_skb(skb); ++ goto finish_consume; ++ } ++ + skb_set_network_header(skb, ETH_HLEN + HSR_HLEN); ++ } + skb_reset_mac_len(skb); + + /* Only the frames received over the interlink port will assign a +diff --git a/net/ipv4/netfilter/nf_reject_ipv4.c b/net/ipv4/netfilter/nf_reject_ipv4.c +index 87fd945a0d27a5..0d3cb2ba6fc841 100644 +--- a/net/ipv4/netfilter/nf_reject_ipv4.c ++++ b/net/ipv4/netfilter/nf_reject_ipv4.c +@@ -247,8 +247,7 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb, + if (!oth) + return; + +- if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) && +- nf_reject_fill_skb_dst(oldskb) < 0) ++ if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0) + return; + + if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) +@@ -321,8 +320,7 @@ void nf_send_unreach(struct sk_buff *skb_in, int code, int hook) + if (iph->frag_off & htons(IP_OFFSET)) + return; + +- if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) && +- nf_reject_fill_skb_dst(skb_in) < 0) ++ if (!skb_dst(skb_in) && nf_reject_fill_skb_dst(skb_in) < 0) + return; + + if (skb_csum_unnecessary(skb_in) || +diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c +index 9ae2b2725bf99a..c3d64c4b69d7de 100644 +--- a/net/ipv6/netfilter/nf_reject_ipv6.c ++++ b/net/ipv6/netfilter/nf_reject_ipv6.c +@@ -293,7 +293,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb, + fl6.fl6_sport = otcph->dest; + fl6.fl6_dport = otcph->source; + +- if (hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) { ++ if (!skb_dst(oldskb)) { + nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false); + if (!dst) + return; +@@ -397,8 +397,7 @@ void nf_send_unreach6(struct net *net, struct sk_buff *skb_in, + if (hooknum == NF_INET_LOCAL_OUT && skb_in->dev == NULL) + skb_in->dev = net->loopback_dev; + +- if ((hooknum == NF_INET_PRE_ROUTING || hooknum == NF_INET_INGRESS) && +- nf_reject6_fill_skb_dst(skb_in) < 0) ++ if (!skb_dst(skb_in) && nf_reject6_fill_skb_dst(skb_in) < 0) + return; + + icmpv6_send(skb_in, ICMPV6_DEST_UNREACH, code, 0); +diff --git 
a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index f78ecb6ad83834..fd58426f222beb 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -35,6 +35,7 @@
+ #include <net/xfrm.h>
+ 
+ #include <crypto/hash.h>
++#include <crypto/utils.h>
+ #include <net/seg6.h>
+ #include <net/genetlink.h>
+ #include <net/seg6_hmac.h>
+@@ -280,7 +281,7 @@ bool seg6_hmac_validate_skb(struct sk_buff *skb)
+ if (seg6_hmac_compute(hinfo, srh, &ipv6_hdr(skb)->saddr, hmac_output))
+ return false;
+ 
+- if (memcmp(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN) != 0)
++ if (crypto_memneq(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN))
+ return false;
+ 
+ return true;
+@@ -304,6 +305,9 @@ int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo)
+ struct seg6_pernet_data *sdata = seg6_pernet(net);
+ int err;
+ 
++ if (!__hmac_get_algo(hinfo->alg_id))
++ return -EINVAL;
++
+ err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,
+ rht_params);
+ 
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 1f898888b22357..c6983471dca552 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -1117,7 +1117,9 @@ static bool add_addr_hmac_valid(struct mptcp_sock *msk,
+ return hmac == mp_opt->ahmac;
+ }
+ 
+-/* Return false if a subflow has been reset, else return true */
++/* Return false in case of error (or subflow has been reset),
++ * else return true.
++ */
+ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+@@ -1221,7 +1223,7 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ 
+ mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
+ if (!mpext)
+- return true;
++ return false;
+ 
+ memset(mpext, 0, sizeof(*mpext));
+ 
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 420d416e2603de..136a380602cae8 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -274,6 +274,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ add_timer);
+ struct mptcp_sock *msk = entry->sock;
+ struct sock *sk = (struct sock *)msk;
++ unsigned int timeout;
+ 
+ pr_debug("msk=%p\n", msk);
+ 
+@@ -291,6 +292,10 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ goto out;
+ }
+ 
++ timeout = mptcp_get_add_addr_timeout(sock_net(sk));
++ if (!timeout)
++ goto out;
++
+ spin_lock_bh(&msk->pm.lock);
+ 
+ if (!mptcp_pm_should_add_signal_addr(msk)) {
+@@ -302,7 +307,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
+ 
+ if (entry->retrans_times < ADD_ADDR_RETRANS_MAX)
+ sk_reset_timer(sk, timer,
+- jiffies + mptcp_get_add_addr_timeout(sock_net(sk)));
++ jiffies + timeout);
+ 
+ spin_unlock_bh(&msk->pm.lock);
+ 
+@@ -344,6 +349,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ struct mptcp_pm_add_entry *add_entry = NULL;
+ struct sock *sk = (struct sock *)msk;
+ struct net *net = sock_net(sk);
++ unsigned int timeout;
+ 
+ lockdep_assert_held(&msk->pm.lock);
+ 
+@@ -353,9 +359,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ if (WARN_ON_ONCE(mptcp_pm_is_kernel(msk)))
+ return false;
+ 
+- sk_reset_timer(sk, &add_entry->add_timer,
+- jiffies + mptcp_get_add_addr_timeout(net));
+- return true;
++ goto reset_timer;
+ }
+ 
+ add_entry = kmalloc(sizeof(*add_entry), GFP_ATOMIC);
+@@ -369,8 +373,10 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
+ add_entry->retrans_times = 0;
+ 
+ timer_setup(&add_entry->add_timer, mptcp_pm_add_timer, 0);
+- sk_reset_timer(sk, &add_entry->add_timer,
+- jiffies + mptcp_get_add_addr_timeout(net));
++reset_timer:
++ timeout = mptcp_get_add_addr_timeout(net);
++ if (timeout)
++ sk_reset_timer(sk, &add_entry->add_timer, 
jiffies + timeout); + + return true; + } +diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c +index d39e7c1784608d..667803d72b643a 100644 +--- a/net/mptcp/pm_kernel.c ++++ b/net/mptcp/pm_kernel.c +@@ -1085,7 +1085,6 @@ static void __flush_addrs(struct list_head *list) + static void __reset_counters(struct pm_nl_pernet *pernet) + { + WRITE_ONCE(pernet->add_addr_signal_max, 0); +- WRITE_ONCE(pernet->add_addr_accept_max, 0); + WRITE_ONCE(pernet->local_addr_max, 0); + pernet->addrs = 0; + } +diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c +index 48dd8c88903feb..aa9f31e4415ad8 100644 +--- a/net/sched/sch_cake.c ++++ b/net/sched/sch_cake.c +@@ -1747,7 +1747,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch, + ktime_t now = ktime_get(); + struct cake_tin_data *b; + struct cake_flow *flow; +- u32 idx; ++ u32 idx, tin; + + /* choose flow to insert into */ + idx = cake_classify(sch, &b, skb, q->flow_mode, &ret); +@@ -1757,6 +1757,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch, + __qdisc_drop(skb, to_free); + return ret; + } ++ tin = (u32)(b - q->tins); + idx--; + flow = &b->flows[idx]; + +@@ -1924,13 +1925,22 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch, + q->buffer_max_used = q->buffer_used; + + if (q->buffer_used > q->buffer_limit) { ++ bool same_flow = false; + u32 dropped = 0; ++ u32 drop_id; + + while (q->buffer_used > q->buffer_limit) { + dropped++; +- cake_drop(sch, to_free); ++ drop_id = cake_drop(sch, to_free); ++ ++ if ((drop_id >> 16) == tin && ++ (drop_id & 0xFFFF) == idx) ++ same_flow = true; + } + b->drop_overlimit += dropped; ++ ++ if (same_flow) ++ return NET_XMIT_CN; + } + return NET_XMIT_SUCCESS; + } +diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c +index c93761040c6e77..fa0314679e434a 100644 +--- a/net/sched/sch_codel.c ++++ b/net/sched/sch_codel.c +@@ -101,9 +101,9 @@ static const struct nla_policy codel_policy[TCA_CODEL_MAX + 1] = { + static int codel_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) + { ++ unsigned int dropped_pkts = 0, dropped_bytes = 0; + struct codel_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_CODEL_MAX + 1]; +- unsigned int qlen, dropped = 0; + int err; + + err = nla_parse_nested_deprecated(tb, TCA_CODEL_MAX, opt, +@@ -142,15 +142,17 @@ static int codel_change(struct Qdisc *sch, struct nlattr *opt, + WRITE_ONCE(q->params.ecn, + !!nla_get_u32(tb[TCA_CODEL_ECN])); + +- qlen = sch->q.qlen; + while (sch->q.qlen > sch->limit) { + struct sk_buff *skb = qdisc_dequeue_internal(sch, true); + +- dropped += qdisc_pkt_len(skb); +- qdisc_qstats_backlog_dec(sch, skb); ++ if (!skb) ++ break; ++ ++ dropped_pkts++; ++ dropped_bytes += qdisc_pkt_len(skb); + rtnl_qdisc_drop(skb, sch); + } +- qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped); ++ qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); + + sch_tree_unlock(sch); + return 0; +diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c +index 902ff54706072b..fee922da2f99c0 100644 +--- a/net/sched/sch_fq.c ++++ b/net/sched/sch_fq.c +@@ -1013,11 +1013,11 @@ static int fq_load_priomap(struct fq_sched_data *q, + static int fq_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) + { ++ unsigned int dropped_pkts = 0, dropped_bytes = 0; + struct fq_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_FQ_MAX + 1]; +- int err, drop_count = 0; +- unsigned drop_len = 0; + u32 fq_log; ++ int err; + + err = nla_parse_nested_deprecated(tb, TCA_FQ_MAX, opt, 
fq_policy, + NULL); +@@ -1135,16 +1135,18 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt, + err = fq_resize(sch, fq_log); + sch_tree_lock(sch); + } ++ + while (sch->q.qlen > sch->limit) { + struct sk_buff *skb = qdisc_dequeue_internal(sch, false); + + if (!skb) + break; +- drop_len += qdisc_pkt_len(skb); ++ ++ dropped_pkts++; ++ dropped_bytes += qdisc_pkt_len(skb); + rtnl_kfree_skbs(skb, skb); +- drop_count++; + } +- qdisc_tree_reduce_backlog(sch, drop_count, drop_len); ++ qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); + + sch_tree_unlock(sch); + return err; +diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c +index 2a0f3a513bfaa1..a141423929394d 100644 +--- a/net/sched/sch_fq_codel.c ++++ b/net/sched/sch_fq_codel.c +@@ -366,6 +366,7 @@ static const struct nla_policy fq_codel_policy[TCA_FQ_CODEL_MAX + 1] = { + static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) + { ++ unsigned int dropped_pkts = 0, dropped_bytes = 0; + struct fq_codel_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_FQ_CODEL_MAX + 1]; + u32 quantum = 0; +@@ -443,13 +444,14 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt, + q->memory_usage > q->memory_limit) { + struct sk_buff *skb = qdisc_dequeue_internal(sch, false); + +- q->cstats.drop_len += qdisc_pkt_len(skb); ++ if (!skb) ++ break; ++ ++ dropped_pkts++; ++ dropped_bytes += qdisc_pkt_len(skb); + rtnl_kfree_skbs(skb, skb); +- q->cstats.drop_count++; + } +- qdisc_tree_reduce_backlog(sch, q->cstats.drop_count, q->cstats.drop_len); +- q->cstats.drop_count = 0; +- q->cstats.drop_len = 0; ++ qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); + + sch_tree_unlock(sch); + return 0; +diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c +index b0e34daf1f7517..7b96bc3ff89184 100644 +--- a/net/sched/sch_fq_pie.c ++++ b/net/sched/sch_fq_pie.c +@@ -287,10 +287,9 @@ static struct sk_buff *fq_pie_qdisc_dequeue(struct Qdisc *sch) + static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) + { ++ unsigned int dropped_pkts = 0, dropped_bytes = 0; + struct fq_pie_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_FQ_PIE_MAX + 1]; +- unsigned int len_dropped = 0; +- unsigned int num_dropped = 0; + int err; + + err = nla_parse_nested(tb, TCA_FQ_PIE_MAX, opt, fq_pie_policy, extack); +@@ -368,11 +367,14 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt, + while (sch->q.qlen > sch->limit) { + struct sk_buff *skb = qdisc_dequeue_internal(sch, false); + +- len_dropped += qdisc_pkt_len(skb); +- num_dropped += 1; ++ if (!skb) ++ break; ++ ++ dropped_pkts++; ++ dropped_bytes += qdisc_pkt_len(skb); + rtnl_kfree_skbs(skb, skb); + } +- qdisc_tree_reduce_backlog(sch, num_dropped, len_dropped); ++ qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); + + sch_tree_unlock(sch); + return 0; +diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c +index 5aa434b4670738..2d4855e28a286f 100644 +--- a/net/sched/sch_hhf.c ++++ b/net/sched/sch_hhf.c +@@ -508,9 +508,9 @@ static const struct nla_policy hhf_policy[TCA_HHF_MAX + 1] = { + static int hhf_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) + { ++ unsigned int dropped_pkts = 0, dropped_bytes = 0; + struct hhf_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_HHF_MAX + 1]; +- unsigned int qlen, prev_backlog; + int err; + u64 non_hh_quantum; + u32 new_quantum = q->quantum; +@@ -561,15 +561,17 @@ static int 
hhf_change(struct Qdisc *sch, struct nlattr *opt, + usecs_to_jiffies(us)); + } + +- qlen = sch->q.qlen; +- prev_backlog = sch->qstats.backlog; + while (sch->q.qlen > sch->limit) { + struct sk_buff *skb = qdisc_dequeue_internal(sch, false); + ++ if (!skb) ++ break; ++ ++ dropped_pkts++; ++ dropped_bytes += qdisc_pkt_len(skb); + rtnl_kfree_skbs(skb, skb); + } +- qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, +- prev_backlog - sch->qstats.backlog); ++ qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); + + sch_tree_unlock(sch); + return 0; +diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c +index c968ea76377463..b5e40c51655a73 100644 +--- a/net/sched/sch_htb.c ++++ b/net/sched/sch_htb.c +@@ -592,7 +592,7 @@ htb_change_class_mode(struct htb_sched *q, struct htb_class *cl, s64 *diff) + */ + static inline void htb_activate(struct htb_sched *q, struct htb_class *cl) + { +- WARN_ON(cl->level || !cl->leaf.q || !cl->leaf.q->q.qlen); ++ WARN_ON(cl->level || !cl->leaf.q); + + if (!cl->prio_activity) { + cl->prio_activity = 1 << cl->prio; +diff --git a/net/sched/sch_pie.c b/net/sched/sch_pie.c +index ad46ee3ed5a968..0a377313b6a9d2 100644 +--- a/net/sched/sch_pie.c ++++ b/net/sched/sch_pie.c +@@ -141,9 +141,9 @@ static const struct nla_policy pie_policy[TCA_PIE_MAX + 1] = { + static int pie_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) + { ++ unsigned int dropped_pkts = 0, dropped_bytes = 0; + struct pie_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_PIE_MAX + 1]; +- unsigned int qlen, dropped = 0; + int err; + + err = nla_parse_nested_deprecated(tb, TCA_PIE_MAX, opt, pie_policy, +@@ -193,15 +193,17 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt, + nla_get_u32(tb[TCA_PIE_DQ_RATE_ESTIMATOR])); + + /* Drop excess packets if new limit is lower */ +- qlen = sch->q.qlen; + while (sch->q.qlen > sch->limit) { + struct sk_buff *skb = qdisc_dequeue_internal(sch, true); + +- dropped += qdisc_pkt_len(skb); +- qdisc_qstats_backlog_dec(sch, skb); ++ if (!skb) ++ break; ++ ++ dropped_pkts++; ++ dropped_bytes += qdisc_pkt_len(skb); + rtnl_qdisc_drop(skb, sch); + } +- qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped); ++ qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes); + + sch_tree_unlock(sch); + return 0; +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index 1882bab8e00e79..dc72ff353813b3 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -2568,8 +2568,9 @@ static void smc_listen_work(struct work_struct *work) + goto out_decl; + } + +- smc_listen_out_connected(new_smc); + SMC_STAT_SERV_SUCC_INC(sock_net(newclcsock->sk), ini); ++ /* smc_listen_out() will release smcsk */ ++ smc_listen_out_connected(new_smc); + goto out_free; + + out_unlock: +diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c +index 51c98a007ddac4..bac65d0d4e3e1e 100644 +--- a/net/tls/tls_sw.c ++++ b/net/tls/tls_sw.c +@@ -1808,6 +1808,9 @@ int decrypt_skb(struct sock *sk, struct scatterlist *sgout) + return tls_decrypt_sg(sk, NULL, sgout, &darg); + } + ++/* All records returned from a recvmsg() call must have the same type. ++ * 0 is not a valid content type. Use it as "no type reported, yet". 
++ */
+ static int tls_record_content_type(struct msghdr *msg, struct tls_msg *tlm,
+ u8 *control)
+ {
+@@ -2051,8 +2054,10 @@ int tls_sw_recvmsg(struct sock *sk,
+ if (err < 0)
+ goto end;
+ 
++ /* process_rx_list() will set @control if it processed any records */
+ copied = err;
+- if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more)
++ if (len <= copied || rx_more ||
++ (control && control != TLS_RECORD_TYPE_DATA))
+ goto end;
+ 
+ target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index f01f9e8781061e..1ef6f7829d2942 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -624,8 +624,9 @@ static void virtio_transport_rx_work(struct work_struct *work)
+ do {
+ virtqueue_disable_cb(vq);
+ for (;;) {
++ unsigned int len, payload_len;
++ struct virtio_vsock_hdr *hdr;
+ struct sk_buff *skb;
+- unsigned int len;
+ 
+ if (!virtio_transport_more_replies(vsock)) {
+ /* Stop rx until the device processes already
+@@ -642,12 +643,19 @@ static void virtio_transport_rx_work(struct work_struct *work)
+ vsock->rx_buf_nr--;
+ 
+ /* Drop short/long packets */
+- if (unlikely(len < sizeof(struct virtio_vsock_hdr) ||
++ if (unlikely(len < sizeof(*hdr) ||
+ len > virtio_vsock_skb_len(skb))) {
+ kfree_skb(skb);
+ continue;
+ }
+ 
++ hdr = virtio_vsock_hdr(skb);
++ payload_len = le32_to_cpu(hdr->len);
++ if (unlikely(payload_len > len - sizeof(*hdr))) {
++ kfree_skb(skb);
++ continue;
++ }
++
+ virtio_vsock_skb_rx_put(skb);
+ virtio_transport_deliver_tap_pkt(skb);
+ virtio_transport_recv_pkt(&virtio_transport, skb);
+diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
+index aa2dfa9dca4c30..2692cf90c9482d 100644
+--- a/rust/kernel/alloc/allocator.rs
++++ b/rust/kernel/alloc/allocator.rs
+@@ -43,17 +43,6 @@
+ /// For more details see [self].
+ pub struct KVmalloc;
+ 
+-/// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment.
+-fn aligned_size(new_layout: Layout) -> usize {
+- // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
+- let layout = new_layout.pad_to_align();
+-
+- // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()`
+- // which together with the slab guarantees means the `krealloc` will return a properly aligned
+- // object (see comments in `kmalloc()` for more information).
+- layout.size()
+-}
+-
+ /// # Invariants
+ ///
+ /// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.
+@@ -88,7 +77,7 @@ unsafe fn call(
+ old_layout: Layout,
+ flags: Flags,
+ ) -> Result<NonNull<[u8]>, AllocError> {
+- let size = aligned_size(layout);
++ let size = layout.size();
+ let ptr = match ptr {
+ Some(ptr) => {
+ if old_layout.size() == 0 {
+@@ -123,6 +112,17 @@ unsafe fn call(
+ }
+ }
+ 
++impl Kmalloc {
++ /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of
++ /// `layout`.
++ pub fn aligned_layout(layout: Layout) -> Layout {
++ // Note that `layout.size()` (after padding) is guaranteed to be a multiple of
++ // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return
++ // a properly aligned object (see comments in `kmalloc()` for more information). 
++        layout.pad_to_align()
++    }
++}
++
+ // SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
+ // - memory remains valid until it is explicitly freed,
+ // - passing a pointer to a valid memory allocation is OK,
+@@ -135,6 +135,8 @@ unsafe fn realloc(
+         old_layout: Layout,
+         flags: Flags,
+     ) -> Result<NonNull<[u8]>, AllocError> {
++        let layout = Kmalloc::aligned_layout(layout);
++
+         // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`.
+         unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) }
+     }
+@@ -176,6 +178,10 @@ unsafe fn realloc(
+         old_layout: Layout,
+         flags: Flags,
+     ) -> Result<NonNull<[u8]>, AllocError> {
++        // `KVmalloc` may use the `Kmalloc` backend, hence we have to enforce a `Kmalloc`
++        // compatible layout.
++        let layout = Kmalloc::aligned_layout(layout);
++
+         // TODO: Support alignments larger than PAGE_SIZE.
+         if layout.align() > bindings::PAGE_SIZE {
+             pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
+diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
+index d19c06ef0498c1..981e002ae3fcf0 100644
+--- a/rust/kernel/alloc/allocator_test.rs
++++ b/rust/kernel/alloc/allocator_test.rs
+@@ -22,6 +22,17 @@
+ pub type Vmalloc = Kmalloc;
+ pub type KVmalloc = Kmalloc;
+ 
++impl Cmalloc {
++    /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of
++    /// `layout`.
++    pub fn aligned_layout(layout: Layout) -> Layout {
++        // Note that `layout.size()` (after padding) is guaranteed to be a multiple of
++        // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return
++        // a properly aligned object (see comments in `kmalloc()` for more information).
++        layout.pad_to_align()
++    }
++}
++
+ extern "C" {
+     #[link_name = "aligned_alloc"]
+     fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
+diff --git a/rust/kernel/drm/device.rs b/rust/kernel/drm/device.rs
+index 14c1aa402951a8..3832779f439fd5 100644
+--- a/rust/kernel/drm/device.rs
++++ b/rust/kernel/drm/device.rs
+@@ -5,6 +5,7 @@
+ //! C header: [`include/linux/drm/drm_device.h`](srctree/include/linux/drm/drm_device.h)
+ 
+ use crate::{
++    alloc::allocator::Kmalloc,
+     bindings, device, drm,
+     drm::driver::AllocImpl,
+     error::from_err_ptr,
+@@ -12,7 +13,7 @@
+     prelude::*,
+     types::{ARef, AlwaysRefCounted, Opaque},
+ };
+-use core::{mem, ops::Deref, ptr, ptr::NonNull};
++use core::{alloc::Layout, mem, ops::Deref, ptr, ptr::NonNull};
+ 
+ #[cfg(CONFIG_DRM_LEGACY)]
+ macro_rules! drm_legacy_fields {
+@@ -53,10 +54,8 @@ macro_rules! drm_legacy_fields {
+ ///
+ /// `self.dev` is a valid instance of a `struct device`.
+ #[repr(C)]
+-#[pin_data]
+ pub struct Device<T: drm::Driver> {
+     dev: Opaque<bindings::drm_device>,
+-    #[pin]
+     data: T::Data,
+ }
+ 
+@@ -96,6 +95,10 @@ impl<T: drm::Driver> Device<T> {
+ 
+     /// Create a new `drm::Device` for a `drm::Driver`.
+     pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<ARef<Self>> {
++        // `__drm_dev_alloc` uses `kmalloc()` to allocate memory, hence ensure a `kmalloc()`
++        // compatible `Layout`.
++        let layout = Kmalloc::aligned_layout(Layout::new::<Self>());
++
+         // SAFETY:
+         // - `VTABLE`, as a `const` is pinned to the read-only section of the compilation,
+         // - `dev` is valid by its type invarants,
+@@ -103,7 +106,7 @@ pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<
+             bindings::__drm_dev_alloc(
+                 dev.as_raw(),
+                 &Self::VTABLE,
+-                mem::size_of::<Self>(),
++                layout.size(),
+                 mem::offset_of!(Self, dev),
+             )
+         }
+@@ -117,9 +120,13 @@ pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<
+     unsafe fn from_drm_device(ptr: *mut bindings::drm_device) -> *mut Self {
+         unsafe { crate::container_of!(ptr, Self, dev) }.cast_mut()
+     }
+ 
++    /// # Safety
++    ///
++    /// `ptr` must be a valid pointer to `Self`.
++    unsafe fn into_drm_device(ptr: NonNull<Self>) -> *mut bindings::drm_device {
++        // SAFETY: By the safety requirements of this function, `ptr` is a valid pointer to `Self`.
++        unsafe { &raw mut (*ptr.as_ptr()).dev }.cast()
++    }
++
+     /// Not intended to be called externally, except via declare_drm_ioctls!()
+     ///
+     /// # Safety
+@@ -191,8 +206,11 @@ fn inc_ref(&self) {
+     }
+ 
+     unsafe fn dec_ref(obj: NonNull<Self>) {
++        // SAFETY: `obj` is a valid pointer to `Self`.
++        let drm_dev = unsafe { Self::into_drm_device(obj) };
++
+         // SAFETY: The safety requirements guarantee that the refcount is non-zero.
+-        unsafe { bindings::drm_dev_put(obj.cast().as_ptr()) };
++        unsafe { bindings::drm_dev_put(drm_dev) };
+     }
+ }
+ 
+diff --git a/rust/kernel/faux.rs b/rust/kernel/faux.rs
+index 8a50fcd4c9bbba..50c3c55f29e1dc 100644
+--- a/rust/kernel/faux.rs
++++ b/rust/kernel/faux.rs
+@@ -4,7 +4,7 @@
+ //!
+ //! This module provides bindings for working with faux devices in kernel modules.
+ //!
+-//! C header: [`include/linux/device/faux.h`]
++//! C header: [`include/linux/device/faux.h`](srctree/include/linux/device/faux.h)
+ 
+ use crate::{bindings, device, error::code::*, prelude::*};
+ use core::ptr::{addr_of_mut, null, null_mut, NonNull};
+diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
+index 9b6c2f157f83e8..531bde29cccb05 100644
+--- a/security/apparmor/lsm.c
++++ b/security/apparmor/lsm.c
+@@ -2149,12 +2149,12 @@ static int __init apparmor_nf_ip_init(void)
+ __initcall(apparmor_nf_ip_init);
+ #endif
+ 
+-static char nulldfa_src[] = {
++static char nulldfa_src[] __aligned(8) = {
+ #include "nulldfa.in"
+ };
+ static struct aa_dfa *nulldfa;
+ 
+-static char stacksplitdfa_src[] = {
++static char stacksplitdfa_src[] __aligned(8) = {
+ #include "stacksplitdfa.in"
+ };
+ struct aa_dfa *stacksplitdfa;
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index 8072183c33d393..a352247519be41 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -2139,14 +2139,14 @@ static int snd_utimer_create(struct snd_timer_uinfo *utimer_info,
+ 		goto err_take_id;
+ 	}
+ 
++	utimer->id = utimer_id;
++
+ 	utimer->name = kasprintf(GFP_KERNEL, "snd-utimer%d", utimer_id);
+ 	if (!utimer->name) {
+ 		err = -ENOMEM;
+ 		goto err_get_name;
+ 	}
+ 
+-	utimer->id = utimer_id;
+-
+ 	tid.dev_sclass = SNDRV_TIMER_SCLASS_APPLICATION;
+ 	tid.dev_class = SNDRV_TIMER_CLASS_GLOBAL;
+ 	tid.card = -1;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 2ab2666d4058d6..0ec98833e3d2e7 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10662,6 +10662,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8548, "HP 
EliteBook x360 830 G6", ALC285_FIXUP_HP_GPIO_LED), ++ SND_PCI_QUIRK(0x103c, 0x854a, "HP EliteBook 830 G6", ALC285_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), + SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), + SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), +diff --git a/sound/pci/hda/tas2781_hda_i2c.c b/sound/pci/hda/tas2781_hda_i2c.c +index d91eed9f780432..8c5ebbe8a1f315 100644 +--- a/sound/pci/hda/tas2781_hda_i2c.c ++++ b/sound/pci/hda/tas2781_hda_i2c.c +@@ -287,7 +287,7 @@ static int tas2563_save_calibration(struct tas2781_hda *h) + efi_char16_t efi_name[TAS2563_CAL_VAR_NAME_MAX]; + unsigned long max_size = TAS2563_CAL_DATA_SIZE; + unsigned char var8[TAS2563_CAL_VAR_NAME_MAX]; +- struct tasdevice_priv *p = h->hda_priv; ++ struct tasdevice_priv *p = h->priv; + struct calidata *cd = &p->cali_data; + struct cali_reg *r = &cd->cali_reg_array; + unsigned int offset = 0; +diff --git a/sound/soc/codecs/cs35l56-sdw.c b/sound/soc/codecs/cs35l56-sdw.c +index fa9693af3722b1..d7fa12d287e06d 100644 +--- a/sound/soc/codecs/cs35l56-sdw.c ++++ b/sound/soc/codecs/cs35l56-sdw.c +@@ -394,74 +394,6 @@ static int cs35l56_sdw_update_status(struct sdw_slave *peripheral, + return 0; + } + +-static int cs35l63_sdw_kick_divider(struct cs35l56_private *cs35l56, +- struct sdw_slave *peripheral) +-{ +- unsigned int curr_scale_reg, next_scale_reg; +- int curr_scale, next_scale, ret; +- +- if (!cs35l56->base.init_done) +- return 0; +- +- if (peripheral->bus->params.curr_bank) { +- curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1; +- next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0; +- } else { +- curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0; +- next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1; +- } +- +- /* +- * Current clock scale value must be different to new value. +- * Modify current to guarantee this. 
If next still has the dummy +- * value we wrote when it was current, the core code has not set +- * a new scale so restore its original good value +- */ +- curr_scale = sdw_read_no_pm(peripheral, curr_scale_reg); +- if (curr_scale < 0) { +- dev_err(cs35l56->base.dev, "Failed to read current clock scale: %d\n", curr_scale); +- return curr_scale; +- } +- +- next_scale = sdw_read_no_pm(peripheral, next_scale_reg); +- if (next_scale < 0) { +- dev_err(cs35l56->base.dev, "Failed to read next clock scale: %d\n", next_scale); +- return next_scale; +- } +- +- if (next_scale == CS35L56_SDW_INVALID_BUS_SCALE) { +- next_scale = cs35l56->old_sdw_clock_scale; +- ret = sdw_write_no_pm(peripheral, next_scale_reg, next_scale); +- if (ret < 0) { +- dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", +- ret); +- return ret; +- } +- } +- +- cs35l56->old_sdw_clock_scale = curr_scale; +- ret = sdw_write_no_pm(peripheral, curr_scale_reg, CS35L56_SDW_INVALID_BUS_SCALE); +- if (ret < 0) { +- dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", ret); +- return ret; +- } +- +- dev_dbg(cs35l56->base.dev, "Next bus scale: %#x\n", next_scale); +- +- return 0; +-} +- +-static int cs35l56_sdw_bus_config(struct sdw_slave *peripheral, +- struct sdw_bus_params *params) +-{ +- struct cs35l56_private *cs35l56 = dev_get_drvdata(&peripheral->dev); +- +- if ((cs35l56->base.type == 0x63) && (cs35l56->base.rev < 0xa1)) +- return cs35l63_sdw_kick_divider(cs35l56, peripheral); +- +- return 0; +-} +- + static int __maybe_unused cs35l56_sdw_clk_stop(struct sdw_slave *peripheral, + enum sdw_clk_stop_mode mode, + enum sdw_clk_stop_type type) +@@ -477,7 +409,6 @@ static const struct sdw_slave_ops cs35l56_sdw_ops = { + .read_prop = cs35l56_sdw_read_prop, + .interrupt_callback = cs35l56_sdw_interrupt, + .update_status = cs35l56_sdw_update_status, +- .bus_config = cs35l56_sdw_bus_config, + #ifdef DEBUG + .clk_stop = cs35l56_sdw_clk_stop, + #endif +diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c +index ba653f6ccfaefe..850fcf38599681 100644 +--- a/sound/soc/codecs/cs35l56-shared.c ++++ b/sound/soc/codecs/cs35l56-shared.c +@@ -838,6 +838,15 @@ const struct cirrus_amp_cal_controls cs35l56_calibration_controls = { + }; + EXPORT_SYMBOL_NS_GPL(cs35l56_calibration_controls, "SND_SOC_CS35L56_SHARED"); + ++static const struct cirrus_amp_cal_controls cs35l63_calibration_controls = { ++ .alg_id = 0xbf210, ++ .mem_region = WMFW_ADSP2_YM, ++ .ambient = "CAL_AMBIENT", ++ .calr = "CAL_R", ++ .status = "CAL_STATUS", ++ .checksum = "CAL_CHECKSUM", ++}; ++ + int cs35l56_get_calibration(struct cs35l56_base *cs35l56_base) + { + u64 silicon_uid = 0; +@@ -912,19 +921,31 @@ EXPORT_SYMBOL_NS_GPL(cs35l56_read_prot_status, "SND_SOC_CS35L56_SHARED"); + void cs35l56_log_tuning(struct cs35l56_base *cs35l56_base, struct cs_dsp *cs_dsp) + { + __be32 pid, sid, tid; ++ unsigned int alg_id; + int ret; + ++ switch (cs35l56_base->type) { ++ case 0x54: ++ case 0x56: ++ case 0x57: ++ alg_id = 0x9f212; ++ break; ++ default: ++ alg_id = 0xbf212; ++ break; ++ } ++ + scoped_guard(mutex, &cs_dsp->pwr_lock) { + ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_PRJCT_ID", +- WMFW_ADSP2_XM, 0x9f212), ++ WMFW_ADSP2_XM, alg_id), + 0, &pid, sizeof(pid)); + if (!ret) + ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_CHNNL_ID", +- WMFW_ADSP2_XM, 0x9f212), ++ WMFW_ADSP2_XM, alg_id), + 0, &sid, sizeof(sid)); + if (!ret) + ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_SNPSHT_ID", +- 
WMFW_ADSP2_XM, 0x9f212), ++ WMFW_ADSP2_XM, alg_id), + 0, &tid, sizeof(tid)); + } + +@@ -974,8 +995,10 @@ int cs35l56_hw_init(struct cs35l56_base *cs35l56_base) + case 0x35A54: + case 0x35A56: + case 0x35A57: ++ cs35l56_base->calibration_controls = &cs35l56_calibration_controls; + break; + case 0x35A630: ++ cs35l56_base->calibration_controls = &cs35l63_calibration_controls; + devid = devid >> 4; + break; + default: +diff --git a/sound/soc/codecs/cs35l56.c b/sound/soc/codecs/cs35l56.c +index 1b42586794ad75..76306282b2e641 100644 +--- a/sound/soc/codecs/cs35l56.c ++++ b/sound/soc/codecs/cs35l56.c +@@ -695,7 +695,7 @@ static int cs35l56_write_cal(struct cs35l56_private *cs35l56) + return ret; + + ret = cs_amp_write_cal_coeffs(&cs35l56->dsp.cs_dsp, +- &cs35l56_calibration_controls, ++ cs35l56->base.calibration_controls, + &cs35l56->base.cal_data); + + wm_adsp_stop(&cs35l56->dsp); +diff --git a/sound/soc/codecs/cs35l56.h b/sound/soc/codecs/cs35l56.h +index bd77a57249d79b..40a1800a458515 100644 +--- a/sound/soc/codecs/cs35l56.h ++++ b/sound/soc/codecs/cs35l56.h +@@ -20,8 +20,6 @@ + #define CS35L56_SDW_GEN_INT_MASK_1 0xc1 + #define CS35L56_SDW_INT_MASK_CODEC_IRQ BIT(0) + +-#define CS35L56_SDW_INVALID_BUS_SCALE 0xf +- + #define CS35L56_RX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) + #define CS35L56_TX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE \ + | SNDRV_PCM_FMTBIT_S32_LE) +@@ -52,7 +50,6 @@ struct cs35l56_private { + u8 asp_slot_count; + bool tdm_mode; + bool sysclk_set; +- u8 old_sdw_clock_scale; + u8 sdw_link_num; + u8 sdw_unique_id; + }; +diff --git a/sound/soc/sof/amd/acp-loader.c b/sound/soc/sof/amd/acp-loader.c +index ea105227227dc4..98324bbade1517 100644 +--- a/sound/soc/sof/amd/acp-loader.c ++++ b/sound/soc/sof/amd/acp-loader.c +@@ -65,7 +65,7 @@ int acp_dsp_block_write(struct snd_sof_dev *sdev, enum snd_sof_fw_blk_type blk_t + dma_size = page_count * ACP_PAGE_SIZE; + adata->bin_buf = dma_alloc_coherent(&pci->dev, dma_size, + &adata->sha_dma_addr, +- GFP_ATOMIC); ++ GFP_KERNEL); + if (!adata->bin_buf) + return -ENOMEM; + } +@@ -77,7 +77,7 @@ int acp_dsp_block_write(struct snd_sof_dev *sdev, enum snd_sof_fw_blk_type blk_t + adata->data_buf = dma_alloc_coherent(&pci->dev, + ACP_DEFAULT_DRAM_LENGTH, + &adata->dma_addr, +- GFP_ATOMIC); ++ GFP_KERNEL); + if (!adata->data_buf) + return -ENOMEM; + } +@@ -90,7 +90,7 @@ int acp_dsp_block_write(struct snd_sof_dev *sdev, enum snd_sof_fw_blk_type blk_t + adata->sram_data_buf = dma_alloc_coherent(&pci->dev, + ACP_DEFAULT_SRAM_LENGTH, + &adata->sram_dma_addr, +- GFP_ATOMIC); ++ GFP_KERNEL); + if (!adata->sram_data_buf) + return -ENOMEM; + } +diff --git a/sound/usb/stream.c b/sound/usb/stream.c +index 1cb52373e70f64..db2c9bac00adca 100644 +--- a/sound/usb/stream.c ++++ b/sound/usb/stream.c +@@ -349,7 +349,7 @@ snd_pcm_chmap_elem *convert_chmap_v3(struct uac3_cluster_header_descriptor + u16 cs_len; + u8 cs_type; + +- if (len < sizeof(*p)) ++ if (len < sizeof(*cs_desc)) + break; + cs_len = le16_to_cpu(cs_desc->wLength); + if (len < cs_len) +diff --git a/sound/usb/validate.c b/sound/usb/validate.c +index 4f4e8e87a14cd0..a0d55b77c9941d 100644 +--- a/sound/usb/validate.c ++++ b/sound/usb/validate.c +@@ -285,7 +285,7 @@ static const struct usb_desc_validator audio_validators[] = { + /* UAC_VERSION_3, UAC3_EXTENDED_TERMINAL: not implemented yet */ + FUNC(UAC_VERSION_3, UAC3_MIXER_UNIT, validate_mixer_unit), + FUNC(UAC_VERSION_3, UAC3_SELECTOR_UNIT, validate_selector_unit), +- FUNC(UAC_VERSION_3, UAC_FEATURE_UNIT, 
validate_uac3_feature_unit), ++ FUNC(UAC_VERSION_3, UAC3_FEATURE_UNIT, validate_uac3_feature_unit), + /* UAC_VERSION_3, UAC3_EFFECT_UNIT: not implemented yet */ + FUNC(UAC_VERSION_3, UAC3_PROCESSING_UNIT, validate_processing_unit), + FUNC(UAC_VERSION_3, UAC3_EXTENSION_UNIT, validate_processing_unit), +diff --git a/tools/objtool/arch/loongarch/special.c b/tools/objtool/arch/loongarch/special.c +index e39f86d97002b9..a80b75f7b061f7 100644 +--- a/tools/objtool/arch/loongarch/special.c ++++ b/tools/objtool/arch/loongarch/special.c +@@ -27,6 +27,7 @@ static void get_rodata_table_size_by_table_annotate(struct objtool_file *file, + struct table_info *next_table; + unsigned long tmp_insn_offset; + unsigned long tmp_rodata_offset; ++ bool is_valid_list = false; + + rsec = find_section_by_name(file->elf, ".rela.discard.tablejump_annotate"); + if (!rsec) +@@ -35,6 +36,12 @@ static void get_rodata_table_size_by_table_annotate(struct objtool_file *file, + INIT_LIST_HEAD(&table_list); + + for_each_reloc(rsec, reloc) { ++ if (reloc->sym->sec->rodata) ++ continue; ++ ++ if (strcmp(insn->sec->name, reloc->sym->sec->name)) ++ continue; ++ + orig_table = malloc(sizeof(struct table_info)); + if (!orig_table) { + WARN("malloc failed"); +@@ -49,6 +56,22 @@ static void get_rodata_table_size_by_table_annotate(struct objtool_file *file, + + if (reloc_idx(reloc) + 1 == sec_num_entries(rsec)) + break; ++ ++ if (strcmp(insn->sec->name, (reloc + 1)->sym->sec->name)) { ++ list_for_each_entry(orig_table, &table_list, jump_info) { ++ if (orig_table->insn_offset == insn->offset) { ++ is_valid_list = true; ++ break; ++ } ++ } ++ ++ if (!is_valid_list) { ++ list_del_init(&table_list); ++ continue; ++ } ++ ++ break; ++ } + } + + list_for_each_entry(orig_table, &table_list, jump_info) { +diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c +index ac1349c4b9e540..4f07ac9fa207cb 100644 +--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c ++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c +@@ -183,9 +183,10 @@ static void xgetaddrinfo(const char *node, const char *service, + struct addrinfo *hints, + struct addrinfo **res) + { +-again: +- int err = getaddrinfo(node, service, hints, res); ++ int err; + ++again: ++ err = getaddrinfo(node, service, hints, res); + if (err) { + const char *errstr; + +diff --git a/tools/testing/selftests/net/mptcp/mptcp_inq.c b/tools/testing/selftests/net/mptcp/mptcp_inq.c +index 3cf1e2a612cef9..f3bcaa48df8f22 100644 +--- a/tools/testing/selftests/net/mptcp/mptcp_inq.c ++++ b/tools/testing/selftests/net/mptcp/mptcp_inq.c +@@ -75,9 +75,10 @@ static void xgetaddrinfo(const char *node, const char *service, + struct addrinfo *hints, + struct addrinfo **res) + { +-again: +- int err = getaddrinfo(node, service, hints, res); ++ int err; + ++again: ++ err = getaddrinfo(node, service, hints, res); + if (err) { + const char *errstr; + +diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c +index 9934a68df23708..e934dd26a59d9b 100644 +--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.c ++++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.c +@@ -162,9 +162,10 @@ static void xgetaddrinfo(const char *node, const char *service, + struct addrinfo *hints, + struct addrinfo **res) + { +-again: +- int err = getaddrinfo(node, service, hints, res); ++ int err; + ++again: ++ err = getaddrinfo(node, service, hints, res); + if (err) { + const char *errstr; + +diff --git 
a/tools/testing/selftests/net/mptcp/pm_netlink.sh b/tools/testing/selftests/net/mptcp/pm_netlink.sh +index 2e6648a2b2c0c6..ac7ec6f9402376 100755 +--- a/tools/testing/selftests/net/mptcp/pm_netlink.sh ++++ b/tools/testing/selftests/net/mptcp/pm_netlink.sh +@@ -198,6 +198,7 @@ set_limits 1 9 2>/dev/null + check "get_limits" "${default_limits}" "subflows above hard limit" + + set_limits 8 8 ++flush_endpoint ## to make sure it doesn't affect the limits + check "get_limits" "$(format_limits 8 8)" "set limits" + + flush_endpoint
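
The sch_hhf and sch_pie hunks above make the same two-part fix: qdisc_dequeue_internal() can return NULL once the queue is empty, so the drain loop must check for it, and the totals passed to qdisc_tree_reduce_backlog() are now accumulated per dequeued packet instead of being derived from qlen/backlog snapshots taken before the loop. A minimal userspace model of the corrected loop (plain C; struct queue, dequeue() and shrink_to_limit() are illustrative stand-ins, not the kernel API):

#include <stdio.h>

/* Illustrative stand-ins for sch->q and qdisc_dequeue_internal(). */
struct pkt { unsigned int len; };

struct queue {
	struct pkt *items;
	unsigned int qlen;
	unsigned int limit;
};

static struct pkt *dequeue(struct queue *q)
{
	if (q->qlen == 0)
		return NULL;	/* like qdisc_dequeue_internal() on an empty queue */
	return &q->items[--q->qlen];
}

static void shrink_to_limit(struct queue *q)
{
	unsigned int dropped_pkts = 0, dropped_bytes = 0;

	while (q->qlen > q->limit) {
		struct pkt *p = dequeue(q);

		if (!p)
			break;	/* must not dereference a NULL return */

		dropped_pkts++;
		dropped_bytes += p->len;
	}

	/* The kernel code reports the totals with
	 * qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes). */
	printf("dropped %u packets, %u bytes\n", dropped_pkts, dropped_bytes);
}

int main(void)
{
	struct pkt items[5] = { {100}, {200}, {300}, {400}, {500} };
	struct queue q = { items, 5, 2 };

	shrink_to_limit(&q);	/* -> dropped 3 packets, 1200 bytes */
	return 0;
}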
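
The Rust allocator hunks replace the removed aligned_size() helper with Kmalloc::aligned_layout(), but the underlying arithmetic is unchanged: Layout::from_size_align() allows size < align, and slab-style allocators such as krealloc() only guarantee natural alignment once the size is padded up to a multiple of the alignment. That rounding, modeled in C (pad_to_align() is a hypothetical helper mirroring Rust's Layout::pad_to_align(); align must be a power of two):

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Round size up to the next multiple of align (align: power of two),
 * mirroring what Layout::pad_to_align() does on the Rust side. */
static size_t pad_to_align(size_t size, size_t align)
{
	return (size + align - 1) & ~(align - 1);
}

int main(void)
{
	/* Layout::from_size_align(1, 64) is legal: size 1, align 64.
	 * Unpadded, a bucket allocator would serve it from a small slab
	 * with no 64-byte alignment guarantee. */
	assert(pad_to_align(1, 64) == 64);
	assert(pad_to_align(64, 64) == 64);
	assert(pad_to_align(65, 64) == 128);

	printf("1 -> %zu, 64 -> %zu, 65 -> %zu\n",
	       pad_to_align(1, 64), pad_to_align(64, 64), pad_to_align(65, 64));
	return 0;
}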
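
The virtio vsock hunk adds a second sanity check on receive: besides requiring the received length to cover the header, the payload length declared inside the header must fit within the bytes that actually arrived, otherwise later code would read past the buffer. A compact model of that validation (plain C; struct hdr and rx_ok() are hypothetical stand-ins for struct virtio_vsock_hdr and the driver's rx path, and the model assumes a little-endian host):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for struct virtio_vsock_hdr: only the le32 payload length. */
struct hdr {
	uint32_t len;
};

static bool rx_ok(const uint8_t *buf, size_t received)
{
	struct hdr h;

	/* Drop short packets: the header must be complete. */
	if (received < sizeof(h))
		return false;

	memcpy(&h, buf, sizeof(h));

	/* Drop packets whose declared payload exceeds what was received. */
	if (h.len > received - sizeof(h))
		return false;

	return true;
}

int main(void)
{
	uint8_t pkt[16] = { 4 };	/* header claims 4 payload bytes */

	printf("%d\n", rx_ok(pkt, 16));	/* 1: 4 <= 16 - 4 */
	printf("%d\n", rx_ok(pkt, 6));	/* 0: 4 >  6 - 4 */
	printf("%d\n", rx_ok(pkt, 2));	/* 0: header incomplete */
	return 0;
}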