From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id A8F96158089 for ; Wed, 13 Sep 2023 11:05:22 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id DA1542BC022; Wed, 13 Sep 2023 11:05:21 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 4DFDA2BC022 for ; Wed, 13 Sep 2023 11:05:21 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 6FFF7335CCD for ; Wed, 13 Sep 2023 11:05:19 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id EAF3C1148 for ; Wed, 13 Sep 2023 11:05:17 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1694603107.0efe20bff15b0dcaa94fb25d6e40b0161e8201b3.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1052_linux-6.1.53.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 0efe20bff15b0dcaa94fb25d6e40b0161e8201b3
X-VCS-Branch: 6.1
Date: Wed, 13 Sep 2023 11:05:17 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: d8599b94-d571-4195-95c8-ca925b6d28fa
X-Archives-Hash: 75b69284639e443f7d79b73b3f5d09b5

commit:     0efe20bff15b0dcaa94fb25d6e40b0161e8201b3
Author:     Mike Pagano gentoo org>
AuthorDate: Wed Sep 13 11:05:07 2023 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Wed Sep 13 11:05:07 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0efe20bf

Linux patch 6.1.53

Signed-off-by: Mike Pagano gentoo.org>

 0000_README             |     4 +
 1052_linux-6.1.53.patch | 23057 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 23061 insertions(+)

diff --git a/0000_README b/0000_README
index 9d50d635..d7316905 100644
--- a/0000_README
+++ b/0000_README
@@ -251,6 +251,10 @@ Patch: 1051_linux-6.1.52.patch
 From: https://www.kernel.org
 Desc: Linux 6.1.52
 
+Patch: 1052_linux-6.1.53.patch
+From: https://www.kernel.org
+Desc: Linux 6.1.53
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1052_linux-6.1.53.patch b/1052_linux-6.1.53.patch new file mode 100644 index 00000000..29b394b8 --- /dev/null +++ b/1052_linux-6.1.53.patch @@ -0,0 +1,23057 @@ +diff --git a/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo b/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo +index 531fe9d6b40aa..c7393b4dd2d88 100644 +--- a/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo ++++ b/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo +@@ -5,6 +5,6 @@ Description: + Indicates whether or not this SBE device has experienced a + timeout; i.e. the SBE did not respond within the time allotted + by the driver. A value of 1 indicates that a timeout has +- ocurred and no transfers have completed since the timeout. A +- value of 0 indicates that no timeout has ocurred, or if one +- has, more recent transfers have completed successful. ++ occurred and no transfers have completed since the timeout. A ++ value of 0 indicates that no timeout has occurred, or if one ++ has, more recent transfers have completed successfully. +diff --git a/Documentation/ABI/testing/sysfs-driver-chromeos-acpi b/Documentation/ABI/testing/sysfs-driver-chromeos-acpi +index c308926e1568a..7c8e129fc1005 100644 +--- a/Documentation/ABI/testing/sysfs-driver-chromeos-acpi ++++ b/Documentation/ABI/testing/sysfs-driver-chromeos-acpi +@@ -134,4 +134,4 @@ KernelVersion: 5.19 + Description: + Returns the verified boot data block shared between the + firmware verification step and the kernel verification step +- (binary). ++ (hex dump). +diff --git a/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml b/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml +index 1289605456408..55800fb0221d0 100644 +--- a/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml ++++ b/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml +@@ -23,6 +23,7 @@ properties: + + connector: + $ref: /schemas/connector/usb-connector.yaml# ++ unevaluatedProperties: false + + ports: + $ref: /schemas/graph.yaml#/properties/ports +diff --git a/Documentation/scsi/scsi_mid_low_api.rst b/Documentation/scsi/scsi_mid_low_api.rst +index a8c5bd15a4400..edfd179b9c7cc 100644 +--- a/Documentation/scsi/scsi_mid_low_api.rst ++++ b/Documentation/scsi/scsi_mid_low_api.rst +@@ -1190,11 +1190,11 @@ Members of interest: + - pointer to scsi_device object that this command is + associated with. + resid +- - an LLD should set this signed integer to the requested ++ - an LLD should set this unsigned integer to the requested + transfer length (i.e. 'request_bufflen') less the number + of bytes that are actually transferred. 'resid' is + preset to 0 so an LLD can ignore it if it cannot detect +- underruns (overruns should be rare). If possible an LLD ++ underruns (overruns should not be reported). An LLD + should set 'resid' prior to invoking 'done'. The most + interesting case is data transfers from a SCSI target + device (e.g. READs) that underrun. +diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst +index cd33857d947d3..0ef49647c90bd 100644 +--- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst ++++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst +@@ -2923,6 +2923,13 @@ This structure contains all loop filter related parameters. 
See sections + - ``poc_lt_curr[V4L2_HEVC_DPB_ENTRIES_NUM_MAX]`` + - PocLtCurr as described in section 8.3.2 "Decoding process for reference + picture set": provides the index of the long term references in DPB array. ++ * - __u8 ++ - ``num_delta_pocs_of_ref_rps_idx`` ++ - When the short_term_ref_pic_set_sps_flag in the slice header is equal to 0, ++ it is the same as the derived value NumDeltaPocs[RefRpsIdx]. It can be used to parse ++ the RPS data in slice headers instead of skipping it with @short_term_ref_pic_set_size. ++ When the value of short_term_ref_pic_set_sps_flag in the slice header is ++ equal to 1, num_delta_pocs_of_ref_rps_idx shall be set to 0. + * - struct :c:type:`v4l2_hevc_dpb_entry` + - ``dpb[V4L2_HEVC_DPB_ENTRIES_NUM_MAX]`` + - The decoded picture buffer, for meta-data about reference frames. +diff --git a/Makefile b/Makefile +index 82aaa3ae7395b..35fc0d62898dc 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 1 +-SUBLEVEL = 52 ++SUBLEVEL = 53 + EXTRAVERSION = + NAME = Curry Ramen + +@@ -1291,7 +1291,7 @@ prepare0: archprepare + # All the preparing.. + prepare: prepare0 + ifdef CONFIG_RUST +- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh -v ++ $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh + $(Q)$(MAKE) $(build)=rust + endif + +@@ -1817,7 +1817,7 @@ $(DOC_TARGETS): + # "Is Rust available?" target + PHONY += rustavailable + rustavailable: +- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh -v && echo "Rust is available!" ++ $(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh && echo "Rust is available!" + + # Documentation target + # +diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile +index 6aa7dc4db2fc8..df6d905eeb877 100644 +--- a/arch/arm/boot/dts/Makefile ++++ b/arch/arm/boot/dts/Makefile +@@ -331,6 +331,7 @@ dtb-$(CONFIG_MACH_KIRKWOOD) += \ + kirkwood-iconnect.dtb \ + kirkwood-iomega_ix2_200.dtb \ + kirkwood-is2.dtb \ ++ kirkwood-km_fixedeth.dtb \ + kirkwood-km_kirkwood.dtb \ + kirkwood-l-50.dtb \ + kirkwood-laplug.dtb \ +@@ -861,7 +862,10 @@ dtb-$(CONFIG_ARCH_OMAP3) += \ + am3517-craneboard.dtb \ + am3517-evm.dtb \ + am3517_mt_ventoux.dtb \ ++ logicpd-torpedo-35xx-devkit.dtb \ + logicpd-torpedo-37xx-devkit.dtb \ ++ logicpd-torpedo-37xx-devkit-28.dtb \ ++ logicpd-som-lv-35xx-devkit.dtb \ + logicpd-som-lv-37xx-devkit.dtb \ + omap3430-sdp.dtb \ + omap3-beagle.dtb \ +@@ -1527,6 +1531,8 @@ dtb-$(CONFIG_MACH_ARMADA_38X) += \ + armada-388-helios4.dtb \ + armada-388-rd.dtb + dtb-$(CONFIG_MACH_ARMADA_39X) += \ ++ armada-390-db.dtb \ ++ armada-395-gp.dtb \ + armada-398-db.dtb + dtb-$(CONFIG_MACH_ARMADA_XP) += \ + armada-xp-axpwifiap.dtb \ +@@ -1556,6 +1562,7 @@ dtb-$(CONFIG_MACH_DOVE) += \ + dtb-$(CONFIG_ARCH_MEDIATEK) += \ + mt2701-evb.dtb \ + mt6580-evbp1.dtb \ ++ mt6582-prestigio-pmt5008-3g.dtb \ + mt6589-aquaris5.dtb \ + mt6589-fairphone-fp1.dtb \ + mt6592-evb.dtb \ +@@ -1608,6 +1615,7 @@ dtb-$(CONFIG_ARCH_ASPEED) += \ + aspeed-bmc-intel-s2600wf.dtb \ + aspeed-bmc-inspur-fp5280g2.dtb \ + aspeed-bmc-inspur-nf5280m6.dtb \ ++ aspeed-bmc-inspur-on5263m5.dtb \ + aspeed-bmc-lenovo-hr630.dtb \ + aspeed-bmc-lenovo-hr855xg2.dtb \ + aspeed-bmc-microsoft-olympus.dtb \ +diff --git a/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts b/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts +index e20b6d2eb274a..1e23e0a807819 100644 +--- a/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts ++++ b/arch/arm/boot/dts/bcm47189-luxul-xap-1440.dts +@@ -46,3 +46,16 @@ + }; + 
}; + }; ++ ++&gmac0 { ++ phy-mode = "rgmii"; ++ phy-handle = <&bcm54210e>; ++ ++ mdio { ++ /delete-node/ switch@1e; ++ ++ bcm54210e: ethernet-phy@0 { ++ reg = <0>; ++ }; ++ }; ++}; +diff --git a/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts b/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts +index 9d863570fcf3a..5dbb950c8113e 100644 +--- a/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts ++++ b/arch/arm/boot/dts/bcm47189-luxul-xap-810.dts +@@ -83,3 +83,16 @@ + }; + }; + }; ++ ++&gmac0 { ++ phy-mode = "rgmii"; ++ phy-handle = <&bcm54210e>; ++ ++ mdio { ++ /delete-node/ switch@1e; ++ ++ bcm54210e: ethernet-phy@0 { ++ reg = <0>; ++ }; ++ }; ++}; +diff --git a/arch/arm/boot/dts/bcm47189-tenda-ac9.dts b/arch/arm/boot/dts/bcm47189-tenda-ac9.dts +index 55b92645b0f1f..b7c7bf0be76f4 100644 +--- a/arch/arm/boot/dts/bcm47189-tenda-ac9.dts ++++ b/arch/arm/boot/dts/bcm47189-tenda-ac9.dts +@@ -135,8 +135,8 @@ + label = "lan4"; + }; + +- port@5 { +- reg = <5>; ++ port@8 { ++ reg = <8>; + label = "cpu"; + ethernet = <&gmac0>; + }; +diff --git a/arch/arm/boot/dts/bcm53573.dtsi b/arch/arm/boot/dts/bcm53573.dtsi +index 3f03a381db0f2..eed1a6147f0bf 100644 +--- a/arch/arm/boot/dts/bcm53573.dtsi ++++ b/arch/arm/boot/dts/bcm53573.dtsi +@@ -127,6 +127,9 @@ + + pcie0: pcie@2000 { + reg = <0x00002000 0x1000>; ++ ++ #address-cells = <3>; ++ #size-cells = <2>; + }; + + usb2: usb2@4000 { +@@ -156,8 +159,6 @@ + }; + + ohci: usb@d000 { +- #usb-cells = <0>; +- + compatible = "generic-ohci"; + reg = <0xd000 0x1000>; + interrupt-parent = <&gic>; +diff --git a/arch/arm/boot/dts/bcm947189acdbmr.dts b/arch/arm/boot/dts/bcm947189acdbmr.dts +index 16e70a264faf5..458bb6e2f5728 100644 +--- a/arch/arm/boot/dts/bcm947189acdbmr.dts ++++ b/arch/arm/boot/dts/bcm947189acdbmr.dts +@@ -60,9 +60,9 @@ + spi { + compatible = "spi-gpio"; + num-chipselects = <1>; +- gpio-sck = <&chipcommon 21 0>; +- gpio-miso = <&chipcommon 22 0>; +- gpio-mosi = <&chipcommon 23 0>; ++ sck-gpios = <&chipcommon 21 0>; ++ miso-gpios = <&chipcommon 22 0>; ++ mosi-gpios = <&chipcommon 23 0>; + cs-gpios = <&chipcommon 24 0>; + #address-cells = <1>; + #size-cells = <0>; +diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi +index 11b9321badc51..667568aa4326a 100644 +--- a/arch/arm/boot/dts/imx7s.dtsi ++++ b/arch/arm/boot/dts/imx7s.dtsi +@@ -1184,6 +1184,8 @@ + <&clks IMX7D_USDHC1_ROOT_CLK>; + clock-names = "ipg", "ahb", "per"; + bus-width = <4>; ++ fsl,tuning-step = <2>; ++ fsl,tuning-start-tap = <20>; + status = "disabled"; + }; + +@@ -1196,6 +1198,8 @@ + <&clks IMX7D_USDHC2_ROOT_CLK>; + clock-names = "ipg", "ahb", "per"; + bus-width = <4>; ++ fsl,tuning-step = <2>; ++ fsl,tuning-start-tap = <20>; + status = "disabled"; + }; + +@@ -1208,6 +1212,8 @@ + <&clks IMX7D_USDHC3_ROOT_CLK>; + clock-names = "ipg", "ahb", "per"; + bus-width = <4>; ++ fsl,tuning-step = <2>; ++ fsl,tuning-start-tap = <20>; + status = "disabled"; + }; + +diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi +index 02e13d8c222a0..b5e0ed4923b59 100644 +--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi ++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi +@@ -228,9 +228,12 @@ + interrupts = , ; + interrupt-names = "hc_irq", "pwr_irq"; + bus-width = <8>; +- clocks = <&gcc GCC_SDCC1_AHB_CLK>, <&gcc GCC_SDCC1_APPS_CLK>, +- <&gcc GCC_DCD_XO_CLK>; +- clock-names = "iface", "core", "xo"; ++ clocks = <&gcc GCC_SDCC1_AHB_CLK>, ++ <&gcc GCC_SDCC1_APPS_CLK>, ++ <&xo>; ++ clock-names = "iface", ++ "core", ++ "xo"; + status = "disabled"; + }; + +diff --git 
a/arch/arm/boot/dts/s3c6410-mini6410.dts b/arch/arm/boot/dts/s3c6410-mini6410.dts +index 17097da36f5ed..0b07b3c319604 100644 +--- a/arch/arm/boot/dts/s3c6410-mini6410.dts ++++ b/arch/arm/boot/dts/s3c6410-mini6410.dts +@@ -51,7 +51,7 @@ + + ethernet@18000000 { + compatible = "davicom,dm9000"; +- reg = <0x18000000 0x2 0x18000004 0x2>; ++ reg = <0x18000000 0x2>, <0x18000004 0x2>; + interrupt-parent = <&gpn>; + interrupts = <7 IRQ_TYPE_LEVEL_HIGH>; + davicom,no-eeprom; +diff --git a/arch/arm/boot/dts/s5pv210-smdkv210.dts b/arch/arm/boot/dts/s5pv210-smdkv210.dts +index fbae768d65e27..901e7197b1368 100644 +--- a/arch/arm/boot/dts/s5pv210-smdkv210.dts ++++ b/arch/arm/boot/dts/s5pv210-smdkv210.dts +@@ -41,7 +41,7 @@ + + ethernet@a8000000 { + compatible = "davicom,dm9000"; +- reg = <0xA8000000 0x2 0xA8000002 0x2>; ++ reg = <0xa8000000 0x2>, <0xa8000002 0x2>; + interrupt-parent = <&gph1>; + interrupts = <1 IRQ_TYPE_LEVEL_HIGH>; + local-mac-address = [00 00 de ad be ef]; +@@ -55,6 +55,14 @@ + default-brightness-level = <6>; + pinctrl-names = "default"; + pinctrl-0 = <&pwm3_out>; ++ power-supply = <&dc5v_reg>; ++ }; ++ ++ dc5v_reg: regulator-0 { ++ compatible = "regulator-fixed"; ++ regulator-name = "DC5V"; ++ regulator-min-microvolt = <5000000>; ++ regulator-max-microvolt = <5000000>; + }; + }; + +diff --git a/arch/arm/boot/dts/stm32mp157c-emstamp-argon.dtsi b/arch/arm/boot/dts/stm32mp157c-emstamp-argon.dtsi +index d540550f7da26..fd89542c69c93 100644 +--- a/arch/arm/boot/dts/stm32mp157c-emstamp-argon.dtsi ++++ b/arch/arm/boot/dts/stm32mp157c-emstamp-argon.dtsi +@@ -68,11 +68,6 @@ + reg = <0x38000000 0x10000>; + no-map; + }; +- +- gpu_reserved: gpu@dc000000 { +- reg = <0xdc000000 0x4000000>; +- no-map; +- }; + }; + + led: gpio_leds { +@@ -102,9 +97,11 @@ + adc1: adc@0 { + pinctrl-names = "default"; + pinctrl-0 = <&adc1_in6_pins_a>; +- st,min-sample-time-nsecs = <5000>; +- st,adc-channels = <6>; + status = "disabled"; ++ channel@6 { ++ reg = <6>; ++ st,min-sample-time-ns = <5000>; ++ }; + }; + + adc2: adc@100 { +@@ -173,7 +170,7 @@ + phy-handle = <&phy0>; + st,eth-ref-clk-sel; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +@@ -183,10 +180,6 @@ + }; + }; + +-&gpu { +- contiguous-area = <&gpu_reserved>; +-}; +- + &hash1 { + status = "okay"; + }; +@@ -375,8 +368,8 @@ + &m4_rproc { + memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>, + <&vdev0vring1>, <&vdev0buffer>; +- mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>; +- mbox-names = "vq0", "vq1", "shutdown"; ++ mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>; ++ mbox-names = "vq0", "vq1", "shutdown", "detach"; + interrupt-parent = <&exti>; + interrupts = <68 1>; + interrupt-names = "wdg"; +diff --git a/arch/arm/boot/dts/stm32mp157c-ev1.dts b/arch/arm/boot/dts/stm32mp157c-ev1.dts +index 050c3c27a4203..b72d5e8aa4669 100644 +--- a/arch/arm/boot/dts/stm32mp157c-ev1.dts ++++ b/arch/arm/boot/dts/stm32mp157c-ev1.dts +@@ -144,7 +144,7 @@ + max-speed = <1000>; + phy-handle = <&phy0>; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +diff --git a/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts b/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts +index e8d2ec41d5374..cb00ce7cec8b1 100644 +--- a/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts ++++ b/arch/arm/boot/dts/stm32mp157c-lxa-mc1.dts +@@ -112,7 +112,7 @@ + phy-handle = <ðphy>; + status = "okay"; + +- mdio0 { ++ mdio { + compatible = "snps,dwmac-mdio"; + #address-cells = <1>; + #size-cells = <0>; +diff --git 
a/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi b/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi +index 2d9461006810c..cf74852514906 100644 +--- a/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi ++++ b/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi +@@ -62,11 +62,6 @@ + reg = <0x38000000 0x10000>; + no-map; + }; +- +- gpu_reserved: gpu@d4000000 { +- reg = <0xd4000000 0x4000000>; +- no-map; +- }; + }; + + led { +@@ -80,11 +75,6 @@ + }; + }; + +-&gpu { +- contiguous-area = <&gpu_reserved>; +- status = "okay"; +-}; +- + &i2c2 { + pinctrl-names = "default"; + pinctrl-0 = <&i2c2_pins_a>; +@@ -240,8 +230,8 @@ + &m4_rproc { + memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>, + <&vdev0vring1>, <&vdev0buffer>; +- mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>; +- mbox-names = "vq0", "vq1", "shutdown"; ++ mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>; ++ mbox-names = "vq0", "vq1", "shutdown", "detach"; + interrupt-parent = <&exti>; + interrupts = <68 1>; + status = "okay"; +diff --git a/arch/arm/boot/dts/stm32mp157c-odyssey.dts b/arch/arm/boot/dts/stm32mp157c-odyssey.dts +index ed66d25b8bf3d..a8b3f7a547036 100644 +--- a/arch/arm/boot/dts/stm32mp157c-odyssey.dts ++++ b/arch/arm/boot/dts/stm32mp157c-odyssey.dts +@@ -41,7 +41,7 @@ + assigned-clock-rates = <125000000>; /* Clock PLL4 to 750Mhz in ATF/U-Boot */ + st,eth-clk-sel; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi +index d3b85a8764d74..74a11ccc5333f 100644 +--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi ++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi +@@ -80,17 +80,19 @@ + vdda-supply = <&vdda>; + vref-supply = <&vdda>; + status = "okay"; ++}; + +- adc1: adc@0 { +- st,min-sample-time-nsecs = <5000>; +- st,adc-channels = <0>; +- status = "okay"; ++&adc1 { ++ channel@0 { ++ reg = <0>; ++ st,min-sample-time-ns = <5000>; + }; ++}; + +- adc2: adc@100 { +- st,adc-channels = <1>; +- st,min-sample-time-nsecs = <5000>; +- status = "okay"; ++&adc2 { ++ channel@1 { ++ reg = <1>; ++ st,min-sample-time-ns = <5000>; + }; + }; + +@@ -125,7 +127,7 @@ + max-speed = <100>; + phy-handle = <&phy0>; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +@@ -414,8 +416,8 @@ + &m4_rproc { + memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>, + <&vdev0vring1>, <&vdev0buffer>; +- mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>; +- mbox-names = "vq0", "vq1", "shutdown"; ++ mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>; ++ mbox-names = "vq0", "vq1", "shutdown", "detach"; + interrupt-parent = <&exti>; + interrupts = <68 1>; + status = "okay"; +diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi +index f068e4fcc404f..b7ba43865514d 100644 +--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi ++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi +@@ -112,17 +112,39 @@ + vdda-supply = <&vdda>; + vref-supply = <&vdda>; + status = "okay"; ++}; + +- adc1: adc@0 { +- st,adc-channels = <0 1 6>; +- st,min-sample-time-nsecs = <5000>; +- status = "okay"; ++&adc1 { ++ channel@0 { ++ reg = <0>; ++ st,min-sample-time-ns = <5000>; + }; + +- adc2: adc@100 { +- st,adc-channels = <0 1 2>; +- st,min-sample-time-nsecs = <5000>; +- status = "okay"; ++ channel@1 { ++ reg = <1>; ++ st,min-sample-time-ns = <5000>; ++ }; ++ ++ channel@6 { ++ reg = <6>; ++ st,min-sample-time-ns = 
<5000>; ++ }; ++}; ++ ++&adc2 { ++ channel@0 { ++ reg = <0>; ++ st,min-sample-time-ns = <5000>; ++ }; ++ ++ channel@1 { ++ reg = <1>; ++ st,min-sample-time-ns = <5000>; ++ }; ++ ++ channel@2 { ++ reg = <2>; ++ st,min-sample-time-ns = <5000>; + }; + }; + +@@ -151,7 +173,7 @@ + max-speed = <1000>; + phy-handle = <&phy0>; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi +index bb4ac6c13cbd3..39af79dc654cc 100644 +--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi ++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi +@@ -78,7 +78,7 @@ + max-speed = <1000>; + phy-handle = <&phy0>; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi +index fdc48536e97d1..73a6a7b278b90 100644 +--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi ++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi +@@ -141,7 +141,7 @@ + max-speed = <1000>; + phy-handle = <&phy0>; + +- mdio0 { ++ mdio { + #address-cells = <1>; + #size-cells = <0>; + compatible = "snps,dwmac-mdio"; +diff --git a/arch/arm/include/asm/syscall.h b/arch/arm/include/asm/syscall.h +index dfeed440254a8..fe4326d938c18 100644 +--- a/arch/arm/include/asm/syscall.h ++++ b/arch/arm/include/asm/syscall.h +@@ -25,6 +25,9 @@ static inline int syscall_get_nr(struct task_struct *task, + if (IS_ENABLED(CONFIG_AEABI) && !IS_ENABLED(CONFIG_OABI_COMPAT)) + return task_thread_info(task)->abi_syscall; + ++ if (task_thread_info(task)->abi_syscall == -1) ++ return -1; ++ + return task_thread_info(task)->abi_syscall & __NR_SYSCALL_MASK; + } + +diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S +index 405a607b754f4..b413b541c3c71 100644 +--- a/arch/arm/kernel/entry-common.S ++++ b/arch/arm/kernel/entry-common.S +@@ -103,6 +103,7 @@ slow_work_pending: + cmp r0, #0 + beq no_work_pending + movlt scno, #(__NR_restart_syscall - __NR_SYSCALL_BASE) ++ str scno, [tsk, #TI_ABI_SYSCALL] @ make sure tracers see update + ldmia sp, {r0 - r6} @ have to reload r0 - r6 + b local_restart @ ... 
and off we go + ENDPROC(ret_fast_syscall) +diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c +index bfe88c6e60d58..cef106913ab7b 100644 +--- a/arch/arm/kernel/ptrace.c ++++ b/arch/arm/kernel/ptrace.c +@@ -785,8 +785,9 @@ long arch_ptrace(struct task_struct *child, long request, + break; + + case PTRACE_SET_SYSCALL: +- task_thread_info(child)->abi_syscall = data & +- __NR_SYSCALL_MASK; ++ if (data != -1) ++ data &= __NR_SYSCALL_MASK; ++ task_thread_info(child)->abi_syscall = data; + ret = 0; + break; + +diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c +index 2d747f6cffe8e..0eca44fc11926 100644 +--- a/arch/arm/mach-omap2/powerdomain.c ++++ b/arch/arm/mach-omap2/powerdomain.c +@@ -174,7 +174,7 @@ static int _pwrdm_state_switch(struct powerdomain *pwrdm, int flag) + break; + case PWRDM_STATE_PREV: + prev = pwrdm_read_prev_pwrst(pwrdm); +- if (pwrdm->state != prev) ++ if (prev >= 0 && pwrdm->state != prev) + pwrdm->state_counter[prev]++; + if (prev == PWRDM_POWER_RET) + _update_logic_membank_counters(pwrdm); +diff --git a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts +index 7c569695b7052..2b4dbfac84a70 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts ++++ b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts +@@ -1312,6 +1312,7 @@ + + uartd: serial@70006300 { + compatible = "nvidia,tegra30-hsuart"; ++ reset-names = "serial"; + status = "okay"; + + bluetooth { +diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts +index 57ab753288144..f094011be9ed9 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts ++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts +@@ -2004,6 +2004,7 @@ + + serial@3100000 { + compatible = "nvidia,tegra194-hsuart"; ++ reset-names = "serial"; + status = "okay"; + }; + +diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts +index e3e90ad92cc59..9650ae70c8723 100644 +--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts ++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts +@@ -289,9 +289,9 @@ + clock-names = "xclk"; + clock-frequency = <23880000>; + +- vdddo-supply = <&camera_vdddo_1v8>; +- vdda-supply = <&camera_vdda_2v8>; +- vddd-supply = <&camera_vddd_1v5>; ++ DOVDD-supply = <&camera_vdddo_1v8>; ++ AVDD-supply = <&camera_vdda_2v8>; ++ DVDD-supply = <&camera_vddd_1v5>; + + /* No camera mezzanine by default */ + status = "disabled"; +diff --git a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts +index d85e7f7c0835a..75f7b4f35fe82 100644 +--- a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts ++++ b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts +@@ -163,7 +163,7 @@ + pinctrl-0 = <&light_int_default>; + + vdd-supply = <&pm8916_l17>; +- vio-supply = <&pm8916_l6>; ++ vddio-supply = <&pm8916_l6>; + }; + + gyroscope@68 { +diff --git a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts +index 4e5264f4116a0..3bbafb68ba5c5 100644 +--- a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts ++++ b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts +@@ -81,7 +81,7 @@ + #size-cells = <0>; + interrupt-parent = <&tlmm>; + interrupts = <125 IRQ_TYPE_LEVEL_LOW>; +- vdda-supply = <&vreg_l6a_1p8>; ++ vio-supply = <&vreg_l6a_1p8>; + vdd-supply = <&vdd_3v2_tp>; + reset-gpios = <&tlmm 89 GPIO_ACTIVE_LOW>; + +diff --git 
a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi +index 9d6ec59d1cd3a..9de2248a385a5 100644 +--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi ++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi +@@ -1063,7 +1063,7 @@ + reg-names = "dsi_ctrl"; + + interrupt-parent = <&mdss>; +- interrupts = <4>; ++ interrupts = <5>; + + clocks = <&mmcc MDSS_MDP_CLK>, + <&mmcc MDSS_BYTE1_CLK>, +@@ -3292,6 +3292,9 @@ + #size-cells = <1>; + ranges; + ++ interrupts = ; ++ interrupt-names = "hs_phy_irq"; ++ + clocks = <&gcc GCC_PERIPH_NOC_USB20_AHB_CLK>, + <&gcc GCC_USB20_MASTER_CLK>, + <&gcc GCC_USB20_MOCK_UTMI_CLK>, +diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi +index 29c60bb56ed5f..b00b8164c4aa2 100644 +--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi ++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi +@@ -2418,10 +2418,10 @@ + + clocks = <&mmcc MNOC_AHB_CLK>, + <&mmcc BIMC_SMMU_AHB_CLK>, +- <&rpmcc RPM_SMD_MMAXI_CLK>, + <&mmcc BIMC_SMMU_AXI_CLK>; +- clock-names = "iface-mm", "iface-smmu", +- "bus-mm", "bus-smmu"; ++ clock-names = "iface-mm", ++ "iface-smmu", ++ "bus-smmu"; + + #global-interrupts = <0>; + interrupts = +@@ -2445,6 +2445,8 @@ + , + , + ; ++ ++ power-domains = <&mmcc BIMC_SMMU_GDSC>; + }; + + remoteproc_adsp: remoteproc@17300000 { +diff --git a/arch/arm64/boot/dts/qcom/pm6150l.dtsi b/arch/arm64/boot/dts/qcom/pm6150l.dtsi +index f02c223ef4485..06d729ff65a9d 100644 +--- a/arch/arm64/boot/dts/qcom/pm6150l.dtsi ++++ b/arch/arm64/boot/dts/qcom/pm6150l.dtsi +@@ -75,8 +75,9 @@ + pm6150l_wled: leds@d800 { + compatible = "qcom,pm6150l-wled"; + reg = <0xd800>, <0xd900>; +- interrupts = <0x5 0xd8 0x1 IRQ_TYPE_EDGE_RISING>; +- interrupt-names = "ovp"; ++ interrupts = <0x5 0xd8 0x1 IRQ_TYPE_EDGE_RISING>, ++ <0x5 0xd8 0x2 IRQ_TYPE_EDGE_RISING>; ++ interrupt-names = "ovp", "short"; + label = "backlight"; + + status = "disabled"; +diff --git a/arch/arm64/boot/dts/qcom/pm660l.dtsi b/arch/arm64/boot/dts/qcom/pm660l.dtsi +index 8aa0a5078772b..88606b996d690 100644 +--- a/arch/arm64/boot/dts/qcom/pm660l.dtsi ++++ b/arch/arm64/boot/dts/qcom/pm660l.dtsi +@@ -74,8 +74,9 @@ + pm660l_wled: leds@d800 { + compatible = "qcom,pm660l-wled"; + reg = <0xd800>, <0xd900>; +- interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>; +- interrupt-names = "ovp"; ++ interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>, ++ <0x3 0xd8 0x2 IRQ_TYPE_EDGE_RISING>; ++ interrupt-names = "ovp", "short"; + label = "backlight"; + + status = "disabled"; +diff --git a/arch/arm64/boot/dts/qcom/pm8350.dtsi b/arch/arm64/boot/dts/qcom/pm8350.dtsi +index 2dfeb99300d74..9ed9ba23e81e4 100644 +--- a/arch/arm64/boot/dts/qcom/pm8350.dtsi ++++ b/arch/arm64/boot/dts/qcom/pm8350.dtsi +@@ -8,7 +8,7 @@ + + / { + thermal-zones { +- pm8350_thermal: pm8350c-thermal { ++ pm8350_thermal: pm8350-thermal { + polling-delay-passive = <100>; + polling-delay = <0>; + thermal-sensors = <&pm8350_temp_alarm>; +diff --git a/arch/arm64/boot/dts/qcom/pm8350b.dtsi b/arch/arm64/boot/dts/qcom/pm8350b.dtsi +index f1c7bd9d079c2..05c1058988927 100644 +--- a/arch/arm64/boot/dts/qcom/pm8350b.dtsi ++++ b/arch/arm64/boot/dts/qcom/pm8350b.dtsi +@@ -8,7 +8,7 @@ + + / { + thermal-zones { +- pm8350b_thermal: pm8350c-thermal { ++ pm8350b_thermal: pm8350b-thermal { + polling-delay-passive = <100>; + polling-delay = <0>; + thermal-sensors = <&pm8350b_temp_alarm>; +diff --git a/arch/arm64/boot/dts/qcom/pmi8994.dtsi b/arch/arm64/boot/dts/qcom/pmi8994.dtsi +index 82b60e988d0f5..49902a3e161d9 100644 +--- a/arch/arm64/boot/dts/qcom/pmi8994.dtsi ++++ 
b/arch/arm64/boot/dts/qcom/pmi8994.dtsi +@@ -54,8 +54,9 @@ + pmi8994_wled: wled@d800 { + compatible = "qcom,pmi8994-wled"; + reg = <0xd800>, <0xd900>; +- interrupts = <3 0xd8 0x02 IRQ_TYPE_EDGE_RISING>; +- interrupt-names = "short"; ++ interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>, ++ <0x3 0xd8 0x2 IRQ_TYPE_EDGE_RISING>; ++ interrupt-names = "ovp", "short"; + qcom,cabc; + qcom,external-pfet; + status = "disabled"; +diff --git a/arch/arm64/boot/dts/qcom/pmk8350.dtsi b/arch/arm64/boot/dts/qcom/pmk8350.dtsi +index f0d256d99e62e..29cfb6fca9bf7 100644 +--- a/arch/arm64/boot/dts/qcom/pmk8350.dtsi ++++ b/arch/arm64/boot/dts/qcom/pmk8350.dtsi +@@ -44,7 +44,7 @@ + }; + + pmk8350_adc_tm: adc-tm@3400 { +- compatible = "qcom,adc-tm7"; ++ compatible = "qcom,spmi-adc-tm5-gen2"; + reg = <0x3400>; + interrupts = <0x0 0x34 0x0 IRQ_TYPE_EDGE_RISING>; + #address-cells = <1>; +diff --git a/arch/arm64/boot/dts/qcom/pmr735b.dtsi b/arch/arm64/boot/dts/qcom/pmr735b.dtsi +index ec24c4478005a..f7473e2473224 100644 +--- a/arch/arm64/boot/dts/qcom/pmr735b.dtsi ++++ b/arch/arm64/boot/dts/qcom/pmr735b.dtsi +@@ -8,7 +8,7 @@ + + / { + thermal-zones { +- pmr735a_thermal: pmr735a-thermal { ++ pmr735b_thermal: pmr735b-thermal { + polling-delay-passive = <100>; + polling-delay = <0>; + thermal-sensors = <&pmr735b_temp_alarm>; +diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts b/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts +index 5e30349efd204..38ec8acb7c40d 100644 +--- a/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts ++++ b/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts +@@ -57,7 +57,7 @@ + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; + +- gpio = <&pmc8280_1_gpios 1 GPIO_ACTIVE_HIGH>; ++ gpio = <&pmc8280_1_gpios 2 GPIO_ACTIVE_HIGH>; + enable-active-high; + + pinctrl-names = "default"; +@@ -364,7 +364,7 @@ + }; + + misc_3p3_reg_en: misc-3p3-reg-en-state { +- pins = "gpio1"; ++ pins = "gpio2"; + function = "normal"; + }; + }; +diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts +index b2b744bb8a538..49d15432aeabf 100644 +--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts ++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts +@@ -347,7 +347,7 @@ + }; + + &tlmm { +- gpio-reserved-ranges = <70 2>, <74 6>, <83 4>, <125 2>, <128 2>, <154 7>; ++ gpio-reserved-ranges = <70 2>, <74 6>, <125 2>, <128 2>, <154 4>; + + kybd_default: kybd-default-state { + disable { +diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi +index 1afc960bab5c9..405835ad28bcd 100644 +--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi ++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi +@@ -396,6 +396,7 @@ + firmware { + scm: scm { + compatible = "qcom,scm-sc8280xp", "qcom,scm"; ++ interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>; + }; + }; + +diff --git a/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi b/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi +index 51ee42e3c995c..d6918e6d19799 100644 +--- a/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi ++++ b/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi +@@ -14,6 +14,15 @@ + qcom,msm-id = <321 0x20001>; /* SDM845 v2.1 */ + qcom,board-id = <8 0>; + ++ aliases { ++ serial0 = &uart6; ++ serial1 = &uart9; ++ }; ++ ++ chosen { ++ stdout-path = "serial0:115200n8"; ++ }; ++ + gpio-keys { + compatible = "gpio-keys"; + +diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi +index 
b7ba70857d0ad..52c9f5639f8a2 100644 +--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi ++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi +@@ -1099,6 +1099,7 @@ + #clock-cells = <1>; + #reset-cells = <1>; + #power-domain-cells = <1>; ++ power-domains = <&rpmhpd SDM845_CX>; + }; + + qfprom@784000 { +@@ -2520,7 +2521,7 @@ + <0 0>, + <0 0>, + <0 0>, +- <0 300000000>; ++ <75000000 300000000>; + + status = "disabled"; + }; +diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi +index 35f621ef9da54..34c8de4f43fba 100644 +--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi +@@ -306,11 +306,6 @@ + no-map; + }; + +- pil_gpu_mem: memory@8b715400 { +- reg = <0 0x8b715400 0 0x2000>; +- no-map; +- }; +- + pil_modem_mem: memory@8b800000 { + reg = <0 0x8b800000 0 0xf800000>; + no-map; +@@ -331,6 +326,11 @@ + no-map; + }; + ++ pil_gpu_mem: memory@f0d00000 { ++ reg = <0 0xf0d00000 0 0x1000>; ++ no-map; ++ }; ++ + debug_region: memory@ffb00000 { + reg = <0 0xffb00000 0 0xc0000>; + no-map; +diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi +index 78ae4b9eaa106..f049fb42e3ca8 100644 +--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi +@@ -1196,7 +1196,7 @@ + dma-names = "tx", "rx"; + pinctrl-names = "default"; + pinctrl-0 = <&qup_i2c7_default>; +- interrupts = ; ++ interrupts = ; + #address-cells = <1>; + #size-cells = <0>; + status = "disabled"; +diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts +index 356a81698731a..62590c6bd3067 100644 +--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts ++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts +@@ -14,3 +14,236 @@ + }; + + /delete-node/ &vreg_l7f_1p8; ++ ++&pm8009_gpios { ++ gpio-line-names = "NC", /* GPIO_1 */ ++ "CAM_PWR_LD_EN", ++ "WIDEC_PWR_EN", ++ "NC"; ++}; ++ ++&pm8150_gpios { ++ gpio-line-names = "VOL_DOWN_N", /* GPIO_1 */ ++ "OPTION_2", ++ "NC", ++ "PM_SLP_CLK_IN", ++ "OPTION_1", ++ "NC", ++ "NC", ++ "SP_ARI_PWR_ALARM", ++ "NC", ++ "NC"; /* GPIO_10 */ ++}; ++ ++&pm8150b_gpios { ++ gpio-line-names = "SNAPSHOT_N", /* GPIO_1 */ ++ "FOCUS_N", ++ "NC", ++ "NC", ++ "RF_LCD_ID_EN", ++ "NC", ++ "NC", ++ "LCD_ID", ++ "NC", ++ "WLC_EN_N", /* GPIO_10 */ ++ "NC", ++ "RF_ID"; ++}; ++ ++&pm8150l_gpios { ++ gpio-line-names = "NC", /* GPIO_1 */ ++ "PM3003A_EN", ++ "NC", ++ "NC", ++ "NC", ++ "AUX2_THERM", ++ "BB_HP_EN", ++ "FP_LDO_EN", ++ "PMX_RESET_N", ++ "AUX3_THERM", /* GPIO_10 */ ++ "DTV_PWR_EN", ++ "PM3003A_MODE"; ++}; ++ ++&tlmm { ++ gpio-line-names = "AP_CTI_IN", /* GPIO_0 */ ++ "MDM2AP_ERR_FATAL", ++ "AP_CTI_OUT", ++ "MDM2AP_STATUS", ++ "NFC_I2C_SDA", ++ "NFC_I2C_SCL", ++ "NFC_EN", ++ "NFC_CLK_REQ", ++ "NFC_ESE_PWR_REQ", ++ "DVDT_WRT_DET_AND", ++ "SPK_AMP_RESET_N", /* GPIO_10 */ ++ "SPK_AMP_INT_N", ++ "APPS_I2C_1_SDA", ++ "APPS_I2C_1_SCL", ++ "NC", ++ "TX_GTR_THRES_IN", ++ "HST_BT_UART_CTS", ++ "HST_BT_UART_RFR", ++ "HST_BT_UART_TX", ++ "HST_BT_UART_RX", ++ "HST_WLAN_EN", /* GPIO_20 */ ++ "HST_BT_EN", ++ "RGBC_IR_PWR_EN", ++ "FP_INT_N", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "NFC_ESE_SPI_MISO", ++ "NFC_ESE_SPI_MOSI", ++ "NFC_ESE_SPI_SCLK", /* GPIO_30 */ ++ "NFC_ESE_SPI_CS_N", ++ "WCD_RST_N", ++ "NC", ++ "SDM_DEBUG_UART_TX", ++ "SDM_DEBUG_UART_RX", ++ "TS_I2C_SDA", ++ "TS_I2C_SCL", ++ "TS_INT_N", ++ "FP_SPI_MISO", /* GPIO_40 */ ++ "FP_SPI_MOSI", ++ "FP_SPI_SCLK", ++ "FP_SPI_CS_N", ++ "APPS_I2C_0_SDA", ++ "APPS_I2C_0_SCL", ++ 
"DISP_ERR_FG", ++ "UIM2_DETECT_EN", ++ "NC", ++ "NC", ++ "NC", /* GPIO_50 */ ++ "NC", ++ "MDM_UART_CTS", ++ "MDM_UART_RFR", ++ "MDM_UART_TX", ++ "MDM_UART_RX", ++ "AP2MDM_STATUS", ++ "AP2MDM_ERR_FATAL", ++ "MDM_IPC_HS_UART_TX", ++ "MDM_IPC_HS_UART_RX", ++ "NC", /* GPIO_60 */ ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "USB_CC_DIR", ++ "DISP_VSYNC", ++ "NC", ++ "NC", ++ "CAM_PWR_B_CS", ++ "NC", /* GPIO_70 */ ++ "CAM_PWR_A_CS", ++ "SBU_SW_SEL", ++ "SBU_SW_OE", ++ "FP_RESET_N", ++ "FP_RESET_N", ++ "DISP_RESET_N", ++ "DEBUG_GPIO0", ++ "TRAY_DET", ++ "CAM2_RST_N", ++ "PCIE0_RST_N", ++ "PCIE0_CLK_REQ_N", /* GPIO_80 */ ++ "PCIE0_WAKE_N", ++ "DVDT_ENABLE", ++ "DVDT_WRT_DET_OR", ++ "NC", ++ "PCIE2_RST_N", ++ "PCIE2_CLK_REQ_N", ++ "PCIE2_WAKE_N", ++ "MDM_VFR_IRQ0", ++ "MDM_VFR_IRQ1", ++ "SW_SERVICE", /* GPIO_90 */ ++ "CAM_SOF", ++ "CAM1_RST_N", ++ "CAM0_RST_N", ++ "CAM0_MCLK", ++ "CAM1_MCLK", ++ "CAM2_MCLK", ++ "CAM3_MCLK", ++ "CAM4_MCLK", ++ "TOF_RST_N", ++ "NC", /* GPIO_100 */ ++ "CCI0_I2C_SDA", ++ "CCI0_I2C_SCL", ++ "CCI1_I2C_SDA", ++ "CCI1_I2C_SCL_", ++ "CCI2_I2C_SDA", ++ "CCI2_I2C_SCL", ++ "CCI3_I2C_SDA", ++ "CCI3_I2C_SCL", ++ "CAM3_RST_N", ++ "NFC_DWL_REQ", /* GPIO_110 */ ++ "NFC_IRQ", ++ "XVS", ++ "NC", ++ "RF_ID_EXTENSION", ++ "SPK_AMP_I2C_SDA", ++ "SPK_AMP_I2C_SCL", ++ "NC", ++ "NC", ++ "WLC_I2C_SDA", ++ "WLC_I2C_SCL", /* GPIO_120 */ ++ "ACC_COVER_OPEN", ++ "ALS_PROX_INT_N", ++ "ACCEL_INT", ++ "WLAN_SW_CTRL", ++ "CAMSENSOR_I2C_SDA", ++ "CAMSENSOR_I2C_SCL", ++ "UDON_SWITCH_SEL", ++ "WDOG_DISABLE", ++ "BAROMETER_INT", ++ "NC", /* GPIO_130 */ ++ "NC", ++ "FORCED_USB_BOOT", ++ "NC", ++ "NC", ++ "WLC_INT_N", ++ "NC", ++ "NC", ++ "RGBC_IR_INT", ++ "NC", ++ "NC", /* GPIO_140 */ ++ "NC", ++ "BT_SLIMBUS_CLK", ++ "BT_SLIMBUS_DATA", ++ "HW_ID_0", ++ "HW_ID_1", ++ "WCD_SWR_TX_CLK", ++ "WCD_SWR_TX_DATA0", ++ "WCD_SWR_TX_DATA1", ++ "WCD_SWR_RX_CLK", ++ "WCD_SWR_RX_DATA0", /* GPIO_150 */ ++ "WCD_SWR_RX_DATA1", ++ "SDM_DMIC_CLK1", ++ "SDM_DMIC_DATA1", ++ "SDM_DMIC_CLK2", ++ "SDM_DMIC_DATA2", ++ "SPK_AMP_I2S_CLK", ++ "SPK_AMP_I2S_WS", ++ "SPK_AMP_I2S_ASP_DIN", ++ "SPK_AMP_I2S_ASP_DOUT", ++ "COMPASS_I2C_SDA", /* GPIO_160 */ ++ "COMPASS_I2C_SCL", ++ "NC", ++ "NC", ++ "SSC_SPI_1_MISO", ++ "SSC_SPI_1_MOSI", ++ "SSC_SPI_1_CLK", ++ "SSC_SPI_1_CS_N", ++ "NC", ++ "NC", ++ "SSC_SENSOR_I2C_SDA", /* GPIO_170 */ ++ "SSC_SENSOR_I2C_SCL", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "HST_BLE_SNS_UART6_TX", ++ "HST_BLE_SNS_UART6_RX", ++ "HST_WLAN_UART_TX", ++ "HST_WLAN_UART_RX"; ++}; +diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts +index 5ecf7dafb2ec4..0e50661c1b4c1 100644 +--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts ++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts +@@ -20,6 +20,8 @@ + }; + + &gpio_keys { ++ pinctrl-0 = <&focus_n &snapshot_n &vol_down_n &g_assist_n>; ++ + g-assist-key { + label = "Google Assistant Key"; + linux,code = ; +@@ -30,6 +32,247 @@ + }; + }; + ++&pm8009_gpios { ++ gpio-line-names = "NC", /* GPIO_1 */ ++ "NC", ++ "WIDEC_PWR_EN", ++ "NC"; ++}; ++ ++&pm8150_gpios { ++ gpio-line-names = "VOL_DOWN_N", /* GPIO_1 */ ++ "OPTION_2", ++ "NC", ++ "PM_SLP_CLK_IN", ++ "OPTION_1", ++ "G_ASSIST_N", ++ "NC", ++ "SP_ARI_PWR_ALARM", ++ "NC", ++ "NC"; /* GPIO_10 */ ++ ++ g_assist_n: g-assist-n-state { ++ pins = "gpio6"; ++ function = "normal"; ++ power-source = <1>; ++ bias-pull-up; ++ input-enable; ++ }; ++}; ++ ++&pm8150b_gpios { ++ gpio-line-names = "SNAPSHOT_N", /* GPIO_1 */ ++ "FOCUS_N", ++ "NC", ++ 
"NC", ++ "RF_LCD_ID_EN", ++ "NC", ++ "NC", ++ "LCD_ID", ++ "NC", ++ "NC", /* GPIO_10 */ ++ "NC", ++ "RF_ID"; ++}; ++ ++&pm8150l_gpios { ++ gpio-line-names = "NC", /* GPIO_1 */ ++ "PM3003A_EN", ++ "NC", ++ "NC", ++ "NC", ++ "AUX2_THERM", ++ "BB_HP_EN", ++ "FP_LDO_EN", ++ "PMX_RESET_N", ++ "NC", /* GPIO_10 */ ++ "NC", ++ "PM3003A_MODE"; ++}; ++ ++&tlmm { ++ gpio-line-names = "AP_CTI_IN", /* GPIO_0 */ ++ "MDM2AP_ERR_FATAL", ++ "AP_CTI_OUT", ++ "MDM2AP_STATUS", ++ "NFC_I2C_SDA", ++ "NFC_I2C_SCL", ++ "NFC_EN", ++ "NFC_CLK_REQ", ++ "NFC_ESE_PWR_REQ", ++ "DVDT_WRT_DET_AND", ++ "SPK_AMP_RESET_N", /* GPIO_10 */ ++ "SPK_AMP_INT_N", ++ "APPS_I2C_1_SDA", ++ "APPS_I2C_1_SCL", ++ "NC", ++ "TX_GTR_THRES_IN", ++ "HST_BT_UART_CTS", ++ "HST_BT_UART_RFR", ++ "HST_BT_UART_TX", ++ "HST_BT_UART_RX", ++ "HST_WLAN_EN", /* GPIO_20 */ ++ "HST_BT_EN", ++ "RGBC_IR_PWR_EN", ++ "FP_INT_N", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "NFC_ESE_SPI_MISO", ++ "NFC_ESE_SPI_MOSI", ++ "NFC_ESE_SPI_SCLK", /* GPIO_30 */ ++ "NFC_ESE_SPI_CS_N", ++ "WCD_RST_N", ++ "NC", ++ "SDM_DEBUG_UART_TX", ++ "SDM_DEBUG_UART_RX", ++ "TS_I2C_SDA", ++ "TS_I2C_SCL", ++ "TS_INT_N", ++ "FP_SPI_MISO", /* GPIO_40 */ ++ "FP_SPI_MOSI", ++ "FP_SPI_SCLK", ++ "FP_SPI_CS_N", ++ "APPS_I2C_0_SDA", ++ "APPS_I2C_0_SCL", ++ "DISP_ERR_FG", ++ "UIM2_DETECT_EN", ++ "NC", ++ "NC", ++ "NC", /* GPIO_50 */ ++ "NC", ++ "MDM_UART_CTS", ++ "MDM_UART_RFR", ++ "MDM_UART_TX", ++ "MDM_UART_RX", ++ "AP2MDM_STATUS", ++ "AP2MDM_ERR_FATAL", ++ "MDM_IPC_HS_UART_TX", ++ "MDM_IPC_HS_UART_RX", ++ "NC", /* GPIO_60 */ ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "USB_CC_DIR", ++ "DISP_VSYNC", ++ "NC", ++ "NC", ++ "CAM_PWR_B_CS", ++ "NC", /* GPIO_70 */ ++ "FRONTC_PWR_EN", ++ "SBU_SW_SEL", ++ "SBU_SW_OE", ++ "FP_RESET_N", ++ "FP_RESET_N", ++ "DISP_RESET_N", ++ "DEBUG_GPIO0", ++ "TRAY_DET", ++ "CAM2_RST_N", ++ "PCIE0_RST_N", ++ "PCIE0_CLK_REQ_N", /* GPIO_80 */ ++ "PCIE0_WAKE_N", ++ "DVDT_ENABLE", ++ "DVDT_WRT_DET_OR", ++ "NC", ++ "PCIE2_RST_N", ++ "PCIE2_CLK_REQ_N", ++ "PCIE2_WAKE_N", ++ "MDM_VFR_IRQ0", ++ "MDM_VFR_IRQ1", ++ "SW_SERVICE", /* GPIO_90 */ ++ "CAM_SOF", ++ "CAM1_RST_N", ++ "CAM0_RST_N", ++ "CAM0_MCLK", ++ "CAM1_MCLK", ++ "CAM2_MCLK", ++ "CAM3_MCLK", ++ "NC", ++ "NC", ++ "NC", /* GPIO_100 */ ++ "CCI0_I2C_SDA", ++ "CCI0_I2C_SCL", ++ "CCI1_I2C_SDA", ++ "CCI1_I2C_SCL_", ++ "CCI2_I2C_SDA", ++ "CCI2_I2C_SCL", ++ "CCI3_I2C_SDA", ++ "CCI3_I2C_SCL", ++ "CAM3_RST_N", ++ "NFC_DWL_REQ", /* GPIO_110 */ ++ "NFC_IRQ", ++ "XVS", ++ "NC", ++ "RF_ID_EXTENSION", ++ "SPK_AMP_I2C_SDA", ++ "SPK_AMP_I2C_SCL", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "ACC_COVER_OPEN", ++ "ALS_PROX_INT_N", ++ "ACCEL_INT", ++ "WLAN_SW_CTRL", ++ "CAMSENSOR_I2C_SDA", ++ "CAMSENSOR_I2C_SCL", ++ "UDON_SWITCH_SEL", ++ "WDOG_DISABLE", ++ "BAROMETER_INT", ++ "NC", /* GPIO_130 */ ++ "NC", ++ "FORCED_USB_BOOT", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "RGBC_IR_INT", ++ "NC", ++ "NC", /* GPIO_140 */ ++ "NC", ++ "BT_SLIMBUS_CLK", ++ "BT_SLIMBUS_DATA", ++ "HW_ID_0", ++ "HW_ID_1", ++ "WCD_SWR_TX_CLK", ++ "WCD_SWR_TX_DATA0", ++ "WCD_SWR_TX_DATA1", ++ "WCD_SWR_RX_CLK", ++ "WCD_SWR_RX_DATA0", /* GPIO_150 */ ++ "WCD_SWR_RX_DATA1", ++ "SDM_DMIC_CLK1", ++ "SDM_DMIC_DATA1", ++ "SDM_DMIC_CLK2", ++ "SDM_DMIC_DATA2", ++ "SPK_AMP_I2S_CLK", ++ "SPK_AMP_I2S_WS", ++ "SPK_AMP_I2S_ASP_DIN", ++ "SPK_AMP_I2S_ASP_DOUT", ++ "COMPASS_I2C_SDA", /* GPIO_160 */ ++ "COMPASS_I2C_SCL", ++ "NC", ++ "NC", ++ "SSC_SPI_1_MISO", ++ "SSC_SPI_1_MOSI", ++ "SSC_SPI_1_CLK", ++ "SSC_SPI_1_CS_N", ++ "NC", ++ "NC", ++ "SSC_SENSOR_I2C_SDA", /* GPIO_170 */ ++ 
"SSC_SENSOR_I2C_SCL", ++ "NC", ++ "NC", ++ "NC", ++ "NC", ++ "HST_BLE_SNS_UART6_TX", ++ "HST_BLE_SNS_UART6_RX", ++ "HST_WLAN_UART_TX", ++ "HST_WLAN_UART_RX"; ++}; ++ + &vreg_l2f_1p3 { + regulator-min-microvolt = <1200000>; + regulator-max-microvolt = <1200000>; +diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi +index 390b90a8ddf70..3b710c6a326a5 100644 +--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi +@@ -51,12 +51,26 @@ + gpio_keys: gpio-keys { + compatible = "gpio-keys"; + +- /* +- * Camera focus (light press) and camera snapshot (full press) +- * seem not to work properly.. Adding the former one stalls the CPU +- * and the latter kills the volume down key for whatever reason. In any +- * case, they are both on &pm8150b_gpios: camera focus(2), camera snapshot(1). +- */ ++ pinctrl-0 = <&focus_n &snapshot_n &vol_down_n>; ++ pinctrl-names = "default"; ++ ++ key-camera-focus { ++ label = "Camera Focus"; ++ linux,code = ; ++ gpios = <&pm8150b_gpios 2 GPIO_ACTIVE_LOW>; ++ debounce-interval = <15>; ++ linux,can-disable; ++ wakeup-source; ++ }; ++ ++ key-camera-snapshot { ++ label = "Camera Snapshot"; ++ linux,code = ; ++ gpios = <&pm8150b_gpios 1 GPIO_ACTIVE_LOW>; ++ debounce-interval = <15>; ++ linux,can-disable; ++ wakeup-source; ++ }; + + key-vol-down { + label = "Volume Down"; +@@ -546,6 +560,34 @@ + vdda-pll-supply = <&vreg_l9a_1p2>; + }; + ++&pm8150_gpios { ++ vol_down_n: vol-down-n-state { ++ pins = "gpio1"; ++ function = "normal"; ++ power-source = <0>; ++ bias-pull-up; ++ input-enable; ++ }; ++}; ++ ++&pm8150b_gpios { ++ snapshot_n: snapshot-n-state { ++ pins = "gpio1"; ++ function = "normal"; ++ power-source = <0>; ++ bias-pull-up; ++ input-enable; ++ }; ++ ++ focus_n: focus-n-state { ++ pins = "gpio2"; ++ function = "normal"; ++ power-source = <0>; ++ bias-pull-up; ++ input-enable; ++ }; ++}; ++ + &pon_pwrkey { + status = "okay"; + }; +diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi +index e93955525a107..4d9b30f0b2841 100644 +--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi +@@ -99,7 +99,7 @@ + reg = <0x0 0x0>; + enable-method = "psci"; + capacity-dmips-mhz = <448>; +- dynamic-power-coefficient = <205>; ++ dynamic-power-coefficient = <105>; + next-level-cache = <&L2_0>; + power-domains = <&CPU_PD0>; + power-domain-names = "psci"; +@@ -123,7 +123,7 @@ + reg = <0x0 0x100>; + enable-method = "psci"; + capacity-dmips-mhz = <448>; +- dynamic-power-coefficient = <205>; ++ dynamic-power-coefficient = <105>; + next-level-cache = <&L2_100>; + power-domains = <&CPU_PD1>; + power-domain-names = "psci"; +@@ -144,7 +144,7 @@ + reg = <0x0 0x200>; + enable-method = "psci"; + capacity-dmips-mhz = <448>; +- dynamic-power-coefficient = <205>; ++ dynamic-power-coefficient = <105>; + next-level-cache = <&L2_200>; + power-domains = <&CPU_PD2>; + power-domain-names = "psci"; +@@ -165,7 +165,7 @@ + reg = <0x0 0x300>; + enable-method = "psci"; + capacity-dmips-mhz = <448>; +- dynamic-power-coefficient = <205>; ++ dynamic-power-coefficient = <105>; + next-level-cache = <&L2_300>; + power-domains = <&CPU_PD3>; + power-domain-names = "psci"; +@@ -1862,6 +1862,7 @@ + + pinctrl-names = "default"; + pinctrl-0 = <&pcie0_default_state>; ++ dma-coherent; + + status = "disabled"; + }; +@@ -1968,6 +1969,7 @@ + + pinctrl-names = "default"; + pinctrl-0 = <&pcie1_default_state>; ++ dma-coherent; + + 
status = "disabled"; + }; +@@ -2076,6 +2078,7 @@ + + pinctrl-names = "default"; + pinctrl-0 = <&pcie2_default_state>; ++ dma-coherent; + + status = "disabled"; + }; +diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi +index 7fd1c3f71c0f8..b3245b13b2611 100644 +--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi +@@ -63,7 +63,7 @@ + + CPU0: cpu@0 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a55"; + reg = <0x0 0x0>; + enable-method = "psci"; + next-level-cache = <&L2_0>; +@@ -82,7 +82,7 @@ + + CPU1: cpu@100 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a55"; + reg = <0x0 0x100>; + enable-method = "psci"; + next-level-cache = <&L2_100>; +@@ -98,7 +98,7 @@ + + CPU2: cpu@200 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a55"; + reg = <0x0 0x200>; + enable-method = "psci"; + next-level-cache = <&L2_200>; +@@ -114,7 +114,7 @@ + + CPU3: cpu@300 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a55"; + reg = <0x0 0x300>; + enable-method = "psci"; + next-level-cache = <&L2_300>; +@@ -130,7 +130,7 @@ + + CPU4: cpu@400 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a78"; + reg = <0x0 0x400>; + enable-method = "psci"; + next-level-cache = <&L2_400>; +@@ -146,7 +146,7 @@ + + CPU5: cpu@500 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a78"; + reg = <0x0 0x500>; + enable-method = "psci"; + next-level-cache = <&L2_500>; +@@ -163,7 +163,7 @@ + + CPU6: cpu@600 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-a78"; + reg = <0x0 0x600>; + enable-method = "psci"; + next-level-cache = <&L2_600>; +@@ -179,7 +179,7 @@ + + CPU7: cpu@700 { + device_type = "cpu"; +- compatible = "qcom,kryo685"; ++ compatible = "arm,cortex-x1"; + reg = <0x0 0x700>; + enable-method = "psci"; + next-level-cache = <&L2_700>; +@@ -236,8 +236,8 @@ + compatible = "arm,idle-state"; + idle-state-name = "silver-rail-power-collapse"; + arm,psci-suspend-param = <0x40000004>; +- entry-latency-us = <355>; +- exit-latency-us = <909>; ++ entry-latency-us = <360>; ++ exit-latency-us = <531>; + min-residency-us = <3934>; + local-timer-stop; + }; +@@ -246,8 +246,8 @@ + compatible = "arm,idle-state"; + idle-state-name = "gold-rail-power-collapse"; + arm,psci-suspend-param = <0x40000004>; +- entry-latency-us = <241>; +- exit-latency-us = <1461>; ++ entry-latency-us = <702>; ++ exit-latency-us = <1061>; + min-residency-us = <4488>; + local-timer-stop; + }; +@@ -2072,6 +2072,13 @@ + <0 0x18593000 0 0x1000>; + reg-names = "freq-domain0", "freq-domain1", "freq-domain2"; + ++ interrupts = , ++ , ++ ; ++ interrupt-names = "dcvsh-irq-0", ++ "dcvsh-irq-1", ++ "dcvsh-irq-2"; ++ + clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>; + clock-names = "xo", "alternate"; + +diff --git a/arch/arm64/include/asm/sdei.h b/arch/arm64/include/asm/sdei.h +index 4292d9bafb9d2..484cb6972e99a 100644 +--- a/arch/arm64/include/asm/sdei.h ++++ b/arch/arm64/include/asm/sdei.h +@@ -17,6 +17,9 @@ + + #include + ++DECLARE_PER_CPU(struct sdei_registered_event *, sdei_active_normal_event); ++DECLARE_PER_CPU(struct sdei_registered_event *, sdei_active_critical_event); ++ + extern unsigned long sdei_exit_mode; + + /* Software Delegated Exception entry point from firmware*/ +@@ -29,6 +32,9 @@ asmlinkage void __sdei_asm_entry_trampoline(unsigned long 
event_num, + unsigned long pc, + unsigned long pstate); + ++/* Abort a running handler. Context is discarded. */ ++void __sdei_handler_abort(void); ++ + /* + * The above entry point does the minimum to call C code. This function does + * anything else, before calling the driver. +diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S +index 3671d9521d4f5..beb4db21c89c1 100644 +--- a/arch/arm64/kernel/entry.S ++++ b/arch/arm64/kernel/entry.S +@@ -993,9 +993,13 @@ SYM_CODE_START(__sdei_asm_handler) + + mov x19, x1 + +-#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK) ++ /* Store the registered-event for crash_smp_send_stop() */ + ldrb w4, [x19, #SDEI_EVENT_PRIORITY] +-#endif ++ cbnz w4, 1f ++ adr_this_cpu dst=x5, sym=sdei_active_normal_event, tmp=x6 ++ b 2f ++1: adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6 ++2: str x19, [x5] + + #ifdef CONFIG_VMAP_STACK + /* +@@ -1062,6 +1066,14 @@ SYM_CODE_START(__sdei_asm_handler) + + ldr_l x2, sdei_exit_mode + ++ /* Clear the registered-event seen by crash_smp_send_stop() */ ++ ldrb w3, [x4, #SDEI_EVENT_PRIORITY] ++ cbnz w3, 1f ++ adr_this_cpu dst=x5, sym=sdei_active_normal_event, tmp=x6 ++ b 2f ++1: adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6 ++2: str xzr, [x5] ++ + alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0 + sdei_handler_exit exit_mode=x2 + alternative_else_nop_endif +@@ -1072,4 +1084,15 @@ alternative_else_nop_endif + #endif + SYM_CODE_END(__sdei_asm_handler) + NOKPROBE(__sdei_asm_handler) ++ ++SYM_CODE_START(__sdei_handler_abort) ++ mov_q x0, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME ++ adr x1, 1f ++ ldr_l x2, sdei_exit_mode ++ sdei_handler_exit exit_mode=x2 ++ // exit the handler and jump to the next instruction. ++ // Exit will stomp x0-x17, PSTATE, ELR_ELx, and SPSR_ELx. ++1: ret ++SYM_CODE_END(__sdei_handler_abort) ++NOKPROBE(__sdei_handler_abort) + #endif /* CONFIG_ARM_SDE_INTERFACE */ +diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c +index 8cd59d387b90b..8c226d79abdfc 100644 +--- a/arch/arm64/kernel/fpsimd.c ++++ b/arch/arm64/kernel/fpsimd.c +@@ -1133,9 +1133,6 @@ void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) + */ + u64 read_zcr_features(void) + { +- u64 zcr; +- unsigned int vq_max; +- + /* + * Set the maximum possible VL, and write zeroes to all other + * bits to see if they stick. +@@ -1143,12 +1140,8 @@ u64 read_zcr_features(void) + sve_kernel_enable(NULL); + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1); + +- zcr = read_sysreg_s(SYS_ZCR_EL1); +- zcr &= ~(u64)ZCR_ELx_LEN_MASK; /* find sticky 1s outside LEN field */ +- vq_max = sve_vq_from_vl(sve_get_vl()); +- zcr |= vq_max - 1; /* set LEN field to maximum effective value */ +- +- return zcr; ++ /* Return LEN value that would be written to get the maximum VL */ ++ return sve_vq_from_vl(sve_get_vl()) - 1; + } + + void __init sve_setup(void) +@@ -1292,11 +1285,7 @@ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) + */ + u64 read_smcr_features(void) + { +- u64 smcr; +- unsigned int vq_max; +- + sme_kernel_enable(NULL); +- sme_smstart_sm(); + + /* + * Set the maximum possible VL. 
+@@ -1304,14 +1293,8 @@ u64 read_smcr_features(void) + write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_LEN_MASK, + SYS_SMCR_EL1); + +- smcr = read_sysreg_s(SYS_SMCR_EL1); +- smcr &= ~(u64)SMCR_ELx_LEN_MASK; /* Only the LEN field */ +- vq_max = sve_vq_from_vl(sve_get_vl()); +- smcr |= vq_max - 1; /* set LEN field to maximum effective value */ +- +- sme_smstop_sm(); +- +- return smcr; ++ /* Return LEN value that would be written to get the maximum VL */ ++ return sve_vq_from_vl(sme_get_vl()) - 1; + } + + void __init sme_setup(void) +diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c +index f606c942f514e..e1f6366b7ccdf 100644 +--- a/arch/arm64/kernel/ptrace.c ++++ b/arch/arm64/kernel/ptrace.c +@@ -896,7 +896,8 @@ static int sve_set_common(struct task_struct *target, + break; + default: + WARN_ON_ONCE(1); +- return -EINVAL; ++ ret = -EINVAL; ++ goto out; + } + + /* +diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c +index d56e170e1ca7c..48c6457b67db8 100644 +--- a/arch/arm64/kernel/sdei.c ++++ b/arch/arm64/kernel/sdei.c +@@ -47,6 +47,9 @@ DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr); + DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr); + #endif + ++DEFINE_PER_CPU(struct sdei_registered_event *, sdei_active_normal_event); ++DEFINE_PER_CPU(struct sdei_registered_event *, sdei_active_critical_event); ++ + static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu) + { + unsigned long *p; +diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c +index ffc5d76cf6955..d323621d14a59 100644 +--- a/arch/arm64/kernel/smp.c ++++ b/arch/arm64/kernel/smp.c +@@ -1047,10 +1047,8 @@ void crash_smp_send_stop(void) + * If this cpu is the only one alive at this point in time, online or + * not, there are no stop messages to be sent around, so just back out. 
+ */ +- if (num_other_online_cpus() == 0) { +- sdei_mask_local_cpu(); +- return; +- } ++ if (num_other_online_cpus() == 0) ++ goto skip_ipi; + + cpumask_copy(&mask, cpu_online_mask); + cpumask_clear_cpu(smp_processor_id(), &mask); +@@ -1069,7 +1067,9 @@ void crash_smp_send_stop(void) + pr_warn("SMP: failed to stop secondary CPUs %*pbl\n", + cpumask_pr_args(&mask)); + ++skip_ipi: + sdei_mask_local_cpu(); ++ sdei_handler_abort(); + } + + bool smp_crash_stop_failed(void) +diff --git a/arch/arm64/lib/csum.c b/arch/arm64/lib/csum.c +index 78b87a64ca0a3..2432683e48a61 100644 +--- a/arch/arm64/lib/csum.c ++++ b/arch/arm64/lib/csum.c +@@ -24,7 +24,7 @@ unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len) + const u64 *ptr; + u64 data, sum64 = 0; + +- if (unlikely(len == 0)) ++ if (unlikely(len <= 0)) + return 0; + + offset = (unsigned long)buff & 7; +diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c +index 35e9a468d13e6..134dcf6bc650c 100644 +--- a/arch/arm64/mm/hugetlbpage.c ++++ b/arch/arm64/mm/hugetlbpage.c +@@ -236,7 +236,7 @@ static void clear_flush(struct mm_struct *mm, + unsigned long i, saddr = addr; + + for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) +- pte_clear(mm, addr, ptep); ++ ptep_clear(mm, addr, ptep); + + flush_tlb_range(&vma, saddr, addr); + } +diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h +index 62835d84a647d..3d15fa5bef37d 100644 +--- a/arch/loongarch/include/asm/loongarch.h ++++ b/arch/loongarch/include/asm/loongarch.h +@@ -1488,7 +1488,7 @@ __BUILD_CSR_OP(tlbidx) + #define write_fcsr(dest, val) \ + do { \ + __asm__ __volatile__( \ +- " movgr2fcsr %0, "__stringify(dest)" \n" \ ++ " movgr2fcsr "__stringify(dest)", %0 \n" \ + : : "r" (val)); \ + } while (0) + +diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h +index 3d1e0a69975a5..5f2ebcea509cd 100644 +--- a/arch/loongarch/include/asm/pgtable-bits.h ++++ b/arch/loongarch/include/asm/pgtable-bits.h +@@ -21,12 +21,14 @@ + #define _PAGE_HGLOBAL_SHIFT 12 /* HGlobal is a PMD bit */ + #define _PAGE_PFN_SHIFT 12 + #define _PAGE_PFN_END_SHIFT 48 ++#define _PAGE_PRESENT_INVALID_SHIFT 60 + #define _PAGE_NO_READ_SHIFT 61 + #define _PAGE_NO_EXEC_SHIFT 62 + #define _PAGE_RPLV_SHIFT 63 + + /* Used by software */ + #define _PAGE_PRESENT (_ULCAST_(1) << _PAGE_PRESENT_SHIFT) ++#define _PAGE_PRESENT_INVALID (_ULCAST_(1) << _PAGE_PRESENT_INVALID_SHIFT) + #define _PAGE_WRITE (_ULCAST_(1) << _PAGE_WRITE_SHIFT) + #define _PAGE_ACCESSED (_ULCAST_(1) << _PAGE_ACCESSED_SHIFT) + #define _PAGE_MODIFIED (_ULCAST_(1) << _PAGE_MODIFIED_SHIFT) +diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h +index 79d5bfd913e0f..f991e678ca4b7 100644 +--- a/arch/loongarch/include/asm/pgtable.h ++++ b/arch/loongarch/include/asm/pgtable.h +@@ -208,7 +208,7 @@ static inline int pmd_bad(pmd_t pmd) + static inline int pmd_present(pmd_t pmd) + { + if (unlikely(pmd_val(pmd) & _PAGE_HUGE)) +- return !!(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE)); ++ return !!(pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PRESENT_INVALID)); + + return pmd_val(pmd) != (unsigned long)invalid_pte_table; + } +@@ -525,6 +525,7 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) + + static inline pmd_t pmd_mkinvalid(pmd_t pmd) + { ++ pmd_val(pmd) |= _PAGE_PRESENT_INVALID; + pmd_val(pmd) &= ~(_PAGE_PRESENT | _PAGE_VALID | _PAGE_DIRTY | _PAGE_PROTNONE); + + return pmd; +@@ -559,6 +560,9 @@ static 
inline long pmd_protnone(pmd_t pmd) + } + #endif /* CONFIG_NUMA_BALANCING */ + ++#define pmd_leaf(pmd) ((pmd_val(pmd) & _PAGE_HUGE) != 0) ++#define pud_leaf(pud) ((pud_val(pud) & _PAGE_HUGE) != 0) ++ + /* + * We provide our own get_unmapped area to cope with the virtual aliasing + * constraints placed on us by the cache architecture. +diff --git a/arch/m68k/fpsp040/skeleton.S b/arch/m68k/fpsp040/skeleton.S +index 439395aa6fb42..081922c72daaa 100644 +--- a/arch/m68k/fpsp040/skeleton.S ++++ b/arch/m68k/fpsp040/skeleton.S +@@ -499,13 +499,13 @@ in_ea: + dbf %d0,morein + rts + +- .section .fixup,#alloc,#execinstr ++ .section .fixup,"ax" + .even + 1: + jbsr fpsp040_die + jbra .Lnotkern + +- .section __ex_table,#alloc ++ .section __ex_table,"a" + .align 4 + + .long in_ea,1b +diff --git a/arch/m68k/ifpsp060/os.S b/arch/m68k/ifpsp060/os.S +index 7a0d6e4280665..89e2ec224ab6c 100644 +--- a/arch/m68k/ifpsp060/os.S ++++ b/arch/m68k/ifpsp060/os.S +@@ -379,11 +379,11 @@ _060_real_access: + + + | Execption handling for movs access to illegal memory +- .section .fixup,#alloc,#execinstr ++ .section .fixup,"ax" + .even + 1: moveq #-1,%d1 + rts +-.section __ex_table,#alloc ++.section __ex_table,"a" + .align 4 + .long dmrbuae,1b + .long dmrwuae,1b +diff --git a/arch/m68k/kernel/relocate_kernel.S b/arch/m68k/kernel/relocate_kernel.S +index ab0f1e7d46535..f7667079e08e9 100644 +--- a/arch/m68k/kernel/relocate_kernel.S ++++ b/arch/m68k/kernel/relocate_kernel.S +@@ -26,7 +26,7 @@ ENTRY(relocate_new_kernel) + lea %pc@(.Lcopy),%a4 + 2: addl #0x00000000,%a4 /* virt_to_phys() */ + +- .section ".m68k_fixup","aw" ++ .section .m68k_fixup,"aw" + .long M68K_FIXUP_MEMOFFSET, 2b+2 + .previous + +@@ -49,7 +49,7 @@ ENTRY(relocate_new_kernel) + lea %pc@(.Lcont040),%a4 + 5: addl #0x00000000,%a4 /* virt_to_phys() */ + +- .section ".m68k_fixup","aw" ++ .section .m68k_fixup,"aw" + .long M68K_FIXUP_MEMOFFSET, 5b+2 + .previous + +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig +index cf1fbf4eaa8a0..0e62f5edaee2e 100644 +--- a/arch/mips/Kconfig ++++ b/arch/mips/Kconfig +@@ -83,7 +83,6 @@ config MIPS + select HAVE_LD_DEAD_CODE_DATA_ELIMINATION + select HAVE_MOD_ARCH_SPECIFIC + select HAVE_NMI +- select HAVE_PATA_PLATFORM + select HAVE_PERF_EVENTS + select HAVE_PERF_REGS + select HAVE_PERF_USER_STACK_DUMP +diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c +index dddaaa6e7a825..1f6c776d80813 100644 +--- a/arch/parisc/kernel/processor.c ++++ b/arch/parisc/kernel/processor.c +@@ -372,10 +372,18 @@ int + show_cpuinfo (struct seq_file *m, void *v) + { + unsigned long cpu; ++ char cpu_name[60], *p; ++ ++ /* strip PA path from CPU name to not confuse lscpu */ ++ strlcpy(cpu_name, per_cpu(cpu_data, 0).dev->name, sizeof(cpu_name)); ++ p = strrchr(cpu_name, '['); ++ if (p) ++ *(--p) = 0; + + for_each_online_cpu(cpu) { +- const struct cpuinfo_parisc *cpuinfo = &per_cpu(cpu_data, cpu); + #ifdef CONFIG_SMP ++ const struct cpuinfo_parisc *cpuinfo = &per_cpu(cpu_data, cpu); ++ + if (0 == cpuinfo->hpa) + continue; + #endif +@@ -420,8 +428,7 @@ show_cpuinfo (struct seq_file *m, void *v) + + seq_printf(m, "model\t\t: %s - %s\n", + boot_cpu_data.pdc.sys_model_name, +- cpuinfo->dev ? 
+- cpuinfo->dev->name : "Unknown"); ++ cpu_name); + + seq_printf(m, "hversion\t: 0x%08x\n" + "sversion\t: 0x%08x\n", +diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile +index 13fad4f0a6d8f..b13324b1a1696 100644 +--- a/arch/powerpc/boot/Makefile ++++ b/arch/powerpc/boot/Makefile +@@ -34,8 +34,6 @@ endif + + BOOTCFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ + -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \ +- $(call cc-option,-mno-prefixed) $(call cc-option,-mno-pcrel) \ +- $(call cc-option,-mno-mma) \ + $(call cc-option,-mno-spe) $(call cc-option,-mspe=no) \ + -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \ + $(LINUXINCLUDE) +@@ -71,6 +69,10 @@ BOOTAFLAGS := -D__ASSEMBLY__ $(BOOTCFLAGS) -nostdinc + + BOOTARFLAGS := -crD + ++BOOTCFLAGS += $(call cc-option,-mno-prefixed) \ ++ $(call cc-option,-mno-pcrel) \ ++ $(call cc-option,-mno-mma) ++ + ifdef CONFIG_CC_IS_CLANG + BOOTCFLAGS += $(CLANG_FLAGS) + BOOTAFLAGS += $(CLANG_FLAGS) +diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h +index 34d44cb17c874..ee1488d38fdc1 100644 +--- a/arch/powerpc/include/asm/lppaca.h ++++ b/arch/powerpc/include/asm/lppaca.h +@@ -45,6 +45,7 @@ + #include + #include + #include ++#include + + /* + * The lppaca is the "virtual processor area" registered with the hypervisor, +@@ -127,13 +128,23 @@ struct lppaca { + */ + #define LPPACA_OLD_SHARED_PROC 2 + +-static inline bool lppaca_shared_proc(struct lppaca *l) ++#ifdef CONFIG_PPC_PSERIES ++/* ++ * All CPUs should have the same shared proc value, so directly access the PACA ++ * to avoid false positives from DEBUG_PREEMPT. ++ */ ++static inline bool lppaca_shared_proc(void) + { ++ struct lppaca *l = local_paca->lppaca_ptr; ++ + if (!firmware_has_feature(FW_FEATURE_SPLPAR)) + return false; + return !!(l->__old_status & LPPACA_OLD_SHARED_PROC); + } + ++#define get_lppaca() (get_paca()->lppaca_ptr) ++#endif ++ + /* + * SLB shadow buffer structure as defined in the PAPR. The save_area + * contains adjacent ESID and VSID pairs for each shadowed SLB. The +diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h +index 0ab3511a47d77..183b5a251804c 100644 +--- a/arch/powerpc/include/asm/paca.h ++++ b/arch/powerpc/include/asm/paca.h +@@ -15,7 +15,6 @@ + #include + #include + #include +-#include + #include + #include + #ifdef CONFIG_PPC_BOOK3E_64 +@@ -47,14 +46,11 @@ extern unsigned int debug_smp_processor_id(void); /* from linux/smp.h */ + #define get_paca() local_paca + #endif + +-#ifdef CONFIG_PPC_PSERIES +-#define get_lppaca() (get_paca()->lppaca_ptr) +-#endif +- + #define get_slb_shadow() (get_paca()->slb_shadow_ptr) + + struct task_struct; + struct rtas_args; ++struct lppaca; + + /* + * Defines the layout of the paca. 
+diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h +index f5ba1a3c41f8e..e08513d731193 100644 +--- a/arch/powerpc/include/asm/paravirt.h ++++ b/arch/powerpc/include/asm/paravirt.h +@@ -6,6 +6,7 @@ + #include + #ifdef CONFIG_PPC64 + #include ++#include + #include + #endif + +diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h +index 8239c0af5eb2b..fe3d0ea0058ac 100644 +--- a/arch/powerpc/include/asm/plpar_wrappers.h ++++ b/arch/powerpc/include/asm/plpar_wrappers.h +@@ -9,6 +9,7 @@ + + #include + #include ++#include + #include + + static inline long poll_pending(void) +diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c +index ea0a073abd969..3ff2da7b120b5 100644 +--- a/arch/powerpc/kernel/fadump.c ++++ b/arch/powerpc/kernel/fadump.c +@@ -654,6 +654,7 @@ int __init fadump_reserve_mem(void) + return ret; + error_out: + fw_dump.fadump_enabled = 0; ++ fw_dump.reserve_dump_area_size = 0; + return 0; + } + +diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c +index b8b7a189cd3ce..a612abe4bfd57 100644 +--- a/arch/powerpc/kernel/iommu.c ++++ b/arch/powerpc/kernel/iommu.c +@@ -171,17 +171,28 @@ static int fail_iommu_bus_notify(struct notifier_block *nb, + return 0; + } + +-static struct notifier_block fail_iommu_bus_notifier = { ++/* ++ * PCI and VIO buses need separate notifier_block structs, since they're linked ++ * list nodes. Sharing a notifier_block would mean that any notifiers later ++ * registered for PCI buses would also get called by VIO buses and vice versa. ++ */ ++static struct notifier_block fail_iommu_pci_bus_notifier = { + .notifier_call = fail_iommu_bus_notify + }; + ++#ifdef CONFIG_IBMVIO ++static struct notifier_block fail_iommu_vio_bus_notifier = { ++ .notifier_call = fail_iommu_bus_notify ++}; ++#endif ++ + static int __init fail_iommu_setup(void) + { + #ifdef CONFIG_PCI +- bus_register_notifier(&pci_bus_type, &fail_iommu_bus_notifier); ++ bus_register_notifier(&pci_bus_type, &fail_iommu_pci_bus_notifier); + #endif + #ifdef CONFIG_IBMVIO +- bus_register_notifier(&vio_bus_type, &fail_iommu_bus_notifier); ++ bus_register_notifier(&vio_bus_type, &fail_iommu_vio_bus_notifier); + #endif + + return 0; +diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c +index ccfd969656306..82be6d87514b7 100644 +--- a/arch/powerpc/kvm/book3s_hv_ras.c ++++ b/arch/powerpc/kvm/book3s_hv_ras.c +@@ -9,6 +9,7 @@ + #include + #include + #include ++#include + #include + #include + #include +diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c +index 6d7a1ef723e69..a8ba04dcb20fa 100644 +--- a/arch/powerpc/mm/book3s64/radix_tlb.c ++++ b/arch/powerpc/mm/book3s64/radix_tlb.c +@@ -127,21 +127,6 @@ static __always_inline void __tlbie_pid(unsigned long pid, unsigned long ric) + trace_tlbie(0, 0, rb, rs, ric, prs, r); + } + +-static __always_inline void __tlbie_pid_lpid(unsigned long pid, +- unsigned long lpid, +- unsigned long ric) +-{ +- unsigned long rb, rs, prs, r; +- +- rb = PPC_BIT(53); /* IS = 1 */ +- rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); +- prs = 1; /* process scoped */ +- r = 1; /* radix format */ +- +- asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) +- : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); +- trace_tlbie(0, 0, rb, rs, ric, prs, r); +-} + static __always_inline void __tlbie_lpid(unsigned long lpid, unsigned long ric) + { + unsigned long rb,rs,prs,r; +@@ -202,23 +187,6 @@ 
static __always_inline void __tlbie_va(unsigned long va, unsigned long pid, + trace_tlbie(0, 0, rb, rs, ric, prs, r); + } + +-static __always_inline void __tlbie_va_lpid(unsigned long va, unsigned long pid, +- unsigned long lpid, +- unsigned long ap, unsigned long ric) +-{ +- unsigned long rb, rs, prs, r; +- +- rb = va & ~(PPC_BITMASK(52, 63)); +- rb |= ap << PPC_BITLSHIFT(58); +- rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); +- prs = 1; /* process scoped */ +- r = 1; /* radix format */ +- +- asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) +- : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); +- trace_tlbie(0, 0, rb, rs, ric, prs, r); +-} +- + static __always_inline void __tlbie_lpid_va(unsigned long va, unsigned long lpid, + unsigned long ap, unsigned long ric) + { +@@ -264,22 +232,6 @@ static inline void fixup_tlbie_va_range(unsigned long va, unsigned long pid, + } + } + +-static inline void fixup_tlbie_va_range_lpid(unsigned long va, +- unsigned long pid, +- unsigned long lpid, +- unsigned long ap) +-{ +- if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { +- asm volatile("ptesync" : : : "memory"); +- __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); +- } +- +- if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { +- asm volatile("ptesync" : : : "memory"); +- __tlbie_va_lpid(va, pid, lpid, ap, RIC_FLUSH_TLB); +- } +-} +- + static inline void fixup_tlbie_pid(unsigned long pid) + { + /* +@@ -299,26 +251,6 @@ static inline void fixup_tlbie_pid(unsigned long pid) + } + } + +-static inline void fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid) +-{ +- /* +- * We can use any address for the invalidation, pick one which is +- * probably unused as an optimisation. +- */ +- unsigned long va = ((1UL << 52) - 1); +- +- if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { +- asm volatile("ptesync" : : : "memory"); +- __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); +- } +- +- if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { +- asm volatile("ptesync" : : : "memory"); +- __tlbie_va_lpid(va, pid, lpid, mmu_get_ap(MMU_PAGE_64K), +- RIC_FLUSH_TLB); +- } +-} +- + static inline void fixup_tlbie_lpid_va(unsigned long va, unsigned long lpid, + unsigned long ap) + { +@@ -416,31 +348,6 @@ static inline void _tlbie_pid(unsigned long pid, unsigned long ric) + asm volatile("eieio; tlbsync; ptesync": : :"memory"); + } + +-static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid, +- unsigned long ric) +-{ +- asm volatile("ptesync" : : : "memory"); +- +- /* +- * Workaround the fact that the "ric" argument to __tlbie_pid +- * must be a compile-time contraint to match the "i" constraint +- * in the asm statement. 
+- */ +- switch (ric) { +- case RIC_FLUSH_TLB: +- __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_TLB); +- fixup_tlbie_pid_lpid(pid, lpid); +- break; +- case RIC_FLUSH_PWC: +- __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); +- break; +- case RIC_FLUSH_ALL: +- default: +- __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_ALL); +- fixup_tlbie_pid_lpid(pid, lpid); +- } +- asm volatile("eieio; tlbsync; ptesync" : : : "memory"); +-} + struct tlbiel_pid { + unsigned long pid; + unsigned long ric; +@@ -566,20 +473,6 @@ static inline void __tlbie_va_range(unsigned long start, unsigned long end, + fixup_tlbie_va_range(addr - page_size, pid, ap); + } + +-static inline void __tlbie_va_range_lpid(unsigned long start, unsigned long end, +- unsigned long pid, unsigned long lpid, +- unsigned long page_size, +- unsigned long psize) +-{ +- unsigned long addr; +- unsigned long ap = mmu_get_ap(psize); +- +- for (addr = start; addr < end; addr += page_size) +- __tlbie_va_lpid(addr, pid, lpid, ap, RIC_FLUSH_TLB); +- +- fixup_tlbie_va_range_lpid(addr - page_size, pid, lpid, ap); +-} +- + static __always_inline void _tlbie_va(unsigned long va, unsigned long pid, + unsigned long psize, unsigned long ric) + { +@@ -660,18 +553,6 @@ static inline void _tlbie_va_range(unsigned long start, unsigned long end, + asm volatile("eieio; tlbsync; ptesync": : :"memory"); + } + +-static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end, +- unsigned long pid, unsigned long lpid, +- unsigned long page_size, +- unsigned long psize, bool also_pwc) +-{ +- asm volatile("ptesync" : : : "memory"); +- if (also_pwc) +- __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); +- __tlbie_va_range_lpid(start, end, pid, lpid, page_size, psize); +- asm volatile("eieio; tlbsync; ptesync" : : : "memory"); +-} +- + static inline void _tlbiel_va_range_multicast(struct mm_struct *mm, + unsigned long start, unsigned long end, + unsigned long pid, unsigned long page_size, +@@ -1476,6 +1357,127 @@ void radix__flush_tlb_all(void) + } + + #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE ++static __always_inline void __tlbie_pid_lpid(unsigned long pid, ++ unsigned long lpid, ++ unsigned long ric) ++{ ++ unsigned long rb, rs, prs, r; ++ ++ rb = PPC_BIT(53); /* IS = 1 */ ++ rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); ++ prs = 1; /* process scoped */ ++ r = 1; /* radix format */ ++ ++ asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) ++ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); ++ trace_tlbie(0, 0, rb, rs, ric, prs, r); ++} ++ ++static __always_inline void __tlbie_va_lpid(unsigned long va, unsigned long pid, ++ unsigned long lpid, ++ unsigned long ap, unsigned long ric) ++{ ++ unsigned long rb, rs, prs, r; ++ ++ rb = va & ~(PPC_BITMASK(52, 63)); ++ rb |= ap << PPC_BITLSHIFT(58); ++ rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); ++ prs = 1; /* process scoped */ ++ r = 1; /* radix format */ ++ ++ asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) ++ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); ++ trace_tlbie(0, 0, rb, rs, ric, prs, r); ++} ++ ++static inline void fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid) ++{ ++ /* ++ * We can use any address for the invalidation, pick one which is ++ * probably unused as an optimisation. 
++ */ ++ unsigned long va = ((1UL << 52) - 1); ++ ++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { ++ asm volatile("ptesync" : : : "memory"); ++ __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); ++ } ++ ++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { ++ asm volatile("ptesync" : : : "memory"); ++ __tlbie_va_lpid(va, pid, lpid, mmu_get_ap(MMU_PAGE_64K), ++ RIC_FLUSH_TLB); ++ } ++} ++ ++static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid, ++ unsigned long ric) ++{ ++ asm volatile("ptesync" : : : "memory"); ++ ++ /* ++ * Workaround the fact that the "ric" argument to __tlbie_pid ++ * must be a compile-time contraint to match the "i" constraint ++ * in the asm statement. ++ */ ++ switch (ric) { ++ case RIC_FLUSH_TLB: ++ __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_TLB); ++ fixup_tlbie_pid_lpid(pid, lpid); ++ break; ++ case RIC_FLUSH_PWC: ++ __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); ++ break; ++ case RIC_FLUSH_ALL: ++ default: ++ __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_ALL); ++ fixup_tlbie_pid_lpid(pid, lpid); ++ } ++ asm volatile("eieio; tlbsync; ptesync" : : : "memory"); ++} ++ ++static inline void fixup_tlbie_va_range_lpid(unsigned long va, ++ unsigned long pid, ++ unsigned long lpid, ++ unsigned long ap) ++{ ++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { ++ asm volatile("ptesync" : : : "memory"); ++ __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); ++ } ++ ++ if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { ++ asm volatile("ptesync" : : : "memory"); ++ __tlbie_va_lpid(va, pid, lpid, ap, RIC_FLUSH_TLB); ++ } ++} ++ ++static inline void __tlbie_va_range_lpid(unsigned long start, unsigned long end, ++ unsigned long pid, unsigned long lpid, ++ unsigned long page_size, ++ unsigned long psize) ++{ ++ unsigned long addr; ++ unsigned long ap = mmu_get_ap(psize); ++ ++ for (addr = start; addr < end; addr += page_size) ++ __tlbie_va_lpid(addr, pid, lpid, ap, RIC_FLUSH_TLB); ++ ++ fixup_tlbie_va_range_lpid(addr - page_size, pid, lpid, ap); ++} ++ ++static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end, ++ unsigned long pid, unsigned long lpid, ++ unsigned long page_size, ++ unsigned long psize, bool also_pwc) ++{ ++ asm volatile("ptesync" : : : "memory"); ++ if (also_pwc) ++ __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); ++ __tlbie_va_range_lpid(start, end, pid, lpid, page_size, psize); ++ asm volatile("eieio; tlbsync; ptesync" : : : "memory"); ++} ++ + /* + * Performs process-scoped invalidations for a given LPID + * as part of H_RPT_INVALIDATE hcall. 
+diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c +index 6956f637a38c1..f2708c8629a52 100644 +--- a/arch/powerpc/mm/book3s64/slb.c ++++ b/arch/powerpc/mm/book3s64/slb.c +@@ -13,6 +13,7 @@ + #include + #include + #include ++#include + #include + #include + #include +diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c +index ee721f420a7ba..1a53ab08447cb 100644 +--- a/arch/powerpc/perf/core-fsl-emb.c ++++ b/arch/powerpc/perf/core-fsl-emb.c +@@ -645,7 +645,6 @@ static void perf_event_interrupt(struct pt_regs *regs) + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); + struct perf_event *event; + unsigned long val; +- int found = 0; + + for (i = 0; i < ppmu->n_counter; ++i) { + event = cpuhw->event[i]; +@@ -654,7 +653,6 @@ static void perf_event_interrupt(struct pt_regs *regs) + if ((int)val < 0) { + if (event) { + /* event has overflowed */ +- found = 1; + record_and_restart(event, val, regs); + } else { + /* +@@ -672,11 +670,13 @@ static void perf_event_interrupt(struct pt_regs *regs) + isync(); + } + +-void hw_perf_event_setup(int cpu) ++static int fsl_emb_pmu_prepare_cpu(unsigned int cpu) + { + struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu); + + memset(cpuhw, 0, sizeof(*cpuhw)); ++ ++ return 0; + } + + int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu) +@@ -689,6 +689,8 @@ int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu) + pmu->name); + + perf_pmu_register(&fsl_emb_pmu, "cpu", PERF_TYPE_RAW); ++ cpuhp_setup_state(CPUHP_PERF_POWER, "perf/powerpc:prepare", ++ fsl_emb_pmu_prepare_cpu, NULL); + + return 0; + } +diff --git a/arch/powerpc/platforms/powermac/time.c b/arch/powerpc/platforms/powermac/time.c +index 4c5790aff1b54..8633891b7aa58 100644 +--- a/arch/powerpc/platforms/powermac/time.c ++++ b/arch/powerpc/platforms/powermac/time.c +@@ -26,8 +26,8 @@ + #include + #include + ++#include + #include +-#include + #include + #include + #include +@@ -182,7 +182,7 @@ static int __init via_calibrate_decr(void) + return 0; + } + of_node_put(vias); +- via = ioremap(rsrc.start, resource_size(&rsrc)); ++ via = early_ioremap(rsrc.start, resource_size(&rsrc)); + if (via == NULL) { + printk(KERN_ERR "Failed to map VIA for timer calibration !\n"); + return 0; +@@ -207,7 +207,7 @@ static int __init via_calibrate_decr(void) + + ppc_tb_freq = (dstart - dend) * 100 / 6; + +- iounmap(via); ++ early_iounmap((void *)via, resource_size(&rsrc)); + + return 1; + } +diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S +index 762eb15d3bd42..fc50b9c27c1ba 100644 +--- a/arch/powerpc/platforms/pseries/hvCall.S ++++ b/arch/powerpc/platforms/pseries/hvCall.S +@@ -89,6 +89,7 @@ BEGIN_FTR_SECTION; \ + b 1f; \ + END_FTR_SECTION(0, 1); \ + LOAD_REG_ADDR(r12, hcall_tracepoint_refcount) ; \ ++ ld r12,0(r12); \ + std r12,32(r1); \ + cmpdi r12,0; \ + bne- LABEL; \ +diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c +index 97ef6499e5019..2c2812a87d470 100644 +--- a/arch/powerpc/platforms/pseries/lpar.c ++++ b/arch/powerpc/platforms/pseries/lpar.c +@@ -638,16 +638,8 @@ static const struct proc_ops vcpudispatch_stats_freq_proc_ops = { + + static int __init vcpudispatch_stats_procfs_init(void) + { +- /* +- * Avoid smp_processor_id while preemptible. All CPUs should have +- * the same value for lppaca_shared_proc. 
+- */ +- preempt_disable(); +- if (!lppaca_shared_proc(get_lppaca())) { +- preempt_enable(); ++ if (!lppaca_shared_proc()) + return 0; +- } +- preempt_enable(); + + if (!proc_create("powerpc/vcpudispatch_stats", 0600, NULL, + &vcpudispatch_stats_proc_ops)) +diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c +index 63fd925ccbb83..ca10a3682c46e 100644 +--- a/arch/powerpc/platforms/pseries/lparcfg.c ++++ b/arch/powerpc/platforms/pseries/lparcfg.c +@@ -205,7 +205,7 @@ static void parse_ppp_data(struct seq_file *m) + ppp_data.active_system_procs); + + /* pool related entries are appropriate for shared configs */ +- if (lppaca_shared_proc(get_lppaca())) { ++ if (lppaca_shared_proc()) { + unsigned long pool_idle_time, pool_procs; + + seq_printf(m, "pool=%d\n", ppp_data.pool_num); +@@ -616,7 +616,7 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v) + partition_potential_processors); + + seq_printf(m, "shared_processor_mode=%d\n", +- lppaca_shared_proc(get_lppaca())); ++ lppaca_shared_proc()); + + #ifdef CONFIG_PPC_64S_HASH_MMU + if (!radix_enabled()) +diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c +index 8ef3270515a9b..a0701dbdb1348 100644 +--- a/arch/powerpc/platforms/pseries/setup.c ++++ b/arch/powerpc/platforms/pseries/setup.c +@@ -846,7 +846,7 @@ static void __init pSeries_setup_arch(void) + if (firmware_has_feature(FW_FEATURE_LPAR)) { + vpa_init(boot_cpuid); + +- if (lppaca_shared_proc(get_lppaca())) { ++ if (lppaca_shared_proc()) { + static_branch_enable(&shared_processor); + pv_spinlocks_init(); + #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING +diff --git a/arch/powerpc/sysdev/mpc5xxx_clocks.c b/arch/powerpc/sysdev/mpc5xxx_clocks.c +index c5bf7e1b37804..58cee28e23992 100644 +--- a/arch/powerpc/sysdev/mpc5xxx_clocks.c ++++ b/arch/powerpc/sysdev/mpc5xxx_clocks.c +@@ -25,8 +25,10 @@ unsigned long mpc5xxx_fwnode_get_bus_frequency(struct fwnode_handle *fwnode) + + fwnode_for_each_parent_node(fwnode, parent) { + ret = fwnode_property_read_u32(parent, "bus-frequency", &bus_freq); +- if (!ret) ++ if (!ret) { ++ fwnode_handle_put(parent); + return bus_freq; ++ } + } + + return 0; +diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c +index bd8e80936f44d..cd692f399cd18 100644 +--- a/arch/powerpc/xmon/xmon.c ++++ b/arch/powerpc/xmon/xmon.c +@@ -58,6 +58,7 @@ + #ifdef CONFIG_PPC64 + #include + #include ++#include + #endif + + #include "nonstdio.h" +diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c +index a279b7d23a5e2..621322eb0e681 100644 +--- a/arch/s390/crypto/paes_s390.c ++++ b/arch/s390/crypto/paes_s390.c +@@ -35,7 +35,7 @@ + * and padding is also possible, the limits need to be generous. 
+ */ + #define PAES_MIN_KEYSIZE 16 +-#define PAES_MAX_KEYSIZE 320 ++#define PAES_MAX_KEYSIZE MAXEP11AESKEYBLOBSIZE + + static u8 *ctrblk; + static DEFINE_MUTEX(ctrblk_lock); +diff --git a/arch/s390/include/uapi/asm/pkey.h b/arch/s390/include/uapi/asm/pkey.h +index 924b876f992c1..29c6fd369761e 100644 +--- a/arch/s390/include/uapi/asm/pkey.h ++++ b/arch/s390/include/uapi/asm/pkey.h +@@ -26,7 +26,7 @@ + #define MAXCLRKEYSIZE 32 /* a clear key value may be up to 32 bytes */ + #define MAXAESCIPHERKEYSIZE 136 /* our aes cipher keys have always 136 bytes */ + #define MINEP11AESKEYBLOBSIZE 256 /* min EP11 AES key blob size */ +-#define MAXEP11AESKEYBLOBSIZE 320 /* max EP11 AES key blob size */ ++#define MAXEP11AESKEYBLOBSIZE 336 /* max EP11 AES key blob size */ + + /* Minimum size of a key blob */ + #define MINKEYBLOBSIZE SECKEYBLOBSIZE +diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c +index 325cbf69ebbde..df5d2ec737d80 100644 +--- a/arch/s390/kernel/ipl.c ++++ b/arch/s390/kernel/ipl.c +@@ -503,6 +503,8 @@ static struct attribute_group ipl_ccw_attr_group_lpar = { + + static struct attribute *ipl_unknown_attrs[] = { + &sys_ipl_type_attr.attr, ++ &sys_ipl_secure_attr.attr, ++ &sys_ipl_has_secure_attr.attr, + NULL, + }; + +diff --git a/arch/um/configs/i386_defconfig b/arch/um/configs/i386_defconfig +index c0162286d68b7..c33a6880a437a 100644 +--- a/arch/um/configs/i386_defconfig ++++ b/arch/um/configs/i386_defconfig +@@ -35,6 +35,7 @@ CONFIG_TTY_CHAN=y + CONFIG_XTERM_CHAN=y + CONFIG_CON_CHAN="pts" + CONFIG_SSL_CHAN="pts" ++CONFIG_SOUND=m + CONFIG_UML_SOUND=m + CONFIG_DEVTMPFS=y + CONFIG_DEVTMPFS_MOUNT=y +diff --git a/arch/um/configs/x86_64_defconfig b/arch/um/configs/x86_64_defconfig +index bec6e5d956873..df29f282b6ac2 100644 +--- a/arch/um/configs/x86_64_defconfig ++++ b/arch/um/configs/x86_64_defconfig +@@ -33,6 +33,7 @@ CONFIG_TTY_CHAN=y + CONFIG_XTERM_CHAN=y + CONFIG_CON_CHAN="pts" + CONFIG_SSL_CHAN="pts" ++CONFIG_SOUND=m + CONFIG_UML_SOUND=m + CONFIG_DEVTMPFS=y + CONFIG_DEVTMPFS_MOUNT=y +diff --git a/arch/um/drivers/Kconfig b/arch/um/drivers/Kconfig +index 5903e2b598aae..fe0210eaf9bb6 100644 +--- a/arch/um/drivers/Kconfig ++++ b/arch/um/drivers/Kconfig +@@ -111,24 +111,14 @@ config SSL_CHAN + + config UML_SOUND + tristate "Sound support" ++ depends on SOUND ++ select SOUND_OSS_CORE + help + This option enables UML sound support. If enabled, it will pull in +- soundcore and the UML hostaudio relay, which acts as a intermediary ++ the UML hostaudio relay, which acts as a intermediary + between the host's dsp and mixer devices and the UML sound system. + It is safe to say 'Y' here. 
+ +-config SOUND +- tristate +- default UML_SOUND +- +-config SOUND_OSS_CORE +- bool +- default UML_SOUND +- +-config HOSTAUDIO +- tristate +- default UML_SOUND +- + endmenu + + menu "UML Network Devices" +diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile +index 65b449c992d2c..079556ec044b8 100644 +--- a/arch/um/drivers/Makefile ++++ b/arch/um/drivers/Makefile +@@ -54,7 +54,7 @@ obj-$(CONFIG_UML_NET) += net.o + obj-$(CONFIG_MCONSOLE) += mconsole.o + obj-$(CONFIG_MMAPPER) += mmapper_kern.o + obj-$(CONFIG_BLK_DEV_UBD) += ubd.o +-obj-$(CONFIG_HOSTAUDIO) += hostaudio.o ++obj-$(CONFIG_UML_SOUND) += hostaudio.o + obj-$(CONFIG_NULL_CHAN) += null.o + obj-$(CONFIG_PORT_CHAN) += port.o + obj-$(CONFIG_PTY_CHAN) += pty.o +diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S +index d33f060900d23..b4bd6df29116f 100644 +--- a/arch/x86/boot/compressed/head_64.S ++++ b/arch/x86/boot/compressed/head_64.S +@@ -485,11 +485,25 @@ SYM_CODE_START(startup_64) + /* Save the trampoline address in RCX */ + movq %rax, %rcx + ++ /* Set up 32-bit addressable stack */ ++ leaq TRAMPOLINE_32BIT_STACK_END(%rcx), %rsp ++ ++ /* ++ * Preserve live 64-bit registers on the stack: this is necessary ++ * because the architecture does not guarantee that GPRs will retain ++ * their full 64-bit values across a 32-bit mode switch. ++ */ ++ pushq %rbp ++ pushq %rbx ++ pushq %rsi ++ + /* +- * Load the address of trampoline_return() into RDI. +- * It will be used by the trampoline to return to the main code. ++ * Push the 64-bit address of trampoline_return() onto the new stack. ++ * It will be used by the trampoline to return to the main code. Due to ++ * the 32-bit mode switch, it cannot be kept it in a register either. + */ + leaq trampoline_return(%rip), %rdi ++ pushq %rdi + + /* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far return */ + pushq $__KERNEL32_CS +@@ -497,6 +511,11 @@ SYM_CODE_START(startup_64) + pushq %rax + lretq + trampoline_return: ++ /* Restore live 64-bit registers */ ++ popq %rsi ++ popq %rbx ++ popq %rbp ++ + /* Restore the stack, the 32-bit trampoline uses its own stack */ + leaq rva(boot_stack_end)(%rbx), %rsp + +@@ -606,7 +625,7 @@ SYM_FUNC_END(.Lrelocated) + /* + * This is the 32-bit trampoline that will be copied over to low memory. + * +- * RDI contains the return address (might be above 4G). ++ * Return address is at the top of the stack (might be above 4G). + * ECX contains the base address of the trampoline memory. + * Non zero RDX means trampoline needs to enable 5-level paging. + */ +@@ -616,9 +635,6 @@ SYM_CODE_START(trampoline_32bit_src) + movl %eax, %ds + movl %eax, %ss + +- /* Set up new stack */ +- leal TRAMPOLINE_32BIT_STACK_END(%ecx), %esp +- + /* Disable paging */ + movl %cr0, %eax + btrl $X86_CR0_PG_BIT, %eax +@@ -695,7 +711,7 @@ SYM_CODE_END(trampoline_32bit_src) + .code64 + SYM_FUNC_START_LOCAL_NOALIGN(.Lpaging_enabled) + /* Return from the trampoline */ +- jmp *%rdi ++ retq + SYM_FUNC_END(.Lpaging_enabled) + + /* +diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c +index 935081ddf60bc..9b5859812f4fb 100644 +--- a/arch/x86/events/intel/uncore_snbep.c ++++ b/arch/x86/events/intel/uncore_snbep.c +@@ -6078,8 +6078,18 @@ void spr_uncore_cpu_init(void) + + type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA); + if (type) { ++ /* ++ * The value from the discovery table (stored in the type->num_boxes ++ * of UNCORE_SPR_CHA) is incorrect on some SPR variants because of a ++ * firmware bug. 
Using the value from SPR_MSR_UNC_CBO_CONFIG to replace it. ++ */ + rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo); +- type->num_boxes = num_cbo; ++ /* ++ * The MSR doesn't work on the EMR XCC, but the firmware bug doesn't impact ++ * the EMR XCC. Don't let the value from the MSR replace the existing value. ++ */ ++ if (num_cbo) ++ type->num_boxes = num_cbo; + } + spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO); + } +diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h +index 8f513372cd8d4..c91326593e741 100644 +--- a/arch/x86/include/asm/mem_encrypt.h ++++ b/arch/x86/include/asm/mem_encrypt.h +@@ -50,8 +50,8 @@ void __init sme_enable(struct boot_params *bp); + + int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size); + int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size); +-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, +- bool enc); ++void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, ++ unsigned long size, bool enc); + + void __init mem_encrypt_free_decrypted_mem(void); + +@@ -84,7 +84,7 @@ early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; + static inline int __init + early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; } + static inline void __init +-early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {} ++early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) {} + + static inline void mem_encrypt_free_decrypted_mem(void) { } + +diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h +index aa174fed3a71c..f6116b66f2892 100644 +--- a/arch/x86/include/asm/pgtable_types.h ++++ b/arch/x86/include/asm/pgtable_types.h +@@ -125,11 +125,12 @@ + * instance, and is *not* included in this mask since + * pte_modify() does modify it. + */ +-#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ +- _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \ +- _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC | \ +- _PAGE_UFFD_WP) +-#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE) ++#define _COMMON_PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ ++ _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |\ ++ _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC | \ ++ _PAGE_UFFD_WP) ++#define _PAGE_CHG_MASK (_COMMON_PAGE_CHG_MASK | _PAGE_PAT) ++#define _HPAGE_CHG_MASK (_COMMON_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_PAT_LARGE) + + /* + * The cache modes defined here are used to translate between pure SW usage +diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c +index 60e330cdbd175..6e38188633a4d 100644 +--- a/arch/x86/kernel/apm_32.c ++++ b/arch/x86/kernel/apm_32.c +@@ -238,12 +238,6 @@ + extern int (*console_blank_hook)(int); + #endif + +-/* +- * The apm_bios device is one of the misc char devices. +- * This is its minor number. 
+- */ +-#define APM_MINOR_DEV 134 +- + /* + * Various options can be changed at boot time as follows: + * (We allow underscores for compatibility with the modules code) +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c +index d38ae25e7c01f..b723368dbc644 100644 +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -1259,11 +1259,11 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { + VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS), + VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS), +- VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPING_ANY, MMIO | RETBLEED | GDS), +- VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), +- VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED | GDS), +- VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED | GDS), ++ VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS), ++ VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS), ++ VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS), ++ VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS), + VULNBL_INTEL_STEPPINGS(CANNONLAKE_L, X86_STEPPING_ANY, RETBLEED), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS), + VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO | GDS), +diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c +index e228d58ee2645..f1a748da5fabb 100644 +--- a/arch/x86/kernel/cpu/mce/core.c ++++ b/arch/x86/kernel/cpu/mce/core.c +@@ -856,6 +856,26 @@ static noinstr bool quirk_skylake_repmov(void) + return false; + } + ++/* ++ * Some Zen-based Instruction Fetch Units set EIPV=RIPV=0 on poison consumption ++ * errors. This means mce_gather_info() will not save the "ip" and "cs" registers. ++ * ++ * However, the context is still valid, so save the "cs" register for later use. ++ * ++ * The "ip" register is truly unknown, so don't save it or fixup EIPV/RIPV. ++ * ++ * The Instruction Fetch Unit is at MCA bank 1 for all affected systems. ++ */ ++static __always_inline void quirk_zen_ifu(int bank, struct mce *m, struct pt_regs *regs) ++{ ++ if (bank != 1) ++ return; ++ if (!(m->status & MCI_STATUS_POISON)) ++ return; ++ ++ m->cs = regs->cs; ++} ++ + /* + * Do a quick check if any of the events requires a panic. + * This decides if we keep the events around or clear them. 
+@@ -875,6 +895,9 @@ static __always_inline int mce_no_way_out(struct mce *m, char **msg, unsigned lo + if (mce_flags.snb_ifu_quirk) + quirk_sandybridge_ifu(i, m, regs); + ++ if (mce_flags.zen_ifu_quirk) ++ quirk_zen_ifu(i, m, regs); ++ + m->bank = i; + if (mce_severity(m, regs, &tmp, true) >= MCE_PANIC_SEVERITY) { + mce_read_aux(m, i); +@@ -1852,6 +1875,9 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c) + if (c->x86 == 0x15 && c->x86_model <= 0xf) + mce_flags.overflow_recov = 1; + ++ if (c->x86 >= 0x17 && c->x86 <= 0x1A) ++ mce_flags.zen_ifu_quirk = 1; ++ + } + + if (c->x86_vendor == X86_VENDOR_INTEL) { +diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h +index 7e03f5b7f6bd7..0bed57ac86c51 100644 +--- a/arch/x86/kernel/cpu/mce/internal.h ++++ b/arch/x86/kernel/cpu/mce/internal.h +@@ -157,6 +157,9 @@ struct mce_vendor_flags { + */ + smca : 1, + ++ /* Zen IFU quirk */ ++ zen_ifu_quirk : 1, ++ + /* AMD-style error thresholding banks present. */ + amd_threshold : 1, + +@@ -172,7 +175,7 @@ struct mce_vendor_flags { + /* Skylake, Cascade Lake, Cooper Lake REP;MOVS* quirk */ + skx_repmov_quirk : 1, + +- __reserved_0 : 56; ++ __reserved_0 : 55; + }; + + extern struct mce_vendor_flags mce_flags; +diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c +index 6a77a14eee38c..f5549704ac4cb 100644 +--- a/arch/x86/kernel/cpu/sgx/virt.c ++++ b/arch/x86/kernel/cpu/sgx/virt.c +@@ -204,6 +204,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file) + continue; + + xa_erase(&vepc->page_array, index); ++ cond_resched(); + } + + /* +@@ -222,6 +223,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file) + list_add_tail(&epc_page->list, &secs_pages); + + xa_erase(&vepc->page_array, index); ++ cond_resched(); + } + + /* +@@ -243,6 +245,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file) + + if (sgx_vepc_free_page(epc_page)) + list_add_tail(&epc_page->list, &secs_pages); ++ cond_resched(); + } + + if (!list_empty(&secs_pages)) +diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c +index d4e48b4a438b2..796e2f9e87619 100644 +--- a/arch/x86/kernel/kvm.c ++++ b/arch/x86/kernel/kvm.c +@@ -972,10 +972,8 @@ static void __init kvm_init_platform(void) + * Ensure that _bss_decrypted section is marked as decrypted in the + * shared pages list. + */ +- nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted, +- PAGE_SIZE); + early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted, +- nr_pages, 0); ++ __end_bss_decrypted - __start_bss_decrypted, 0); + + /* + * If not booted using EFI, enable Live migration support. 
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c +index beca03556379d..7a6df4b62c1bd 100644 +--- a/arch/x86/kvm/mmu/mmu.c ++++ b/arch/x86/kvm/mmu/mmu.c +@@ -42,6 +42,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -55,6 +56,8 @@ + + extern bool itlb_multihit_kvm_mitigation; + ++static bool nx_hugepage_mitigation_hard_disabled; ++ + int __read_mostly nx_huge_pages = -1; + static uint __read_mostly nx_huge_pages_recovery_period_ms; + #ifdef CONFIG_PREEMPT_RT +@@ -64,12 +67,13 @@ static uint __read_mostly nx_huge_pages_recovery_ratio = 0; + static uint __read_mostly nx_huge_pages_recovery_ratio = 60; + #endif + ++static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp); + static int set_nx_huge_pages(const char *val, const struct kernel_param *kp); + static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel_param *kp); + + static const struct kernel_param_ops nx_huge_pages_ops = { + .set = set_nx_huge_pages, +- .get = param_get_bool, ++ .get = get_nx_huge_pages, + }; + + static const struct kernel_param_ops nx_huge_pages_recovery_param_ops = { +@@ -6644,6 +6648,14 @@ static void mmu_destroy_caches(void) + kmem_cache_destroy(mmu_page_header_cache); + } + ++static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp) ++{ ++ if (nx_hugepage_mitigation_hard_disabled) ++ return sprintf(buffer, "never\n"); ++ ++ return param_get_bool(buffer, kp); ++} ++ + static bool get_nx_auto_mode(void) + { + /* Return true when CPU has the bug, and mitigations are ON */ +@@ -6660,15 +6672,29 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp) + bool old_val = nx_huge_pages; + bool new_val; + ++ if (nx_hugepage_mitigation_hard_disabled) ++ return -EPERM; ++ + /* In "auto" mode deploy workaround only if CPU has the bug. 
*/ +- if (sysfs_streq(val, "off")) ++ if (sysfs_streq(val, "off")) { + new_val = 0; +- else if (sysfs_streq(val, "force")) ++ } else if (sysfs_streq(val, "force")) { + new_val = 1; +- else if (sysfs_streq(val, "auto")) ++ } else if (sysfs_streq(val, "auto")) { + new_val = get_nx_auto_mode(); +- else if (strtobool(val, &new_val) < 0) ++ } else if (sysfs_streq(val, "never")) { ++ new_val = 0; ++ ++ mutex_lock(&kvm_lock); ++ if (!list_empty(&vm_list)) { ++ mutex_unlock(&kvm_lock); ++ return -EBUSY; ++ } ++ nx_hugepage_mitigation_hard_disabled = true; ++ mutex_unlock(&kvm_lock); ++ } else if (kstrtobool(val, &new_val) < 0) { + return -EINVAL; ++ } + + __set_nx_huge_pages(new_val); + +@@ -6799,6 +6825,9 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel + uint old_period, new_period; + int err; + ++ if (nx_hugepage_mitigation_hard_disabled) ++ return -EPERM; ++ + was_recovery_enabled = calc_nx_huge_pages_recovery_period(&old_period); + + err = param_set_uint(val, kp); +@@ -6922,6 +6951,9 @@ int kvm_mmu_post_init_vm(struct kvm *kvm) + { + int err; + ++ if (nx_hugepage_mitigation_hard_disabled) ++ return 0; ++ + err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0, + "kvm-nx-lpage-recovery", + &kvm->arch.nx_lpage_recovery_thread); +diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c +index ff6c0462beee7..3ea0f763540a4 100644 +--- a/arch/x86/mm/mem_encrypt_amd.c ++++ b/arch/x86/mm/mem_encrypt_amd.c +@@ -288,11 +288,10 @@ static bool amd_enc_cache_flush_required(void) + return !cpu_feature_enabled(X86_FEATURE_SME_COHERENT); + } + +-static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) ++static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) + { + #ifdef CONFIG_PARAVIRT +- unsigned long sz = npages << PAGE_SHIFT; +- unsigned long vaddr_end = vaddr + sz; ++ unsigned long vaddr_end = vaddr + size; + + while (vaddr < vaddr_end) { + int psize, pmask, level; +@@ -342,7 +341,7 @@ static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool e + snp_set_memory_private(vaddr, npages); + + if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) +- enc_dec_hypercall(vaddr, npages, enc); ++ enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc); + + return true; + } +@@ -466,7 +465,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr, + + ret = 0; + +- early_set_mem_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc); ++ early_set_mem_enc_dec_hypercall(start, size, enc); + out: + __flush_tlb_all(); + return ret; +@@ -482,9 +481,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size) + return early_set_memory_enc_dec(vaddr, size, true); + } + +-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) ++void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) + { +- enc_dec_hypercall(vaddr, npages, enc); ++ enc_dec_hypercall(vaddr, size, enc); + } + + void __init sme_early_init(void) +diff --git a/arch/xtensa/include/asm/core.h b/arch/xtensa/include/asm/core.h +index f856d2bcb9f36..7cef85ad9741a 100644 +--- a/arch/xtensa/include/asm/core.h ++++ b/arch/xtensa/include/asm/core.h +@@ -44,4 +44,13 @@ + #define XTENSA_STACK_ALIGNMENT 16 + #endif + ++#ifndef XCHAL_HW_MIN_VERSION ++#if defined(XCHAL_HW_MIN_VERSION_MAJOR) && defined(XCHAL_HW_MIN_VERSION_MINOR) ++#define XCHAL_HW_MIN_VERSION (XCHAL_HW_MIN_VERSION_MAJOR * 100 + \ ++ XCHAL_HW_MIN_VERSION_MINOR) ++#else ++#define 
XCHAL_HW_MIN_VERSION 0 ++#endif ++#endif ++ + #endif +diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c +index a0d05c8598d0f..183618090d05b 100644 +--- a/arch/xtensa/kernel/perf_event.c ++++ b/arch/xtensa/kernel/perf_event.c +@@ -13,17 +13,26 @@ + #include + #include + ++#include + #include + #include + ++#define XTENSA_HWVERSION_RG_2015_0 260000 ++ ++#if XCHAL_HW_MIN_VERSION >= XTENSA_HWVERSION_RG_2015_0 ++#define XTENSA_PMU_ERI_BASE 0x00101000 ++#else ++#define XTENSA_PMU_ERI_BASE 0x00001000 ++#endif ++ + /* Global control/status for all perf counters */ +-#define XTENSA_PMU_PMG 0x1000 ++#define XTENSA_PMU_PMG XTENSA_PMU_ERI_BASE + /* Perf counter values */ +-#define XTENSA_PMU_PM(i) (0x1080 + (i) * 4) ++#define XTENSA_PMU_PM(i) (XTENSA_PMU_ERI_BASE + 0x80 + (i) * 4) + /* Perf counter control registers */ +-#define XTENSA_PMU_PMCTRL(i) (0x1100 + (i) * 4) ++#define XTENSA_PMU_PMCTRL(i) (XTENSA_PMU_ERI_BASE + 0x100 + (i) * 4) + /* Perf counter status registers */ +-#define XTENSA_PMU_PMSTAT(i) (0x1180 + (i) * 4) ++#define XTENSA_PMU_PMSTAT(i) (XTENSA_PMU_ERI_BASE + 0x180 + (i) * 4) + + #define XTENSA_PMU_PMG_PMEN 0x1 + +diff --git a/block/blk-settings.c b/block/blk-settings.c +index 291cf9df7fc29..86ff375c00ce4 100644 +--- a/block/blk-settings.c ++++ b/block/blk-settings.c +@@ -824,10 +824,13 @@ EXPORT_SYMBOL(blk_set_queue_depth); + */ + void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua) + { +- if (wc) ++ if (wc) { ++ blk_queue_flag_set(QUEUE_FLAG_HW_WC, q); + blk_queue_flag_set(QUEUE_FLAG_WC, q); +- else ++ } else { ++ blk_queue_flag_clear(QUEUE_FLAG_HW_WC, q); + blk_queue_flag_clear(QUEUE_FLAG_WC, q); ++ } + if (fua) + blk_queue_flag_set(QUEUE_FLAG_FUA, q); + else +diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c +index e71b3b43927c0..a582ea0da74f5 100644 +--- a/block/blk-sysfs.c ++++ b/block/blk-sysfs.c +@@ -528,21 +528,16 @@ static ssize_t queue_wc_show(struct request_queue *q, char *page) + static ssize_t queue_wc_store(struct request_queue *q, const char *page, + size_t count) + { +- int set = -1; +- +- if (!strncmp(page, "write back", 10)) +- set = 1; +- else if (!strncmp(page, "write through", 13) || +- !strncmp(page, "none", 4)) +- set = 0; +- +- if (set == -1) +- return -EINVAL; +- +- if (set) ++ if (!strncmp(page, "write back", 10)) { ++ if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags)) ++ return -EINVAL; + blk_queue_flag_set(QUEUE_FLAG_WC, q); +- else ++ } else if (!strncmp(page, "write through", 13) || ++ !strncmp(page, "none", 4)) { + blk_queue_flag_clear(QUEUE_FLAG_WC, q); ++ } else { ++ return -EINVAL; ++ } + + return count; + } +diff --git a/block/ioctl.c b/block/ioctl.c +index 9c5f637ff153f..3c475e4166e9f 100644 +--- a/block/ioctl.c ++++ b/block/ioctl.c +@@ -20,6 +20,8 @@ static int blkpg_do_ioctl(struct block_device *bdev, + struct blkpg_partition p; + long long start, length; + ++ if (disk->flags & GENHD_FL_NO_PART) ++ return -EINVAL; + if (!capable(CAP_SYS_ADMIN)) + return -EACCES; + if (copy_from_user(&p, upart, sizeof(struct blkpg_partition))) +diff --git a/block/mq-deadline.c b/block/mq-deadline.c +index f10c2a0d18d41..55e26065c2e27 100644 +--- a/block/mq-deadline.c ++++ b/block/mq-deadline.c +@@ -622,8 +622,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx) + struct request_queue *q = hctx->queue; + struct deadline_data *dd = q->elevator->elevator_data; + struct blk_mq_tags *tags = hctx->sched_tags; ++ unsigned int shift = tags->bitmap_tags.sb.shift; + +- dd->async_depth = max(1UL, 3 * q->nr_requests / 
4); ++ dd->async_depth = max(1U, 3 * (1U << shift) / 4); + + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth); + } +diff --git a/crypto/algapi.c b/crypto/algapi.c +index 8c3a869cc43a9..5dc9ccdd5a510 100644 +--- a/crypto/algapi.c ++++ b/crypto/algapi.c +@@ -17,6 +17,7 @@ + #include + #include + #include ++#include + + #include "internal.h" + +@@ -74,15 +75,26 @@ static void crypto_free_instance(struct crypto_instance *inst) + inst->alg.cra_type->free(inst); + } + +-static void crypto_destroy_instance(struct crypto_alg *alg) ++static void crypto_destroy_instance_workfn(struct work_struct *w) + { +- struct crypto_instance *inst = (void *)alg; ++ struct crypto_instance *inst = container_of(w, struct crypto_instance, ++ free_work); + struct crypto_template *tmpl = inst->tmpl; + + crypto_free_instance(inst); + crypto_tmpl_put(tmpl); + } + ++static void crypto_destroy_instance(struct crypto_alg *alg) ++{ ++ struct crypto_instance *inst = container_of(alg, ++ struct crypto_instance, ++ alg); ++ ++ INIT_WORK(&inst->free_work, crypto_destroy_instance_workfn); ++ schedule_work(&inst->free_work); ++} ++ + /* + * This function adds a spawn to the list secondary_spawns which + * will be used at the end of crypto_remove_spawns to unregister +diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c +index 0b4943a4592b7..1815024bead38 100644 +--- a/crypto/asymmetric_keys/x509_public_key.c ++++ b/crypto/asymmetric_keys/x509_public_key.c +@@ -117,6 +117,11 @@ int x509_check_for_self_signed(struct x509_certificate *cert) + goto out; + } + ++ if (cert->unsupported_sig) { ++ ret = 0; ++ goto out; ++ } ++ + ret = public_key_verify_signature(cert->pub, cert->sig); + if (ret < 0) { + if (ret == -ENOPKG) { +diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c +index 3237b50baf3c5..1cf267bc6f9ea 100644 +--- a/crypto/rsa-pkcs1pad.c ++++ b/crypto/rsa-pkcs1pad.c +@@ -575,6 +575,10 @@ static int pkcs1pad_init_tfm(struct crypto_akcipher *tfm) + return PTR_ERR(child_tfm); + + ctx->child = child_tfm; ++ ++ akcipher_set_reqsize(tfm, sizeof(struct pkcs1pad_request) + ++ crypto_akcipher_reqsize(child_tfm)); ++ + return 0; + } + +@@ -670,7 +674,6 @@ static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb) + inst->alg.set_pub_key = pkcs1pad_set_pub_key; + inst->alg.set_priv_key = pkcs1pad_set_priv_key; + inst->alg.max_size = pkcs1pad_get_max_size; +- inst->alg.reqsize = sizeof(struct pkcs1pad_request) + rsa_alg->reqsize; + + inst->free = pkcs1pad_free; + +diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c +index e499c60c45791..ec84da6cc1bff 100644 +--- a/drivers/acpi/x86/s2idle.c ++++ b/drivers/acpi/x86/s2idle.c +@@ -122,17 +122,16 @@ static void lpi_device_get_constraints_amd(void) + acpi_handle_debug(lps0_device_handle, + "LPI: constraints list begin:\n"); + +- for (j = 0; j < package->package.count; ++j) { ++ for (j = 0; j < package->package.count; j++) { + union acpi_object *info_obj = &package->package.elements[j]; + struct lpi_device_constraint_amd dev_info = {}; + struct lpi_constraints *list; + acpi_status status; + +- for (k = 0; k < info_obj->package.count; ++k) { +- union acpi_object *obj = &info_obj->package.elements[k]; ++ list = &lpi_constraints_table[lpi_constraints_table_size]; + +- list = &lpi_constraints_table[lpi_constraints_table_size]; +- list->min_dstate = -1; ++ for (k = 0; k < info_obj->package.count; k++) { ++ union acpi_object *obj = &info_obj->package.elements[k]; + + switch (k) { + case 0: +@@ 
-148,27 +147,21 @@ static void lpi_device_get_constraints_amd(void) + dev_info.min_dstate = obj->integer.value; + break; + } ++ } + +- if (!dev_info.enabled || !dev_info.name || +- !dev_info.min_dstate) +- continue; ++ if (!dev_info.enabled || !dev_info.name || ++ !dev_info.min_dstate) ++ continue; + +- status = acpi_get_handle(NULL, dev_info.name, +- &list->handle); +- if (ACPI_FAILURE(status)) +- continue; ++ status = acpi_get_handle(NULL, dev_info.name, &list->handle); ++ if (ACPI_FAILURE(status)) ++ continue; + +- acpi_handle_debug(lps0_device_handle, +- "Name:%s\n", dev_info.name); ++ acpi_handle_debug(lps0_device_handle, ++ "Name:%s\n", dev_info.name); + +- list->min_dstate = dev_info.min_dstate; ++ list->min_dstate = dev_info.min_dstate; + +- if (list->min_dstate < 0) { +- acpi_handle_debug(lps0_device_handle, +- "Incomplete constraint defined\n"); +- continue; +- } +- } + lpi_constraints_table_size++; + } + } +@@ -213,7 +206,7 @@ static void lpi_device_get_constraints(void) + if (!package) + continue; + +- for (j = 0; j < package->package.count; ++j) { ++ for (j = 0; j < package->package.count; j++) { + union acpi_object *element = + &(package->package.elements[j]); + +@@ -245,7 +238,7 @@ static void lpi_device_get_constraints(void) + + constraint->min_dstate = -1; + +- for (j = 0; j < package_count; ++j) { ++ for (j = 0; j < package_count; j++) { + union acpi_object *info_obj = &info.package[j]; + union acpi_object *cnstr_pkg; + union acpi_object *obj; +diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c +index 110a535648d2e..0aa2d3111ae6e 100644 +--- a/drivers/amba/bus.c ++++ b/drivers/amba/bus.c +@@ -534,6 +534,7 @@ static void amba_device_release(struct device *dev) + { + struct amba_device *d = to_amba_device(dev); + ++ of_node_put(d->dev.of_node); + if (d->res.parent) + release_resource(&d->res); + mutex_destroy(&d->periphid_lock); +diff --git a/drivers/ata/pata_arasan_cf.c b/drivers/ata/pata_arasan_cf.c +index e89617ed9175b..46588fc829432 100644 +--- a/drivers/ata/pata_arasan_cf.c ++++ b/drivers/ata/pata_arasan_cf.c +@@ -529,7 +529,8 @@ static void data_xfer(struct work_struct *work) + /* dma_request_channel may sleep, so calling from process context */ + acdev->dma_chan = dma_request_chan(acdev->host->dev, "data"); + if (IS_ERR(acdev->dma_chan)) { +- dev_err(acdev->host->dev, "Unable to get dma_chan\n"); ++ dev_err_probe(acdev->host->dev, PTR_ERR(acdev->dma_chan), ++ "Unable to get dma_chan\n"); + acdev->dma_chan = NULL; + goto chan_request_fail; + } +diff --git a/drivers/base/core.c b/drivers/base/core.c +index e30223c2672fc..af90bfb0cc3d8 100644 +--- a/drivers/base/core.c ++++ b/drivers/base/core.c +@@ -3855,6 +3855,17 @@ void device_del(struct device *dev) + device_platform_notify_remove(dev); + device_links_purge(dev); + ++ /* ++ * If a device does not have a driver attached, we need to clean ++ * up any managed resources. We do this in device_release(), but ++ * it's never called (and we leak the device) if a managed ++ * resource holds a reference to the device. So release all ++ * managed resources here, like we do in driver_detach(). We ++ * still need to do so again in device_release() in case someone ++ * adds a new resource after this point, though. 
++ */ ++ devres_release_all(dev); ++ + if (dev->bus) + blocking_notifier_call_chain(&dev->bus->p->bus_notifier, + BUS_NOTIFY_REMOVED_DEVICE, dev); +diff --git a/drivers/base/dd.c b/drivers/base/dd.c +index 97ab1468a8760..380a53b6aee81 100644 +--- a/drivers/base/dd.c ++++ b/drivers/base/dd.c +@@ -674,6 +674,8 @@ re_probe: + + device_remove(dev); + driver_sysfs_remove(dev); ++ if (dev->bus && dev->bus->dma_cleanup) ++ dev->bus->dma_cleanup(dev); + device_unbind_cleanup(dev); + + goto re_probe; +diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c +index fabf87058d80b..ae6b8788d5f3f 100644 +--- a/drivers/base/regmap/regcache-rbtree.c ++++ b/drivers/base/regmap/regcache-rbtree.c +@@ -277,7 +277,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map, + + blk = krealloc(rbnode->block, + blklen * map->cache_word_size, +- GFP_KERNEL); ++ map->alloc_flags); + if (!blk) + return -ENOMEM; + +@@ -286,7 +286,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map, + if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) { + present = krealloc(rbnode->cache_present, + BITS_TO_LONGS(blklen) * sizeof(*present), +- GFP_KERNEL); ++ map->alloc_flags); + if (!present) + return -ENOMEM; + +@@ -320,7 +320,7 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg) + const struct regmap_range *range; + int i; + +- rbnode = kzalloc(sizeof(*rbnode), GFP_KERNEL); ++ rbnode = kzalloc(sizeof(*rbnode), map->alloc_flags); + if (!rbnode) + return NULL; + +@@ -346,13 +346,13 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg) + } + + rbnode->block = kmalloc_array(rbnode->blklen, map->cache_word_size, +- GFP_KERNEL); ++ map->alloc_flags); + if (!rbnode->block) + goto err_free; + + rbnode->cache_present = kcalloc(BITS_TO_LONGS(rbnode->blklen), + sizeof(*rbnode->cache_present), +- GFP_KERNEL); ++ map->alloc_flags); + if (!rbnode->cache_present) + goto err_free_block; + +diff --git a/drivers/base/test/test_async_driver_probe.c b/drivers/base/test/test_async_driver_probe.c +index 929410d0dd6fe..3465800baa6c8 100644 +--- a/drivers/base/test/test_async_driver_probe.c ++++ b/drivers/base/test/test_async_driver_probe.c +@@ -84,7 +84,7 @@ test_platform_device_register_node(char *name, int id, int nid) + + pdev = platform_device_alloc(name, id); + if (!pdev) +- return NULL; ++ return ERR_PTR(-ENOMEM); + + if (nid != NUMA_NO_NODE) + set_dev_node(&pdev->dev, nid); +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index d6f405763c56f..f2062c2a28da8 100644 +--- a/drivers/bluetooth/btusb.c ++++ b/drivers/bluetooth/btusb.c +@@ -1984,7 +1984,7 @@ static int btusb_switch_alt_setting(struct hci_dev *hdev, int new_alts) + * alternate setting. 
+ */ + spin_lock_irqsave(&data->rxlock, flags); +- kfree_skb(data->sco_skb); ++ dev_kfree_skb_irq(data->sco_skb); + data->sco_skb = NULL; + spin_unlock_irqrestore(&data->rxlock, flags); + +diff --git a/drivers/bluetooth/hci_nokia.c b/drivers/bluetooth/hci_nokia.c +index 05f7f6de6863d..97da0b2bfd17e 100644 +--- a/drivers/bluetooth/hci_nokia.c ++++ b/drivers/bluetooth/hci_nokia.c +@@ -734,7 +734,11 @@ static int nokia_bluetooth_serdev_probe(struct serdev_device *serdev) + return err; + } + +- clk_prepare_enable(sysclk); ++ err = clk_prepare_enable(sysclk); ++ if (err) { ++ dev_err(dev, "could not enable sysclk: %d", err); ++ return err; ++ } + btdev->sysclk_speed = clk_get_rate(sysclk); + clk_disable_unprepare(sysclk); + +diff --git a/drivers/bus/imx-weim.c b/drivers/bus/imx-weim.c +index 55d917bd1f3f8..64f9eacd1b38d 100644 +--- a/drivers/bus/imx-weim.c ++++ b/drivers/bus/imx-weim.c +@@ -331,6 +331,12 @@ static int of_weim_notify(struct notifier_block *nb, unsigned long action, + "Failed to setup timing for '%pOF'\n", rd->dn); + + if (!of_node_check_flag(rd->dn, OF_POPULATED)) { ++ /* ++ * Clear the flag before adding the device so that ++ * fw_devlink doesn't skip adding consumers to this ++ * device. ++ */ ++ rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; + if (!of_platform_device_create(rd->dn, NULL, &pdev->dev)) { + dev_err(&pdev->dev, + "Failed to create child device '%pOF'\n", +diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c +index 9b7268bae66ab..ac36b01cf6d5d 100644 +--- a/drivers/bus/ti-sysc.c ++++ b/drivers/bus/ti-sysc.c +@@ -3125,7 +3125,7 @@ static int sysc_init_static_data(struct sysc *ddata) + + match = soc_device_match(sysc_soc_match); + if (match && match->data) +- sysc_soc->soc = (int)match->data; ++ sysc_soc->soc = (enum sysc_soc)(uintptr_t)match->data; + + /* + * Check and warn about possible old incomplete dtb. 
We now want to see +diff --git a/drivers/char/hw_random/iproc-rng200.c b/drivers/char/hw_random/iproc-rng200.c +index 06bc060534d81..c0df053cbe4b2 100644 +--- a/drivers/char/hw_random/iproc-rng200.c ++++ b/drivers/char/hw_random/iproc-rng200.c +@@ -182,6 +182,8 @@ static int iproc_rng200_probe(struct platform_device *pdev) + return PTR_ERR(priv->base); + } + ++ dev_set_drvdata(dev, priv); ++ + priv->rng.name = "iproc-rng200"; + priv->rng.read = iproc_rng200_read; + priv->rng.init = iproc_rng200_init; +@@ -199,6 +201,28 @@ static int iproc_rng200_probe(struct platform_device *pdev) + return 0; + } + ++static int __maybe_unused iproc_rng200_suspend(struct device *dev) ++{ ++ struct iproc_rng200_dev *priv = dev_get_drvdata(dev); ++ ++ iproc_rng200_cleanup(&priv->rng); ++ ++ return 0; ++} ++ ++static int __maybe_unused iproc_rng200_resume(struct device *dev) ++{ ++ struct iproc_rng200_dev *priv = dev_get_drvdata(dev); ++ ++ iproc_rng200_init(&priv->rng); ++ ++ return 0; ++} ++ ++static const struct dev_pm_ops iproc_rng200_pm_ops = { ++ SET_SYSTEM_SLEEP_PM_OPS(iproc_rng200_suspend, iproc_rng200_resume) ++}; ++ + static const struct of_device_id iproc_rng200_of_match[] = { + { .compatible = "brcm,bcm2711-rng200", }, + { .compatible = "brcm,bcm7211-rng200", }, +@@ -212,6 +236,7 @@ static struct platform_driver iproc_rng200_driver = { + .driver = { + .name = "iproc-rng200", + .of_match_table = iproc_rng200_of_match, ++ .pm = &iproc_rng200_pm_ops, + }, + .probe = iproc_rng200_probe, + }; +diff --git a/drivers/char/hw_random/nomadik-rng.c b/drivers/char/hw_random/nomadik-rng.c +index e8f9621e79541..3774adf903a83 100644 +--- a/drivers/char/hw_random/nomadik-rng.c ++++ b/drivers/char/hw_random/nomadik-rng.c +@@ -13,8 +13,6 @@ + #include + #include + +-static struct clk *rng_clk; +- + static int nmk_rng_read(struct hwrng *rng, void *data, size_t max, bool wait) + { + void __iomem *base = (void __iomem *)rng->priv; +@@ -36,21 +34,20 @@ static struct hwrng nmk_rng = { + + static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id) + { ++ struct clk *rng_clk; + void __iomem *base; + int ret; + +- rng_clk = devm_clk_get(&dev->dev, NULL); ++ rng_clk = devm_clk_get_enabled(&dev->dev, NULL); + if (IS_ERR(rng_clk)) { + dev_err(&dev->dev, "could not get rng clock\n"); + ret = PTR_ERR(rng_clk); + return ret; + } + +- clk_prepare_enable(rng_clk); +- + ret = amba_request_regions(dev, dev->dev.init_name); + if (ret) +- goto out_clk; ++ return ret; + ret = -ENOMEM; + base = devm_ioremap(&dev->dev, dev->res.start, + resource_size(&dev->res)); +@@ -64,15 +61,12 @@ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id) + + out_release: + amba_release_regions(dev); +-out_clk: +- clk_disable_unprepare(rng_clk); + return ret; + } + + static void nmk_rng_remove(struct amba_device *dev) + { + amba_release_regions(dev); +- clk_disable_unprepare(rng_clk); + } + + static const struct amba_id nmk_rng_ids[] = { +diff --git a/drivers/char/hw_random/pic32-rng.c b/drivers/char/hw_random/pic32-rng.c +index 99c8bd0859a14..e04a054e89307 100644 +--- a/drivers/char/hw_random/pic32-rng.c ++++ b/drivers/char/hw_random/pic32-rng.c +@@ -36,7 +36,6 @@ + struct pic32_rng { + void __iomem *base; + struct hwrng rng; +- struct clk *clk; + }; + + /* +@@ -70,6 +69,7 @@ static int pic32_rng_read(struct hwrng *rng, void *buf, size_t max, + static int pic32_rng_probe(struct platform_device *pdev) + { + struct pic32_rng *priv; ++ struct clk *clk; + u32 v; + int ret; + +@@ -81,13 +81,9 @@ static int 
pic32_rng_probe(struct platform_device *pdev) + if (IS_ERR(priv->base)) + return PTR_ERR(priv->base); + +- priv->clk = devm_clk_get(&pdev->dev, NULL); +- if (IS_ERR(priv->clk)) +- return PTR_ERR(priv->clk); +- +- ret = clk_prepare_enable(priv->clk); +- if (ret) +- return ret; ++ clk = devm_clk_get_enabled(&pdev->dev, NULL); ++ if (IS_ERR(clk)) ++ return PTR_ERR(clk); + + /* enable TRNG in enhanced mode */ + v = TRNGEN | TRNGMOD; +@@ -98,15 +94,11 @@ static int pic32_rng_probe(struct platform_device *pdev) + + ret = devm_hwrng_register(&pdev->dev, &priv->rng); + if (ret) +- goto err_register; ++ return ret; + + platform_set_drvdata(pdev, priv); + + return 0; +- +-err_register: +- clk_disable_unprepare(priv->clk); +- return ret; + } + + static int pic32_rng_remove(struct platform_device *pdev) +@@ -114,7 +106,6 @@ static int pic32_rng_remove(struct platform_device *pdev) + struct pic32_rng *rng = platform_get_drvdata(pdev); + + writel(0, rng->base + RNGCON); +- clk_disable_unprepare(rng->clk); + return 0; + } + +diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c +index abddd7e43a9a6..5cd031f3fc970 100644 +--- a/drivers/char/ipmi/ipmi_si_intf.c ++++ b/drivers/char/ipmi/ipmi_si_intf.c +@@ -2082,6 +2082,11 @@ static int try_smi_init(struct smi_info *new_smi) + new_smi->io.io_cleanup = NULL; + } + ++ if (rv && new_smi->si_sm) { ++ kfree(new_smi->si_sm); ++ new_smi->si_sm = NULL; ++ } ++ + return rv; + } + +diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c +index d48061ec27dd9..248459f97c67b 100644 +--- a/drivers/char/ipmi/ipmi_ssif.c ++++ b/drivers/char/ipmi/ipmi_ssif.c +@@ -1403,7 +1403,7 @@ static struct ssif_addr_info *ssif_info_find(unsigned short addr, + restart: + list_for_each_entry(info, &ssif_infos, link) { + if (info->binfo.addr == addr) { +- if (info->addr_src == SI_SMBIOS) ++ if (info->addr_src == SI_SMBIOS && !info->adapter_name) + info->adapter_name = kstrdup(adapter_name, + GFP_KERNEL); + +@@ -1603,6 +1603,11 @@ static int ssif_add_infos(struct i2c_client *client) + info->addr_src = SI_ACPI; + info->client = client; + info->adapter_name = kstrdup(client->adapter->name, GFP_KERNEL); ++ if (!info->adapter_name) { ++ kfree(info); ++ return -ENOMEM; ++ } ++ + info->binfo.addr = client->addr; + list_add_tail(&info->link, &ssif_infos); + return 0; +diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c +index 7f7f3bded4535..db0b774207d35 100644 +--- a/drivers/char/tpm/tpm_crb.c ++++ b/drivers/char/tpm/tpm_crb.c +@@ -463,28 +463,6 @@ static bool crb_req_canceled(struct tpm_chip *chip, u8 status) + return (cancel & CRB_CANCEL_INVOKE) == CRB_CANCEL_INVOKE; + } + +-static int crb_check_flags(struct tpm_chip *chip) +-{ +- u32 val; +- int ret; +- +- ret = crb_request_locality(chip, 0); +- if (ret) +- return ret; +- +- ret = tpm2_get_tpm_pt(chip, TPM2_PT_MANUFACTURER, &val, NULL); +- if (ret) +- goto release; +- +- if (val == 0x414D4400U /* AMD */) +- chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED; +- +-release: +- crb_relinquish_locality(chip, 0); +- +- return ret; +-} +- + static const struct tpm_class_ops tpm_crb = { + .flags = TPM_OPS_AUTO_STARTUP, + .status = crb_status, +@@ -826,9 +804,14 @@ static int crb_acpi_add(struct acpi_device *device) + if (rc) + goto out; + +- rc = crb_check_flags(chip); +- if (rc) +- goto out; ++#ifdef CONFIG_X86 ++ /* A quirk for https://www.amd.com/en/support/kb/faq/pa-410 */ ++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD && ++ priv->sm != ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON) { ++ 
dev_info(dev, "Disabling hwrng\n"); ++ chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED; ++ } ++#endif /* CONFIG_X86 */ + + rc = tpm_chip_register(chip); + +diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig +index 5da82f2bdd211..a5dcc7293a836 100644 +--- a/drivers/clk/Kconfig ++++ b/drivers/clk/Kconfig +@@ -427,6 +427,7 @@ config COMMON_CLK_BD718XX + config COMMON_CLK_FIXED_MMIO + bool "Clock driver for Memory Mapped Fixed values" + depends on COMMON_CLK && OF ++ depends on HAS_IOMEM + help + Support for Memory Mapped IO Fixed clocks + +diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c +index cbf0d7955a00a..3e9a092e136c1 100644 +--- a/drivers/clk/imx/clk-composite-8m.c ++++ b/drivers/clk/imx/clk-composite-8m.c +@@ -97,7 +97,7 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw, + int prediv_value; + int div_value; + int ret; +- u32 val; ++ u32 orig, val; + + ret = imx8m_clk_composite_compute_dividers(rate, parent_rate, + &prediv_value, &div_value); +@@ -106,13 +106,15 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw, + + spin_lock_irqsave(divider->lock, flags); + +- val = readl(divider->reg); +- val &= ~((clk_div_mask(divider->width) << divider->shift) | +- (clk_div_mask(PCG_DIV_WIDTH) << PCG_DIV_SHIFT)); ++ orig = readl(divider->reg); ++ val = orig & ~((clk_div_mask(divider->width) << divider->shift) | ++ (clk_div_mask(PCG_DIV_WIDTH) << PCG_DIV_SHIFT)); + + val |= (u32)(prediv_value - 1) << divider->shift; + val |= (u32)(div_value - 1) << PCG_DIV_SHIFT; +- writel(val, divider->reg); ++ ++ if (val != orig) ++ writel(val, divider->reg); + + spin_unlock_irqrestore(divider->lock, flags); + +diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c +index 05c02f4e2a143..3d0d8f2c02dc1 100644 +--- a/drivers/clk/imx/clk-imx8mp.c ++++ b/drivers/clk/imx/clk-imx8mp.c +@@ -177,10 +177,6 @@ static const char * const imx8mp_sai3_sels[] = {"osc_24m", "audio_pll1_out", "au + "video_pll1_out", "sys_pll1_133m", "osc_hdmi", + "clk_ext3", "clk_ext4", }; + +-static const char * const imx8mp_sai4_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out", +- "video_pll1_out", "sys_pll1_133m", "osc_hdmi", +- "clk_ext1", "clk_ext2", }; +- + static const char * const imx8mp_sai5_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out", + "video_pll1_out", "sys_pll1_133m", "osc_hdmi", + "clk_ext2", "clk_ext3", }; +@@ -566,7 +562,6 @@ static int imx8mp_clocks_probe(struct platform_device *pdev) + hws[IMX8MP_CLK_SAI1] = imx8m_clk_hw_composite("sai1", imx8mp_sai1_sels, ccm_base + 0xa580); + hws[IMX8MP_CLK_SAI2] = imx8m_clk_hw_composite("sai2", imx8mp_sai2_sels, ccm_base + 0xa600); + hws[IMX8MP_CLK_SAI3] = imx8m_clk_hw_composite("sai3", imx8mp_sai3_sels, ccm_base + 0xa680); +- hws[IMX8MP_CLK_SAI4] = imx8m_clk_hw_composite("sai4", imx8mp_sai4_sels, ccm_base + 0xa700); + hws[IMX8MP_CLK_SAI5] = imx8m_clk_hw_composite("sai5", imx8mp_sai5_sels, ccm_base + 0xa780); + hws[IMX8MP_CLK_SAI6] = imx8m_clk_hw_composite("sai6", imx8mp_sai6_sels, ccm_base + 0xa800); + hws[IMX8MP_CLK_ENET_QOS] = imx8m_clk_hw_composite("enet_qos", imx8mp_enet_qos_sels, ccm_base + 0xa880); +diff --git a/drivers/clk/imx/clk-imx8ulp.c b/drivers/clk/imx/clk-imx8ulp.c +index ca0e4a3aa454e..fa9121b3cf36a 100644 +--- a/drivers/clk/imx/clk-imx8ulp.c ++++ b/drivers/clk/imx/clk-imx8ulp.c +@@ -167,7 +167,7 @@ static int imx8ulp_clk_cgc1_init(struct platform_device *pdev) + clks[IMX8ULP_CLK_SPLL2_PRE_SEL] = imx_clk_hw_mux_flags("spll2_pre_sel", base + 0x510, 0, 1, pll_pre_sels, 
ARRAY_SIZE(pll_pre_sels), CLK_SET_PARENT_GATE); + clks[IMX8ULP_CLK_SPLL3_PRE_SEL] = imx_clk_hw_mux_flags("spll3_pre_sel", base + 0x610, 0, 1, pll_pre_sels, ARRAY_SIZE(pll_pre_sels), CLK_SET_PARENT_GATE); + +- clks[IMX8ULP_CLK_SPLL2] = imx_clk_hw_pllv4(IMX_PLLV4_IMX8ULP, "spll2", "spll2_pre_sel", base + 0x500); ++ clks[IMX8ULP_CLK_SPLL2] = imx_clk_hw_pllv4(IMX_PLLV4_IMX8ULP_1GHZ, "spll2", "spll2_pre_sel", base + 0x500); + clks[IMX8ULP_CLK_SPLL3] = imx_clk_hw_pllv4(IMX_PLLV4_IMX8ULP, "spll3", "spll3_pre_sel", base + 0x600); + clks[IMX8ULP_CLK_SPLL3_VCODIV] = imx_clk_hw_divider("spll3_vcodiv", "spll3", base + 0x604, 0, 6); + +diff --git a/drivers/clk/imx/clk-pllv4.c b/drivers/clk/imx/clk-pllv4.c +index 6e7e34571fc8d..9b136c951762c 100644 +--- a/drivers/clk/imx/clk-pllv4.c ++++ b/drivers/clk/imx/clk-pllv4.c +@@ -44,11 +44,15 @@ struct clk_pllv4 { + u32 cfg_offset; + u32 num_offset; + u32 denom_offset; ++ bool use_mult_range; + }; + + /* Valid PLL MULT Table */ + static const int pllv4_mult_table[] = {33, 27, 22, 20, 17, 16}; + ++/* Valid PLL MULT range, (max, min) */ ++static const int pllv4_mult_range[] = {54, 27}; ++ + #define to_clk_pllv4(__hw) container_of(__hw, struct clk_pllv4, hw) + + #define LOCK_TIMEOUT_US USEC_PER_MSEC +@@ -94,17 +98,30 @@ static unsigned long clk_pllv4_recalc_rate(struct clk_hw *hw, + static long clk_pllv4_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate) + { ++ struct clk_pllv4 *pll = to_clk_pllv4(hw); + unsigned long parent_rate = *prate; + unsigned long round_rate, i; + u32 mfn, mfd = DEFAULT_MFD; + bool found = false; + u64 temp64; +- +- for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) { +- round_rate = parent_rate * pllv4_mult_table[i]; +- if (rate >= round_rate) { ++ u32 mult; ++ ++ if (pll->use_mult_range) { ++ temp64 = (u64)rate; ++ do_div(temp64, parent_rate); ++ mult = temp64; ++ if (mult >= pllv4_mult_range[1] && ++ mult <= pllv4_mult_range[0]) { ++ round_rate = parent_rate * mult; + found = true; +- break; ++ } ++ } else { ++ for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) { ++ round_rate = parent_rate * pllv4_mult_table[i]; ++ if (rate >= round_rate) { ++ found = true; ++ break; ++ } + } + } + +@@ -138,14 +155,20 @@ static long clk_pllv4_round_rate(struct clk_hw *hw, unsigned long rate, + return round_rate + (u32)temp64; + } + +-static bool clk_pllv4_is_valid_mult(unsigned int mult) ++static bool clk_pllv4_is_valid_mult(struct clk_pllv4 *pll, unsigned int mult) + { + int i; + + /* check if mult is in valid MULT table */ +- for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) { +- if (pllv4_mult_table[i] == mult) ++ if (pll->use_mult_range) { ++ if (mult >= pllv4_mult_range[1] && ++ mult <= pllv4_mult_range[0]) + return true; ++ } else { ++ for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) { ++ if (pllv4_mult_table[i] == mult) ++ return true; ++ } + } + + return false; +@@ -160,7 +183,7 @@ static int clk_pllv4_set_rate(struct clk_hw *hw, unsigned long rate, + + mult = rate / parent_rate; + +- if (!clk_pllv4_is_valid_mult(mult)) ++ if (!clk_pllv4_is_valid_mult(pll, mult)) + return -EINVAL; + + if (parent_rate <= MAX_MFD) +@@ -227,10 +250,13 @@ struct clk_hw *imx_clk_hw_pllv4(enum imx_pllv4_type type, const char *name, + + pll->base = base; + +- if (type == IMX_PLLV4_IMX8ULP) { ++ if (type == IMX_PLLV4_IMX8ULP || ++ type == IMX_PLLV4_IMX8ULP_1GHZ) { + pll->cfg_offset = IMX8ULP_PLL_CFG_OFFSET; + pll->num_offset = IMX8ULP_PLL_NUM_OFFSET; + pll->denom_offset = IMX8ULP_PLL_DENOM_OFFSET; ++ if (type == IMX_PLLV4_IMX8ULP_1GHZ) ++ 
pll->use_mult_range = true; + } else { + pll->cfg_offset = PLL_CFG_OFFSET; + pll->num_offset = PLL_NUM_OFFSET; +diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h +index dd49f90110e8b..fb59131395f03 100644 +--- a/drivers/clk/imx/clk.h ++++ b/drivers/clk/imx/clk.h +@@ -46,6 +46,7 @@ enum imx_pll14xx_type { + enum imx_pllv4_type { + IMX_PLLV4_IMX7ULP, + IMX_PLLV4_IMX8ULP, ++ IMX_PLLV4_IMX8ULP_1GHZ, + }; + + enum imx_pfdv2_type { +diff --git a/drivers/clk/keystone/pll.c b/drivers/clk/keystone/pll.c +index d59a7621bb204..ee5c72369334f 100644 +--- a/drivers/clk/keystone/pll.c ++++ b/drivers/clk/keystone/pll.c +@@ -209,7 +209,7 @@ static void __init _of_pll_clk_init(struct device_node *node, bool pllctrl) + } + + clk = clk_register_pll(NULL, node->name, parent_name, pll_data); +- if (clk) { ++ if (!IS_ERR_OR_NULL(clk)) { + of_clk_add_provider(node, of_clk_src_simple_get, clk); + return; + } +diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c +index 2d3980251e78e..5822db4f4f358 100644 +--- a/drivers/clk/qcom/gcc-sc7180.c ++++ b/drivers/clk/qcom/gcc-sc7180.c +@@ -667,6 +667,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = { + .name = "gcc_sdcc2_apps_clk_src", + .parent_data = gcc_parent_data_5, + .num_parents = ARRAY_SIZE(gcc_parent_data_5), ++ .flags = CLK_OPS_PARENT_ENABLE, + .ops = &clk_rcg2_floor_ops, + }, + }; +diff --git a/drivers/clk/qcom/gcc-sc8280xp.c b/drivers/clk/qcom/gcc-sc8280xp.c +index b3198784e1c3d..57bbd609151cd 100644 +--- a/drivers/clk/qcom/gcc-sc8280xp.c ++++ b/drivers/clk/qcom/gcc-sc8280xp.c +@@ -6760,7 +6760,7 @@ static struct gdsc pcie_0_tunnel_gdsc = { + .name = "pcie_0_tunnel_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE, ++ .flags = VOTABLE | RETAIN_FF_ENABLE, + }; + + static struct gdsc pcie_1_tunnel_gdsc = { +@@ -6771,7 +6771,7 @@ static struct gdsc pcie_1_tunnel_gdsc = { + .name = "pcie_1_tunnel_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE, ++ .flags = VOTABLE | RETAIN_FF_ENABLE, + }; + + /* +@@ -6786,7 +6786,7 @@ static struct gdsc pcie_2a_gdsc = { + .name = "pcie_2a_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE | ALWAYS_ON, ++ .flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON, + }; + + static struct gdsc pcie_2b_gdsc = { +@@ -6797,7 +6797,7 @@ static struct gdsc pcie_2b_gdsc = { + .name = "pcie_2b_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE | ALWAYS_ON, ++ .flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON, + }; + + static struct gdsc pcie_3a_gdsc = { +@@ -6808,7 +6808,7 @@ static struct gdsc pcie_3a_gdsc = { + .name = "pcie_3a_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE | ALWAYS_ON, ++ .flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON, + }; + + static struct gdsc pcie_3b_gdsc = { +@@ -6819,7 +6819,7 @@ static struct gdsc pcie_3b_gdsc = { + .name = "pcie_3b_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE | ALWAYS_ON, ++ .flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON, + }; + + static struct gdsc pcie_4_gdsc = { +@@ -6830,7 +6830,7 @@ static struct gdsc pcie_4_gdsc = { + .name = "pcie_4_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, +- .flags = VOTABLE | ALWAYS_ON, ++ .flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON, + }; + + static struct gdsc ufs_card_gdsc = { +@@ -6839,6 +6839,7 @@ static struct gdsc ufs_card_gdsc = { + .name = "ufs_card_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, ++ .flags = RETAIN_FF_ENABLE, + }; + + static struct gdsc ufs_phy_gdsc = { +@@ -6847,6 +6848,7 @@ static struct gdsc ufs_phy_gdsc = { + .name = "ufs_phy_gdsc", + }, + .pwrsts = PWRSTS_OFF_ON, 
++ .flags = RETAIN_FF_ENABLE, + }; + + static struct gdsc usb30_mp_gdsc = { +@@ -6855,6 +6857,7 @@ static struct gdsc usb30_mp_gdsc = { + .name = "usb30_mp_gdsc", + }, + .pwrsts = PWRSTS_RET_ON, ++ .flags = RETAIN_FF_ENABLE, + }; + + static struct gdsc usb30_prim_gdsc = { +@@ -6863,6 +6866,7 @@ static struct gdsc usb30_prim_gdsc = { + .name = "usb30_prim_gdsc", + }, + .pwrsts = PWRSTS_RET_ON, ++ .flags = RETAIN_FF_ENABLE, + }; + + static struct gdsc usb30_sec_gdsc = { +@@ -6871,6 +6875,115 @@ static struct gdsc usb30_sec_gdsc = { + .name = "usb30_sec_gdsc", + }, + .pwrsts = PWRSTS_RET_ON, ++ .flags = RETAIN_FF_ENABLE, ++}; ++ ++static struct gdsc emac_0_gdsc = { ++ .gdscr = 0xaa004, ++ .pd = { ++ .name = "emac_0_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = RETAIN_FF_ENABLE, ++}; ++ ++static struct gdsc emac_1_gdsc = { ++ .gdscr = 0xba004, ++ .pd = { ++ .name = "emac_1_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = RETAIN_FF_ENABLE, ++}; ++ ++static struct gdsc usb4_1_gdsc = { ++ .gdscr = 0xb8004, ++ .pd = { ++ .name = "usb4_1_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = RETAIN_FF_ENABLE, ++}; ++ ++static struct gdsc usb4_gdsc = { ++ .gdscr = 0x2a004, ++ .pd = { ++ .name = "usb4_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = RETAIN_FF_ENABLE, ++}; ++ ++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc = { ++ .gdscr = 0x7d050, ++ .pd = { ++ .name = "hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc = { ++ .gdscr = 0x7d058, ++ .pd = { ++ .name = "hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc = { ++ .gdscr = 0x7d054, ++ .pd = { ++ .name = "hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc = { ++ .gdscr = 0x7d06c, ++ .pd = { ++ .name = "hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = { ++ .gdscr = 0x7d05c, ++ .pd = { ++ .name = "hlos1_vote_turing_mmu_tbu0_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_turing_mmu_tbu1_gdsc = { ++ .gdscr = 0x7d060, ++ .pd = { ++ .name = "hlos1_vote_turing_mmu_tbu1_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_turing_mmu_tbu2_gdsc = { ++ .gdscr = 0x7d0a0, ++ .pd = { ++ .name = "hlos1_vote_turing_mmu_tbu2_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, ++}; ++ ++static struct gdsc hlos1_vote_turing_mmu_tbu3_gdsc = { ++ .gdscr = 0x7d0a4, ++ .pd = { ++ .name = "hlos1_vote_turing_mmu_tbu3_gdsc", ++ }, ++ .pwrsts = PWRSTS_OFF_ON, ++ .flags = VOTABLE, + }; + + static struct clk_regmap *gcc_sc8280xp_clocks[] = { +@@ -7351,6 +7464,18 @@ static struct gdsc *gcc_sc8280xp_gdscs[] = { + [USB30_MP_GDSC] = &usb30_mp_gdsc, + [USB30_PRIM_GDSC] = &usb30_prim_gdsc, + [USB30_SEC_GDSC] = &usb30_sec_gdsc, ++ [EMAC_0_GDSC] = &emac_0_gdsc, ++ [EMAC_1_GDSC] = &emac_1_gdsc, ++ [USB4_1_GDSC] = &usb4_1_gdsc, ++ [USB4_GDSC] = &usb4_gdsc, ++ [HLOS1_VOTE_MMNOC_MMU_TBU_HF0_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc, ++ [HLOS1_VOTE_MMNOC_MMU_TBU_HF1_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc, ++ [HLOS1_VOTE_MMNOC_MMU_TBU_SF0_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc, ++ [HLOS1_VOTE_MMNOC_MMU_TBU_SF1_GDSC] = 
&hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc, ++ [HLOS1_VOTE_TURING_MMU_TBU0_GDSC] = &hlos1_vote_turing_mmu_tbu0_gdsc, ++ [HLOS1_VOTE_TURING_MMU_TBU1_GDSC] = &hlos1_vote_turing_mmu_tbu1_gdsc, ++ [HLOS1_VOTE_TURING_MMU_TBU2_GDSC] = &hlos1_vote_turing_mmu_tbu2_gdsc, ++ [HLOS1_VOTE_TURING_MMU_TBU3_GDSC] = &hlos1_vote_turing_mmu_tbu3_gdsc, + }; + + static const struct clk_rcg_dfs_data gcc_dfs_clocks[] = { +diff --git a/drivers/clk/qcom/gcc-sm6350.c b/drivers/clk/qcom/gcc-sm6350.c +index 9b4e4bb059635..cf4a7b6e0b23a 100644 +--- a/drivers/clk/qcom/gcc-sm6350.c ++++ b/drivers/clk/qcom/gcc-sm6350.c +@@ -641,6 +641,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = { + .name = "gcc_sdcc2_apps_clk_src", + .parent_data = gcc_parent_data_8, + .num_parents = ARRAY_SIZE(gcc_parent_data_8), ++ .flags = CLK_OPS_PARENT_ENABLE, + .ops = &clk_rcg2_floor_ops, + }, + }; +diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c +index a0ba37656b07b..30bd561461074 100644 +--- a/drivers/clk/qcom/gcc-sm8250.c ++++ b/drivers/clk/qcom/gcc-sm8250.c +@@ -721,6 +721,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = { + .name = "gcc_sdcc2_apps_clk_src", + .parent_data = gcc_parent_data_4, + .num_parents = ARRAY_SIZE(gcc_parent_data_4), ++ .flags = CLK_OPS_PARENT_ENABLE, + .ops = &clk_rcg2_floor_ops, + }, + }; +diff --git a/drivers/clk/qcom/gcc-sm8450.c b/drivers/clk/qcom/gcc-sm8450.c +index 666efa5ff9780..59c567e76d656 100644 +--- a/drivers/clk/qcom/gcc-sm8450.c ++++ b/drivers/clk/qcom/gcc-sm8450.c +@@ -904,7 +904,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = { + .parent_data = gcc_parent_data_7, + .num_parents = ARRAY_SIZE(gcc_parent_data_7), + .flags = CLK_SET_RATE_PARENT, +- .ops = &clk_rcg2_ops, ++ .ops = &clk_rcg2_floor_ops, + }, + }; + +@@ -926,7 +926,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = { + .parent_data = gcc_parent_data_0, + .num_parents = ARRAY_SIZE(gcc_parent_data_0), + .flags = CLK_SET_RATE_PARENT, +- .ops = &clk_rcg2_ops, ++ .ops = &clk_rcg2_floor_ops, + }, + }; + +diff --git a/drivers/clk/qcom/gpucc-sm6350.c b/drivers/clk/qcom/gpucc-sm6350.c +index ef15185a99c31..0bcbba2a29436 100644 +--- a/drivers/clk/qcom/gpucc-sm6350.c ++++ b/drivers/clk/qcom/gpucc-sm6350.c +@@ -24,6 +24,12 @@ + #define CX_GMU_CBCR_WAKE_MASK 0xF + #define CX_GMU_CBCR_WAKE_SHIFT 8 + ++enum { ++ DT_BI_TCXO, ++ DT_GPLL0_OUT_MAIN, ++ DT_GPLL0_OUT_MAIN_DIV, ++}; ++ + enum { + P_BI_TCXO, + P_GPLL0_OUT_MAIN, +@@ -61,6 +67,7 @@ static struct clk_alpha_pll gpu_cc_pll0 = { + .hw.init = &(struct clk_init_data){ + .name = "gpu_cc_pll0", + .parent_data = &(const struct clk_parent_data){ ++ .index = DT_BI_TCXO, + .fw_name = "bi_tcxo", + }, + .num_parents = 1, +@@ -104,6 +111,7 @@ static struct clk_alpha_pll gpu_cc_pll1 = { + .hw.init = &(struct clk_init_data){ + .name = "gpu_cc_pll1", + .parent_data = &(const struct clk_parent_data){ ++ .index = DT_BI_TCXO, + .fw_name = "bi_tcxo", + }, + .num_parents = 1, +@@ -121,11 +129,11 @@ static const struct parent_map gpu_cc_parent_map_0[] = { + }; + + static const struct clk_parent_data gpu_cc_parent_data_0[] = { +- { .fw_name = "bi_tcxo" }, ++ { .index = DT_BI_TCXO, .fw_name = "bi_tcxo" }, + { .hw = &gpu_cc_pll0.clkr.hw }, + { .hw = &gpu_cc_pll1.clkr.hw }, +- { .fw_name = "gcc_gpu_gpll0_clk" }, +- { .fw_name = "gcc_gpu_gpll0_div_clk" }, ++ { .index = DT_GPLL0_OUT_MAIN, .fw_name = "gcc_gpu_gpll0_clk_src" }, ++ { .index = DT_GPLL0_OUT_MAIN_DIV, .fw_name = "gcc_gpu_gpll0_div_clk_src" }, + }; + + static const struct parent_map gpu_cc_parent_map_1[] = { +@@ -138,12 +146,12 @@ 
static const struct parent_map gpu_cc_parent_map_1[] = { + }; + + static const struct clk_parent_data gpu_cc_parent_data_1[] = { +- { .fw_name = "bi_tcxo" }, ++ { .index = DT_BI_TCXO, .fw_name = "bi_tcxo" }, + { .hw = &crc_div.hw }, + { .hw = &gpu_cc_pll0.clkr.hw }, + { .hw = &gpu_cc_pll1.clkr.hw }, + { .hw = &gpu_cc_pll1.clkr.hw }, +- { .fw_name = "gcc_gpu_gpll0_clk" }, ++ { .index = DT_GPLL0_OUT_MAIN, .fw_name = "gcc_gpu_gpll0_clk_src" }, + }; + + static const struct freq_tbl ftbl_gpu_cc_gmu_clk_src[] = { +diff --git a/drivers/clk/qcom/reset.c b/drivers/clk/qcom/reset.c +index 0e914ec7aeae1..e45e32804d2c7 100644 +--- a/drivers/clk/qcom/reset.c ++++ b/drivers/clk/qcom/reset.c +@@ -16,7 +16,8 @@ static int qcom_reset(struct reset_controller_dev *rcdev, unsigned long id) + struct qcom_reset_controller *rst = to_qcom_reset_controller(rcdev); + + rcdev->ops->assert(rcdev, id); +- udelay(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */ ++ fsleep(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */ ++ + rcdev->ops->deassert(rcdev, id); + return 0; + } +diff --git a/drivers/clk/rockchip/clk-rk3568.c b/drivers/clk/rockchip/clk-rk3568.c +index f85902e2590c7..2f54f630c8b65 100644 +--- a/drivers/clk/rockchip/clk-rk3568.c ++++ b/drivers/clk/rockchip/clk-rk3568.c +@@ -81,7 +81,7 @@ static struct rockchip_pll_rate_table rk3568_pll_rates[] = { + RK3036_PLL_RATE(108000000, 2, 45, 5, 1, 1, 0), + RK3036_PLL_RATE(100000000, 1, 150, 6, 6, 1, 0), + RK3036_PLL_RATE(96000000, 1, 96, 6, 4, 1, 0), +- RK3036_PLL_RATE(78750000, 1, 96, 6, 4, 1, 0), ++ RK3036_PLL_RATE(78750000, 4, 315, 6, 4, 1, 0), + RK3036_PLL_RATE(74250000, 2, 99, 4, 4, 1, 0), + { /* sentinel */ }, + }; +diff --git a/drivers/clk/sunxi-ng/ccu_mmc_timing.c b/drivers/clk/sunxi-ng/ccu_mmc_timing.c +index de33414fc5c28..c6a6ce98ca03a 100644 +--- a/drivers/clk/sunxi-ng/ccu_mmc_timing.c ++++ b/drivers/clk/sunxi-ng/ccu_mmc_timing.c +@@ -43,7 +43,7 @@ int sunxi_ccu_set_mmc_timing_mode(struct clk *clk, bool new_mode) + EXPORT_SYMBOL_GPL(sunxi_ccu_set_mmc_timing_mode); + + /** +- * sunxi_ccu_set_mmc_timing_mode: Get the current MMC clock timing mode ++ * sunxi_ccu_get_mmc_timing_mode: Get the current MMC clock timing mode + * @clk: clock to query + * + * Returns 0 if the clock is in old timing mode, > 0 if it is in +diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c +index e4a5b4d90f833..b448c8d6a16dd 100644 +--- a/drivers/cpufreq/amd-pstate-ut.c ++++ b/drivers/cpufreq/amd-pstate-ut.c +@@ -64,27 +64,9 @@ static struct amd_pstate_ut_struct amd_pstate_ut_cases[] = { + static bool get_shared_mem(void) + { + bool result = false; +- char path[] = "/sys/module/amd_pstate/parameters/shared_mem"; +- char buf[5] = {0}; +- struct file *filp = NULL; +- loff_t pos = 0; +- ssize_t ret; +- +- if (!boot_cpu_has(X86_FEATURE_CPPC)) { +- filp = filp_open(path, O_RDONLY, 0); +- if (IS_ERR(filp)) +- pr_err("%s unable to open %s file!\n", __func__, path); +- else { +- ret = kernel_read(filp, &buf, sizeof(buf), &pos); +- if (ret < 0) +- pr_err("%s read %s file fail ret=%ld!\n", +- __func__, path, (long)ret); +- filp_close(filp, NULL); +- } + +- if ('Y' == *buf) +- result = true; +- } ++ if (!boot_cpu_has(X86_FEATURE_CPPC)) ++ result = true; + + return result; + } +@@ -158,7 +140,7 @@ static void amd_pstate_ut_check_perf(u32 index) + if (ret) { + amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret); +- return; ++ goto skip_test; + } + + nominal_perf = 
cppc_perf.nominal_perf; +@@ -169,7 +151,7 @@ static void amd_pstate_ut_check_perf(u32 index) + if (ret) { + amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret); +- return; ++ goto skip_test; + } + + nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); +@@ -187,7 +169,7 @@ static void amd_pstate_ut_check_perf(u32 index) + nominal_perf, cpudata->nominal_perf, + lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf, + lowest_perf, cpudata->lowest_perf); +- return; ++ goto skip_test; + } + + if (!((highest_perf >= nominal_perf) && +@@ -198,11 +180,15 @@ static void amd_pstate_ut_check_perf(u32 index) + pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n", + __func__, cpu, highest_perf, nominal_perf, + lowest_nonlinear_perf, lowest_perf); +- return; ++ goto skip_test; + } ++ cpufreq_cpu_put(policy); + } + + amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; ++ return; ++skip_test: ++ cpufreq_cpu_put(policy); + } + + /* +@@ -230,14 +216,14 @@ static void amd_pstate_ut_check_freq(u32 index) + pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n", + __func__, cpu, cpudata->max_freq, cpudata->nominal_freq, + cpudata->lowest_nonlinear_freq, cpudata->min_freq); +- return; ++ goto skip_test; + } + + if (cpudata->min_freq != policy->min) { + amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + pr_err("%s cpu%d cpudata_min_freq=%d policy_min=%d, they should be equal!\n", + __func__, cpu, cpudata->min_freq, policy->min); +- return; ++ goto skip_test; + } + + if (cpudata->boost_supported) { +@@ -249,16 +235,20 @@ static void amd_pstate_ut_check_freq(u32 index) + pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n", + __func__, cpu, policy->max, cpudata->max_freq, + cpudata->nominal_freq); +- return; ++ goto skip_test; + } + } else { + amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; + pr_err("%s cpu%d must support boost!\n", __func__, cpu); +- return; ++ goto skip_test; + } ++ cpufreq_cpu_put(policy); + } + + amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS; ++ return; ++skip_test: ++ cpufreq_cpu_put(policy); + } + + static int __init amd_pstate_ut_init(void) +diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c +index 4153150e20db5..f644c5e325fb2 100644 +--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c ++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c +@@ -434,7 +434,11 @@ brcm_avs_get_freq_table(struct device *dev, struct private_data *priv) + if (ret) + return ERR_PTR(ret); + +- table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1, sizeof(*table), ++ /* ++ * We allocate space for the 5 different P-STATES AVS, ++ * plus extra space for a terminating element. 
++ */ ++ table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1 + 1, sizeof(*table), + GFP_KERNEL); + if (!table) + return ERR_PTR(-ENOMEM); +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 285ba51b31f60..c8912756fc06d 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -450,8 +450,10 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy, + policy->cur, + policy->cpuinfo.max_freq); + ++ spin_lock(&policy->transition_lock); + policy->transition_ongoing = false; + policy->transition_task = NULL; ++ spin_unlock(&policy->transition_lock); + + wake_up(&policy->transition_wait); + } +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c +index d51f90f55c05c..fbe3a40987438 100644 +--- a/drivers/cpufreq/intel_pstate.c ++++ b/drivers/cpufreq/intel_pstate.c +@@ -2574,6 +2574,11 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy) + intel_pstate_clear_update_util_hook(policy->cpu); + intel_pstate_hwp_set(policy->cpu); + } ++ /* ++ * policy->cur is never updated with the intel_pstate driver, but it ++ * is used as a stale frequency value. So, keep it within limits. ++ */ ++ policy->cur = policy->min; + + mutex_unlock(&intel_pstate_limits_lock); + +diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c +index d289036beff23..b10f7a1b77f11 100644 +--- a/drivers/cpufreq/powernow-k8.c ++++ b/drivers/cpufreq/powernow-k8.c +@@ -1101,7 +1101,8 @@ static int powernowk8_cpu_exit(struct cpufreq_policy *pol) + + kfree(data->powernow_table); + kfree(data); +- for_each_cpu(cpu, pol->cpus) ++ /* pol->cpus will be empty here, use related_cpus instead. */ ++ for_each_cpu(cpu, pol->related_cpus) + per_cpu(powernow_data, cpu) = NULL; + + return 0; +diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c +index 7e7ab5597d7ac..0590001db6532 100644 +--- a/drivers/cpuidle/cpuidle-pseries.c ++++ b/drivers/cpuidle/cpuidle-pseries.c +@@ -410,13 +410,7 @@ static int __init pseries_idle_probe(void) + return -ENODEV; + + if (firmware_has_feature(FW_FEATURE_SPLPAR)) { +- /* +- * Use local_paca instead of get_lppaca() since +- * preemption is not disabled, and it is not required in +- * fact, since lppaca_ptr does not need to be the value +- * associated to the current CPU, it can be from any CPU. 
+- */ +- if (lppaca_shared_proc(local_paca->lppaca_ptr)) { ++ if (lppaca_shared_proc()) { + cpuidle_state_table = shared_states; + max_idle_state = ARRAY_SIZE(shared_states); + } else { +diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c +index 8867275767101..51b48b57266a6 100644 +--- a/drivers/crypto/caam/caampkc.c ++++ b/drivers/crypto/caam/caampkc.c +@@ -223,7 +223,9 @@ static int caam_rsa_count_leading_zeros(struct scatterlist *sgl, + if (len && *buff) + break; + +- sg_miter_next(&miter); ++ if (!sg_miter_next(&miter)) ++ break; ++ + buff = miter.addr; + len = miter.length; + +diff --git a/drivers/crypto/qat/qat_common/adf_gen4_pm.h b/drivers/crypto/qat/qat_common/adf_gen4_pm.h +index f8f8a9ee29e5b..db4326933d1c0 100644 +--- a/drivers/crypto/qat/qat_common/adf_gen4_pm.h ++++ b/drivers/crypto/qat/qat_common/adf_gen4_pm.h +@@ -35,7 +35,7 @@ + #define ADF_GEN4_PM_MSG_PENDING BIT(0) + #define ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK GENMASK(28, 1) + +-#define ADF_GEN4_PM_DEFAULT_IDLE_FILTER (0x0) ++#define ADF_GEN4_PM_DEFAULT_IDLE_FILTER (0x6) + #define ADF_GEN4_PM_MAX_IDLE_FILTER (0x7) + + int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev); +diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c +index d33006d43f761..4df5330afaa1d 100644 +--- a/drivers/crypto/stm32/stm32-hash.c ++++ b/drivers/crypto/stm32/stm32-hash.c +@@ -565,9 +565,9 @@ static int stm32_hash_dma_send(struct stm32_hash_dev *hdev) + } + + for_each_sg(rctx->sg, tsg, rctx->nents, i) { ++ sg[0] = *tsg; + len = sg->length; + +- sg[0] = *tsg; + if (sg_is_last(sg)) { + if (hdev->dma_mode == 1) { + len = (ALIGN(sg->length, 16) - 16); +@@ -1566,9 +1566,7 @@ static int stm32_hash_remove(struct platform_device *pdev) + if (!hdev) + return -ENODEV; + +- ret = pm_runtime_resume_and_get(hdev->dev); +- if (ret < 0) +- return ret; ++ ret = pm_runtime_get_sync(hdev->dev); + + stm32_hash_unregister_algs(hdev); + +@@ -1584,7 +1582,8 @@ static int stm32_hash_remove(struct platform_device *pdev) + pm_runtime_disable(hdev->dev); + pm_runtime_put_noidle(hdev->dev); + +- clk_disable_unprepare(hdev->clk); ++ if (ret >= 0) ++ clk_disable_unprepare(hdev->clk); + + return 0; + } +diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c +index 8c5f6f7fca112..fe6644f998872 100644 +--- a/drivers/devfreq/devfreq.c ++++ b/drivers/devfreq/devfreq.c +@@ -763,6 +763,7 @@ static void devfreq_dev_release(struct device *dev) + dev_pm_opp_put_opp_table(devfreq->opp_table); + + mutex_destroy(&devfreq->lock); ++ srcu_cleanup_notifier_head(&devfreq->transition_notifier_list); + kfree(devfreq); + } + +diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig +index b64ae02c26f8c..81de833ccd041 100644 +--- a/drivers/dma/Kconfig ++++ b/drivers/dma/Kconfig +@@ -210,6 +210,7 @@ config FSL_DMA + config FSL_EDMA + tristate "Freescale eDMA engine support" + depends on OF ++ depends on HAS_IOMEM + select DMA_ENGINE + select DMA_VIRTUAL_CHANNELS + help +@@ -279,6 +280,7 @@ config IMX_SDMA + + config INTEL_IDMA64 + tristate "Intel integrated DMA 64-bit support" ++ depends on HAS_IOMEM + select DMA_ENGINE + select DMA_VIRTUAL_CHANNELS + help +diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c +index 18cd8151dee02..6e1e14b376e65 100644 +--- a/drivers/dma/idxd/sysfs.c ++++ b/drivers/dma/idxd/sysfs.c +@@ -1426,7 +1426,7 @@ static ssize_t pasid_enabled_show(struct device *dev, + { + struct idxd_device *idxd = confdev_to_idxd(dev); + +- return sysfs_emit(buf, "%u\n", device_pasid_enabled(idxd)); ++ return 
sysfs_emit(buf, "%u\n", device_user_pasid_enabled(idxd)); + } + static DEVICE_ATTR_RO(pasid_enabled); + +diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c +index f093e08c23b16..3b09fdc507e04 100644 +--- a/drivers/dma/ste_dma40.c ++++ b/drivers/dma/ste_dma40.c +@@ -3597,6 +3597,10 @@ static int __init d40_probe(struct platform_device *pdev) + spin_lock_init(&base->lcla_pool.lock); + + base->irq = platform_get_irq(pdev, 0); ++ if (base->irq < 0) { ++ ret = base->irq; ++ goto destroy_cache; ++ } + + ret = request_irq(base->irq, d40_handle_interrupt, 0, D40_NAME, base); + if (ret) { +diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c +index a07bbfd075d06..8ec70da8d84fe 100644 +--- a/drivers/edac/igen6_edac.c ++++ b/drivers/edac/igen6_edac.c +@@ -27,7 +27,7 @@ + #include "edac_mc.h" + #include "edac_module.h" + +-#define IGEN6_REVISION "v2.5" ++#define IGEN6_REVISION "v2.5.1" + + #define EDAC_MOD_STR "igen6_edac" + #define IGEN6_NMI_NAME "igen6_ibecc" +@@ -1216,9 +1216,6 @@ static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + INIT_WORK(&ecclog_work, ecclog_work_cb); + init_irq_work(&ecclog_irq_work, ecclog_irq_work_cb); + +- /* Check if any pending errors before registering the NMI handler */ +- ecclog_handler(); +- + rc = register_err_handler(); + if (rc) + goto fail3; +@@ -1230,6 +1227,9 @@ static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + goto fail4; + } + ++ /* Check if any pending errors before/during the registration of the error handler */ ++ ecclog_handler(); ++ + igen6_debug_setup(); + return 0; + fail4: +diff --git a/drivers/extcon/Kconfig b/drivers/extcon/Kconfig +index 290186e44e6bd..4dd52a6a5b48d 100644 +--- a/drivers/extcon/Kconfig ++++ b/drivers/extcon/Kconfig +@@ -62,6 +62,7 @@ config EXTCON_INTEL_CHT_WC + tristate "Intel Cherrytrail Whiskey Cove PMIC extcon driver" + depends on INTEL_SOC_PMIC_CHTWC + depends on USB_SUPPORT ++ depends on POWER_SUPPLY + select USB_ROLE_SWITCH + help + Say Y here to enable extcon support for charger detection / control +diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c +index f9040bd610812..285fe7ad490d1 100644 +--- a/drivers/firmware/arm_sdei.c ++++ b/drivers/firmware/arm_sdei.c +@@ -1095,3 +1095,22 @@ int sdei_event_handler(struct pt_regs *regs, + return err; + } + NOKPROBE_SYMBOL(sdei_event_handler); ++ ++void sdei_handler_abort(void) ++{ ++ /* ++ * If the crash happened in an SDEI event handler then we need to ++ * finish the handler with the firmware so that we can have working ++ * interrupts in the crash kernel. 
++ */ ++ if (__this_cpu_read(sdei_active_critical_event)) { ++ pr_warn("still in SDEI critical event context, attempting to finish handler.\n"); ++ __sdei_handler_abort(); ++ __this_cpu_write(sdei_active_critical_event, NULL); ++ } ++ if (__this_cpu_read(sdei_active_normal_event)) { ++ pr_warn("still in SDEI normal event context, attempting to finish handler.\n"); ++ __sdei_handler_abort(); ++ __this_cpu_write(sdei_active_normal_event, NULL); ++ } ++} +diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c +index 81cc3d0f6eec1..81c5f94b1be11 100644 +--- a/drivers/firmware/cirrus/cs_dsp.c ++++ b/drivers/firmware/cirrus/cs_dsp.c +@@ -939,7 +939,8 @@ static int cs_dsp_create_control(struct cs_dsp *dsp, + ctl->alg_region.alg == alg_region->alg && + ctl->alg_region.type == alg_region->type) { + if ((!subname && !ctl->subname) || +- (subname && !strncmp(ctl->subname, subname, ctl->subname_len))) { ++ (subname && (ctl->subname_len == subname_len) && ++ !strncmp(ctl->subname, subname, ctl->subname_len))) { + if (!ctl->enabled) + ctl->enabled = 1; + return 0; +diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c +index 33a7811e12c65..4f0152b11a890 100644 +--- a/drivers/firmware/efi/libstub/x86-stub.c ++++ b/drivers/firmware/efi/libstub/x86-stub.c +@@ -61,7 +61,7 @@ preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom) + rom->data.type = SETUP_PCI; + rom->data.len = size - sizeof(struct setup_data); + rom->data.next = 0; +- rom->pcilen = pci->romsize; ++ rom->pcilen = romsize; + *__rom = rom; + + status = efi_call_proto(pci, pci.read, EfiPciIoWidthUint16, +diff --git a/drivers/firmware/meson/meson_sm.c b/drivers/firmware/meson/meson_sm.c +index 77aa5c6398aa6..d081a6312627b 100644 +--- a/drivers/firmware/meson/meson_sm.c ++++ b/drivers/firmware/meson/meson_sm.c +@@ -292,6 +292,8 @@ static int __init meson_sm_probe(struct platform_device *pdev) + return -ENOMEM; + + chip = of_match_device(meson_sm_ids, dev)->data; ++ if (!chip) ++ return -EINVAL; + + if (chip->cmd_shmem_in_base) { + fw->sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base, +diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c +index 6281e7153b475..4c550cfbc086c 100644 +--- a/drivers/firmware/ti_sci.c ++++ b/drivers/firmware/ti_sci.c +@@ -97,7 +97,6 @@ struct ti_sci_desc { + * @node: list head + * @host_id: Host ID + * @users: Number of users of this instance +- * @is_suspending: Flag set to indicate in suspend path. + */ + struct ti_sci_info { + struct device *dev; +@@ -116,7 +115,6 @@ struct ti_sci_info { + u8 host_id; + /* protected by ti_sci_list_mutex */ + int users; +- bool is_suspending; + }; + + #define cl_to_ti_sci_info(c) container_of(c, struct ti_sci_info, cl) +@@ -418,14 +416,14 @@ static inline int ti_sci_do_xfer(struct ti_sci_info *info, + + ret = 0; + +- if (!info->is_suspending) { ++ if (system_state <= SYSTEM_RUNNING) { + /* And we wait for the response. */ + timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms); + if (!wait_for_completion_timeout(&xfer->done, timeout)) + ret = -ETIMEDOUT; + } else { + /* +- * If we are suspending, we cannot use wait_for_completion_timeout ++ * If we are !running, we cannot use wait_for_completion_timeout + * during noirq phase, so we must manually poll the completion. 
+ */ + ret = read_poll_timeout_atomic(try_wait_for_completion, done_state, +@@ -3282,35 +3280,6 @@ static int tisci_reboot_handler(struct notifier_block *nb, unsigned long mode, + return NOTIFY_BAD; + } + +-static void ti_sci_set_is_suspending(struct ti_sci_info *info, bool is_suspending) +-{ +- info->is_suspending = is_suspending; +-} +- +-static int ti_sci_suspend(struct device *dev) +-{ +- struct ti_sci_info *info = dev_get_drvdata(dev); +- /* +- * We must switch operation to polled mode now as drivers and the genpd +- * layer may make late TI SCI calls to change clock and device states +- * from the noirq phase of suspend. +- */ +- ti_sci_set_is_suspending(info, true); +- +- return 0; +-} +- +-static int ti_sci_resume(struct device *dev) +-{ +- struct ti_sci_info *info = dev_get_drvdata(dev); +- +- ti_sci_set_is_suspending(info, false); +- +- return 0; +-} +- +-static DEFINE_SIMPLE_DEV_PM_OPS(ti_sci_pm_ops, ti_sci_suspend, ti_sci_resume); +- + /* Description for K2G */ + static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = { + .default_host_id = 2, +@@ -3519,7 +3488,6 @@ static struct platform_driver ti_sci_driver = { + .driver = { + .name = "ti-sci", + .of_match_table = of_match_ptr(ti_sci_of_match), +- .pm = &ti_sci_pm_ops, + }, + }; + module_platform_driver(ti_sci_driver); +diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c +index 7cec1772820d3..5eccab175e86b 100644 +--- a/drivers/fsi/fsi-master-aspeed.c ++++ b/drivers/fsi/fsi-master-aspeed.c +@@ -454,6 +454,8 @@ static ssize_t cfam_reset_store(struct device *dev, struct device_attribute *att + gpiod_set_value(aspeed->cfam_reset_gpio, 1); + usleep_range(900, 1000); + gpiod_set_value(aspeed->cfam_reset_gpio, 0); ++ usleep_range(900, 1000); ++ opb_writel(aspeed, ctrl_base + FSI_MRESP0, cpu_to_be32(FSI_MRESP_RST_ALL_MASTER)); + mutex_unlock(&aspeed->lock); + trace_fsi_master_aspeed_cfam_reset(false); + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +index e6427a00cf6d6..9aac9e755609d 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +@@ -1212,6 +1212,9 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev) + u16 cmd; + int r; + ++ if (!IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT)) ++ return 0; ++ + /* Bypass for VF */ + if (amdgpu_sriov_vf(adev)) + return 0; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c +index 4e42dcb1950f7..9e3313dd956ae 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c +@@ -554,6 +554,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) + crtc = (struct drm_crtc *)minfo->crtcs[i]; + if (crtc && crtc->base.id == info->mode_crtc.id) { + struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); ++ + ui32 = amdgpu_crtc->crtc_id; + found = 1; + break; +@@ -572,7 +573,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) + if (ret) + return ret; + +- ret = copy_to_user(out, &ip, min((size_t)size, sizeof(ip))); ++ ret = copy_to_user(out, &ip, min_t(size_t, size, sizeof(ip))); + return ret ? -EFAULT : 0; + } + case AMDGPU_INFO_HW_IP_COUNT: { +@@ -718,17 +719,18 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) + ? 
-EFAULT : 0; + } + case AMDGPU_INFO_READ_MMR_REG: { +- unsigned n, alloc_size; ++ unsigned int n, alloc_size; + uint32_t *regs; +- unsigned se_num = (info->read_mmr_reg.instance >> ++ unsigned int se_num = (info->read_mmr_reg.instance >> + AMDGPU_INFO_MMR_SE_INDEX_SHIFT) & + AMDGPU_INFO_MMR_SE_INDEX_MASK; +- unsigned sh_num = (info->read_mmr_reg.instance >> ++ unsigned int sh_num = (info->read_mmr_reg.instance >> + AMDGPU_INFO_MMR_SH_INDEX_SHIFT) & + AMDGPU_INFO_MMR_SH_INDEX_MASK; + + /* set full masks if the userspace set all bits +- * in the bitfields */ ++ * in the bitfields ++ */ + if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK) + se_num = 0xffffffff; + else if (se_num >= AMDGPU_GFX_MAX_SE) +@@ -852,7 +854,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) + return ret; + } + case AMDGPU_INFO_VCE_CLOCK_TABLE: { +- unsigned i; ++ unsigned int i; + struct drm_amdgpu_info_vce_clock_table vce_clk_table = {}; + struct amd_vce_state *vce_state; + +diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c +index de6d10390ab2f..9be6da37032a7 100644 +--- a/drivers/gpu/drm/amd/amdgpu/cik.c ++++ b/drivers/gpu/drm/amd/amdgpu/cik.c +@@ -1574,17 +1574,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) + u16 bridge_cfg2, gpu_cfg2; + u32 max_lw, current_lw, tmp; + +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &bridge_cfg); +- pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL, +- &gpu_cfg); +- +- tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); +- +- tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); + + tmp = RREG32_PCIE(ixPCIE_LC_STATUS1); + max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >> +@@ -1637,21 +1628,14 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev) + msleep(100); + + /* linkctl */ +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, +- tmp16); +- +- pcie_capability_read_word(adev->pdev, +- PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(adev->pdev, +- PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ bridge_cfg & ++ PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ gpu_cfg & ++ PCI_EXP_LNKCTL_HAWD); + + /* linkctl2 */ + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, +diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c +index 8c5fa4b7b68a2..c7cb30efe43de 100644 +--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c +@@ -147,14 +147,15 @@ static int psp_v13_0_wait_for_bootloader(struct psp_context *psp) + int ret; + int retry_loop; + ++ /* Wait for bootloader to signify that it is ready having bit 31 of ++ * C2PMSG_35 set to 1. All other bits are expected to be cleared. ++ * If there is an error in processing command, bits[7:0] will be set. ++ * This is applicable for PSP v13.0.6 and newer. 
++ */ + for (retry_loop = 0; retry_loop < 10; retry_loop++) { +- /* Wait for bootloader to signify that is +- ready having bit 31 of C2PMSG_35 set to 1 */ +- ret = psp_wait_for(psp, +- SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_35), +- 0x80000000, +- 0x80000000, +- false); ++ ret = psp_wait_for( ++ psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_35), ++ 0x80000000, 0xffffffff, false); + + if (ret == 0) + return 0; +diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c +index 7f99e130acd06..fd34c2100bd96 100644 +--- a/drivers/gpu/drm/amd/amdgpu/si.c ++++ b/drivers/gpu/drm/amd/amdgpu/si.c +@@ -2276,17 +2276,8 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) + u16 bridge_cfg2, gpu_cfg2; + u32 max_lw, current_lw, tmp; + +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &bridge_cfg); +- pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL, +- &gpu_cfg); +- +- tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); +- +- tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); + + tmp = RREG32_PCIE(PCIE_LC_STATUS1); + max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; +@@ -2331,21 +2322,14 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev) + + mdelay(100); + +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, +- tmp16); +- +- pcie_capability_read_word(adev->pdev, +- PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(adev->pdev, +- PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ bridge_cfg & ++ PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ gpu_cfg & ++ PCI_EXP_LNKCTL_HAWD); + + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, + &tmp16); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 249b269e2cc53..c8e562dcd99d0 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -5921,8 +5921,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector, + */ + DRM_DEBUG_DRIVER("No preferred mode found\n"); + } else { +- recalculate_timing = amdgpu_freesync_vid_mode && +- is_freesync_video_mode(&mode, aconnector); ++ recalculate_timing = is_freesync_video_mode(&mode, aconnector); + if (recalculate_timing) { + freesync_mode = get_highest_refresh_rate_mode(aconnector, false); + drm_mode_copy(&saved_mode, &mode); +@@ -7016,7 +7015,7 @@ static void amdgpu_dm_connector_add_freesync_modes(struct drm_connector *connect + struct amdgpu_dm_connector *amdgpu_dm_connector = + to_amdgpu_dm_connector(connector); + +- if (!(amdgpu_freesync_vid_mode && edid)) ++ if (!edid) + return; + + if (amdgpu_dm_connector->max_vfreq - amdgpu_dm_connector->min_vfreq > 10) +@@ -7859,10 +7858,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state, + * fast updates. 
+ */ + if (crtc->state->async_flip && +- acrtc_state->update_type != UPDATE_TYPE_FAST) ++ (acrtc_state->update_type != UPDATE_TYPE_FAST || ++ get_mem_type(old_plane_state->fb) != get_mem_type(fb))) + drm_warn_once(state->dev, + "[PLANE:%d:%s] async flip with non-fast update\n", + plane->base.id, plane->name); ++ + bundle->flip_addrs[planes_count].flip_immediate = + crtc->state->async_flip && + acrtc_state->update_type == UPDATE_TYPE_FAST && +@@ -9022,8 +9023,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, + * TODO: Refactor this function to allow this check to work + * in all conditions. + */ +- if (amdgpu_freesync_vid_mode && +- dm_new_crtc_state->stream && ++ if (dm_new_crtc_state->stream && + is_timing_unchanged_for_freesync(new_crtc_state, old_crtc_state)) + goto skip_modeset; + +@@ -9063,7 +9063,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, + } + + /* Now check if we should set freesync video mode */ +- if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream && ++ if (dm_new_crtc_state->stream && + dc_is_stream_unchanged(new_stream, dm_old_crtc_state->stream) && + dc_is_stream_scaling_unchanged(new_stream, dm_old_crtc_state->stream) && + is_timing_unchanged_for_freesync(new_crtc_state, +@@ -9076,7 +9076,7 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, + set_freesync_fixed_config(dm_new_crtc_state); + + goto skip_modeset; +- } else if (amdgpu_freesync_vid_mode && aconnector && ++ } else if (aconnector && + is_freesync_video_mode(&new_crtc_state->mode, + aconnector)) { + struct drm_display_mode *high_mode; +@@ -9815,6 +9815,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, + + /* Remove exiting planes if they are modified */ + for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) { ++ if (old_plane_state->fb && new_plane_state->fb && ++ get_mem_type(old_plane_state->fb) != ++ get_mem_type(new_plane_state->fb)) ++ lock_and_validation_needed = true; ++ + ret = dm_update_plane_state(dc, state, plane, + old_plane_state, + new_plane_state, +@@ -10066,9 +10071,20 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, + struct dm_crtc_state *dm_new_crtc_state = + to_dm_crtc_state(new_crtc_state); + ++ /* ++ * Only allow async flips for fast updates that don't change ++ * the FB pitch, the DCC state, rotation, etc. ++ */ ++ if (new_crtc_state->async_flip && lock_and_validation_needed) { ++ drm_dbg_atomic(crtc->dev, ++ "[CRTC:%d:%s] async flips are only supported for fast updates\n", ++ crtc->base.id, crtc->name); ++ ret = -EINVAL; ++ goto fail; ++ } ++ + dm_new_crtc_state->update_type = lock_and_validation_needed ? +- UPDATE_TYPE_FULL : +- UPDATE_TYPE_FAST; ++ UPDATE_TYPE_FULL : UPDATE_TYPE_FAST; + } + + /* Must be success */ +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +index b9b70f4562c72..1ec643a0d00d2 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c +@@ -406,18 +406,6 @@ static int dm_crtc_helper_atomic_check(struct drm_crtc *crtc, + return -EINVAL; + } + +- /* +- * Only allow async flips for fast updates that don't change the FB +- * pitch, the DCC state, rotation, etc. 
+- */ +- if (crtc_state->async_flip && +- dm_crtc_state->update_type != UPDATE_TYPE_FAST) { +- drm_dbg_atomic(crtc->dev, +- "[CRTC:%d:%s] async flips are only supported for fast updates\n", +- crtc->base.id, crtc->name); +- return -EINVAL; +- } +- + /* In some use cases, like reset, no stream is attached */ + if (!dm_crtc_state->stream) + return 0; +diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c +index 925d6e13620ec..1bbf85defd611 100644 +--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c ++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c +@@ -32,6 +32,7 @@ + + #define MAX_INSTANCE 6 + #define MAX_SEGMENT 6 ++#define SMU_REGISTER_WRITE_RETRY_COUNT 5 + + struct IP_BASE_INSTANCE + { +@@ -134,6 +135,8 @@ static int dcn315_smu_send_msg_with_param( + unsigned int msg_id, unsigned int param) + { + uint32_t result; ++ uint32_t i = 0; ++ uint32_t read_back_data; + + result = dcn315_smu_wait_for_response(clk_mgr, 10, 200000); + +@@ -150,10 +153,19 @@ static int dcn315_smu_send_msg_with_param( + /* Set the parameter register for the SMU message, unit is Mhz */ + REG_WRITE(MP1_SMN_C2PMSG_37, param); + +- /* Trigger the message transaction by writing the message ID */ +- generic_write_indirect_reg(CTX, +- REG_NBIO(RSMU_INDEX), REG_NBIO(RSMU_DATA), +- mmMP1_C2PMSG_3, msg_id); ++ for (i = 0; i < SMU_REGISTER_WRITE_RETRY_COUNT; i++) { ++ /* Trigger the message transaction by writing the message ID */ ++ generic_write_indirect_reg(CTX, ++ REG_NBIO(RSMU_INDEX), REG_NBIO(RSMU_DATA), ++ mmMP1_C2PMSG_3, msg_id); ++ read_back_data = generic_read_indirect_reg(CTX, ++ REG_NBIO(RSMU_INDEX), REG_NBIO(RSMU_DATA), ++ mmMP1_C2PMSG_3); ++ if (read_back_data == msg_id) ++ break; ++ udelay(2); ++ smu_print("SMU msg id write fail %x times. 
\n", i + 1); ++ } + + result = dcn315_smu_wait_for_response(clk_mgr, 10, 200000); + +diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c +index d260eaa1509ed..9378c98d02cfe 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c ++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c +@@ -1813,10 +1813,13 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context) + hws->funcs.edp_backlight_control(edp_link_with_sink, false); + } + /*resume from S3, no vbios posting, no need to power down again*/ ++ clk_mgr_exit_optimized_pwr_state(dc, dc->clk_mgr); ++ + power_down_all_hw_blocks(dc); + disable_vga_and_power_gate_all_controllers(dc); + if (edp_link_with_sink && !keep_edp_vdd_on) + dc->hwss.edp_power_control(edp_link_with_sink, false); ++ clk_mgr_optimize_pwr_state(dc, dc->clk_mgr); + } + bios_set_scratch_acc_mode_change(dc->ctx->dc_bios, 1); + } +diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c +index 6192851c59ed8..51265a812bdc8 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c +@@ -75,6 +75,7 @@ static const struct hw_sequencer_funcs dcn301_funcs = { + .get_hw_state = dcn10_get_hw_state, + .clear_status_bits = dcn10_clear_status_bits, + .wait_for_mpcc_disconnect = dcn10_wait_for_mpcc_disconnect, ++ .edp_backlight_control = dce110_edp_backlight_control, + .edp_power_control = dce110_edp_power_control, + .edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready, + .set_cursor_position = dcn10_set_cursor_position, +diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c +index cef32a1f91cdc..b735e548e26dc 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c +@@ -84,7 +84,8 @@ static enum phyd32clk_clock_source get_phy_mux_symclk( + struct dcn_dccg *dccg_dcn, + enum phyd32clk_clock_source src) + { +- if (dccg_dcn->base.ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) { ++ if (dccg_dcn->base.ctx->asic_id.chip_family == FAMILY_YELLOW_CARP && ++ dccg_dcn->base.ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) { + if (src == PHYD32CLKC) + src = PHYD32CLKF; + if (src == PHYD32CLKD) +diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c +index 8a88605827a84..551a63f7064bb 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c ++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/dcn314_fpu.c +@@ -32,7 +32,7 @@ + #include "dml/display_mode_vba.h" + + struct _vcs_dpi_ip_params_st dcn3_14_ip = { +- .VBlankNomDefaultUS = 800, ++ .VBlankNomDefaultUS = 668, + .gpuvm_enable = 1, + .gpuvm_max_page_table_levels = 1, + .hostvm_enable = 1, +diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c +index 7d613118cb713..8472013ff38a2 100644 +--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c ++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c +@@ -2072,15 +2072,19 @@ static int amdgpu_device_attr_create(struct amdgpu_device *adev, + uint32_t mask, struct list_head *attr_list) + { + int ret = 0; +- struct device_attribute *dev_attr = &attr->dev_attr; +- const char *name = dev_attr->attr.name; + enum amdgpu_device_attr_states attr_states = ATTR_STATE_SUPPORTED; + struct amdgpu_device_attr_entry *attr_entry; ++ struct device_attribute 
*dev_attr; ++ const char *name; + + int (*attr_update)(struct amdgpu_device *adev, struct amdgpu_device_attr *attr, + uint32_t mask, enum amdgpu_device_attr_states *states) = default_attr_update; + +- BUG_ON(!attr); ++ if (!attr) ++ return -EINVAL; ++ ++ dev_attr = &attr->dev_attr; ++ name = dev_attr->attr.name; + + attr_update = attr->attr_update ? attr->attr_update : default_attr_update; + +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c +index f7ac488a3da20..503e844baede2 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c +@@ -1305,7 +1305,7 @@ static ssize_t smu_v13_0_0_get_gpu_metrics(struct smu_context *smu, + gpu_metrics->average_vclk1_frequency = metrics->AverageVclk1Frequency; + gpu_metrics->average_dclk1_frequency = metrics->AverageDclk1Frequency; + +- gpu_metrics->current_gfxclk = metrics->CurrClock[PPCLK_GFXCLK]; ++ gpu_metrics->current_gfxclk = gpu_metrics->average_gfxclk_frequency; + gpu_metrics->current_socclk = metrics->CurrClock[PPCLK_SOCCLK]; + gpu_metrics->current_uclk = metrics->CurrClock[PPCLK_UCLK]; + gpu_metrics->current_vclk0 = metrics->CurrClock[PPCLK_VCLK_0]; +diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c +index f21eb8fb76d87..3b9bd8ecda137 100644 +--- a/drivers/gpu/drm/armada/armada_overlay.c ++++ b/drivers/gpu/drm/armada/armada_overlay.c +@@ -4,6 +4,8 @@ + * Rewritten from the dovefb driver, and Armada510 manuals. + */ + ++#include ++ + #include + #include + #include +@@ -445,8 +447,8 @@ static int armada_overlay_get_property(struct drm_plane *plane, + drm_to_overlay_state(state)->colorkey_ug, + drm_to_overlay_state(state)->colorkey_vb, 0); + } else if (property == priv->colorkey_mode_prop) { +- *val = (drm_to_overlay_state(state)->colorkey_mode & +- CFG_CKMODE_MASK) >> ffs(CFG_CKMODE_MASK); ++ *val = FIELD_GET(CFG_CKMODE_MASK, ++ drm_to_overlay_state(state)->colorkey_mode); + } else if (property == priv->brightness_prop) { + *val = drm_to_overlay_state(state)->brightness + 256; + } else if (property == priv->contrast_prop) { +diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c +index 78b72739e5c3e..9f9874acfb2b7 100644 +--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c ++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c +@@ -786,8 +786,13 @@ static void adv7511_mode_set(struct adv7511 *adv7511, + else + low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE; + +- regmap_update_bits(adv7511->regmap, 0xfb, +- 0x6, low_refresh_rate << 1); ++ if (adv7511->type == ADV7511) ++ regmap_update_bits(adv7511->regmap, 0xfb, ++ 0x6, low_refresh_rate << 1); ++ else ++ regmap_update_bits(adv7511->regmap, 0x4a, ++ 0xc, low_refresh_rate << 2); ++ + regmap_update_bits(adv7511->regmap, 0x17, + 0x60, (vsync_polarity << 6) | (hsync_polarity << 5)); + +diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c +index 213263ad6a064..cf86cc05b7fca 100644 +--- a/drivers/gpu/drm/bridge/analogix/anx7625.c ++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c +@@ -874,11 +874,11 @@ static int anx7625_hdcp_enable(struct anx7625_data *ctx) + } + + /* Read downstream capability */ +- ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, 0x68028, 1, &bcap); ++ ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, DP_AUX_HDCP_BCAPS, 1, &bcap); + if (ret < 0) + return ret; + +- if (!(bcap & 0x01)) { ++ if (!(bcap & 
DP_BCAPS_HDCP_CAPABLE)) { + pr_warn("downstream not support HDCP 1.4, cap(%x).\n", bcap); + return 0; + } +@@ -933,8 +933,8 @@ static void anx7625_dp_start(struct anx7625_data *ctx) + + dev_dbg(dev, "set downstream sink into normal\n"); + /* Downstream sink enter into normal mode */ +- data = 1; +- ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, 0x000600, 1, &data); ++ data = DP_SET_POWER_D0; ++ ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, DP_SET_POWER, 1, &data); + if (ret < 0) + dev_err(dev, "IO error : set sink into normal mode fail\n"); + +@@ -973,8 +973,8 @@ static void anx7625_dp_stop(struct anx7625_data *ctx) + + dev_dbg(dev, "notify downstream enter into standby\n"); + /* Downstream monitor enter into standby mode */ +- data = 2; +- ret |= anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, 0x000600, 1, &data); ++ data = DP_SET_POWER_D3; ++ ret |= anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, DP_SET_POWER, 1, &data); + if (ret < 0) + DRM_DEV_ERROR(dev, "IO error : mute video fail\n"); + +diff --git a/drivers/gpu/drm/bridge/tc358764.c b/drivers/gpu/drm/bridge/tc358764.c +index 53259c12d7778..e0f583a88789d 100644 +--- a/drivers/gpu/drm/bridge/tc358764.c ++++ b/drivers/gpu/drm/bridge/tc358764.c +@@ -176,7 +176,7 @@ static void tc358764_read(struct tc358764 *ctx, u16 addr, u32 *val) + if (ret >= 0) + le32_to_cpus(val); + +- dev_dbg(ctx->dev, "read: %d, addr: %d\n", addr, *val); ++ dev_dbg(ctx->dev, "read: addr=0x%04x data=0x%08x\n", addr, *val); + } + + static void tc358764_write(struct tc358764 *ctx, u16 addr, u32 val) +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c +index f418e0b75772e..0edcf8ceb4a78 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c +@@ -125,9 +125,9 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit) + return; + etnaviv_dump_core = false; + +- mutex_lock(&gpu->mmu_context->lock); ++ mutex_lock(&submit->mmu_context->lock); + +- mmu_size = etnaviv_iommu_dump_size(gpu->mmu_context); ++ mmu_size = etnaviv_iommu_dump_size(submit->mmu_context); + + /* We always dump registers, mmu, ring, hanging cmdbuf and end marker */ + n_obj = 5; +@@ -157,7 +157,7 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit) + iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN | + __GFP_NORETRY); + if (!iter.start) { +- mutex_unlock(&gpu->mmu_context->lock); ++ mutex_unlock(&submit->mmu_context->lock); + dev_warn(gpu->dev, "failed to allocate devcoredump file\n"); + return; + } +@@ -169,18 +169,18 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit) + memset(iter.hdr, 0, iter.data - iter.start); + + etnaviv_core_dump_registers(&iter, gpu); +- etnaviv_core_dump_mmu(&iter, gpu->mmu_context, mmu_size); ++ etnaviv_core_dump_mmu(&iter, submit->mmu_context, mmu_size); + etnaviv_core_dump_mem(&iter, ETDUMP_BUF_RING, gpu->buffer.vaddr, + gpu->buffer.size, + etnaviv_cmdbuf_get_va(&gpu->buffer, +- &gpu->mmu_context->cmdbuf_mapping)); ++ &submit->mmu_context->cmdbuf_mapping)); + + etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD, + submit->cmdbuf.vaddr, submit->cmdbuf.size, + etnaviv_cmdbuf_get_va(&submit->cmdbuf, +- &gpu->mmu_context->cmdbuf_mapping)); ++ &submit->mmu_context->cmdbuf_mapping)); + +- mutex_unlock(&gpu->mmu_context->lock); ++ mutex_unlock(&submit->mmu_context->lock); + + /* Reserve space for the bomap */ + if (n_bomap_pages) { +diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c +index 29ee0814bccc8..68050409dd26c 100644 +--- 
a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c ++++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c +@@ -7,6 +7,7 @@ + #include + #include + #include ++#include + + #include + #include +diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c +index 007af69e5026f..4c249939a6c3b 100644 +--- a/drivers/gpu/drm/mediatek/mtk_dp.c ++++ b/drivers/gpu/drm/mediatek/mtk_dp.c +@@ -1588,7 +1588,9 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp) + u8 val; + ssize_t ret; + +- drm_dp_read_dpcd_caps(&mtk_dp->aux, mtk_dp->rx_cap); ++ ret = drm_dp_read_dpcd_caps(&mtk_dp->aux, mtk_dp->rx_cap); ++ if (ret < 0) ++ return ret; + + if (drm_dp_tps4_supported(mtk_dp->rx_cap)) + mtk_dp->train_info.channel_eq_pattern = DP_TRAINING_PATTERN_4; +@@ -1615,10 +1617,13 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp) + return ret == 0 ? -EIO : ret; + } + +- if (val) +- drm_dp_dpcd_writeb(&mtk_dp->aux, +- DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0, +- val); ++ if (val) { ++ ret = drm_dp_dpcd_writeb(&mtk_dp->aux, ++ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0, ++ val); ++ if (ret < 0) ++ return ret; ++ } + } + + return 0; +diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c +index 5071f1263216b..14ddfe3a6be77 100644 +--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c ++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c +@@ -115,10 +115,9 @@ static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt * + dma_addr_t dma_addr; + + pkt->va_base = kzalloc(size, GFP_KERNEL); +- if (!pkt->va_base) { +- kfree(pkt); ++ if (!pkt->va_base) + return -ENOMEM; +- } ++ + pkt->buf_size = size; + pkt->cl = (void *)client; + +@@ -128,7 +127,6 @@ static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt * + if (dma_mapping_error(dev, dma_addr)) { + dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size); + kfree(pkt->va_base); +- kfree(pkt); + return -ENOMEM; + } + +@@ -144,7 +142,6 @@ static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt) + dma_unmap_single(client->chan->mbox->dev, pkt->pa_base, pkt->buf_size, + DMA_TO_DEVICE); + kfree(pkt->va_base); +- kfree(pkt); + } + #endif + +diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c +index 6c204ccfb9ece..1d0374a577a5e 100644 +--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c ++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c +@@ -242,7 +242,11 @@ int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) + + mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP, + pgprot_writecombine(PAGE_KERNEL)); +- ++ if (!mtk_gem->kvaddr) { ++ kfree(sgt); ++ kfree(mtk_gem->pages); ++ return -ENOMEM; ++ } + out: + kfree(sgt); + iosys_map_set_vaddr(map, mtk_gem->kvaddr); +diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +index 6c9a747eb4ad5..2428d6ac5fe96 100644 +--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +@@ -521,6 +521,10 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) + gpu->perfcntrs = perfcntrs; + gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs); + ++ ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1); ++ if (ret) ++ goto fail; ++ + if (adreno_is_a20x(adreno_gpu)) + adreno_gpu->registers = a200_registers; + else if (adreno_is_a225(adreno_gpu)) +@@ -528,10 +532,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) + else + adreno_gpu->registers = a220_registers; + +- ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1); +- if (ret) +- goto fail; +- + 
if (!gpu->aspace) { + dev_err(dev->dev, "No memory protection without MMU\n"); + if (!allow_vram_carveout) { +diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +index 62f6ff6abf410..42c7e378d504d 100644 +--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c ++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +@@ -460,7 +460,8 @@ static int dpu_encoder_phys_wb_wait_for_commit_done( + wait_info.atomic_cnt = &phys_enc->pending_kickoff_cnt; + wait_info.timeout_ms = KICKOFF_TIMEOUT_MS; + +- ret = dpu_encoder_helper_wait_for_irq(phys_enc, INTR_IDX_WB_DONE, ++ ret = dpu_encoder_helper_wait_for_irq(phys_enc, ++ phys_enc->irq[INTR_IDX_WB_DONE], + dpu_encoder_phys_wb_done_irq, &wait_info); + if (ret == -ETIMEDOUT) + _dpu_encoder_phys_wb_handle_wbdone_timeout(phys_enc); +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +index bd2c4ac456017..0d5ff03cb0910 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +@@ -130,8 +130,7 @@ static void mdp5_plane_destroy_state(struct drm_plane *plane, + { + struct mdp5_plane_state *pstate = to_mdp5_plane_state(state); + +- if (state->fb) +- drm_framebuffer_put(state->fb); ++ __drm_atomic_helper_plane_destroy_state(state); + + kfree(pstate); + } +diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c +index acfe1b31e0792..add72bbc28b17 100644 +--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c ++++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c +@@ -192,5 +192,5 @@ void msm_disp_snapshot_add_block(struct msm_disp_state *disp_state, u32 len, + new_blk->base_addr = base_addr; + + msm_disp_state_dump_regs(&new_blk->state, new_blk->size, base_addr); +- list_add(&new_blk->node, &disp_state->blocks); ++ list_add_tail(&new_blk->node, &disp_state->blocks); + } +diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c +index 5e067ba7e5fba..0e8622ccd3a0f 100644 +--- a/drivers/gpu/drm/panel/panel-simple.c ++++ b/drivers/gpu/drm/panel/panel-simple.c +@@ -1159,7 +1159,9 @@ static const struct panel_desc auo_t215hvn01 = { + .delay = { + .disable = 5, + .unprepare = 1000, +- } ++ }, ++ .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, ++ .connector_type = DRM_MODE_CONNECTOR_LVDS, + }; + + static const struct drm_display_mode avic_tm070ddh03_mode = { +diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c +index 5819737c21c67..a6f3c811ceb8e 100644 +--- a/drivers/gpu/drm/radeon/cik.c ++++ b/drivers/gpu/drm/radeon/cik.c +@@ -9534,17 +9534,8 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) + u16 bridge_cfg2, gpu_cfg2; + u32 max_lw, current_lw, tmp; + +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &bridge_cfg); +- pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL, +- &gpu_cfg); +- +- tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); +- +- tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); + + tmp = RREG32_PCIE_PORT(PCIE_LC_STATUS1); + max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; +@@ -9591,21 +9582,14 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev) + 
msleep(100); + + /* linkctl */ +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, +- tmp16); +- +- pcie_capability_read_word(rdev->pdev, +- PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(rdev->pdev, +- PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ bridge_cfg & ++ PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ gpu_cfg & ++ PCI_EXP_LNKCTL_HAWD); + + /* linkctl2 */ + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, +diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c +index 8d5e4b25609d5..a91012447b56e 100644 +--- a/drivers/gpu/drm/radeon/si.c ++++ b/drivers/gpu/drm/radeon/si.c +@@ -7131,17 +7131,8 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) + u16 bridge_cfg2, gpu_cfg2; + u32 max_lw, current_lw, tmp; + +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &bridge_cfg); +- pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL, +- &gpu_cfg); +- +- tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16); +- +- tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD; +- pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD); + + tmp = RREG32_PCIE(PCIE_LC_STATUS1); + max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT; +@@ -7188,22 +7179,14 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev) + msleep(100); + + /* linkctl */ +- pcie_capability_read_word(root, PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(root, +- PCI_EXP_LNKCTL, +- tmp16); +- +- pcie_capability_read_word(rdev->pdev, +- PCI_EXP_LNKCTL, +- &tmp16); +- tmp16 &= ~PCI_EXP_LNKCTL_HAWD; +- tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD); +- pcie_capability_write_word(rdev->pdev, +- PCI_EXP_LNKCTL, +- tmp16); ++ pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ bridge_cfg & ++ PCI_EXP_LNKCTL_HAWD); ++ pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_HAWD, ++ gpu_cfg & ++ PCI_EXP_LNKCTL_HAWD); + + /* linkctl2 */ + pcie_capability_read_word(root, PCI_EXP_LNKCTL2, +diff --git a/drivers/gpu/drm/tegra/dpaux.c b/drivers/gpu/drm/tegra/dpaux.c +index 7dc681e2ee90b..d773ef4854188 100644 +--- a/drivers/gpu/drm/tegra/dpaux.c ++++ b/drivers/gpu/drm/tegra/dpaux.c +@@ -468,7 +468,7 @@ static int tegra_dpaux_probe(struct platform_device *pdev) + + dpaux->irq = platform_get_irq(pdev, 0); + if (dpaux->irq < 0) +- return -ENXIO; ++ return dpaux->irq; + + if (!pdev->dev.pm_domain) { + dpaux->rst = devm_reset_control_get(&pdev->dev, "dpaux"); +diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c +index e62f4d16b2c6b..7e2b0e2241358 100644 +--- a/drivers/gpu/drm/tiny/repaper.c ++++ b/drivers/gpu/drm/tiny/repaper.c +@@ -533,7 +533,7 @@ static int repaper_fb_dirty(struct drm_framebuffer *fb) + DRM_DEBUG("Flushing [FB:%d] st=%ums\n", fb->base.id, + epd->factored_stage_time); + +- buf = kmalloc_array(fb->width, fb->height, GFP_KERNEL); ++ buf = kmalloc(fb->width * 
fb->height / 8, GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto out_exit; +diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c +index 1de2d927c32b0..fcaa958d841c9 100644 +--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c ++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c +@@ -201,7 +201,9 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev) + dpsub->dev = &pdev->dev; + platform_set_drvdata(pdev, dpsub); + +- dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT)); ++ ret = dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT)); ++ if (ret) ++ return ret; + + /* Try the reserved memory. Proceed if there's none. */ + of_reserved_mem_device_init(&pdev->dev); +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c +index 3acaaca888acd..77ee5e01e6111 100644 +--- a/drivers/hid/hid-input.c ++++ b/drivers/hid/hid-input.c +@@ -961,6 +961,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel + return; + + case 0x3c: /* Invert */ ++ device->quirks &= ~HID_QUIRK_NOINVERT; + map_key_clear(BTN_TOOL_RUBBER); + break; + +@@ -986,9 +987,13 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel + case 0x45: /* ERASER */ + /* + * This event is reported when eraser tip touches the surface. +- * Actual eraser (BTN_TOOL_RUBBER) is set by Invert usage when +- * tool gets in proximity. ++ * Actual eraser (BTN_TOOL_RUBBER) is set and released either ++ * by Invert if tool reports proximity or by Eraser directly. + */ ++ if (!test_bit(BTN_TOOL_RUBBER, input->keybit)) { ++ device->quirks |= HID_QUIRK_NOINVERT; ++ set_bit(BTN_TOOL_RUBBER, input->keybit); ++ } + map_key_clear(BTN_TOUCH); + break; + +@@ -1532,6 +1537,15 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct + else if (report->tool != BTN_TOOL_RUBBER) + /* value is off, tool is not rubber, ignore */ + return; ++ else if (*quirks & HID_QUIRK_NOINVERT && ++ !test_bit(BTN_TOUCH, input->key)) { ++ /* ++ * There is no invert to release the tool, let hid_input ++ * send BTN_TOUCH with scancode and release the tool after. ++ */ ++ hid_report_release_tool(report, input, BTN_TOOL_RUBBER); ++ return; ++ } + + /* let hid-input set BTN_TOUCH */ + break; +diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c +index c358778e070bc..08768e5accedc 100644 +--- a/drivers/hid/hid-logitech-dj.c ++++ b/drivers/hid/hid-logitech-dj.c +@@ -1285,6 +1285,9 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev, + * 50 msec should gives enough time to the receiver to be ready. 
+ */ + msleep(50); ++ ++ if (retval) ++ return retval; + } + + /* +@@ -1306,7 +1309,7 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev, + buf[5] = 0x09; + buf[6] = 0x00; + +- hid_hw_raw_request(hdev, REPORT_ID_HIDPP_SHORT, buf, ++ retval = hid_hw_raw_request(hdev, REPORT_ID_HIDPP_SHORT, buf, + HIDPP_REPORT_SHORT_LENGTH, HID_OUTPUT_REPORT, + HID_REQ_SET_REPORT); + +diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c +index e31be0cb8b850..521b2ffb42449 100644 +--- a/drivers/hid/hid-multitouch.c ++++ b/drivers/hid/hid-multitouch.c +@@ -1594,7 +1594,6 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app) + static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi) + { + struct mt_device *td = hid_get_drvdata(hdev); +- char *name; + const char *suffix = NULL; + struct mt_report_data *rdata; + struct mt_application *mt_application = NULL; +@@ -1645,15 +1644,9 @@ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi) + break; + } + +- if (suffix) { +- name = devm_kzalloc(&hi->input->dev, +- strlen(hdev->name) + strlen(suffix) + 2, +- GFP_KERNEL); +- if (name) { +- sprintf(name, "%s %s", hdev->name, suffix); +- hi->input->name = name; +- } +- } ++ if (suffix) ++ hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL, ++ "%s %s", hdev->name, suffix); + + return 0; + } +diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c +index bfbb51f8b5beb..39114d5c55a0e 100644 +--- a/drivers/hid/hid-uclogic-core.c ++++ b/drivers/hid/hid-uclogic-core.c +@@ -85,10 +85,8 @@ static int uclogic_input_configured(struct hid_device *hdev, + { + struct uclogic_drvdata *drvdata = hid_get_drvdata(hdev); + struct uclogic_params *params = &drvdata->params; +- char *name; + const char *suffix = NULL; + struct hid_field *field; +- size_t len; + size_t i; + const struct uclogic_params_frame *frame; + +@@ -146,14 +144,9 @@ static int uclogic_input_configured(struct hid_device *hdev, + } + } + +- if (suffix) { +- len = strlen(hdev->name) + 2 + strlen(suffix); +- name = devm_kzalloc(&hi->input->dev, len, GFP_KERNEL); +- if (name) { +- snprintf(name, len, "%s %s", hdev->name, suffix); +- hi->input->name = name; +- } +- } ++ if (suffix) ++ hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL, ++ "%s %s", hdev->name, suffix); + + return 0; + } +diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c +index b03cb7ae7fd38..e9c3f1e826baa 100644 +--- a/drivers/hv/vmbus_drv.c ++++ b/drivers/hv/vmbus_drv.c +@@ -2452,7 +2452,8 @@ static int vmbus_acpi_add(struct acpi_device *device) + * Some ancestor of the vmbus acpi device (Gen1 or Gen2 + * firmware) is the VMOD that has the mmio ranges. Get that. 
+ */ +- for (ancestor = acpi_dev_parent(device); ancestor; ++ for (ancestor = acpi_dev_parent(device); ++ ancestor && ancestor->handle != ACPI_ROOT_OBJECT; + ancestor = acpi_dev_parent(ancestor)) { + result = acpi_walk_resources(ancestor->handle, METHOD_NAME__CRS, + vmbus_walk_resources, NULL); +diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c +index 7d5f7441aceb1..b9a93ee9c2364 100644 +--- a/drivers/hwmon/tmp513.c ++++ b/drivers/hwmon/tmp513.c +@@ -434,7 +434,7 @@ static umode_t tmp51x_is_visible(const void *_data, + + switch (type) { + case hwmon_temp: +- if (data->id == tmp512 && channel == 4) ++ if (data->id == tmp512 && channel == 3) + return 0; + switch (attr) { + case hwmon_temp_input: +diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c +index 4c4cbd1f72584..3f207999377f0 100644 +--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c ++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c +@@ -428,7 +428,7 @@ static int tmc_set_etf_buffer(struct coresight_device *csdev, + return -EINVAL; + + /* wrap head around to the amount of space we have */ +- head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1); ++ head = handle->head & (((unsigned long)buf->nr_pages << PAGE_SHIFT) - 1); + + /* find the page to write to */ + buf->cur = head / PAGE_SIZE; +diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c +index 368f2e5a86278..1be0e5e0e80b2 100644 +--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c ++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c +@@ -45,7 +45,8 @@ struct etr_perf_buffer { + }; + + /* Convert the perf index to an offset within the ETR buffer */ +-#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT)) ++#define PERF_IDX2OFF(idx, buf) \ ++ ((idx) % ((unsigned long)(buf)->nr_pages << PAGE_SHIFT)) + + /* Lower limit for ETR hardware buffer */ + #define TMC_ETR_PERF_MIN_BUF_SIZE SZ_1M +@@ -1249,7 +1250,7 @@ alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event, + * than the size requested via sysfs. 
+ */ + if ((nr_pages << PAGE_SHIFT) > drvdata->size) { +- etr_buf = tmc_alloc_etr_buf(drvdata, (nr_pages << PAGE_SHIFT), ++ etr_buf = tmc_alloc_etr_buf(drvdata, ((ssize_t)nr_pages << PAGE_SHIFT), + 0, node, NULL); + if (!IS_ERR(etr_buf)) + goto done; +diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h +index 66959557cf398..946aab12f9807 100644 +--- a/drivers/hwtracing/coresight/coresight-tmc.h ++++ b/drivers/hwtracing/coresight/coresight-tmc.h +@@ -325,7 +325,7 @@ ssize_t tmc_sg_table_get_data(struct tmc_sg_table *sg_table, + static inline unsigned long + tmc_sg_table_buf_size(struct tmc_sg_table *sg_table) + { +- return sg_table->data_pages.nr_pages << PAGE_SHIFT; ++ return (unsigned long)sg_table->data_pages.nr_pages << PAGE_SHIFT; + } + + struct coresight_device *tmc_etr_get_catu_device(struct tmc_drvdata *drvdata); +diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c +index 1fc4fd79a1c69..925f6c9cecff4 100644 +--- a/drivers/hwtracing/coresight/coresight-trbe.c ++++ b/drivers/hwtracing/coresight/coresight-trbe.c +@@ -1223,6 +1223,16 @@ static void arm_trbe_enable_cpu(void *info) + enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE); + } + ++static void arm_trbe_disable_cpu(void *info) ++{ ++ struct trbe_drvdata *drvdata = info; ++ struct trbe_cpudata *cpudata = this_cpu_ptr(drvdata->cpudata); ++ ++ disable_percpu_irq(drvdata->irq); ++ trbe_reset_local(cpudata); ++} ++ ++ + static void arm_trbe_register_coresight_cpu(struct trbe_drvdata *drvdata, int cpu) + { + struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu); +@@ -1324,18 +1334,12 @@ cpu_clear: + cpumask_clear_cpu(cpu, &drvdata->supported_cpus); + } + +-static void arm_trbe_remove_coresight_cpu(void *info) ++static void arm_trbe_remove_coresight_cpu(struct trbe_drvdata *drvdata, int cpu) + { +- int cpu = smp_processor_id(); +- struct trbe_drvdata *drvdata = info; +- struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu); + struct coresight_device *trbe_csdev = coresight_get_percpu_sink(cpu); + +- disable_percpu_irq(drvdata->irq); +- trbe_reset_local(cpudata); + if (trbe_csdev) { + coresight_unregister(trbe_csdev); +- cpudata->drvdata = NULL; + coresight_set_percpu_sink(cpu, NULL); + } + } +@@ -1364,8 +1368,10 @@ static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata) + { + int cpu; + +- for_each_cpu(cpu, &drvdata->supported_cpus) +- smp_call_function_single(cpu, arm_trbe_remove_coresight_cpu, drvdata, 1); ++ for_each_cpu(cpu, &drvdata->supported_cpus) { ++ smp_call_function_single(cpu, arm_trbe_disable_cpu, drvdata, 1); ++ arm_trbe_remove_coresight_cpu(drvdata, cpu); ++ } + free_percpu(drvdata->cpudata); + return 0; + } +@@ -1404,12 +1410,8 @@ static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node) + { + struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node); + +- if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) { +- struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu); +- +- disable_percpu_irq(drvdata->irq); +- trbe_reset_local(cpudata); +- } ++ if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) ++ arm_trbe_disable_cpu(drvdata); + return 0; + } + +diff --git a/drivers/i2c/i2c-core-of.c b/drivers/i2c/i2c-core-of.c +index 3ed74aa4b44bb..1073f82d5dd47 100644 +--- a/drivers/i2c/i2c-core-of.c ++++ b/drivers/i2c/i2c-core-of.c +@@ -244,6 +244,11 @@ static int of_i2c_notify(struct notifier_block *nb, unsigned long action, + return 
NOTIFY_OK; + } + ++ /* ++ * Clear the flag before adding the device so that fw_devlink ++ * doesn't skip adding consumers to this device. ++ */ ++ rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; + client = of_i2c_register_device(adap, rd->dn); + if (IS_ERR(client)) { + dev_err(&adap->dev, "failed to create client for '%pOF'\n", +diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c +index d47360f8a1f36..4eebf15f685a3 100644 +--- a/drivers/i3c/master/svc-i3c-master.c ++++ b/drivers/i3c/master/svc-i3c-master.c +@@ -782,6 +782,10 @@ static int svc_i3c_master_do_daa_locked(struct svc_i3c_master *master, + */ + break; + } else if (SVC_I3C_MSTATUS_NACKED(reg)) { ++ /* No I3C devices attached */ ++ if (dev_nb == 0) ++ break; ++ + /* + * A slave device nacked the address, this is + * allowed only once, DAA will be stopped and +@@ -1251,11 +1255,17 @@ static int svc_i3c_master_send_ccc_cmd(struct i3c_master_controller *m, + { + struct svc_i3c_master *master = to_svc_i3c_master(m); + bool broadcast = cmd->id < 0x80; ++ int ret; + + if (broadcast) +- return svc_i3c_master_send_bdcast_ccc_cmd(master, cmd); ++ ret = svc_i3c_master_send_bdcast_ccc_cmd(master, cmd); + else +- return svc_i3c_master_send_direct_ccc_cmd(master, cmd); ++ ret = svc_i3c_master_send_direct_ccc_cmd(master, cmd); ++ ++ if (ret) ++ cmd->err = I3C_ERROR_M2; ++ ++ return ret; + } + + static int svc_i3c_master_priv_xfers(struct i3c_dev_desc *dev, +diff --git a/drivers/iio/accel/adxl313_i2c.c b/drivers/iio/accel/adxl313_i2c.c +index 99cc7fc294882..68785bd3ef2f0 100644 +--- a/drivers/iio/accel/adxl313_i2c.c ++++ b/drivers/iio/accel/adxl313_i2c.c +@@ -40,8 +40,8 @@ static const struct regmap_config adxl31x_i2c_regmap_config[] = { + + static const struct i2c_device_id adxl313_i2c_id[] = { + { .name = "adxl312", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL312] }, +- { .name = "adxl313", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL312] }, +- { .name = "adxl314", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL312] }, ++ { .name = "adxl313", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL313] }, ++ { .name = "adxl314", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL314] }, + { } + }; + +diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c +index 999da9c798668..381aa57976417 100644 +--- a/drivers/infiniband/core/uverbs_std_types_counters.c ++++ b/drivers/infiniband/core/uverbs_std_types_counters.c +@@ -107,6 +107,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)( + return ret; + + uattr = uverbs_attr_get(attrs, UVERBS_ATTR_READ_COUNTERS_BUFF); ++ if (IS_ERR(uattr)) ++ return PTR_ERR(uattr); + read_attr.ncounters = uattr->ptr_attr.len / sizeof(u64); + read_attr.counters_buff = uverbs_zalloc( + attrs, array_size(read_attr.ncounters, sizeof(u64))); +diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c +index f9526a4c75b26..90d5f1a96f3e5 100644 +--- a/drivers/infiniband/hw/efa/efa_verbs.c ++++ b/drivers/infiniband/hw/efa/efa_verbs.c +@@ -443,12 +443,12 @@ int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) + + ibdev_dbg(&dev->ibdev, "Destroy qp[%u]\n", ibqp->qp_num); + +- efa_qp_user_mmap_entries_remove(qp); +- + err = efa_destroy_qp_handle(dev, qp->qp_handle); + if (err) + return err; + ++ efa_qp_user_mmap_entries_remove(qp); ++ + if (qp->rq_cpu_addr) { + ibdev_dbg(&dev->ibdev, + "qp->cpu_addr[0x%p] freed: size[%lu], dma[%pad]\n", +@@ -1007,8 
+1007,8 @@ int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) + "Destroy cq[%d] virt[0x%p] freed: size[%lu], dma[%pad]\n", + cq->cq_idx, cq->cpu_addr, cq->size, &cq->dma_addr); + +- efa_cq_user_mmap_entries_remove(cq); + efa_destroy_cq_idx(dev, cq->cq_idx); ++ efa_cq_user_mmap_entries_remove(cq); + if (cq->eq) { + xa_erase(&dev->cqs_xa, cq->cq_idx); + synchronize_irq(cq->eq->irq.irqn); +diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h +index f701cc86896b3..1112afa0af552 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_device.h ++++ b/drivers/infiniband/hw/hns/hns_roce_device.h +@@ -97,6 +97,7 @@ + #define HNS_ROCE_CQ_BANK_NUM 4 + + #define CQ_BANKID_SHIFT 2 ++#define CQ_BANKID_MASK GENMASK(1, 0) + + enum { + SERV_TYPE_RC, +diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +index 34a270b6891a9..33980485ef5ba 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c ++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +@@ -757,7 +757,8 @@ out: + qp->sq.head += nreq; + qp->next_sge = sge_idx; + +- if (nreq == 1 && (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE)) ++ if (nreq == 1 && !ret && ++ (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE)) + write_dwqe(hr_dev, qp, wqe); + else + update_sq_db(hr_dev, qp); +@@ -6864,14 +6865,14 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle) + ret = hns_roce_init(hr_dev); + if (ret) { + dev_err(hr_dev->dev, "RoCE Engine init failed!\n"); +- goto error_failed_cfg; ++ goto error_failed_roce_init; + } + + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { + ret = free_mr_init(hr_dev); + if (ret) { + dev_err(hr_dev->dev, "failed to init free mr!\n"); +- goto error_failed_roce_init; ++ goto error_failed_free_mr_init; + } + } + +@@ -6879,10 +6880,10 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle) + + return 0; + +-error_failed_roce_init: ++error_failed_free_mr_init: + hns_roce_exit(hr_dev); + +-error_failed_cfg: ++error_failed_roce_init: + kfree(hr_dev->priv); + + error_failed_kzalloc: +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c +index 946ba1109e878..da1b33d818d82 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c +@@ -219,6 +219,7 @@ static int hns_roce_query_port(struct ib_device *ib_dev, u32 port_num, + unsigned long flags; + enum ib_mtu mtu; + u32 port; ++ int ret; + + port = port_num - 1; + +@@ -231,8 +232,10 @@ static int hns_roce_query_port(struct ib_device *ib_dev, u32 port_num, + IB_PORT_BOOT_MGMT_SUP; + props->max_msg_sz = HNS_ROCE_MAX_MSG_LEN; + props->pkey_tbl_len = 1; +- props->active_width = IB_WIDTH_4X; +- props->active_speed = 1; ++ ret = ib_get_eth_speed(ib_dev, port_num, &props->active_speed, ++ &props->active_width); ++ if (ret) ++ ibdev_warn(ib_dev, "failed to get speed, ret = %d.\n", ret); + + spin_lock_irqsave(&hr_dev->iboe.lock, flags); + +diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c +index 0ae335fb205ca..7a95f8677a02c 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_qp.c ++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c +@@ -170,14 +170,29 @@ static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp, + } + } + +-static u8 get_least_load_bankid_for_qp(struct hns_roce_bank *bank) ++static u8 get_affinity_cq_bank(u8 qp_bank) + { +- u32 least_load = bank[0].inuse; ++ return (qp_bank >> 1) & CQ_BANKID_MASK; ++} ++ ++static u8 
get_least_load_bankid_for_qp(struct ib_qp_init_attr *init_attr, ++ struct hns_roce_bank *bank) ++{ ++#define INVALID_LOAD_QPNUM 0xFFFFFFFF ++ struct ib_cq *scq = init_attr->send_cq; ++ u32 least_load = INVALID_LOAD_QPNUM; ++ unsigned long cqn = 0; + u8 bankid = 0; + u32 bankcnt; + u8 i; + +- for (i = 1; i < HNS_ROCE_QP_BANK_NUM; i++) { ++ if (scq) ++ cqn = to_hr_cq(scq)->cqn; ++ ++ for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++) { ++ if (scq && (get_affinity_cq_bank(i) != (cqn & CQ_BANKID_MASK))) ++ continue; ++ + bankcnt = bank[i].inuse; + if (bankcnt < least_load) { + least_load = bankcnt; +@@ -209,7 +224,8 @@ static int alloc_qpn_with_bankid(struct hns_roce_bank *bank, u8 bankid, + + return 0; + } +-static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp) ++static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp, ++ struct ib_qp_init_attr *init_attr) + { + struct hns_roce_qp_table *qp_table = &hr_dev->qp_table; + unsigned long num = 0; +@@ -220,7 +236,7 @@ static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp) + num = 1; + } else { + mutex_lock(&qp_table->bank_mutex); +- bankid = get_least_load_bankid_for_qp(qp_table->bank); ++ bankid = get_least_load_bankid_for_qp(init_attr, qp_table->bank); + + ret = alloc_qpn_with_bankid(&qp_table->bank[bankid], bankid, + &num); +@@ -1146,7 +1162,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev, + goto err_buf; + } + +- ret = alloc_qpn(hr_dev, hr_qp); ++ ret = alloc_qpn(hr_dev, hr_qp, init_attr); + if (ret) { + ibdev_err(ibdev, "failed to alloc QPN, ret = %d.\n", ret); + goto err_qpn; +diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c +index 6544c9c60b7db..d98bfb83c3b4b 100644 +--- a/drivers/infiniband/hw/irdma/ctrl.c ++++ b/drivers/infiniband/hw/irdma/ctrl.c +@@ -1061,6 +1061,9 @@ static int irdma_sc_alloc_stag(struct irdma_sc_dev *dev, + u64 hdr; + enum irdma_page_size page_size; + ++ if (!info->total_len && !info->all_memory) ++ return -EINVAL; ++ + if (info->page_size == 0x40000000) + page_size = IRDMA_PAGE_SIZE_1G; + else if (info->page_size == 0x200000) +@@ -1126,6 +1129,9 @@ static int irdma_sc_mr_reg_non_shared(struct irdma_sc_dev *dev, + u8 addr_type; + enum irdma_page_size page_size; + ++ if (!info->total_len && !info->all_memory) ++ return -EINVAL; ++ + if (info->page_size == 0x40000000) + page_size = IRDMA_PAGE_SIZE_1G; + else if (info->page_size == 0x200000) +diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h +index e64205839d039..9cbe64311f985 100644 +--- a/drivers/infiniband/hw/irdma/main.h ++++ b/drivers/infiniband/hw/irdma/main.h +@@ -236,7 +236,7 @@ struct irdma_qv_info { + + struct irdma_qvlist_info { + u32 num_vectors; +- struct irdma_qv_info qv_info[1]; ++ struct irdma_qv_info qv_info[]; + }; + + struct irdma_gen_ops { +diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h +index d6cb94dc744c5..1c7cbf7c67bed 100644 +--- a/drivers/infiniband/hw/irdma/type.h ++++ b/drivers/infiniband/hw/irdma/type.h +@@ -1015,6 +1015,7 @@ struct irdma_allocate_stag_info { + bool remote_access:1; + bool use_hmc_fcn_index:1; + bool use_pf_rid:1; ++ bool all_memory:1; + u8 hmc_fcn_index; + }; + +@@ -1042,6 +1043,7 @@ struct irdma_reg_ns_stag_info { + bool use_hmc_fcn_index:1; + u8 hmc_fcn_index; + bool use_pf_rid:1; ++ bool all_memory:1; + }; + + struct irdma_fast_reg_stag_info { +diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c +index 
6a8bb6ed4bf43..3b8b2341981ea 100644 +--- a/drivers/infiniband/hw/irdma/verbs.c ++++ b/drivers/infiniband/hw/irdma/verbs.c +@@ -2557,7 +2557,8 @@ static int irdma_hw_alloc_stag(struct irdma_device *iwdev, + struct irdma_mr *iwmr) + { + struct irdma_allocate_stag_info *info; +- struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd); ++ struct ib_pd *pd = iwmr->ibmr.pd; ++ struct irdma_pd *iwpd = to_iwpd(pd); + int status; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; +@@ -2573,6 +2574,7 @@ static int irdma_hw_alloc_stag(struct irdma_device *iwdev, + info->stag_idx = iwmr->stag >> IRDMA_CQPSQ_STAG_IDX_S; + info->pd_id = iwpd->sc_pd.pd_id; + info->total_len = iwmr->len; ++ info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; + info->remote_access = true; + cqp_info->cqp_cmd = IRDMA_OP_ALLOC_STAG; + cqp_info->post_sq = 1; +@@ -2620,6 +2622,8 @@ static struct ib_mr *irdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, + iwmr->type = IRDMA_MEMREG_TYPE_MEM; + palloc = &iwpbl->pble_alloc; + iwmr->page_cnt = max_num_sg; ++ /* Use system PAGE_SIZE as the sg page sizes are unknown at this point */ ++ iwmr->len = max_num_sg * PAGE_SIZE; + err_code = irdma_get_pble(iwdev->rf->pble_rsrc, palloc, iwmr->page_cnt, + false); + if (err_code) +@@ -2699,7 +2703,8 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr, + { + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_reg_ns_stag_info *stag_info; +- struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd); ++ struct ib_pd *pd = iwmr->ibmr.pd; ++ struct irdma_pd *iwpd = to_iwpd(pd); + struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc; + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; +@@ -2718,6 +2723,7 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr, + stag_info->total_len = iwmr->len; + stag_info->access_rights = irdma_get_mr_access(access); + stag_info->pd_id = iwpd->sc_pd.pd_id; ++ stag_info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; + if (stag_info->access_rights & IRDMA_ACCESS_FLAGS_ZERO_BASED) + stag_info->addr_type = IRDMA_ADDR_TYPE_ZERO_BASED; + else +@@ -4354,7 +4360,6 @@ static int irdma_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr) + ah_attr->grh.traffic_class = ah->sc_ah.ah_info.tc_tos; + ah_attr->grh.hop_limit = ah->sc_ah.ah_info.hop_ttl; + ah_attr->grh.sgid_index = ah->sgid_index; +- ah_attr->grh.sgid_index = ah->sgid_index; + memcpy(&ah_attr->grh.dgid, &ah->dgid, + sizeof(ah_attr->grh.dgid)); + } +diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c +index fb0c008af78cc..d2a2501236174 100644 +--- a/drivers/infiniband/sw/rxe/rxe_comp.c ++++ b/drivers/infiniband/sw/rxe/rxe_comp.c +@@ -118,7 +118,7 @@ void retransmit_timer(struct timer_list *t) + + if (qp->valid) { + qp->comp.timeout = 1; +- rxe_run_task(&qp->comp.task, 1); ++ rxe_sched_task(&qp->comp.task); + } + } + +@@ -132,7 +132,10 @@ void rxe_comp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb) + if (must_sched != 0) + rxe_counter_inc(SKB_TO_PKT(skb)->rxe, RXE_CNT_COMPLETER_SCHED); + +- rxe_run_task(&qp->comp.task, must_sched); ++ if (must_sched) ++ rxe_sched_task(&qp->comp.task); ++ else ++ rxe_run_task(&qp->comp.task); + } + + static inline enum comp_state get_wqe(struct rxe_qp *qp, +@@ -305,7 +308,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp, + qp->comp.psn = pkt->psn; + if (qp->req.wait_psn) { + qp->req.wait_psn = 0; +- rxe_run_task(&qp->req.task, 0); ++ rxe_run_task(&qp->req.task); + } + } + return 
COMPST_ERROR_RETRY; +@@ -452,7 +455,7 @@ static void do_complete(struct rxe_qp *qp, struct rxe_send_wqe *wqe) + */ + if (qp->req.wait_fence) { + qp->req.wait_fence = 0; +- rxe_run_task(&qp->req.task, 0); ++ rxe_run_task(&qp->req.task); + } + } + +@@ -466,7 +469,7 @@ static inline enum comp_state complete_ack(struct rxe_qp *qp, + if (qp->req.need_rd_atomic) { + qp->comp.timeout_retry = 0; + qp->req.need_rd_atomic = 0; +- rxe_run_task(&qp->req.task, 0); ++ rxe_run_task(&qp->req.task); + } + } + +@@ -512,7 +515,7 @@ static inline enum comp_state complete_wqe(struct rxe_qp *qp, + + if (qp->req.wait_psn) { + qp->req.wait_psn = 0; +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + } + } + +@@ -646,7 +649,7 @@ int rxe_completer(void *arg) + + if (qp->req.wait_psn) { + qp->req.wait_psn = 0; +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + } + + state = COMPST_DONE; +@@ -714,7 +717,7 @@ int rxe_completer(void *arg) + RXE_CNT_COMP_RETRY); + qp->req.need_retry = 1; + qp->comp.started_retry = 1; +- rxe_run_task(&qp->req.task, 0); ++ rxe_run_task(&qp->req.task); + } + goto done; + +diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c +index 65d16024b3bf6..719432808a063 100644 +--- a/drivers/infiniband/sw/rxe/rxe_net.c ++++ b/drivers/infiniband/sw/rxe/rxe_net.c +@@ -348,7 +348,7 @@ static void rxe_skb_tx_dtor(struct sk_buff *skb) + + if (unlikely(qp->need_req_skb && + skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)) +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + + rxe_put(qp); + } +@@ -435,7 +435,7 @@ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt, + if ((qp_type(qp) != IB_QPT_RC) && + (pkt->mask & RXE_END_MASK)) { + pkt->wqe->state = wqe_state_done; +- rxe_run_task(&qp->comp.task, 1); ++ rxe_sched_task(&qp->comp.task); + } + + rxe_counter_inc(rxe, RXE_CNT_SENT_PKTS); +diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c +index 59b2024b34ef4..709c63e9773c5 100644 +--- a/drivers/infiniband/sw/rxe/rxe_qp.c ++++ b/drivers/infiniband/sw/rxe/rxe_qp.c +@@ -539,10 +539,10 @@ static void rxe_qp_drain(struct rxe_qp *qp) + if (qp->req.state != QP_STATE_DRAINED) { + qp->req.state = QP_STATE_DRAIN; + if (qp_type(qp) == IB_QPT_RC) +- rxe_run_task(&qp->comp.task, 1); ++ rxe_sched_task(&qp->comp.task); + else + __rxe_do_task(&qp->comp.task); +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + } + } + } +@@ -556,13 +556,13 @@ void rxe_qp_error(struct rxe_qp *qp) + qp->attr.qp_state = IB_QPS_ERR; + + /* drain work and packet queues */ +- rxe_run_task(&qp->resp.task, 1); ++ rxe_sched_task(&qp->resp.task); + + if (qp_type(qp) == IB_QPT_RC) +- rxe_run_task(&qp->comp.task, 1); ++ rxe_sched_task(&qp->comp.task); + else + __rxe_do_task(&qp->comp.task); +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + } + + /* called by the modify qp verb */ +diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c +index f637712079705..2ace1007a4195 100644 +--- a/drivers/infiniband/sw/rxe/rxe_req.c ++++ b/drivers/infiniband/sw/rxe/rxe_req.c +@@ -105,7 +105,7 @@ void rnr_nak_timer(struct timer_list *t) + /* request a send queue retry */ + qp->req.need_retry = 1; + qp->req.wait_for_rnr_timer = 0; +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + } + + static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp) +@@ -529,10 +529,11 @@ static void save_state(struct rxe_send_wqe *wqe, + struct rxe_send_wqe 
*rollback_wqe, + u32 *rollback_psn) + { +- rollback_wqe->state = wqe->state; ++ rollback_wqe->state = wqe->state; + rollback_wqe->first_psn = wqe->first_psn; +- rollback_wqe->last_psn = wqe->last_psn; +- *rollback_psn = qp->req.psn; ++ rollback_wqe->last_psn = wqe->last_psn; ++ rollback_wqe->dma = wqe->dma; ++ *rollback_psn = qp->req.psn; + } + + static void rollback_state(struct rxe_send_wqe *wqe, +@@ -540,10 +541,11 @@ static void rollback_state(struct rxe_send_wqe *wqe, + struct rxe_send_wqe *rollback_wqe, + u32 rollback_psn) + { +- wqe->state = rollback_wqe->state; ++ wqe->state = rollback_wqe->state; + wqe->first_psn = rollback_wqe->first_psn; +- wqe->last_psn = rollback_wqe->last_psn; +- qp->req.psn = rollback_psn; ++ wqe->last_psn = rollback_wqe->last_psn; ++ wqe->dma = rollback_wqe->dma; ++ qp->req.psn = rollback_psn; + } + + static void update_state(struct rxe_qp *qp, struct rxe_pkt_info *pkt) +@@ -608,7 +610,7 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe) + * which can lead to a deadlock. So go ahead and complete + * it now. + */ +- rxe_run_task(&qp->comp.task, 1); ++ rxe_sched_task(&qp->comp.task); + + return 0; + } +@@ -733,7 +735,7 @@ int rxe_requester(void *arg) + qp->req.wqe_index); + wqe->state = wqe_state_done; + wqe->status = IB_WC_SUCCESS; +- rxe_run_task(&qp->comp.task, 0); ++ rxe_run_task(&qp->comp.task); + goto done; + } + payload = mtu; +@@ -746,6 +748,9 @@ int rxe_requester(void *arg) + pkt.mask = rxe_opcode[opcode].mask; + pkt.wqe = wqe; + ++ /* save wqe state before we build and send packet */ ++ save_state(wqe, qp, &rollback_wqe, &rollback_psn); ++ + av = rxe_get_av(&pkt, &ah); + if (unlikely(!av)) { + pr_err("qp#%d Failed no address vector\n", qp_num(qp)); +@@ -778,29 +783,29 @@ int rxe_requester(void *arg) + if (ah) + rxe_put(ah); + +- /* +- * To prevent a race on wqe access between requester and completer, +- * wqe members state and psn need to be set before calling +- * rxe_xmit_packet(). +- * Otherwise, completer might initiate an unjustified retry flow. 
+- */ +- save_state(wqe, qp, &rollback_wqe, &rollback_psn); ++ /* update wqe state as though we had sent it */ + update_wqe_state(qp, wqe, &pkt); + update_wqe_psn(qp, wqe, &pkt, payload); + + err = rxe_xmit_packet(qp, &pkt, skb); + if (err) { +- qp->need_req_skb = 1; ++ if (err != -EAGAIN) { ++ wqe->status = IB_WC_LOC_QP_OP_ERR; ++ goto err; ++ } + ++ /* the packet was dropped so reset wqe to the state ++ * before we sent it so we can try to resend ++ */ + rollback_state(wqe, qp, &rollback_wqe, rollback_psn); + +- if (err == -EAGAIN) { +- rxe_run_task(&qp->req.task, 1); +- goto exit; +- } ++ /* force a delay until the dropped packet is freed and ++ * the send queue is drained below the low water mark ++ */ ++ qp->need_req_skb = 1; + +- wqe->status = IB_WC_LOC_QP_OP_ERR; +- goto err; ++ rxe_sched_task(&qp->req.task); ++ goto exit; + } + + update_state(qp, &pkt); +@@ -817,7 +822,7 @@ err: + qp->req.wqe_index = queue_next_index(qp->sq.queue, qp->req.wqe_index); + wqe->state = wqe_state_error; + qp->req.state = QP_STATE_ERROR; +- rxe_run_task(&qp->comp.task, 0); ++ rxe_run_task(&qp->comp.task); + exit: + ret = -EAGAIN; + out: +diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c +index 9f65c346d8432..a45202cecf2d7 100644 +--- a/drivers/infiniband/sw/rxe/rxe_resp.c ++++ b/drivers/infiniband/sw/rxe/rxe_resp.c +@@ -91,7 +91,10 @@ void rxe_resp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb) + must_sched = (pkt->opcode == IB_OPCODE_RC_RDMA_READ_REQUEST) || + (skb_queue_len(&qp->req_pkts) > 1); + +- rxe_run_task(&qp->resp.task, must_sched); ++ if (must_sched) ++ rxe_sched_task(&qp->resp.task); ++ else ++ rxe_run_task(&qp->resp.task); + } + + static inline enum resp_states get_req(struct rxe_qp *qp, +diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c +index 182d0532a8ab9..446ee2c3d3813 100644 +--- a/drivers/infiniband/sw/rxe/rxe_task.c ++++ b/drivers/infiniband/sw/rxe/rxe_task.c +@@ -127,15 +127,20 @@ void rxe_cleanup_task(struct rxe_task *task) + tasklet_kill(&task->tasklet); + } + +-void rxe_run_task(struct rxe_task *task, int sched) ++void rxe_run_task(struct rxe_task *task) + { + if (task->destroyed) + return; + +- if (sched) +- tasklet_schedule(&task->tasklet); +- else +- rxe_do_task(&task->tasklet); ++ rxe_do_task(&task->tasklet); ++} ++ ++void rxe_sched_task(struct rxe_task *task) ++{ ++ if (task->destroyed) ++ return; ++ ++ tasklet_schedule(&task->tasklet); + } + + void rxe_disable_task(struct rxe_task *task) +diff --git a/drivers/infiniband/sw/rxe/rxe_task.h b/drivers/infiniband/sw/rxe/rxe_task.h +index b3dfd970d1dc6..590b1c1d7e7ca 100644 +--- a/drivers/infiniband/sw/rxe/rxe_task.h ++++ b/drivers/infiniband/sw/rxe/rxe_task.h +@@ -52,10 +52,9 @@ int __rxe_do_task(struct rxe_task *task); + */ + void rxe_do_task(struct tasklet_struct *t); + +-/* run a task, else schedule it to run as a tasklet, The decision +- * to run or schedule tasklet is based on the parameter sched. 
+- */ +-void rxe_run_task(struct rxe_task *task, int sched); ++void rxe_run_task(struct rxe_task *task); ++ ++void rxe_sched_task(struct rxe_task *task); + + /* keep a task from scheduling */ + void rxe_disable_task(struct rxe_task *task); +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c +index be13bcb4cc406..065717c11cba5 100644 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.c ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c +@@ -678,9 +678,9 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, const struct ib_send_wr *wr, + wr = next; + } + +- rxe_run_task(&qp->req.task, 1); ++ rxe_sched_task(&qp->req.task); + if (unlikely(qp->req.state == QP_STATE_ERROR)) +- rxe_run_task(&qp->comp.task, 1); ++ rxe_sched_task(&qp->comp.task); + + return err; + } +@@ -702,7 +702,7 @@ static int rxe_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, + + if (qp->is_user) { + /* Utilize process context to do protocol processing */ +- rxe_run_task(&qp->req.task, 0); ++ rxe_run_task(&qp->req.task); + return 0; + } else + return rxe_post_send_kernel(qp, wr, bad_wr); +@@ -740,7 +740,7 @@ static int rxe_post_recv(struct ib_qp *ibqp, const struct ib_recv_wr *wr, + spin_unlock_irqrestore(&rq->producer_lock, flags); + + if (qp->resp.state == QP_STATE_ERROR) +- rxe_run_task(&qp->resp.task, 1); ++ rxe_sched_task(&qp->resp.task); + + return err; + } +diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h +index 2f3a9cda3850f..8b4a710b82bc1 100644 +--- a/drivers/infiniband/sw/siw/siw.h ++++ b/drivers/infiniband/sw/siw/siw.h +@@ -74,6 +74,7 @@ struct siw_device { + + u32 vendor_part_id; + int numa_node; ++ char raw_gid[ETH_ALEN]; + + /* physical port state (only one port per device) */ + enum ib_port_state state; +diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c +index f88d2971c2c63..552d8271e423b 100644 +--- a/drivers/infiniband/sw/siw/siw_cm.c ++++ b/drivers/infiniband/sw/siw/siw_cm.c +@@ -1496,7 +1496,6 @@ error: + + cep->cm_id = NULL; + id->rem_ref(id); +- siw_cep_put(cep); + + qp->cep = NULL; + siw_cep_put(cep); +diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c +index 65b5cda5457ba..f45600d169ae7 100644 +--- a/drivers/infiniband/sw/siw/siw_main.c ++++ b/drivers/infiniband/sw/siw/siw_main.c +@@ -75,8 +75,7 @@ static int siw_device_register(struct siw_device *sdev, const char *name) + return rv; + } + +- siw_dbg(base_dev, "HWaddr=%pM\n", sdev->netdev->dev_addr); +- ++ siw_dbg(base_dev, "HWaddr=%pM\n", sdev->raw_gid); + return 0; + } + +@@ -313,24 +312,19 @@ static struct siw_device *siw_device_create(struct net_device *netdev) + return NULL; + + base_dev = &sdev->base_dev; +- + sdev->netdev = netdev; + +- if (netdev->type != ARPHRD_LOOPBACK && netdev->type != ARPHRD_NONE) { +- addrconf_addr_eui48((unsigned char *)&base_dev->node_guid, +- netdev->dev_addr); ++ if (netdev->addr_len) { ++ memcpy(sdev->raw_gid, netdev->dev_addr, ++ min_t(unsigned int, netdev->addr_len, ETH_ALEN)); + } else { + /* +- * This device does not have a HW address, +- * but connection mangagement lib expects gid != 0 ++ * This device does not have a HW address, but ++ * connection mangagement requires a unique gid. 
+ */ +- size_t len = min_t(size_t, strlen(base_dev->name), 6); +- char addr[6] = { }; +- +- memcpy(addr, base_dev->name, len); +- addrconf_addr_eui48((unsigned char *)&base_dev->node_guid, +- addr); ++ eth_random_addr(sdev->raw_gid); + } ++ addrconf_addr_eui48((u8 *)&base_dev->node_guid, sdev->raw_gid); + + base_dev->uverbs_cmd_mask |= BIT_ULL(IB_USER_VERBS_CMD_POST_SEND); + +diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c +index 906fde1a2a0de..193f7d58d3845 100644 +--- a/drivers/infiniband/sw/siw/siw_verbs.c ++++ b/drivers/infiniband/sw/siw/siw_verbs.c +@@ -157,7 +157,7 @@ int siw_query_device(struct ib_device *base_dev, struct ib_device_attr *attr, + attr->vendor_part_id = sdev->vendor_part_id; + + addrconf_addr_eui48((u8 *)&attr->sys_image_guid, +- sdev->netdev->dev_addr); ++ sdev->raw_gid); + + return 0; + } +@@ -218,7 +218,7 @@ int siw_query_gid(struct ib_device *base_dev, u32 port, int idx, + + /* subnet_prefix == interface_id == 0; */ + memset(gid, 0, sizeof(*gid)); +- memcpy(&gid->raw[0], sdev->netdev->dev_addr, 6); ++ memcpy(gid->raw, sdev->raw_gid, ETH_ALEN); + + return 0; + } +@@ -1494,7 +1494,7 @@ int siw_map_mr_sg(struct ib_mr *base_mr, struct scatterlist *sl, int num_sle, + + if (pbl->max_buf < num_sle) { + siw_dbg_mem(mem, "too many SGE's: %d > %d\n", +- mem->pbl->max_buf, num_sle); ++ num_sle, pbl->max_buf); + return -ENOMEM; + } + for_each_sg(sl, slp, num_sle, i) { +diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c +index a7fef3ea77fe3..a02a3caeaa4e7 100644 +--- a/drivers/infiniband/ulp/isert/ib_isert.c ++++ b/drivers/infiniband/ulp/isert/ib_isert.c +@@ -2571,6 +2571,8 @@ static void isert_wait_conn(struct iscsit_conn *conn) + isert_put_unsol_pending_cmds(conn); + isert_wait4cmds(conn); + isert_wait4logout(isert_conn); ++ ++ queue_work(isert_release_wq, &isert_conn->release_work); + } + + static void isert_free_conn(struct iscsit_conn *conn) +diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c +index b4d6a4a5ae81e..a7580c4855fec 100644 +--- a/drivers/infiniband/ulp/srp/ib_srp.c ++++ b/drivers/infiniband/ulp/srp/ib_srp.c +@@ -1984,12 +1984,8 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp) + + if (unlikely(rsp->flags & SRP_RSP_FLAG_DIUNDER)) + scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt)); +- else if (unlikely(rsp->flags & SRP_RSP_FLAG_DIOVER)) +- scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_in_res_cnt)); + else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOUNDER)) + scsi_set_resid(scmnd, be32_to_cpu(rsp->data_out_res_cnt)); +- else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOOVER)) +- scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_out_res_cnt)); + + srp_free_req(ch, req, scmnd, + be32_to_cpu(rsp->req_lim_delta)); +diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h +index 028e45bd050bf..1724d6cb8649d 100644 +--- a/drivers/input/serio/i8042-acpipnpio.h ++++ b/drivers/input/serio/i8042-acpipnpio.h +@@ -1281,6 +1281,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | + SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) + }, ++ /* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */ ++ { ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "PD5x_7xPNP_PNR_PNN_PNT"), ++ }, ++ .driver_data = (void *)(SERIO_QUIRK_NOAUX) ++ }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "X170SM"), +diff --git 
a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c +index d5f2a6b5376bd..a2d437a05a11f 100644 +--- a/drivers/interconnect/qcom/bcm-voter.c ++++ b/drivers/interconnect/qcom/bcm-voter.c +@@ -58,6 +58,36 @@ static u64 bcm_div(u64 num, u32 base) + return num; + } + ++/* BCMs with enable_mask use one-hot-encoding for on/off signaling */ ++static void bcm_aggregate_mask(struct qcom_icc_bcm *bcm) ++{ ++ struct qcom_icc_node *node; ++ int bucket, i; ++ ++ for (bucket = 0; bucket < QCOM_ICC_NUM_BUCKETS; bucket++) { ++ bcm->vote_x[bucket] = 0; ++ bcm->vote_y[bucket] = 0; ++ ++ for (i = 0; i < bcm->num_nodes; i++) { ++ node = bcm->nodes[i]; ++ ++ /* If any vote in this bucket exists, keep the BCM enabled */ ++ if (node->sum_avg[bucket] || node->max_peak[bucket]) { ++ bcm->vote_x[bucket] = 0; ++ bcm->vote_y[bucket] = bcm->enable_mask; ++ break; ++ } ++ } ++ } ++ ++ if (bcm->keepalive) { ++ bcm->vote_x[QCOM_ICC_BUCKET_AMC] = bcm->enable_mask; ++ bcm->vote_x[QCOM_ICC_BUCKET_WAKE] = bcm->enable_mask; ++ bcm->vote_y[QCOM_ICC_BUCKET_AMC] = bcm->enable_mask; ++ bcm->vote_y[QCOM_ICC_BUCKET_WAKE] = bcm->enable_mask; ++ } ++} ++ + static void bcm_aggregate(struct qcom_icc_bcm *bcm) + { + struct qcom_icc_node *node; +@@ -83,11 +113,6 @@ static void bcm_aggregate(struct qcom_icc_bcm *bcm) + + temp = agg_peak[bucket] * bcm->vote_scale; + bcm->vote_y[bucket] = bcm_div(temp, bcm->aux_data.unit); +- +- if (bcm->enable_mask && (bcm->vote_x[bucket] || bcm->vote_y[bucket])) { +- bcm->vote_x[bucket] = 0; +- bcm->vote_y[bucket] = bcm->enable_mask; +- } + } + + if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 && +@@ -260,8 +285,12 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter) + return 0; + + mutex_lock(&voter->lock); +- list_for_each_entry(bcm, &voter->commit_list, list) +- bcm_aggregate(bcm); ++ list_for_each_entry(bcm, &voter->commit_list, list) { ++ if (bcm->enable_mask) ++ bcm_aggregate_mask(bcm); ++ else ++ bcm_aggregate(bcm); ++ } + + /* + * Pre sort the BCMs based on VCD for ease of generating a command list +diff --git a/drivers/interconnect/qcom/qcm2290.c b/drivers/interconnect/qcom/qcm2290.c +index a29cdb4fac03f..82a2698ad66b1 100644 +--- a/drivers/interconnect/qcom/qcm2290.c ++++ b/drivers/interconnect/qcom/qcm2290.c +@@ -1355,6 +1355,7 @@ static struct platform_driver qcm2290_noc_driver = { + .driver = { + .name = "qnoc-qcm2290", + .of_match_table = qcm2290_noc_of_match, ++ .sync_state = icc_sync_state, + }, + }; + module_platform_driver(qcm2290_noc_driver); +diff --git a/drivers/interconnect/qcom/sm8450.c b/drivers/interconnect/qcom/sm8450.c +index e64c214b40209..d6e582a02e628 100644 +--- a/drivers/interconnect/qcom/sm8450.c ++++ b/drivers/interconnect/qcom/sm8450.c +@@ -1886,6 +1886,7 @@ static struct platform_driver qnoc_driver = { + .driver = { + .name = "qnoc-sm8450", + .of_match_table = qnoc_of_match, ++ .sync_state = icc_sync_state, + }, + }; + +diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c +index 75355ddca6575..4caa023048a08 100644 +--- a/drivers/iommu/amd/iommu_v2.c ++++ b/drivers/iommu/amd/iommu_v2.c +@@ -262,8 +262,8 @@ static void put_pasid_state(struct pasid_state *pasid_state) + + static void put_pasid_state_wait(struct pasid_state *pasid_state) + { +- refcount_dec(&pasid_state->count); +- wait_event(pasid_state->wq, !refcount_read(&pasid_state->count)); ++ if (!refcount_dec_and_test(&pasid_state->count)) ++ wait_event(pasid_state->wq, !refcount_read(&pasid_state->count)); + free_pasid_state(pasid_state); + } + +diff 
--git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c +index 3869c3ecda8cd..5b9cb9fcc352b 100644 +--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c ++++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c +@@ -273,6 +273,13 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain, + ctx->secure_init = true; + } + ++ /* Disable context bank before programming */ ++ iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0); ++ ++ /* Clear context bank fault address fault status registers */ ++ iommu_writel(ctx, ARM_SMMU_CB_FAR, 0); ++ iommu_writel(ctx, ARM_SMMU_CB_FSR, ARM_SMMU_FSR_FAULT); ++ + /* TTBRs */ + iommu_writeq(ctx, ARM_SMMU_CB_TTBR0, + pgtbl_cfg.arm_lpae_s1_cfg.ttbr | +diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c +index a39aab66a01b1..3f03039e5cce5 100644 +--- a/drivers/iommu/intel/pasid.c ++++ b/drivers/iommu/intel/pasid.c +@@ -127,7 +127,7 @@ int intel_pasid_alloc_table(struct device *dev) + info->pasid_table = pasid_table; + + if (!ecap_coherent(info->iommu->ecap)) +- clflush_cache_range(pasid_table->table, size); ++ clflush_cache_range(pasid_table->table, (1 << order) * PAGE_SIZE); + + return 0; + } +diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c +index 2ae5a6058a34a..9673cd60c84fc 100644 +--- a/drivers/iommu/mtk_iommu.c ++++ b/drivers/iommu/mtk_iommu.c +@@ -223,10 +223,9 @@ struct mtk_iommu_data { + struct device *smicomm_dev; + + struct mtk_iommu_bank_data *bank; ++ struct mtk_iommu_domain *share_dom; /* For 2 HWs share pgtable */ + +- struct dma_iommu_mapping *mapping; /* For mtk_iommu_v1.c */ + struct regmap *pericfg; +- + struct mutex mutex; /* Protect m4u_group/m4u_dom above */ + + /* +@@ -577,15 +576,14 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom, + struct mtk_iommu_data *data, + unsigned int region_id) + { ++ struct mtk_iommu_domain *share_dom = data->share_dom; + const struct mtk_iommu_iova_region *region; +- struct mtk_iommu_domain *m4u_dom; +- +- /* Always use bank0 in sharing pgtable case */ +- m4u_dom = data->bank[0].m4u_dom; +- if (m4u_dom) { +- dom->iop = m4u_dom->iop; +- dom->cfg = m4u_dom->cfg; +- dom->domain.pgsize_bitmap = m4u_dom->cfg.pgsize_bitmap; ++ ++ /* Always use share domain in sharing pgtable case */ ++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, SHARE_PGTABLE) && share_dom) { ++ dom->iop = share_dom->iop; ++ dom->cfg = share_dom->cfg; ++ dom->domain.pgsize_bitmap = share_dom->cfg.pgsize_bitmap; + goto update_iova_region; + } + +@@ -615,6 +613,9 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom, + /* Update our support page sizes bitmap */ + dom->domain.pgsize_bitmap = dom->cfg.pgsize_bitmap; + ++ if (MTK_IOMMU_HAS_FLAG(data->plat_data, SHARE_PGTABLE)) ++ data->share_dom = dom; ++ + update_iova_region: + /* Update the iova region for this domain */ + region = data->plat_data->iova_region + region_id; +@@ -665,7 +666,9 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain, + /* Data is in the frstdata in sharing pgtable case. 
*/ + frstdata = mtk_iommu_get_frst_data(hw_list); + ++ mutex_lock(&frstdata->mutex); + ret = mtk_iommu_domain_finalise(dom, frstdata, region_id); ++ mutex_unlock(&frstdata->mutex); + if (ret) { + mutex_unlock(&dom->mutex); + return -ENODEV; +diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c +index f7e9b56be174f..43bb577a26e59 100644 +--- a/drivers/iommu/rockchip-iommu.c ++++ b/drivers/iommu/rockchip-iommu.c +@@ -98,8 +98,6 @@ struct rk_iommu_ops { + phys_addr_t (*pt_address)(u32 dte); + u32 (*mk_dtentries)(dma_addr_t pt_dma); + u32 (*mk_ptentries)(phys_addr_t page, int prot); +- phys_addr_t (*dte_addr_phys)(u32 addr); +- u32 (*dma_addr_dte)(dma_addr_t dt_dma); + u64 dma_bit_mask; + }; + +@@ -277,8 +275,8 @@ static u32 rk_mk_pte(phys_addr_t page, int prot) + /* + * In v2: + * 31:12 - Page address bit 31:0 +- * 11:9 - Page address bit 34:32 +- * 8:4 - Page address bit 39:35 ++ * 11: 8 - Page address bit 35:32 ++ * 7: 4 - Page address bit 39:36 + * 3 - Security + * 2 - Writable + * 1 - Readable +@@ -505,7 +503,7 @@ static int rk_iommu_force_reset(struct rk_iommu *iommu) + + /* + * Check if register DTE_ADDR is working by writing DTE_ADDR_DUMMY +- * and verifying that upper 5 nybbles are read back. ++ * and verifying that upper 5 (v1) or 7 (v2) nybbles are read back. + */ + for (i = 0; i < iommu->num_mmu; i++) { + dte_addr = rk_ops->pt_address(DTE_ADDR_DUMMY); +@@ -530,33 +528,6 @@ static int rk_iommu_force_reset(struct rk_iommu *iommu) + return 0; + } + +-static inline phys_addr_t rk_dte_addr_phys(u32 addr) +-{ +- return (phys_addr_t)addr; +-} +- +-static inline u32 rk_dma_addr_dte(dma_addr_t dt_dma) +-{ +- return dt_dma; +-} +- +-#define DT_HI_MASK GENMASK_ULL(39, 32) +-#define DTE_BASE_HI_MASK GENMASK(11, 4) +-#define DT_SHIFT 28 +- +-static inline phys_addr_t rk_dte_addr_phys_v2(u32 addr) +-{ +- u64 addr64 = addr; +- return (phys_addr_t)(addr64 & RK_DTE_PT_ADDRESS_MASK) | +- ((addr64 & DTE_BASE_HI_MASK) << DT_SHIFT); +-} +- +-static inline u32 rk_dma_addr_dte_v2(dma_addr_t dt_dma) +-{ +- return (dt_dma & RK_DTE_PT_ADDRESS_MASK) | +- ((dt_dma & DT_HI_MASK) >> DT_SHIFT); +-} +- + static void log_iova(struct rk_iommu *iommu, int index, dma_addr_t iova) + { + void __iomem *base = iommu->bases[index]; +@@ -576,7 +547,7 @@ static void log_iova(struct rk_iommu *iommu, int index, dma_addr_t iova) + page_offset = rk_iova_page_offset(iova); + + mmu_dte_addr = rk_iommu_read(base, RK_MMU_DTE_ADDR); +- mmu_dte_addr_phys = rk_ops->dte_addr_phys(mmu_dte_addr); ++ mmu_dte_addr_phys = rk_ops->pt_address(mmu_dte_addr); + + dte_addr_phys = mmu_dte_addr_phys + (4 * dte_index); + dte_addr = phys_to_virt(dte_addr_phys); +@@ -966,7 +937,7 @@ static int rk_iommu_enable(struct rk_iommu *iommu) + + for (i = 0; i < iommu->num_mmu; i++) { + rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, +- rk_ops->dma_addr_dte(rk_domain->dt_dma)); ++ rk_ops->mk_dtentries(rk_domain->dt_dma)); + rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE); + rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, RK_MMU_IRQ_MASK); + } +@@ -1373,8 +1344,6 @@ static struct rk_iommu_ops iommu_data_ops_v1 = { + .pt_address = &rk_dte_pt_address, + .mk_dtentries = &rk_mk_dte, + .mk_ptentries = &rk_mk_pte, +- .dte_addr_phys = &rk_dte_addr_phys, +- .dma_addr_dte = &rk_dma_addr_dte, + .dma_bit_mask = DMA_BIT_MASK(32), + }; + +@@ -1382,8 +1351,6 @@ static struct rk_iommu_ops iommu_data_ops_v2 = { + .pt_address = &rk_dte_pt_address_v2, + .mk_dtentries = &rk_mk_dte_v2, + .mk_ptentries = &rk_mk_pte_v2, +- .dte_addr_phys = 
&rk_dte_addr_phys_v2, +- .dma_addr_dte = &rk_dma_addr_dte_v2, + .dma_bit_mask = DMA_BIT_MASK(40), + }; + +diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c +index fadd2c907222b..8261066de07d7 100644 +--- a/drivers/iommu/sprd-iommu.c ++++ b/drivers/iommu/sprd-iommu.c +@@ -147,6 +147,7 @@ static struct iommu_domain *sprd_iommu_domain_alloc(unsigned int domain_type) + + dom->domain.geometry.aperture_start = 0; + dom->domain.geometry.aperture_end = SZ_256M - 1; ++ dom->domain.geometry.force_aperture = true; + + return &dom->domain; + } +diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c +index ac04aeaa2d308..3d99b8bdd8ef1 100644 +--- a/drivers/irqchip/irq-loongson-eiointc.c ++++ b/drivers/irqchip/irq-loongson-eiointc.c +@@ -145,7 +145,7 @@ static int eiointc_router_init(unsigned int cpu) + int i, bit; + uint32_t data; + uint32_t node = cpu_to_eio_node(cpu); +- uint32_t index = eiointc_index(node); ++ int index = eiointc_index(node); + + if (index < 0) { + pr_err("Error: invalid nodemap!\n"); +diff --git a/drivers/leds/led-class-multicolor.c b/drivers/leds/led-class-multicolor.c +index e317408583df9..ec62a48116135 100644 +--- a/drivers/leds/led-class-multicolor.c ++++ b/drivers/leds/led-class-multicolor.c +@@ -6,6 +6,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -19,9 +20,10 @@ int led_mc_calc_color_components(struct led_classdev_mc *mcled_cdev, + int i; + + for (i = 0; i < mcled_cdev->num_colors; i++) +- mcled_cdev->subled_info[i].brightness = brightness * +- mcled_cdev->subled_info[i].intensity / +- led_cdev->max_brightness; ++ mcled_cdev->subled_info[i].brightness = ++ DIV_ROUND_CLOSEST(brightness * ++ mcled_cdev->subled_info[i].intensity, ++ led_cdev->max_brightness); + + return 0; + } +diff --git a/drivers/leds/led-core.c b/drivers/leds/led-core.c +index 4a97cb7457888..aad8bc44459fe 100644 +--- a/drivers/leds/led-core.c ++++ b/drivers/leds/led-core.c +@@ -419,15 +419,15 @@ int led_compose_name(struct device *dev, struct led_init_data *init_data, + struct fwnode_handle *fwnode = init_data->fwnode; + const char *devicename = init_data->devicename; + +- /* We want to label LEDs that can produce full range of colors +- * as RGB, not multicolor */ +- BUG_ON(props.color == LED_COLOR_ID_MULTI); +- + if (!led_classdev_name) + return -EINVAL; + + led_parse_fwnode_props(dev, fwnode, &props); + ++ /* We want to label LEDs that can produce full range of colors ++ * as RGB, not multicolor */ ++ BUG_ON(props.color == LED_COLOR_ID_MULTI); ++ + if (props.label) { + /* + * If init_data.devicename is NULL, then it indicates that +diff --git a/drivers/leds/leds-pwm.c b/drivers/leds/leds-pwm.c +index 6832180c1c54f..cc892ecd52408 100644 +--- a/drivers/leds/leds-pwm.c ++++ b/drivers/leds/leds-pwm.c +@@ -146,7 +146,7 @@ static int led_pwm_create_fwnode(struct device *dev, struct led_pwm_priv *priv) + led.name = to_of_node(fwnode)->name; + + if (!led.name) { +- ret = EINVAL; ++ ret = -EINVAL; + goto err_child_out; + } + +diff --git a/drivers/leds/trigger/ledtrig-tty.c b/drivers/leds/trigger/ledtrig-tty.c +index f62db7e520b52..8ae0d2d284aff 100644 +--- a/drivers/leds/trigger/ledtrig-tty.c ++++ b/drivers/leds/trigger/ledtrig-tty.c +@@ -7,6 +7,8 @@ + #include + #include + ++#define LEDTRIG_TTY_INTERVAL 50 ++ + struct ledtrig_tty_data { + struct led_classdev *led_cdev; + struct delayed_work dwork; +@@ -122,17 +124,19 @@ static void ledtrig_tty_work(struct work_struct *work) + + if (icount.rx != trigger_data->rx || + 
icount.tx != trigger_data->tx) { +- led_set_brightness_sync(trigger_data->led_cdev, LED_ON); ++ unsigned long interval = LEDTRIG_TTY_INTERVAL; ++ ++ led_blink_set_oneshot(trigger_data->led_cdev, &interval, ++ &interval, 0); + + trigger_data->rx = icount.rx; + trigger_data->tx = icount.tx; +- } else { +- led_set_brightness_sync(trigger_data->led_cdev, LED_OFF); + } + + out: + mutex_unlock(&trigger_data->mutex); +- schedule_delayed_work(&trigger_data->dwork, msecs_to_jiffies(100)); ++ schedule_delayed_work(&trigger_data->dwork, ++ msecs_to_jiffies(LEDTRIG_TTY_INTERVAL * 2)); + } + + static struct attribute *ledtrig_tty_attrs[] = { +diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c +index 8bbeeec70905c..5200bba63708e 100644 +--- a/drivers/md/md-bitmap.c ++++ b/drivers/md/md-bitmap.c +@@ -2481,6 +2481,10 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len) + if (backlog > COUNTER_MAX) + return -EINVAL; + ++ rv = mddev_lock(mddev); ++ if (rv) ++ return rv; ++ + /* + * Without write mostly device, it doesn't make sense to set + * backlog for max_write_behind. +@@ -2494,6 +2498,7 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len) + if (!has_write_mostly) { + pr_warn_ratelimited("%s: can't set backlog, no write mostly device available\n", + mdname(mddev)); ++ mddev_unlock(mddev); + return -EINVAL; + } + +@@ -2504,13 +2509,13 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len) + mddev_destroy_serial_pool(mddev, NULL, false); + } else if (backlog && !mddev->serial_info_pool) { + /* serial_info_pool is needed since backlog is not zero */ +- struct md_rdev *rdev; +- + rdev_for_each(rdev, mddev) + mddev_create_serial_pool(mddev, rdev, false); + } + if (old_mwb != backlog) + md_bitmap_update_sb(mddev->bitmap); ++ ++ mddev_unlock(mddev); + return len; + } + +diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c +index 6e7797b4e7381..4eb72b9dd9336 100644 +--- a/drivers/md/md-linear.c ++++ b/drivers/md/md-linear.c +@@ -223,7 +223,8 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio) + bio_sector < start_sector)) + goto out_of_bounds; + +- if (unlikely(is_mddev_broken(tmp_dev->rdev, "linear"))) { ++ if (unlikely(is_rdev_broken(tmp_dev->rdev))) { ++ md_error(mddev, tmp_dev->rdev); + bio_io_error(bio); + return true; + } +@@ -270,6 +271,16 @@ static void linear_status (struct seq_file *seq, struct mddev *mddev) + seq_printf(seq, " %dk rounding", mddev->chunk_sectors / 2); + } + ++static void linear_error(struct mddev *mddev, struct md_rdev *rdev) ++{ ++ if (!test_and_set_bit(MD_BROKEN, &mddev->flags)) { ++ char *md_name = mdname(mddev); ++ ++ pr_crit("md/linear%s: Disk failure on %pg detected, failing array.\n", ++ md_name, rdev->bdev); ++ } ++} ++ + static void linear_quiesce(struct mddev *mddev, int state) + { + } +@@ -286,6 +297,7 @@ static struct md_personality linear_personality = + .hot_add_disk = linear_add, + .size = linear_size, + .quiesce = linear_quiesce, ++ .error_handler = linear_error, + }; + + static int __init linear_init (void) +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 45daba0eb9310..86b2acfba1a7f 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -368,6 +368,10 @@ EXPORT_SYMBOL_GPL(md_new_event); + static LIST_HEAD(all_mddevs); + static DEFINE_SPINLOCK(all_mddevs_lock); + ++static bool is_md_suspended(struct mddev *mddev) ++{ ++ return percpu_ref_is_dying(&mddev->active_io); ++} + /* Rather than calling directly into the personality make_request function, + * IO requests come here first so 
that we can check if the device is + * being suspended pending a reconfiguration. +@@ -377,7 +381,7 @@ static DEFINE_SPINLOCK(all_mddevs_lock); + */ + static bool is_suspended(struct mddev *mddev, struct bio *bio) + { +- if (mddev->suspended) ++ if (is_md_suspended(mddev)) + return true; + if (bio_data_dir(bio) != WRITE) + return false; +@@ -393,12 +397,10 @@ static bool is_suspended(struct mddev *mddev, struct bio *bio) + void md_handle_request(struct mddev *mddev, struct bio *bio) + { + check_suspended: +- rcu_read_lock(); + if (is_suspended(mddev, bio)) { + DEFINE_WAIT(__wait); + /* Bail out if REQ_NOWAIT is set for the bio */ + if (bio->bi_opf & REQ_NOWAIT) { +- rcu_read_unlock(); + bio_wouldblock_error(bio); + return; + } +@@ -407,23 +409,19 @@ check_suspended: + TASK_UNINTERRUPTIBLE); + if (!is_suspended(mddev, bio)) + break; +- rcu_read_unlock(); + schedule(); +- rcu_read_lock(); + } + finish_wait(&mddev->sb_wait, &__wait); + } +- atomic_inc(&mddev->active_io); +- rcu_read_unlock(); ++ if (!percpu_ref_tryget_live(&mddev->active_io)) ++ goto check_suspended; + + if (!mddev->pers->make_request(mddev, bio)) { +- atomic_dec(&mddev->active_io); +- wake_up(&mddev->sb_wait); ++ percpu_ref_put(&mddev->active_io); + goto check_suspended; + } + +- if (atomic_dec_and_test(&mddev->active_io) && mddev->suspended) +- wake_up(&mddev->sb_wait); ++ percpu_ref_put(&mddev->active_io); + } + EXPORT_SYMBOL(md_handle_request); + +@@ -471,11 +469,10 @@ void mddev_suspend(struct mddev *mddev) + lockdep_assert_held(&mddev->reconfig_mutex); + if (mddev->suspended++) + return; +- synchronize_rcu(); + wake_up(&mddev->sb_wait); + set_bit(MD_ALLOW_SB_UPDATE, &mddev->flags); +- smp_mb__after_atomic(); +- wait_event(mddev->sb_wait, atomic_read(&mddev->active_io) == 0); ++ percpu_ref_kill(&mddev->active_io); ++ wait_event(mddev->sb_wait, percpu_ref_is_zero(&mddev->active_io)); + mddev->pers->quiesce(mddev, 1); + clear_bit_unlock(MD_ALLOW_SB_UPDATE, &mddev->flags); + wait_event(mddev->sb_wait, !test_bit(MD_UPDATING_SB, &mddev->flags)); +@@ -488,11 +485,14 @@ EXPORT_SYMBOL_GPL(mddev_suspend); + + void mddev_resume(struct mddev *mddev) + { +- /* entred the memalloc scope from mddev_suspend() */ +- memalloc_noio_restore(mddev->noio_flag); + lockdep_assert_held(&mddev->reconfig_mutex); + if (--mddev->suspended) + return; ++ ++ /* entred the memalloc scope from mddev_suspend() */ ++ memalloc_noio_restore(mddev->noio_flag); ++ ++ percpu_ref_resurrect(&mddev->active_io); + wake_up(&mddev->sb_wait); + mddev->pers->quiesce(mddev, 0); + +@@ -671,7 +671,6 @@ void mddev_init(struct mddev *mddev) + timer_setup(&mddev->safemode_timer, md_safemode_timeout, 0); + atomic_set(&mddev->active, 1); + atomic_set(&mddev->openers, 0); +- atomic_set(&mddev->active_io, 0); + spin_lock_init(&mddev->lock); + atomic_set(&mddev->flush_pending, 0); + init_waitqueue_head(&mddev->sb_wait); +@@ -5779,6 +5778,12 @@ static void md_safemode_timeout(struct timer_list *t) + } + + static int start_dirty_degraded; ++static void active_io_release(struct percpu_ref *ref) ++{ ++ struct mddev *mddev = container_of(ref, struct mddev, active_io); ++ ++ wake_up(&mddev->sb_wait); ++} + + int md_run(struct mddev *mddev) + { +@@ -5859,10 +5864,15 @@ int md_run(struct mddev *mddev) + nowait = nowait && bdev_nowait(rdev->bdev); + } + ++ err = percpu_ref_init(&mddev->active_io, active_io_release, ++ PERCPU_REF_ALLOW_REINIT, GFP_KERNEL); ++ if (err) ++ return err; ++ + if (!bioset_initialized(&mddev->bio_set)) { + err = bioset_init(&mddev->bio_set, BIO_POOL_SIZE, 0, 
BIOSET_NEED_BVECS); + if (err) +- return err; ++ goto exit_active_io; + } + if (!bioset_initialized(&mddev->sync_set)) { + err = bioset_init(&mddev->sync_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS); +@@ -6050,6 +6060,8 @@ abort: + bioset_exit(&mddev->sync_set); + exit_bio_set: + bioset_exit(&mddev->bio_set); ++exit_active_io: ++ percpu_ref_exit(&mddev->active_io); + return err; + } + EXPORT_SYMBOL_GPL(md_run); +@@ -6238,7 +6250,7 @@ EXPORT_SYMBOL_GPL(md_stop_writes); + static void mddev_detach(struct mddev *mddev) + { + md_bitmap_wait_behind_writes(mddev); +- if (mddev->pers && mddev->pers->quiesce && !mddev->suspended) { ++ if (mddev->pers && mddev->pers->quiesce && !is_md_suspended(mddev)) { + mddev->pers->quiesce(mddev, 1); + mddev->pers->quiesce(mddev, 0); + } +@@ -6265,6 +6277,10 @@ static void __md_stop(struct mddev *mddev) + mddev->to_remove = &md_redundancy_group; + module_put(pers->owner); + clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); ++ ++ percpu_ref_exit(&mddev->active_io); ++ bioset_exit(&mddev->bio_set); ++ bioset_exit(&mddev->sync_set); + } + + void md_stop(struct mddev *mddev) +@@ -6276,8 +6292,7 @@ void md_stop(struct mddev *mddev) + */ + __md_stop_writes(mddev); + __md_stop(mddev); +- bioset_exit(&mddev->bio_set); +- bioset_exit(&mddev->sync_set); ++ percpu_ref_exit(&mddev->writes_pending); + } + + EXPORT_SYMBOL_GPL(md_stop); +@@ -7845,9 +7860,6 @@ static void md_free_disk(struct gendisk *disk) + struct mddev *mddev = disk->private_data; + + percpu_ref_exit(&mddev->writes_pending); +- bioset_exit(&mddev->bio_set); +- bioset_exit(&mddev->sync_set); +- + mddev_free(mddev); + } + +@@ -7978,6 +7990,9 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev) + return; + mddev->pers->error_handler(mddev, rdev); + ++ if (mddev->pers->level == 0 || mddev->pers->level == LEVEL_LINEAR) ++ return; ++ + if (mddev->degraded && !test_bit(MD_BROKEN, &mddev->flags)) + set_bit(MD_RECOVERY_RECOVER, &mddev->recovery); + sysfs_notify_dirent_safe(rdev->sysfs_state); +@@ -8548,7 +8563,7 @@ bool md_write_start(struct mddev *mddev, struct bio *bi) + return true; + wait_event(mddev->sb_wait, + !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags) || +- mddev->suspended); ++ is_md_suspended(mddev)); + if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) { + percpu_ref_put(&mddev->writes_pending); + return false; +@@ -9276,7 +9291,7 @@ void md_check_recovery(struct mddev *mddev) + wake_up(&mddev->sb_wait); + } + +- if (mddev->suspended) ++ if (is_md_suspended(mddev)) + return; + + if (mddev->bitmap) +diff --git a/drivers/md/md.h b/drivers/md/md.h +index b4e2d8b87b611..4f0b480974552 100644 +--- a/drivers/md/md.h ++++ b/drivers/md/md.h +@@ -315,7 +315,7 @@ struct mddev { + unsigned long sb_flags; + + int suspended; +- atomic_t active_io; ++ struct percpu_ref active_io; + int ro; + int sysfs_active; /* set when sysfs deletes + * are happening, so run/ +@@ -790,15 +790,9 @@ extern void mddev_destroy_serial_pool(struct mddev *mddev, struct md_rdev *rdev, + struct md_rdev *md_find_rdev_nr_rcu(struct mddev *mddev, int nr); + struct md_rdev *md_find_rdev_rcu(struct mddev *mddev, dev_t dev); + +-static inline bool is_mddev_broken(struct md_rdev *rdev, const char *md_type) ++static inline bool is_rdev_broken(struct md_rdev *rdev) + { +- if (!disk_live(rdev->bdev->bd_disk)) { +- if (!test_and_set_bit(MD_BROKEN, &rdev->mddev->flags)) +- pr_warn("md: %s: %s array has a missing/failed member\n", +- mdname(rdev->mddev), md_type); +- return true; +- } +- return false; ++ return !disk_live(rdev->bdev->bd_disk); + 
} + + static inline void rdev_dec_pending(struct md_rdev *rdev, struct mddev *mddev) +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c +index 0f7c3b3c62b25..7c6a0b4437d8f 100644 +--- a/drivers/md/raid0.c ++++ b/drivers/md/raid0.c +@@ -557,14 +557,50 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio) + bio_endio(bio); + } + +-static bool raid0_make_request(struct mddev *mddev, struct bio *bio) ++static void raid0_map_submit_bio(struct mddev *mddev, struct bio *bio) + { + struct r0conf *conf = mddev->private; + struct strip_zone *zone; + struct md_rdev *tmp_dev; +- sector_t bio_sector; ++ sector_t bio_sector = bio->bi_iter.bi_sector; ++ sector_t sector = bio_sector; ++ ++ md_account_bio(mddev, &bio); ++ ++ zone = find_zone(mddev->private, &sector); ++ switch (conf->layout) { ++ case RAID0_ORIG_LAYOUT: ++ tmp_dev = map_sector(mddev, zone, bio_sector, &sector); ++ break; ++ case RAID0_ALT_MULTIZONE_LAYOUT: ++ tmp_dev = map_sector(mddev, zone, sector, &sector); ++ break; ++ default: ++ WARN(1, "md/raid0:%s: Invalid layout\n", mdname(mddev)); ++ bio_io_error(bio); ++ return; ++ } ++ ++ if (unlikely(is_rdev_broken(tmp_dev))) { ++ bio_io_error(bio); ++ md_error(mddev, tmp_dev); ++ return; ++ } ++ ++ bio_set_dev(bio, tmp_dev->bdev); ++ bio->bi_iter.bi_sector = sector + zone->dev_start + ++ tmp_dev->data_offset; ++ ++ if (mddev->gendisk) ++ trace_block_bio_remap(bio, disk_devt(mddev->gendisk), ++ bio_sector); ++ mddev_check_write_zeroes(mddev, bio); ++ submit_bio_noacct(bio); ++} ++ ++static bool raid0_make_request(struct mddev *mddev, struct bio *bio) ++{ + sector_t sector; +- sector_t orig_sector; + unsigned chunk_sects; + unsigned sectors; + +@@ -577,8 +613,7 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio) + return true; + } + +- bio_sector = bio->bi_iter.bi_sector; +- sector = bio_sector; ++ sector = bio->bi_iter.bi_sector; + chunk_sects = mddev->chunk_sectors; + + sectors = chunk_sects - +@@ -586,49 +621,15 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio) + ? 
(sector & (chunk_sects-1)) + : sector_div(sector, chunk_sects)); + +- /* Restore due to sector_div */ +- sector = bio_sector; +- + if (sectors < bio_sectors(bio)) { + struct bio *split = bio_split(bio, sectors, GFP_NOIO, + &mddev->bio_set); + bio_chain(split, bio); +- submit_bio_noacct(bio); ++ raid0_map_submit_bio(mddev, bio); + bio = split; + } + +- if (bio->bi_pool != &mddev->bio_set) +- md_account_bio(mddev, &bio); +- +- orig_sector = sector; +- zone = find_zone(mddev->private, &sector); +- switch (conf->layout) { +- case RAID0_ORIG_LAYOUT: +- tmp_dev = map_sector(mddev, zone, orig_sector, &sector); +- break; +- case RAID0_ALT_MULTIZONE_LAYOUT: +- tmp_dev = map_sector(mddev, zone, sector, &sector); +- break; +- default: +- WARN(1, "md/raid0:%s: Invalid layout\n", mdname(mddev)); +- bio_io_error(bio); +- return true; +- } +- +- if (unlikely(is_mddev_broken(tmp_dev, "raid0"))) { +- bio_io_error(bio); +- return true; +- } +- +- bio_set_dev(bio, tmp_dev->bdev); +- bio->bi_iter.bi_sector = sector + zone->dev_start + +- tmp_dev->data_offset; +- +- if (mddev->gendisk) +- trace_block_bio_remap(bio, disk_devt(mddev->gendisk), +- bio_sector); +- mddev_check_write_zeroes(mddev, bio); +- submit_bio_noacct(bio); ++ raid0_map_submit_bio(mddev, bio); + return true; + } + +@@ -638,6 +639,16 @@ static void raid0_status(struct seq_file *seq, struct mddev *mddev) + return; + } + ++static void raid0_error(struct mddev *mddev, struct md_rdev *rdev) ++{ ++ if (!test_and_set_bit(MD_BROKEN, &mddev->flags)) { ++ char *md_name = mdname(mddev); ++ ++ pr_crit("md/raid0%s: Disk failure on %pg detected, failing array.\n", ++ md_name, rdev->bdev); ++ } ++} ++ + static void *raid0_takeover_raid45(struct mddev *mddev) + { + struct md_rdev *rdev; +@@ -813,6 +824,7 @@ static struct md_personality raid0_personality= + .size = raid0_size, + .takeover = raid0_takeover, + .quiesce = raid0_quiesce, ++ .error_handler = raid0_error, + }; + + static int __init raid0_init (void) +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index d2098fcd6a270..7b318e7e8d459 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -1317,6 +1317,25 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio, + } + } + ++static struct md_rdev *dereference_rdev_and_rrdev(struct raid10_info *mirror, ++ struct md_rdev **prrdev) ++{ ++ struct md_rdev *rdev, *rrdev; ++ ++ rrdev = rcu_dereference(mirror->replacement); ++ /* ++ * Read replacement first to prevent reading both rdev and ++ * replacement as NULL during replacement replace rdev.
++ */ ++ smp_mb(); ++ rdev = rcu_dereference(mirror->rdev); ++ if (rdev == rrdev) ++ rrdev = NULL; ++ ++ *prrdev = rrdev; ++ return rdev; ++} ++ + static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio) + { + int i; +@@ -1327,11 +1346,9 @@ retry_wait: + blocked_rdev = NULL; + rcu_read_lock(); + for (i = 0; i < conf->copies; i++) { +- struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev); +- struct md_rdev *rrdev = rcu_dereference( +- conf->mirrors[i].replacement); +- if (rdev == rrdev) +- rrdev = NULL; ++ struct md_rdev *rdev, *rrdev; ++ ++ rdev = dereference_rdev_and_rrdev(&conf->mirrors[i], &rrdev); + if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) { + atomic_inc(&rdev->nr_pending); + blocked_rdev = rdev; +@@ -1460,15 +1477,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio, + int d = r10_bio->devs[i].devnum; + struct md_rdev *rdev, *rrdev; + +- rrdev = rcu_dereference(conf->mirrors[d].replacement); +- /* +- * Read replacement first to prevent reading both rdev and +- * replacement as NULL during replacement replace rdev. +- */ +- smp_mb(); +- rdev = rcu_dereference(conf->mirrors[d].rdev); +- if (rdev == rrdev) +- rrdev = NULL; ++ rdev = dereference_rdev_and_rrdev(&conf->mirrors[d], &rrdev); + if (rdev && (test_bit(Faulty, &rdev->flags))) + rdev = NULL; + if (rrdev && (test_bit(Faulty, &rrdev->flags))) +@@ -1775,10 +1784,9 @@ retry_discard: + */ + rcu_read_lock(); + for (disk = 0; disk < geo->raid_disks; disk++) { +- struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev); +- struct md_rdev *rrdev = rcu_dereference( +- conf->mirrors[disk].replacement); ++ struct md_rdev *rdev, *rrdev; + ++ rdev = dereference_rdev_and_rrdev(&conf->mirrors[disk], &rrdev); + r10_bio->devs[disk].bio = NULL; + r10_bio->devs[disk].repl_bio = NULL; + +diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c +index 832d8566e1656..eb66d0bfe39d2 100644 +--- a/drivers/md/raid5-cache.c ++++ b/drivers/md/raid5-cache.c +@@ -1260,14 +1260,13 @@ static void r5l_log_flush_endio(struct bio *bio) + + if (bio->bi_status) + md_error(log->rdev->mddev, log->rdev); ++ bio_uninit(bio); + + spin_lock_irqsave(&log->io_list_lock, flags); + list_for_each_entry(io, &log->flushing_ios, log_sibling) + r5l_io_run_stripes(io); + list_splice_tail_init(&log->flushing_ios, &log->finished_ios); + spin_unlock_irqrestore(&log->io_list_lock, flags); +- +- bio_uninit(bio); + } + + /* +@@ -3166,12 +3165,15 @@ void r5l_exit_log(struct r5conf *conf) + { + struct r5l_log *log = conf->log; + +- /* Ensure disable_writeback_work wakes up and exits */ +- wake_up(&conf->mddev->sb_wait); +- flush_work(&log->disable_writeback_work); + md_unregister_thread(&log->reclaim_thread); + ++ /* ++ * 'reconfig_mutex' is held by caller, set 'confg->log' to NULL to ++ * ensure disable_writeback_work wakes up and exits. 
++ */ + conf->log = NULL; ++ wake_up(&conf->mddev->sb_wait); ++ flush_work(&log->disable_writeback_work); + + mempool_exit(&log->meta_pool); + bioset_exit(&log->bs); +diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c +index b1512f9c5895c..4bc2a705029e6 100644 +--- a/drivers/media/cec/core/cec-adap.c ++++ b/drivers/media/cec/core/cec-adap.c +@@ -385,8 +385,8 @@ static void cec_data_cancel(struct cec_data *data, u8 tx_status, u8 rx_status) + cec_queue_msg_monitor(adap, &data->msg, 1); + + if (!data->blocking && data->msg.sequence) +- /* Allow drivers to process the message first */ +- call_op(adap, received, &data->msg); ++ /* Allow drivers to react to a canceled transmit */ ++ call_void_op(adap, adap_nb_transmit_canceled, &data->msg); + + cec_data_completed(data); + } +@@ -1345,7 +1345,7 @@ static void cec_adap_unconfigure(struct cec_adapter *adap) + cec_flush(adap); + wake_up_interruptible(&adap->kthread_waitq); + cec_post_state_event(adap); +- call_void_op(adap, adap_configured, false); ++ call_void_op(adap, adap_unconfigured); + } + + /* +@@ -1536,7 +1536,7 @@ configured: + adap->kthread_config = NULL; + complete(&adap->config_completion); + mutex_unlock(&adap->lock); +- call_void_op(adap, adap_configured, true); ++ call_void_op(adap, configured); + return 0; + + unconfigure: +diff --git a/drivers/media/cec/usb/pulse8/pulse8-cec.c b/drivers/media/cec/usb/pulse8/pulse8-cec.c +index 04b13cdc38d2c..ba67587bd43ec 100644 +--- a/drivers/media/cec/usb/pulse8/pulse8-cec.c ++++ b/drivers/media/cec/usb/pulse8/pulse8-cec.c +@@ -809,8 +809,11 @@ static void pulse8_ping_eeprom_work_handler(struct work_struct *work) + + mutex_lock(&pulse8->lock); + cmd = MSGCODE_PING; +- pulse8_send_and_wait(pulse8, &cmd, 1, +- MSGCODE_COMMAND_ACCEPTED, 0); ++ if (pulse8_send_and_wait(pulse8, &cmd, 1, ++ MSGCODE_COMMAND_ACCEPTED, 0)) { ++ dev_warn(pulse8->dev, "failed to ping EEPROM\n"); ++ goto unlock; ++ } + + if (pulse8->vers < 2) + goto unlock; +diff --git a/drivers/media/dvb-frontends/ascot2e.c b/drivers/media/dvb-frontends/ascot2e.c +index 9b00b56230b61..cf8e5f1bd1018 100644 +--- a/drivers/media/dvb-frontends/ascot2e.c ++++ b/drivers/media/dvb-frontends/ascot2e.c +@@ -533,7 +533,7 @@ struct dvb_frontend *ascot2e_attach(struct dvb_frontend *fe, + priv->i2c_address, priv->i2c); + return fe; + } +-EXPORT_SYMBOL(ascot2e_attach); ++EXPORT_SYMBOL_GPL(ascot2e_attach); + + MODULE_DESCRIPTION("Sony ASCOT2E terr/cab tuner driver"); + MODULE_AUTHOR("info@netup.ru"); +diff --git a/drivers/media/dvb-frontends/atbm8830.c b/drivers/media/dvb-frontends/atbm8830.c +index bdd16b9c58244..778c865085bf9 100644 +--- a/drivers/media/dvb-frontends/atbm8830.c ++++ b/drivers/media/dvb-frontends/atbm8830.c +@@ -489,7 +489,7 @@ error_out: + return NULL; + + } +-EXPORT_SYMBOL(atbm8830_attach); ++EXPORT_SYMBOL_GPL(atbm8830_attach); + + MODULE_DESCRIPTION("AltoBeam ATBM8830/8831 GB20600 demodulator driver"); + MODULE_AUTHOR("David T. L. 
Wong "); +diff --git a/drivers/media/dvb-frontends/au8522_dig.c b/drivers/media/dvb-frontends/au8522_dig.c +index 78cafdf279618..230436bf6cbd9 100644 +--- a/drivers/media/dvb-frontends/au8522_dig.c ++++ b/drivers/media/dvb-frontends/au8522_dig.c +@@ -879,7 +879,7 @@ error: + au8522_release_state(state); + return NULL; + } +-EXPORT_SYMBOL(au8522_attach); ++EXPORT_SYMBOL_GPL(au8522_attach); + + static const struct dvb_frontend_ops au8522_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/bcm3510.c b/drivers/media/dvb-frontends/bcm3510.c +index 68b92b4419cff..b3f5c49accafd 100644 +--- a/drivers/media/dvb-frontends/bcm3510.c ++++ b/drivers/media/dvb-frontends/bcm3510.c +@@ -835,7 +835,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(bcm3510_attach); ++EXPORT_SYMBOL_GPL(bcm3510_attach); + + static const struct dvb_frontend_ops bcm3510_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/cx22700.c b/drivers/media/dvb-frontends/cx22700.c +index b39ff516271b2..1d04c0a652b26 100644 +--- a/drivers/media/dvb-frontends/cx22700.c ++++ b/drivers/media/dvb-frontends/cx22700.c +@@ -432,4 +432,4 @@ MODULE_DESCRIPTION("Conexant CX22700 DVB-T Demodulator driver"); + MODULE_AUTHOR("Holger Waechtler"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(cx22700_attach); ++EXPORT_SYMBOL_GPL(cx22700_attach); +diff --git a/drivers/media/dvb-frontends/cx22702.c b/drivers/media/dvb-frontends/cx22702.c +index cc6acbf6393d4..61ad34b7004b5 100644 +--- a/drivers/media/dvb-frontends/cx22702.c ++++ b/drivers/media/dvb-frontends/cx22702.c +@@ -604,7 +604,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(cx22702_attach); ++EXPORT_SYMBOL_GPL(cx22702_attach); + + static const struct dvb_frontend_ops cx22702_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/cx24110.c b/drivers/media/dvb-frontends/cx24110.c +index 6f99d6a27be2d..9aeea089756fe 100644 +--- a/drivers/media/dvb-frontends/cx24110.c ++++ b/drivers/media/dvb-frontends/cx24110.c +@@ -653,4 +653,4 @@ MODULE_DESCRIPTION("Conexant CX24110 DVB-S Demodulator driver"); + MODULE_AUTHOR("Peter Hettkamp"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(cx24110_attach); ++EXPORT_SYMBOL_GPL(cx24110_attach); +diff --git a/drivers/media/dvb-frontends/cx24113.c b/drivers/media/dvb-frontends/cx24113.c +index dd55d314bf9af..203cb6b3f941b 100644 +--- a/drivers/media/dvb-frontends/cx24113.c ++++ b/drivers/media/dvb-frontends/cx24113.c +@@ -590,7 +590,7 @@ error: + + return NULL; + } +-EXPORT_SYMBOL(cx24113_attach); ++EXPORT_SYMBOL_GPL(cx24113_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Activates frontend debugging (default:0)"); +diff --git a/drivers/media/dvb-frontends/cx24116.c b/drivers/media/dvb-frontends/cx24116.c +index ea8264ccbb4e8..8b978a9f74a4e 100644 +--- a/drivers/media/dvb-frontends/cx24116.c ++++ b/drivers/media/dvb-frontends/cx24116.c +@@ -1133,7 +1133,7 @@ struct dvb_frontend *cx24116_attach(const struct cx24116_config *config, + state->frontend.demodulator_priv = state; + return &state->frontend; + } +-EXPORT_SYMBOL(cx24116_attach); ++EXPORT_SYMBOL_GPL(cx24116_attach); + + /* + * Initialise or wake up device +diff --git a/drivers/media/dvb-frontends/cx24120.c b/drivers/media/dvb-frontends/cx24120.c +index d8acd582c7111..44515fdbe91d4 100644 +--- a/drivers/media/dvb-frontends/cx24120.c ++++ b/drivers/media/dvb-frontends/cx24120.c +@@ -305,7 +305,7 @@ error: + kfree(state); + return NULL; + } 
+-EXPORT_SYMBOL(cx24120_attach); ++EXPORT_SYMBOL_GPL(cx24120_attach); + + static int cx24120_test_rom(struct cx24120_state *state) + { +@@ -973,7 +973,9 @@ static void cx24120_set_clock_ratios(struct dvb_frontend *fe) + cmd.arg[8] = (clock_ratios_table[idx].rate >> 8) & 0xff; + cmd.arg[9] = (clock_ratios_table[idx].rate >> 0) & 0xff; + +- cx24120_message_send(state, &cmd); ++ ret = cx24120_message_send(state, &cmd); ++ if (ret != 0) ++ return; + + /* Calculate ber window rates for stat work */ + cx24120_calculate_ber_window(state, clock_ratios_table[idx].rate); +diff --git a/drivers/media/dvb-frontends/cx24123.c b/drivers/media/dvb-frontends/cx24123.c +index 3d84ee17e54c6..539889e638ccc 100644 +--- a/drivers/media/dvb-frontends/cx24123.c ++++ b/drivers/media/dvb-frontends/cx24123.c +@@ -1096,7 +1096,7 @@ error: + + return NULL; + } +-EXPORT_SYMBOL(cx24123_attach); ++EXPORT_SYMBOL_GPL(cx24123_attach); + + static const struct dvb_frontend_ops cx24123_ops = { + .delsys = { SYS_DVBS }, +diff --git a/drivers/media/dvb-frontends/cxd2820r_core.c b/drivers/media/dvb-frontends/cxd2820r_core.c +index 5d98222f9df09..8870aeac2872f 100644 +--- a/drivers/media/dvb-frontends/cxd2820r_core.c ++++ b/drivers/media/dvb-frontends/cxd2820r_core.c +@@ -536,7 +536,7 @@ struct dvb_frontend *cxd2820r_attach(const struct cxd2820r_config *config, + + return pdata.get_dvb_frontend(client); + } +-EXPORT_SYMBOL(cxd2820r_attach); ++EXPORT_SYMBOL_GPL(cxd2820r_attach); + + static struct dvb_frontend *cxd2820r_get_dvb_frontend(struct i2c_client *client) + { +diff --git a/drivers/media/dvb-frontends/cxd2841er.c b/drivers/media/dvb-frontends/cxd2841er.c +index 5431f922f55e4..e9d1eef40c627 100644 +--- a/drivers/media/dvb-frontends/cxd2841er.c ++++ b/drivers/media/dvb-frontends/cxd2841er.c +@@ -3930,14 +3930,14 @@ struct dvb_frontend *cxd2841er_attach_s(struct cxd2841er_config *cfg, + { + return cxd2841er_attach(cfg, i2c, SYS_DVBS); + } +-EXPORT_SYMBOL(cxd2841er_attach_s); ++EXPORT_SYMBOL_GPL(cxd2841er_attach_s); + + struct dvb_frontend *cxd2841er_attach_t_c(struct cxd2841er_config *cfg, + struct i2c_adapter *i2c) + { + return cxd2841er_attach(cfg, i2c, 0); + } +-EXPORT_SYMBOL(cxd2841er_attach_t_c); ++EXPORT_SYMBOL_GPL(cxd2841er_attach_t_c); + + static const struct dvb_frontend_ops cxd2841er_dvbs_s2_ops = { + .delsys = { SYS_DVBS, SYS_DVBS2 }, +diff --git a/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c b/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c +index d5b1b3788e392..09d31c368741d 100644 +--- a/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c ++++ b/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c +@@ -1950,7 +1950,7 @@ struct dvb_frontend *cxd2880_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(cxd2880_attach); ++EXPORT_SYMBOL_GPL(cxd2880_attach); + + MODULE_DESCRIPTION("Sony CXD2880 DVB-T2/T tuner + demod driver"); + MODULE_AUTHOR("Sony Semiconductor Solutions Corporation"); +diff --git a/drivers/media/dvb-frontends/dib0070.c b/drivers/media/dvb-frontends/dib0070.c +index cafb41dba861c..9a8e7cdd2a247 100644 +--- a/drivers/media/dvb-frontends/dib0070.c ++++ b/drivers/media/dvb-frontends/dib0070.c +@@ -762,7 +762,7 @@ free_mem: + fe->tuner_priv = NULL; + return NULL; + } +-EXPORT_SYMBOL(dib0070_attach); ++EXPORT_SYMBOL_GPL(dib0070_attach); + + MODULE_AUTHOR("Patrick Boettcher "); + MODULE_DESCRIPTION("Driver for the DiBcom 0070 base-band RF Tuner"); +diff --git a/drivers/media/dvb-frontends/dib0090.c b/drivers/media/dvb-frontends/dib0090.c +index 903da33642dff..c958bcff026ec 100644 +--- 
a/drivers/media/dvb-frontends/dib0090.c ++++ b/drivers/media/dvb-frontends/dib0090.c +@@ -2634,7 +2634,7 @@ struct dvb_frontend *dib0090_register(struct dvb_frontend *fe, struct i2c_adapte + return NULL; + } + +-EXPORT_SYMBOL(dib0090_register); ++EXPORT_SYMBOL_GPL(dib0090_register); + + struct dvb_frontend *dib0090_fw_register(struct dvb_frontend *fe, struct i2c_adapter *i2c, const struct dib0090_config *config) + { +@@ -2660,7 +2660,7 @@ free_mem: + fe->tuner_priv = NULL; + return NULL; + } +-EXPORT_SYMBOL(dib0090_fw_register); ++EXPORT_SYMBOL_GPL(dib0090_fw_register); + + MODULE_AUTHOR("Patrick Boettcher "); + MODULE_AUTHOR("Olivier Grenie "); +diff --git a/drivers/media/dvb-frontends/dib3000mb.c b/drivers/media/dvb-frontends/dib3000mb.c +index a6c2fc4586eb3..c598b2a633256 100644 +--- a/drivers/media/dvb-frontends/dib3000mb.c ++++ b/drivers/media/dvb-frontends/dib3000mb.c +@@ -815,4 +815,4 @@ MODULE_AUTHOR(DRIVER_AUTHOR); + MODULE_DESCRIPTION(DRIVER_DESC); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(dib3000mb_attach); ++EXPORT_SYMBOL_GPL(dib3000mb_attach); +diff --git a/drivers/media/dvb-frontends/dib3000mc.c b/drivers/media/dvb-frontends/dib3000mc.c +index 2e11a246aae0d..c2fca8289abae 100644 +--- a/drivers/media/dvb-frontends/dib3000mc.c ++++ b/drivers/media/dvb-frontends/dib3000mc.c +@@ -935,7 +935,7 @@ error: + kfree(st); + return NULL; + } +-EXPORT_SYMBOL(dib3000mc_attach); ++EXPORT_SYMBOL_GPL(dib3000mc_attach); + + static const struct dvb_frontend_ops dib3000mc_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/dib7000m.c b/drivers/media/dvb-frontends/dib7000m.c +index 97ce97789c9e3..fdb22f32e3a11 100644 +--- a/drivers/media/dvb-frontends/dib7000m.c ++++ b/drivers/media/dvb-frontends/dib7000m.c +@@ -1434,7 +1434,7 @@ error: + kfree(st); + return NULL; + } +-EXPORT_SYMBOL(dib7000m_attach); ++EXPORT_SYMBOL_GPL(dib7000m_attach); + + static const struct dvb_frontend_ops dib7000m_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/dib7000p.c b/drivers/media/dvb-frontends/dib7000p.c +index a90d2f51868ff..d1e53de5206ae 100644 +--- a/drivers/media/dvb-frontends/dib7000p.c ++++ b/drivers/media/dvb-frontends/dib7000p.c +@@ -497,7 +497,7 @@ static int dib7000p_update_pll(struct dvb_frontend *fe, struct dibx000_bandwidth + prediv = reg_1856 & 0x3f; + loopdiv = (reg_1856 >> 6) & 0x3f; + +- if ((bw != NULL) && (bw->pll_prediv != prediv || bw->pll_ratio != loopdiv)) { ++ if (loopdiv && bw && (bw->pll_prediv != prediv || bw->pll_ratio != loopdiv)) { + dprintk("Updating pll (prediv: old = %d new = %d ; loopdiv : old = %d new = %d)\n", prediv, bw->pll_prediv, loopdiv, bw->pll_ratio); + reg_1856 &= 0xf000; + reg_1857 = dib7000p_read_word(state, 1857); +@@ -2822,7 +2822,7 @@ void *dib7000p_attach(struct dib7000p_ops *ops) + + return ops; + } +-EXPORT_SYMBOL(dib7000p_attach); ++EXPORT_SYMBOL_GPL(dib7000p_attach); + + static const struct dvb_frontend_ops dib7000p_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c +index fe19d127abb3f..301d8eca7a6f9 100644 +--- a/drivers/media/dvb-frontends/dib8000.c ++++ b/drivers/media/dvb-frontends/dib8000.c +@@ -4527,7 +4527,7 @@ void *dib8000_attach(struct dib8000_ops *ops) + + return ops; + } +-EXPORT_SYMBOL(dib8000_attach); ++EXPORT_SYMBOL_GPL(dib8000_attach); + + MODULE_AUTHOR("Olivier Grenie "); + MODULE_DESCRIPTION("Driver for the DiBcom 8000 ISDB-T demodulator"); +diff --git a/drivers/media/dvb-frontends/dib9000.c 
b/drivers/media/dvb-frontends/dib9000.c +index 914ca820c174b..6f81890b31eeb 100644 +--- a/drivers/media/dvb-frontends/dib9000.c ++++ b/drivers/media/dvb-frontends/dib9000.c +@@ -2546,7 +2546,7 @@ error: + kfree(st); + return NULL; + } +-EXPORT_SYMBOL(dib9000_attach); ++EXPORT_SYMBOL_GPL(dib9000_attach); + + static const struct dvb_frontend_ops dib9000_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/drx39xyj/drxj.c b/drivers/media/dvb-frontends/drx39xyj/drxj.c +index bf9e4ef35684b..88860d08f9c12 100644 +--- a/drivers/media/dvb-frontends/drx39xyj/drxj.c ++++ b/drivers/media/dvb-frontends/drx39xyj/drxj.c +@@ -12368,7 +12368,7 @@ error: + + return NULL; + } +-EXPORT_SYMBOL(drx39xxj_attach); ++EXPORT_SYMBOL_GPL(drx39xxj_attach); + + static const struct dvb_frontend_ops drx39xxj_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/drxd_hard.c b/drivers/media/dvb-frontends/drxd_hard.c +index 9860cae65f1cf..6a531937f4bbb 100644 +--- a/drivers/media/dvb-frontends/drxd_hard.c ++++ b/drivers/media/dvb-frontends/drxd_hard.c +@@ -2939,7 +2939,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(drxd_attach); ++EXPORT_SYMBOL_GPL(drxd_attach); + + MODULE_DESCRIPTION("DRXD driver"); + MODULE_AUTHOR("Micronas"); +diff --git a/drivers/media/dvb-frontends/drxk_hard.c b/drivers/media/dvb-frontends/drxk_hard.c +index 9807f54119965..ff864c9bb7743 100644 +--- a/drivers/media/dvb-frontends/drxk_hard.c ++++ b/drivers/media/dvb-frontends/drxk_hard.c +@@ -6833,7 +6833,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(drxk_attach); ++EXPORT_SYMBOL_GPL(drxk_attach); + + MODULE_DESCRIPTION("DRX-K driver"); + MODULE_AUTHOR("Ralph Metzler"); +diff --git a/drivers/media/dvb-frontends/ds3000.c b/drivers/media/dvb-frontends/ds3000.c +index 20fcf31af1658..515aa7c7baf2a 100644 +--- a/drivers/media/dvb-frontends/ds3000.c ++++ b/drivers/media/dvb-frontends/ds3000.c +@@ -859,7 +859,7 @@ struct dvb_frontend *ds3000_attach(const struct ds3000_config *config, + ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF); + return &state->frontend; + } +-EXPORT_SYMBOL(ds3000_attach); ++EXPORT_SYMBOL_GPL(ds3000_attach); + + static int ds3000_set_carrier_offset(struct dvb_frontend *fe, + s32 carrier_offset_khz) +diff --git a/drivers/media/dvb-frontends/dvb-pll.c b/drivers/media/dvb-frontends/dvb-pll.c +index baf2a378e565f..fcf322ff82356 100644 +--- a/drivers/media/dvb-frontends/dvb-pll.c ++++ b/drivers/media/dvb-frontends/dvb-pll.c +@@ -866,7 +866,7 @@ out: + + return NULL; + } +-EXPORT_SYMBOL(dvb_pll_attach); ++EXPORT_SYMBOL_GPL(dvb_pll_attach); + + + static int +diff --git a/drivers/media/dvb-frontends/ec100.c b/drivers/media/dvb-frontends/ec100.c +index 03bd80666cf83..2ad0a3c2f7567 100644 +--- a/drivers/media/dvb-frontends/ec100.c ++++ b/drivers/media/dvb-frontends/ec100.c +@@ -299,7 +299,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(ec100_attach); ++EXPORT_SYMBOL_GPL(ec100_attach); + + static const struct dvb_frontend_ops ec100_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c +index 8c1310c6b0bc2..c299d31dc7d27 100644 +--- a/drivers/media/dvb-frontends/helene.c ++++ b/drivers/media/dvb-frontends/helene.c +@@ -1025,7 +1025,7 @@ struct dvb_frontend *helene_attach_s(struct dvb_frontend *fe, + priv->i2c_address, priv->i2c); + return fe; + } +-EXPORT_SYMBOL(helene_attach_s); ++EXPORT_SYMBOL_GPL(helene_attach_s); + + struct dvb_frontend *helene_attach(struct 
dvb_frontend *fe, + const struct helene_config *config, +@@ -1061,7 +1061,7 @@ struct dvb_frontend *helene_attach(struct dvb_frontend *fe, + priv->i2c_address, priv->i2c); + return fe; + } +-EXPORT_SYMBOL(helene_attach); ++EXPORT_SYMBOL_GPL(helene_attach); + + static int helene_probe(struct i2c_client *client, + const struct i2c_device_id *id) +diff --git a/drivers/media/dvb-frontends/horus3a.c b/drivers/media/dvb-frontends/horus3a.c +index 24bf5cbcc1846..0330b78a5b3f2 100644 +--- a/drivers/media/dvb-frontends/horus3a.c ++++ b/drivers/media/dvb-frontends/horus3a.c +@@ -395,7 +395,7 @@ struct dvb_frontend *horus3a_attach(struct dvb_frontend *fe, + priv->i2c_address, priv->i2c); + return fe; + } +-EXPORT_SYMBOL(horus3a_attach); ++EXPORT_SYMBOL_GPL(horus3a_attach); + + MODULE_DESCRIPTION("Sony HORUS3A satellite tuner driver"); + MODULE_AUTHOR("Sergey Kozlov "); +diff --git a/drivers/media/dvb-frontends/isl6405.c b/drivers/media/dvb-frontends/isl6405.c +index 2cd69b4ff82cb..7d28a743f97eb 100644 +--- a/drivers/media/dvb-frontends/isl6405.c ++++ b/drivers/media/dvb-frontends/isl6405.c +@@ -141,7 +141,7 @@ struct dvb_frontend *isl6405_attach(struct dvb_frontend *fe, struct i2c_adapter + + return fe; + } +-EXPORT_SYMBOL(isl6405_attach); ++EXPORT_SYMBOL_GPL(isl6405_attach); + + MODULE_DESCRIPTION("Driver for lnb supply and control ic isl6405"); + MODULE_AUTHOR("Hartmut Hackmann & Oliver Endriss"); +diff --git a/drivers/media/dvb-frontends/isl6421.c b/drivers/media/dvb-frontends/isl6421.c +index 43b0dfc6f453e..2e9f6f12f849e 100644 +--- a/drivers/media/dvb-frontends/isl6421.c ++++ b/drivers/media/dvb-frontends/isl6421.c +@@ -213,7 +213,7 @@ struct dvb_frontend *isl6421_attach(struct dvb_frontend *fe, struct i2c_adapter + + return fe; + } +-EXPORT_SYMBOL(isl6421_attach); ++EXPORT_SYMBOL_GPL(isl6421_attach); + + MODULE_DESCRIPTION("Driver for lnb supply and control ic isl6421"); + MODULE_AUTHOR("Andrew de Quincey & Oliver Endriss"); +diff --git a/drivers/media/dvb-frontends/isl6423.c b/drivers/media/dvb-frontends/isl6423.c +index 8cd1bb88ce6e7..a0d0a38340574 100644 +--- a/drivers/media/dvb-frontends/isl6423.c ++++ b/drivers/media/dvb-frontends/isl6423.c +@@ -289,7 +289,7 @@ exit: + fe->sec_priv = NULL; + return NULL; + } +-EXPORT_SYMBOL(isl6423_attach); ++EXPORT_SYMBOL_GPL(isl6423_attach); + + MODULE_DESCRIPTION("ISL6423 SEC"); + MODULE_AUTHOR("Manu Abraham"); +diff --git a/drivers/media/dvb-frontends/itd1000.c b/drivers/media/dvb-frontends/itd1000.c +index 1b33478653d16..f8f362f50e78d 100644 +--- a/drivers/media/dvb-frontends/itd1000.c ++++ b/drivers/media/dvb-frontends/itd1000.c +@@ -389,7 +389,7 @@ struct dvb_frontend *itd1000_attach(struct dvb_frontend *fe, struct i2c_adapter + + return fe; + } +-EXPORT_SYMBOL(itd1000_attach); ++EXPORT_SYMBOL_GPL(itd1000_attach); + + MODULE_AUTHOR("Patrick Boettcher "); + MODULE_DESCRIPTION("Integrant ITD1000 driver"); +diff --git a/drivers/media/dvb-frontends/ix2505v.c b/drivers/media/dvb-frontends/ix2505v.c +index 73f27105c139d..3212e333d472b 100644 +--- a/drivers/media/dvb-frontends/ix2505v.c ++++ b/drivers/media/dvb-frontends/ix2505v.c +@@ -302,7 +302,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(ix2505v_attach); ++EXPORT_SYMBOL_GPL(ix2505v_attach); + + module_param_named(debug, ix2505v_debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/l64781.c b/drivers/media/dvb-frontends/l64781.c +index c5106a1ea1cd0..fe5af2453d559 100644 +--- 
a/drivers/media/dvb-frontends/l64781.c ++++ b/drivers/media/dvb-frontends/l64781.c +@@ -593,4 +593,4 @@ MODULE_DESCRIPTION("LSI L64781 DVB-T Demodulator driver"); + MODULE_AUTHOR("Holger Waechtler, Marko Kohtala"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(l64781_attach); ++EXPORT_SYMBOL_GPL(l64781_attach); +diff --git a/drivers/media/dvb-frontends/lg2160.c b/drivers/media/dvb-frontends/lg2160.c +index f343066c297e2..fe700aa56bff3 100644 +--- a/drivers/media/dvb-frontends/lg2160.c ++++ b/drivers/media/dvb-frontends/lg2160.c +@@ -1426,7 +1426,7 @@ struct dvb_frontend *lg2160_attach(const struct lg2160_config *config, + + return &state->frontend; + } +-EXPORT_SYMBOL(lg2160_attach); ++EXPORT_SYMBOL_GPL(lg2160_attach); + + MODULE_DESCRIPTION("LG Electronics LG216x ATSC/MH Demodulator Driver"); + MODULE_AUTHOR("Michael Krufky "); +diff --git a/drivers/media/dvb-frontends/lgdt3305.c b/drivers/media/dvb-frontends/lgdt3305.c +index 62d7439889196..60a97f1cc74e5 100644 +--- a/drivers/media/dvb-frontends/lgdt3305.c ++++ b/drivers/media/dvb-frontends/lgdt3305.c +@@ -1148,7 +1148,7 @@ fail: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(lgdt3305_attach); ++EXPORT_SYMBOL_GPL(lgdt3305_attach); + + static const struct dvb_frontend_ops lgdt3304_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c +index 424311afb2bfa..6dfa8b18ed671 100644 +--- a/drivers/media/dvb-frontends/lgdt3306a.c ++++ b/drivers/media/dvb-frontends/lgdt3306a.c +@@ -1859,7 +1859,7 @@ fail: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(lgdt3306a_attach); ++EXPORT_SYMBOL_GPL(lgdt3306a_attach); + + #ifdef DBG_DUMP + +diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c +index ea9ae22fd2016..cb07869ea2fb3 100644 +--- a/drivers/media/dvb-frontends/lgdt330x.c ++++ b/drivers/media/dvb-frontends/lgdt330x.c +@@ -928,7 +928,7 @@ struct dvb_frontend *lgdt330x_attach(const struct lgdt330x_config *_config, + + return lgdt330x_get_dvb_frontend(client); + } +-EXPORT_SYMBOL(lgdt330x_attach); ++EXPORT_SYMBOL_GPL(lgdt330x_attach); + + static const struct dvb_frontend_ops lgdt3302_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/lgs8gxx.c b/drivers/media/dvb-frontends/lgs8gxx.c +index 30014979b985b..ffaf60e16ecd4 100644 +--- a/drivers/media/dvb-frontends/lgs8gxx.c ++++ b/drivers/media/dvb-frontends/lgs8gxx.c +@@ -1043,7 +1043,7 @@ error_out: + return NULL; + + } +-EXPORT_SYMBOL(lgs8gxx_attach); ++EXPORT_SYMBOL_GPL(lgs8gxx_attach); + + MODULE_DESCRIPTION("Legend Silicon LGS8913/LGS8GXX DMB-TH demodulator driver"); + MODULE_AUTHOR("David T. L. 
Wong "); +diff --git a/drivers/media/dvb-frontends/lnbh25.c b/drivers/media/dvb-frontends/lnbh25.c +index 9ffe06cd787dd..41bec050642b5 100644 +--- a/drivers/media/dvb-frontends/lnbh25.c ++++ b/drivers/media/dvb-frontends/lnbh25.c +@@ -173,7 +173,7 @@ struct dvb_frontend *lnbh25_attach(struct dvb_frontend *fe, + __func__, priv->i2c_address); + return fe; + } +-EXPORT_SYMBOL(lnbh25_attach); ++EXPORT_SYMBOL_GPL(lnbh25_attach); + + MODULE_DESCRIPTION("ST LNBH25 driver"); + MODULE_AUTHOR("info@netup.ru"); +diff --git a/drivers/media/dvb-frontends/lnbp21.c b/drivers/media/dvb-frontends/lnbp21.c +index e564974162d65..32593b1f75a38 100644 +--- a/drivers/media/dvb-frontends/lnbp21.c ++++ b/drivers/media/dvb-frontends/lnbp21.c +@@ -155,7 +155,7 @@ struct dvb_frontend *lnbh24_attach(struct dvb_frontend *fe, + return lnbx2x_attach(fe, i2c, override_set, override_clear, + i2c_addr, LNBH24_TTX); + } +-EXPORT_SYMBOL(lnbh24_attach); ++EXPORT_SYMBOL_GPL(lnbh24_attach); + + struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe, + struct i2c_adapter *i2c, u8 override_set, +@@ -164,7 +164,7 @@ struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe, + return lnbx2x_attach(fe, i2c, override_set, override_clear, + 0x08, LNBP21_ISEL); + } +-EXPORT_SYMBOL(lnbp21_attach); ++EXPORT_SYMBOL_GPL(lnbp21_attach); + + MODULE_DESCRIPTION("Driver for lnb supply and control ic lnbp21, lnbh24"); + MODULE_AUTHOR("Oliver Endriss, Igor M. Liplianin"); +diff --git a/drivers/media/dvb-frontends/lnbp22.c b/drivers/media/dvb-frontends/lnbp22.c +index b8c7145d4cefe..cb4ea5d3fad4a 100644 +--- a/drivers/media/dvb-frontends/lnbp22.c ++++ b/drivers/media/dvb-frontends/lnbp22.c +@@ -125,7 +125,7 @@ struct dvb_frontend *lnbp22_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(lnbp22_attach); ++EXPORT_SYMBOL_GPL(lnbp22_attach); + + MODULE_DESCRIPTION("Driver for lnb supply and control ic lnbp22"); + MODULE_AUTHOR("Dominik Kuhlen"); +diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c +index 4e844b2ef5971..9a0d43c7ba9e0 100644 +--- a/drivers/media/dvb-frontends/m88ds3103.c ++++ b/drivers/media/dvb-frontends/m88ds3103.c +@@ -1695,7 +1695,7 @@ struct dvb_frontend *m88ds3103_attach(const struct m88ds3103_config *cfg, + *tuner_i2c_adapter = pdata.get_i2c_adapter(client); + return pdata.get_dvb_frontend(client); + } +-EXPORT_SYMBOL(m88ds3103_attach); ++EXPORT_SYMBOL_GPL(m88ds3103_attach); + + static const struct dvb_frontend_ops m88ds3103_ops = { + .delsys = {SYS_DVBS, SYS_DVBS2}, +diff --git a/drivers/media/dvb-frontends/m88rs2000.c b/drivers/media/dvb-frontends/m88rs2000.c +index b294ba87e934f..2aa98203cd659 100644 +--- a/drivers/media/dvb-frontends/m88rs2000.c ++++ b/drivers/media/dvb-frontends/m88rs2000.c +@@ -808,7 +808,7 @@ error: + + return NULL; + } +-EXPORT_SYMBOL(m88rs2000_attach); ++EXPORT_SYMBOL_GPL(m88rs2000_attach); + + MODULE_DESCRIPTION("M88RS2000 DVB-S Demodulator driver"); + MODULE_AUTHOR("Malcolm Priestley tvboxspy@gmail.com"); +diff --git a/drivers/media/dvb-frontends/mb86a16.c b/drivers/media/dvb-frontends/mb86a16.c +index 2505f1e5794e7..ed08e0c2cf512 100644 +--- a/drivers/media/dvb-frontends/mb86a16.c ++++ b/drivers/media/dvb-frontends/mb86a16.c +@@ -1848,6 +1848,6 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(mb86a16_attach); ++EXPORT_SYMBOL_GPL(mb86a16_attach); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Manu Abraham"); +diff --git a/drivers/media/dvb-frontends/mb86a20s.c b/drivers/media/dvb-frontends/mb86a20s.c +index 
b74b9afed9a2e..9f5c61d4f23c5 100644 +--- a/drivers/media/dvb-frontends/mb86a20s.c ++++ b/drivers/media/dvb-frontends/mb86a20s.c +@@ -2081,7 +2081,7 @@ struct dvb_frontend *mb86a20s_attach(const struct mb86a20s_config *config, + dev_info(&i2c->dev, "Detected a Fujitsu mb86a20s frontend\n"); + return &state->frontend; + } +-EXPORT_SYMBOL(mb86a20s_attach); ++EXPORT_SYMBOL_GPL(mb86a20s_attach); + + static const struct dvb_frontend_ops mb86a20s_ops = { + .delsys = { SYS_ISDBT }, +diff --git a/drivers/media/dvb-frontends/mt312.c b/drivers/media/dvb-frontends/mt312.c +index d43a67045dbe7..fb867dd8a26be 100644 +--- a/drivers/media/dvb-frontends/mt312.c ++++ b/drivers/media/dvb-frontends/mt312.c +@@ -827,7 +827,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(mt312_attach); ++EXPORT_SYMBOL_GPL(mt312_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/mt352.c b/drivers/media/dvb-frontends/mt352.c +index 399d5c519027e..1b2889f5cf67d 100644 +--- a/drivers/media/dvb-frontends/mt352.c ++++ b/drivers/media/dvb-frontends/mt352.c +@@ -593,4 +593,4 @@ MODULE_DESCRIPTION("Zarlink MT352 DVB-T Demodulator driver"); + MODULE_AUTHOR("Holger Waechtler, Daniel Mack, Antonio Mancuso"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(mt352_attach); ++EXPORT_SYMBOL_GPL(mt352_attach); +diff --git a/drivers/media/dvb-frontends/nxt200x.c b/drivers/media/dvb-frontends/nxt200x.c +index 200b6dbc75f81..1c549ada6ebf9 100644 +--- a/drivers/media/dvb-frontends/nxt200x.c ++++ b/drivers/media/dvb-frontends/nxt200x.c +@@ -1216,5 +1216,5 @@ MODULE_DESCRIPTION("NXT200X (ATSC 8VSB & ITU-T J.83 AnnexB 64/256 QAM) Demodulat + MODULE_AUTHOR("Kirk Lapray, Michael Krufky, Jean-Francois Thibert, and Taylor Jacob"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(nxt200x_attach); ++EXPORT_SYMBOL_GPL(nxt200x_attach); + +diff --git a/drivers/media/dvb-frontends/nxt6000.c b/drivers/media/dvb-frontends/nxt6000.c +index 136918f82dda0..e8d4940370ddf 100644 +--- a/drivers/media/dvb-frontends/nxt6000.c ++++ b/drivers/media/dvb-frontends/nxt6000.c +@@ -621,4 +621,4 @@ MODULE_DESCRIPTION("NxtWave NXT6000 DVB-T demodulator driver"); + MODULE_AUTHOR("Florian Schirmer"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(nxt6000_attach); ++EXPORT_SYMBOL_GPL(nxt6000_attach); +diff --git a/drivers/media/dvb-frontends/or51132.c b/drivers/media/dvb-frontends/or51132.c +index 24de1b1151583..144a1f25dec0a 100644 +--- a/drivers/media/dvb-frontends/or51132.c ++++ b/drivers/media/dvb-frontends/or51132.c +@@ -605,4 +605,4 @@ MODULE_AUTHOR("Kirk Lapray"); + MODULE_AUTHOR("Trent Piepho"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(or51132_attach); ++EXPORT_SYMBOL_GPL(or51132_attach); +diff --git a/drivers/media/dvb-frontends/or51211.c b/drivers/media/dvb-frontends/or51211.c +index ddcaea5c9941f..dc60482162c54 100644 +--- a/drivers/media/dvb-frontends/or51211.c ++++ b/drivers/media/dvb-frontends/or51211.c +@@ -551,5 +551,5 @@ MODULE_DESCRIPTION("Oren OR51211 VSB [pcHDTV HD-2000] Demodulator Driver"); + MODULE_AUTHOR("Kirk Lapray"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(or51211_attach); ++EXPORT_SYMBOL_GPL(or51211_attach); + +diff --git a/drivers/media/dvb-frontends/s5h1409.c b/drivers/media/dvb-frontends/s5h1409.c +index 3089cc174a6f5..28b1dca077ead 100644 +--- a/drivers/media/dvb-frontends/s5h1409.c ++++ b/drivers/media/dvb-frontends/s5h1409.c +@@ -981,7 +981,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(s5h1409_attach); 
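/*
 * Illustrative sketch only, not part of the 6.1.53 patch. Every hunk in
 * this stretch converts a DVB frontend/tuner attach entry point from
 * EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL(), which limits the export to
 * modules declaring a GPL-compatible MODULE_LICENSE. A minimal module
 * exporting a symbol the same way (all names here are hypothetical):
 */
#include <linux/module.h>
#include <linux/export.h>
#include <linux/slab.h>

struct demo_fe_state {
	int probed;
};

struct demo_fe_state *demo_fe_attach(void)
{
	/* allocate driver state; a real attach would also probe the chip */
	return kzalloc(sizeof(struct demo_fe_state), GFP_KERNEL);
}
/* GPL-only export: non-GPL modules can no longer link against it */
EXPORT_SYMBOL_GPL(demo_fe_attach);

MODULE_DESCRIPTION("Hypothetical attach export example");
MODULE_LICENSE("GPL");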
++EXPORT_SYMBOL_GPL(s5h1409_attach); + + static const struct dvb_frontend_ops s5h1409_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/s5h1411.c b/drivers/media/dvb-frontends/s5h1411.c +index 2563a72e98b70..fc48e659c2d8a 100644 +--- a/drivers/media/dvb-frontends/s5h1411.c ++++ b/drivers/media/dvb-frontends/s5h1411.c +@@ -900,7 +900,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(s5h1411_attach); ++EXPORT_SYMBOL_GPL(s5h1411_attach); + + static const struct dvb_frontend_ops s5h1411_ops = { + .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B }, +diff --git a/drivers/media/dvb-frontends/s5h1420.c b/drivers/media/dvb-frontends/s5h1420.c +index 6bdec2898bc81..d700de1ea6c24 100644 +--- a/drivers/media/dvb-frontends/s5h1420.c ++++ b/drivers/media/dvb-frontends/s5h1420.c +@@ -918,7 +918,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(s5h1420_attach); ++EXPORT_SYMBOL_GPL(s5h1420_attach); + + static const struct dvb_frontend_ops s5h1420_ops = { + .delsys = { SYS_DVBS }, +diff --git a/drivers/media/dvb-frontends/s5h1432.c b/drivers/media/dvb-frontends/s5h1432.c +index 956e8ee4b388e..ff5d3bdf3bc67 100644 +--- a/drivers/media/dvb-frontends/s5h1432.c ++++ b/drivers/media/dvb-frontends/s5h1432.c +@@ -355,7 +355,7 @@ struct dvb_frontend *s5h1432_attach(const struct s5h1432_config *config, + + return &state->frontend; + } +-EXPORT_SYMBOL(s5h1432_attach); ++EXPORT_SYMBOL_GPL(s5h1432_attach); + + static const struct dvb_frontend_ops s5h1432_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/s921.c b/drivers/media/dvb-frontends/s921.c +index f118d8e641030..7e461ac159fc1 100644 +--- a/drivers/media/dvb-frontends/s921.c ++++ b/drivers/media/dvb-frontends/s921.c +@@ -495,7 +495,7 @@ struct dvb_frontend *s921_attach(const struct s921_config *config, + + return &state->frontend; + } +-EXPORT_SYMBOL(s921_attach); ++EXPORT_SYMBOL_GPL(s921_attach); + + static const struct dvb_frontend_ops s921_ops = { + .delsys = { SYS_ISDBT }, +diff --git a/drivers/media/dvb-frontends/si21xx.c b/drivers/media/dvb-frontends/si21xx.c +index 2d29d2c4d434c..210ccd356e2bf 100644 +--- a/drivers/media/dvb-frontends/si21xx.c ++++ b/drivers/media/dvb-frontends/si21xx.c +@@ -937,7 +937,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(si21xx_attach); ++EXPORT_SYMBOL_GPL(si21xx_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/sp887x.c b/drivers/media/dvb-frontends/sp887x.c +index 146e7f2dd3c5e..f59c0f96416b5 100644 +--- a/drivers/media/dvb-frontends/sp887x.c ++++ b/drivers/media/dvb-frontends/sp887x.c +@@ -624,4 +624,4 @@ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); + MODULE_DESCRIPTION("Spase sp887x DVB-T demodulator driver"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(sp887x_attach); ++EXPORT_SYMBOL_GPL(sp887x_attach); +diff --git a/drivers/media/dvb-frontends/stb0899_drv.c b/drivers/media/dvb-frontends/stb0899_drv.c +index 4ee6c1e1e9f7d..2f4d8fb400cd6 100644 +--- a/drivers/media/dvb-frontends/stb0899_drv.c ++++ b/drivers/media/dvb-frontends/stb0899_drv.c +@@ -1638,7 +1638,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(stb0899_attach); ++EXPORT_SYMBOL_GPL(stb0899_attach); + MODULE_PARM_DESC(verbose, "Set Verbosity level"); + MODULE_AUTHOR("Manu Abraham"); + MODULE_DESCRIPTION("STB0899 Multi-Std frontend"); +diff --git a/drivers/media/dvb-frontends/stb6000.c 
b/drivers/media/dvb-frontends/stb6000.c +index 8c9800d577e03..d74e34677b925 100644 +--- a/drivers/media/dvb-frontends/stb6000.c ++++ b/drivers/media/dvb-frontends/stb6000.c +@@ -232,7 +232,7 @@ struct dvb_frontend *stb6000_attach(struct dvb_frontend *fe, int addr, + + return fe; + } +-EXPORT_SYMBOL(stb6000_attach); ++EXPORT_SYMBOL_GPL(stb6000_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/stb6100.c b/drivers/media/dvb-frontends/stb6100.c +index 698866c4f15a7..c5818a15a0d70 100644 +--- a/drivers/media/dvb-frontends/stb6100.c ++++ b/drivers/media/dvb-frontends/stb6100.c +@@ -557,7 +557,7 @@ static void stb6100_release(struct dvb_frontend *fe) + kfree(state); + } + +-EXPORT_SYMBOL(stb6100_attach); ++EXPORT_SYMBOL_GPL(stb6100_attach); + MODULE_PARM_DESC(verbose, "Set Verbosity level"); + + MODULE_AUTHOR("Manu Abraham"); +diff --git a/drivers/media/dvb-frontends/stv0288.c b/drivers/media/dvb-frontends/stv0288.c +index 3ae1f3a2f1420..a5581bd60f9e8 100644 +--- a/drivers/media/dvb-frontends/stv0288.c ++++ b/drivers/media/dvb-frontends/stv0288.c +@@ -590,7 +590,7 @@ error: + + return NULL; + } +-EXPORT_SYMBOL(stv0288_attach); ++EXPORT_SYMBOL_GPL(stv0288_attach); + + module_param(debug_legacy_dish_switch, int, 0444); + MODULE_PARM_DESC(debug_legacy_dish_switch, +diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c +index 6d5962d5697ac..9d4dbd99a5a79 100644 +--- a/drivers/media/dvb-frontends/stv0297.c ++++ b/drivers/media/dvb-frontends/stv0297.c +@@ -710,4 +710,4 @@ MODULE_DESCRIPTION("ST STV0297 DVB-C Demodulator driver"); + MODULE_AUTHOR("Dennis Noermann and Andrew de Quincey"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(stv0297_attach); ++EXPORT_SYMBOL_GPL(stv0297_attach); +diff --git a/drivers/media/dvb-frontends/stv0299.c b/drivers/media/dvb-frontends/stv0299.c +index b5263a0ee5aa5..da7ff2c2e8e55 100644 +--- a/drivers/media/dvb-frontends/stv0299.c ++++ b/drivers/media/dvb-frontends/stv0299.c +@@ -752,4 +752,4 @@ MODULE_DESCRIPTION("ST STV0299 DVB Demodulator driver"); + MODULE_AUTHOR("Ralph Metzler, Holger Waechtler, Peter Schildmann, Felix Domke, Andreas Oberritter, Andrew de Quincey, Kenneth Aafly"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(stv0299_attach); ++EXPORT_SYMBOL_GPL(stv0299_attach); +diff --git a/drivers/media/dvb-frontends/stv0367.c b/drivers/media/dvb-frontends/stv0367.c +index 95e376f23506f..04556b77c16c9 100644 +--- a/drivers/media/dvb-frontends/stv0367.c ++++ b/drivers/media/dvb-frontends/stv0367.c +@@ -1750,7 +1750,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(stv0367ter_attach); ++EXPORT_SYMBOL_GPL(stv0367ter_attach); + + static int stv0367cab_gate_ctrl(struct dvb_frontend *fe, int enable) + { +@@ -2919,7 +2919,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(stv0367cab_attach); ++EXPORT_SYMBOL_GPL(stv0367cab_attach); + + /* + * Functions for operation on Digital Devices hardware +@@ -3340,7 +3340,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(stv0367ddb_attach); ++EXPORT_SYMBOL_GPL(stv0367ddb_attach); + + MODULE_PARM_DESC(debug, "Set debug"); + MODULE_PARM_DESC(i2c_debug, "Set i2c debug"); +diff --git a/drivers/media/dvb-frontends/stv0900_core.c b/drivers/media/dvb-frontends/stv0900_core.c +index 212312d20ff62..e7b9b9b11d7df 100644 +--- a/drivers/media/dvb-frontends/stv0900_core.c ++++ b/drivers/media/dvb-frontends/stv0900_core.c +@@ -1957,7 +1957,7 @@ error: + kfree(state); 
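/*
 * Illustrative sketch only, not part of the patch. The "error:" labels
 * seen in these hunks follow the usual attach convention for demodulator
 * drivers: allocate state, probe the device, and on failure free the
 * state and return NULL so the caller knows the attach did not succeed.
 * Names below are hypothetical:
 */
#include <linux/slab.h>

struct demo_state {
	int chip_id;
};

static int demo_probe_chip(struct demo_state *state)
{
	/* stand-in for an I2C identity read */
	state->chip_id = 0;
	return 0;
}

static struct demo_state *demo_attach(void)
{
	struct demo_state *state;

	state = kzalloc(sizeof(*state), GFP_KERNEL);
	if (!state)
		return NULL;

	if (demo_probe_chip(state))
		goto error;

	return state;

error:
	kfree(state);
	return NULL;
}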
+ return NULL; + } +-EXPORT_SYMBOL(stv0900_attach); ++EXPORT_SYMBOL_GPL(stv0900_attach); + + MODULE_PARM_DESC(debug, "Set debug"); + +diff --git a/drivers/media/dvb-frontends/stv090x.c b/drivers/media/dvb-frontends/stv090x.c +index 0a600c1d7d1b1..aba64162dac45 100644 +--- a/drivers/media/dvb-frontends/stv090x.c ++++ b/drivers/media/dvb-frontends/stv090x.c +@@ -5072,7 +5072,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(stv090x_attach); ++EXPORT_SYMBOL_GPL(stv090x_attach); + + static const struct i2c_device_id stv090x_id_table[] = { + {"stv090x", 0}, +diff --git a/drivers/media/dvb-frontends/stv6110.c b/drivers/media/dvb-frontends/stv6110.c +index 963f6a896102a..1cf9c095dbff0 100644 +--- a/drivers/media/dvb-frontends/stv6110.c ++++ b/drivers/media/dvb-frontends/stv6110.c +@@ -427,7 +427,7 @@ struct dvb_frontend *stv6110_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(stv6110_attach); ++EXPORT_SYMBOL_GPL(stv6110_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/stv6110x.c b/drivers/media/dvb-frontends/stv6110x.c +index fbc4dbd62151d..6ab2001215917 100644 +--- a/drivers/media/dvb-frontends/stv6110x.c ++++ b/drivers/media/dvb-frontends/stv6110x.c +@@ -468,7 +468,7 @@ const struct stv6110x_devctl *stv6110x_attach(struct dvb_frontend *fe, + dev_info(&stv6110x->i2c->dev, "Attaching STV6110x\n"); + return stv6110x->devctl; + } +-EXPORT_SYMBOL(stv6110x_attach); ++EXPORT_SYMBOL_GPL(stv6110x_attach); + + static const struct i2c_device_id stv6110x_id_table[] = { + {"stv6110x", 0}, +diff --git a/drivers/media/dvb-frontends/tda10021.c b/drivers/media/dvb-frontends/tda10021.c +index faa6e54b33729..462e12ab6bd14 100644 +--- a/drivers/media/dvb-frontends/tda10021.c ++++ b/drivers/media/dvb-frontends/tda10021.c +@@ -523,4 +523,4 @@ MODULE_DESCRIPTION("Philips TDA10021 DVB-C demodulator driver"); + MODULE_AUTHOR("Ralph Metzler, Holger Waechtler, Markus Schulz"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(tda10021_attach); ++EXPORT_SYMBOL_GPL(tda10021_attach); +diff --git a/drivers/media/dvb-frontends/tda10023.c b/drivers/media/dvb-frontends/tda10023.c +index 8f32edf6b700e..4c2541ecd7433 100644 +--- a/drivers/media/dvb-frontends/tda10023.c ++++ b/drivers/media/dvb-frontends/tda10023.c +@@ -594,4 +594,4 @@ MODULE_DESCRIPTION("Philips TDA10023 DVB-C demodulator driver"); + MODULE_AUTHOR("Georg Acher, Hartmut Birr"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(tda10023_attach); ++EXPORT_SYMBOL_GPL(tda10023_attach); +diff --git a/drivers/media/dvb-frontends/tda10048.c b/drivers/media/dvb-frontends/tda10048.c +index 0b3f6999515e3..f6d8a64762b99 100644 +--- a/drivers/media/dvb-frontends/tda10048.c ++++ b/drivers/media/dvb-frontends/tda10048.c +@@ -1138,7 +1138,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(tda10048_attach); ++EXPORT_SYMBOL_GPL(tda10048_attach); + + static const struct dvb_frontend_ops tda10048_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/dvb-frontends/tda1004x.c b/drivers/media/dvb-frontends/tda1004x.c +index 83a798ca9b002..6f306db6c615f 100644 +--- a/drivers/media/dvb-frontends/tda1004x.c ++++ b/drivers/media/dvb-frontends/tda1004x.c +@@ -1378,5 +1378,5 @@ MODULE_DESCRIPTION("Philips TDA10045H & TDA10046H DVB-T Demodulator"); + MODULE_AUTHOR("Andrew de Quincey & Robert Schlabbach"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(tda10045_attach); +-EXPORT_SYMBOL(tda10046_attach); ++EXPORT_SYMBOL_GPL(tda10045_attach); 
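/*
 * Illustrative sketch only, not part of the patch. Bridge drivers
 * normally reach these attach functions through the dvb_attach() macro
 * from include/media/dvbdev.h, which (when CONFIG_MEDIA_ATTACH is set)
 * resolves the symbol at runtime and takes a module reference; with the
 * exports now GPL-only, only GPL-compatible callers can resolve them.
 * The frontend driver and config below are hypothetical:
 */
#include <linux/i2c.h>
#include <media/dvbdev.h>
#include <media/dvb_frontend.h>

struct demo_fe_config {
	u8 i2c_addr;
};

/* provided by the hypothetical demodulator module */
extern struct dvb_frontend *demo_fe_attach(const struct demo_fe_config *cfg,
					   struct i2c_adapter *i2c);

static const struct demo_fe_config demo_cfg = { .i2c_addr = 0x68 };

static struct dvb_frontend *demo_bridge_attach(struct i2c_adapter *i2c)
{
	/* returns NULL if the symbol is unavailable or the probe fails */
	return dvb_attach(demo_fe_attach, &demo_cfg, i2c);
}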
++EXPORT_SYMBOL_GPL(tda10046_attach); +diff --git a/drivers/media/dvb-frontends/tda10086.c b/drivers/media/dvb-frontends/tda10086.c +index cdcf97664bba8..b449514ae5854 100644 +--- a/drivers/media/dvb-frontends/tda10086.c ++++ b/drivers/media/dvb-frontends/tda10086.c +@@ -764,4 +764,4 @@ MODULE_DESCRIPTION("Philips TDA10086 DVB-S Demodulator"); + MODULE_AUTHOR("Andrew de Quincey"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(tda10086_attach); ++EXPORT_SYMBOL_GPL(tda10086_attach); +diff --git a/drivers/media/dvb-frontends/tda665x.c b/drivers/media/dvb-frontends/tda665x.c +index 13e8969da7f89..346be5011fb73 100644 +--- a/drivers/media/dvb-frontends/tda665x.c ++++ b/drivers/media/dvb-frontends/tda665x.c +@@ -227,7 +227,7 @@ struct dvb_frontend *tda665x_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(tda665x_attach); ++EXPORT_SYMBOL_GPL(tda665x_attach); + + MODULE_DESCRIPTION("TDA665x driver"); + MODULE_AUTHOR("Manu Abraham"); +diff --git a/drivers/media/dvb-frontends/tda8083.c b/drivers/media/dvb-frontends/tda8083.c +index e3e1c3db2c856..44f53624557bc 100644 +--- a/drivers/media/dvb-frontends/tda8083.c ++++ b/drivers/media/dvb-frontends/tda8083.c +@@ -481,4 +481,4 @@ MODULE_DESCRIPTION("Philips TDA8083 DVB-S Demodulator"); + MODULE_AUTHOR("Ralph Metzler, Holger Waechtler"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(tda8083_attach); ++EXPORT_SYMBOL_GPL(tda8083_attach); +diff --git a/drivers/media/dvb-frontends/tda8261.c b/drivers/media/dvb-frontends/tda8261.c +index 0d576d41c67d8..8b06f92745dca 100644 +--- a/drivers/media/dvb-frontends/tda8261.c ++++ b/drivers/media/dvb-frontends/tda8261.c +@@ -188,7 +188,7 @@ exit: + return NULL; + } + +-EXPORT_SYMBOL(tda8261_attach); ++EXPORT_SYMBOL_GPL(tda8261_attach); + + MODULE_AUTHOR("Manu Abraham"); + MODULE_DESCRIPTION("TDA8261 8PSK/QPSK Tuner"); +diff --git a/drivers/media/dvb-frontends/tda826x.c b/drivers/media/dvb-frontends/tda826x.c +index f9703a1dd758c..eafcf5f7da3dc 100644 +--- a/drivers/media/dvb-frontends/tda826x.c ++++ b/drivers/media/dvb-frontends/tda826x.c +@@ -164,7 +164,7 @@ struct dvb_frontend *tda826x_attach(struct dvb_frontend *fe, int addr, struct i2 + + return fe; + } +-EXPORT_SYMBOL(tda826x_attach); ++EXPORT_SYMBOL_GPL(tda826x_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c +index 02338256b974f..2f64f1a8bc233 100644 +--- a/drivers/media/dvb-frontends/ts2020.c ++++ b/drivers/media/dvb-frontends/ts2020.c +@@ -525,7 +525,7 @@ struct dvb_frontend *ts2020_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(ts2020_attach); ++EXPORT_SYMBOL_GPL(ts2020_attach); + + /* + * We implement own regmap locking due to legacy DVB attach which uses frontend +diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c +index 2483f614d0e7d..41dd9b6d31908 100644 +--- a/drivers/media/dvb-frontends/tua6100.c ++++ b/drivers/media/dvb-frontends/tua6100.c +@@ -186,7 +186,7 @@ struct dvb_frontend *tua6100_attach(struct dvb_frontend *fe, int addr, struct i2 + fe->tuner_priv = priv; + return fe; + } +-EXPORT_SYMBOL(tua6100_attach); ++EXPORT_SYMBOL_GPL(tua6100_attach); + + MODULE_DESCRIPTION("DVB tua6100 driver"); + MODULE_AUTHOR("Andrew de Quincey"); +diff --git a/drivers/media/dvb-frontends/ves1820.c b/drivers/media/dvb-frontends/ves1820.c +index 9df14d0be1c1a..ee5620e731e9b 100644 +--- a/drivers/media/dvb-frontends/ves1820.c ++++ 
b/drivers/media/dvb-frontends/ves1820.c +@@ -434,4 +434,4 @@ MODULE_DESCRIPTION("VLSI VES1820 DVB-C Demodulator driver"); + MODULE_AUTHOR("Ralph Metzler, Holger Waechtler"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(ves1820_attach); ++EXPORT_SYMBOL_GPL(ves1820_attach); +diff --git a/drivers/media/dvb-frontends/ves1x93.c b/drivers/media/dvb-frontends/ves1x93.c +index b747272863025..c60e21d26b881 100644 +--- a/drivers/media/dvb-frontends/ves1x93.c ++++ b/drivers/media/dvb-frontends/ves1x93.c +@@ -540,4 +540,4 @@ MODULE_DESCRIPTION("VLSI VES1x93 DVB-S Demodulator driver"); + MODULE_AUTHOR("Ralph Metzler"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(ves1x93_attach); ++EXPORT_SYMBOL_GPL(ves1x93_attach); +diff --git a/drivers/media/dvb-frontends/zl10036.c b/drivers/media/dvb-frontends/zl10036.c +index d392c7cce2ce0..7ba575e9c55f4 100644 +--- a/drivers/media/dvb-frontends/zl10036.c ++++ b/drivers/media/dvb-frontends/zl10036.c +@@ -496,7 +496,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(zl10036_attach); ++EXPORT_SYMBOL_GPL(zl10036_attach); + + module_param_named(debug, zl10036_debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/zl10039.c b/drivers/media/dvb-frontends/zl10039.c +index 1335bf78d5b7f..a3e4d219400ce 100644 +--- a/drivers/media/dvb-frontends/zl10039.c ++++ b/drivers/media/dvb-frontends/zl10039.c +@@ -295,7 +295,7 @@ error: + kfree(state); + return NULL; + } +-EXPORT_SYMBOL(zl10039_attach); ++EXPORT_SYMBOL_GPL(zl10039_attach); + + module_param(debug, int, 0644); + MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off)."); +diff --git a/drivers/media/dvb-frontends/zl10353.c b/drivers/media/dvb-frontends/zl10353.c +index 2a2cf20a73d61..8849d05475c27 100644 +--- a/drivers/media/dvb-frontends/zl10353.c ++++ b/drivers/media/dvb-frontends/zl10353.c +@@ -665,4 +665,4 @@ MODULE_DESCRIPTION("Zarlink ZL10353 DVB-T demodulator driver"); + MODULE_AUTHOR("Chris Pascoe"); + MODULE_LICENSE("GPL"); + +-EXPORT_SYMBOL(zl10353_attach); ++EXPORT_SYMBOL_GPL(zl10353_attach); +diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig +index 7806d4b81716e..a34afb5217ebc 100644 +--- a/drivers/media/i2c/Kconfig ++++ b/drivers/media/i2c/Kconfig +@@ -25,8 +25,15 @@ config VIDEO_IR_I2C + # V4L2 I2C drivers that are related with Camera support + # + +-menu "Camera sensor devices" +- visible if MEDIA_CAMERA_SUPPORT ++menuconfig VIDEO_CAMERA_SENSOR ++ bool "Camera sensor devices" ++ depends on MEDIA_CAMERA_SUPPORT && I2C ++ select MEDIA_CONTROLLER ++ select V4L2_FWNODE ++ select VIDEO_V4L2_SUBDEV_API ++ default y ++ ++if VIDEO_CAMERA_SENSOR + + config VIDEO_APTINA_PLL + tristate +@@ -783,7 +790,7 @@ source "drivers/media/i2c/ccs/Kconfig" + source "drivers/media/i2c/et8ek8/Kconfig" + source "drivers/media/i2c/m5mols/Kconfig" + +-endmenu ++endif + + menu "Lens drivers" + visible if MEDIA_CAMERA_SUPPORT +diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c +index a12fedcc3a1ce..088c29c4e2529 100644 +--- a/drivers/media/i2c/ad5820.c ++++ b/drivers/media/i2c/ad5820.c +@@ -356,7 +356,6 @@ static void ad5820_remove(struct i2c_client *client) + static const struct i2c_device_id ad5820_id_table[] = { + { "ad5820", 0 }, + { "ad5821", 0 }, +- { "ad5823", 0 }, + { } + }; + MODULE_DEVICE_TABLE(i2c, ad5820_id_table); +@@ -364,7 +363,6 @@ MODULE_DEVICE_TABLE(i2c, ad5820_id_table); + static const struct of_device_id ad5820_of_table[] = { + { .compatible = "adi,ad5820" }, + { .compatible = 
"adi,ad5821" }, +- { .compatible = "adi,ad5823" }, + { } + }; + MODULE_DEVICE_TABLE(of, ad5820_of_table); +diff --git a/drivers/media/i2c/ccs/ccs-data.c b/drivers/media/i2c/ccs/ccs-data.c +index 45f2b2f55ec5c..08400edf77ced 100644 +--- a/drivers/media/i2c/ccs/ccs-data.c ++++ b/drivers/media/i2c/ccs/ccs-data.c +@@ -464,8 +464,7 @@ static int ccs_data_parse_rules(struct bin_container *bin, + rule_payload = __rule_type + 1; + rule_plen2 = rule_plen - sizeof(*__rule_type); + +- switch (*__rule_type) { +- case CCS_DATA_BLOCK_RULE_ID_IF: { ++ if (*__rule_type == CCS_DATA_BLOCK_RULE_ID_IF) { + const struct __ccs_data_block_rule_if *__if_rules = + rule_payload; + const size_t __num_if_rules = +@@ -514,49 +513,61 @@ static int ccs_data_parse_rules(struct bin_container *bin, + rules->if_rules = if_rule; + rules->num_if_rules = __num_if_rules; + } +- break; +- } +- case CCS_DATA_BLOCK_RULE_ID_READ_ONLY_REGS: +- rval = ccs_data_parse_reg_rules(bin, &rules->read_only_regs, +- &rules->num_read_only_regs, +- rule_payload, +- rule_payload + rule_plen2, +- dev); +- if (rval) +- return rval; +- break; +- case CCS_DATA_BLOCK_RULE_ID_FFD: +- rval = ccs_data_parse_ffd(bin, &rules->frame_format, +- rule_payload, +- rule_payload + rule_plen2, +- dev); +- if (rval) +- return rval; +- break; +- case CCS_DATA_BLOCK_RULE_ID_MSR: +- rval = ccs_data_parse_reg_rules(bin, +- &rules->manufacturer_regs, +- &rules->num_manufacturer_regs, +- rule_payload, +- rule_payload + rule_plen2, +- dev); +- if (rval) +- return rval; +- break; +- case CCS_DATA_BLOCK_RULE_ID_PDAF_READOUT: +- rval = ccs_data_parse_pdaf_readout(bin, +- &rules->pdaf_readout, +- rule_payload, +- rule_payload + rule_plen2, +- dev); +- if (rval) +- return rval; +- break; +- default: +- dev_dbg(dev, +- "Don't know how to handle rule type %u!\n", +- *__rule_type); +- return -EINVAL; ++ } else { ++ /* Check there was an if rule before any other rules */ ++ if (bin->base && !rules) ++ return -EINVAL; ++ ++ switch (*__rule_type) { ++ case CCS_DATA_BLOCK_RULE_ID_READ_ONLY_REGS: ++ rval = ccs_data_parse_reg_rules(bin, ++ rules ? ++ &rules->read_only_regs : NULL, ++ rules ? ++ &rules->num_read_only_regs : NULL, ++ rule_payload, ++ rule_payload + rule_plen2, ++ dev); ++ if (rval) ++ return rval; ++ break; ++ case CCS_DATA_BLOCK_RULE_ID_FFD: ++ rval = ccs_data_parse_ffd(bin, rules ? ++ &rules->frame_format : NULL, ++ rule_payload, ++ rule_payload + rule_plen2, ++ dev); ++ if (rval) ++ return rval; ++ break; ++ case CCS_DATA_BLOCK_RULE_ID_MSR: ++ rval = ccs_data_parse_reg_rules(bin, ++ rules ? ++ &rules->manufacturer_regs : NULL, ++ rules ? ++ &rules->num_manufacturer_regs : NULL, ++ rule_payload, ++ rule_payload + rule_plen2, ++ dev); ++ if (rval) ++ return rval; ++ break; ++ case CCS_DATA_BLOCK_RULE_ID_PDAF_READOUT: ++ rval = ccs_data_parse_pdaf_readout(bin, ++ rules ? 
++ &rules->pdaf_readout : NULL, ++ rule_payload, ++ rule_payload + rule_plen2, ++ dev); ++ if (rval) ++ return rval; ++ break; ++ default: ++ dev_dbg(dev, ++ "Don't know how to handle rule type %u!\n", ++ *__rule_type); ++ return -EINVAL; ++ } + } + __next_rule = __next_rule + rule_hlen + rule_plen; + } +diff --git a/drivers/media/i2c/ov2680.c b/drivers/media/i2c/ov2680.c +index de66d3395a4dd..8943e4e78a0df 100644 +--- a/drivers/media/i2c/ov2680.c ++++ b/drivers/media/i2c/ov2680.c +@@ -54,6 +54,9 @@ + #define OV2680_WIDTH_MAX 1600 + #define OV2680_HEIGHT_MAX 1200 + ++#define OV2680_DEFAULT_WIDTH 800 ++#define OV2680_DEFAULT_HEIGHT 600 ++ + enum ov2680_mode_id { + OV2680_MODE_QUXGA_800_600, + OV2680_MODE_720P_1280_720, +@@ -85,15 +88,8 @@ struct ov2680_mode_info { + + struct ov2680_ctrls { + struct v4l2_ctrl_handler handler; +- struct { +- struct v4l2_ctrl *auto_exp; +- struct v4l2_ctrl *exposure; +- }; +- struct { +- struct v4l2_ctrl *auto_gain; +- struct v4l2_ctrl *gain; +- }; +- ++ struct v4l2_ctrl *exposure; ++ struct v4l2_ctrl *gain; + struct v4l2_ctrl *hflip; + struct v4l2_ctrl *vflip; + struct v4l2_ctrl *test_pattern; +@@ -143,6 +139,7 @@ static const struct reg_value ov2680_setting_30fps_QUXGA_800_600[] = { + {0x380e, 0x02}, {0x380f, 0x84}, {0x3811, 0x04}, {0x3813, 0x04}, + {0x3814, 0x31}, {0x3815, 0x31}, {0x3820, 0xc0}, {0x4008, 0x00}, + {0x4009, 0x03}, {0x4837, 0x1e}, {0x3501, 0x4e}, {0x3502, 0xe0}, ++ {0x3503, 0x03}, + }; + + static const struct reg_value ov2680_setting_30fps_720P_1280_720[] = { +@@ -321,70 +318,62 @@ static void ov2680_power_down(struct ov2680_dev *sensor) + usleep_range(5000, 10000); + } + +-static int ov2680_bayer_order(struct ov2680_dev *sensor) ++static void ov2680_set_bayer_order(struct ov2680_dev *sensor, ++ struct v4l2_mbus_framefmt *fmt) + { +- u32 format1; +- u32 format2; +- u32 hv_flip; +- int ret; +- +- ret = ov2680_read_reg(sensor, OV2680_REG_FORMAT1, &format1); +- if (ret < 0) +- return ret; +- +- ret = ov2680_read_reg(sensor, OV2680_REG_FORMAT2, &format2); +- if (ret < 0) +- return ret; ++ int hv_flip = 0; + +- hv_flip = (format2 & BIT(2) << 1) | (format1 & BIT(2)); ++ if (sensor->ctrls.vflip && sensor->ctrls.vflip->val) ++ hv_flip += 1; + +- sensor->fmt.code = ov2680_hv_flip_bayer_order[hv_flip]; ++ if (sensor->ctrls.hflip && sensor->ctrls.hflip->val) ++ hv_flip += 2; + +- return 0; ++ fmt->code = ov2680_hv_flip_bayer_order[hv_flip]; + } + +-static int ov2680_vflip_enable(struct ov2680_dev *sensor) ++static void ov2680_fill_format(struct ov2680_dev *sensor, ++ struct v4l2_mbus_framefmt *fmt, ++ unsigned int width, unsigned int height) + { +- int ret; +- +- ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, BIT(2), BIT(2)); +- if (ret < 0) +- return ret; +- +- return ov2680_bayer_order(sensor); ++ memset(fmt, 0, sizeof(*fmt)); ++ fmt->width = width; ++ fmt->height = height; ++ fmt->field = V4L2_FIELD_NONE; ++ fmt->colorspace = V4L2_COLORSPACE_SRGB; ++ ov2680_set_bayer_order(sensor, fmt); + } + +-static int ov2680_vflip_disable(struct ov2680_dev *sensor) ++static int ov2680_set_vflip(struct ov2680_dev *sensor, s32 val) + { + int ret; + +- ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, BIT(2), BIT(0)); +- if (ret < 0) +- return ret; +- +- return ov2680_bayer_order(sensor); +-} +- +-static int ov2680_hflip_enable(struct ov2680_dev *sensor) +-{ +- int ret; ++ if (sensor->is_streaming) ++ return -EBUSY; + +- ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, BIT(2), BIT(2)); ++ ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, ++ BIT(2), val ? 
BIT(2) : 0); + if (ret < 0) + return ret; + +- return ov2680_bayer_order(sensor); ++ ov2680_set_bayer_order(sensor, &sensor->fmt); ++ return 0; + } + +-static int ov2680_hflip_disable(struct ov2680_dev *sensor) ++static int ov2680_set_hflip(struct ov2680_dev *sensor, s32 val) + { + int ret; + +- ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, BIT(2), BIT(0)); ++ if (sensor->is_streaming) ++ return -EBUSY; ++ ++ ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, ++ BIT(2), val ? BIT(2) : 0); + if (ret < 0) + return ret; + +- return ov2680_bayer_order(sensor); ++ ov2680_set_bayer_order(sensor, &sensor->fmt); ++ return 0; + } + + static int ov2680_test_pattern_set(struct ov2680_dev *sensor, int value) +@@ -405,69 +394,15 @@ static int ov2680_test_pattern_set(struct ov2680_dev *sensor, int value) + return 0; + } + +-static int ov2680_gain_set(struct ov2680_dev *sensor, bool auto_gain) ++static int ov2680_gain_set(struct ov2680_dev *sensor, u32 gain) + { +- struct ov2680_ctrls *ctrls = &sensor->ctrls; +- u32 gain; +- int ret; +- +- ret = ov2680_mod_reg(sensor, OV2680_REG_R_MANUAL, BIT(1), +- auto_gain ? 0 : BIT(1)); +- if (ret < 0) +- return ret; +- +- if (auto_gain || !ctrls->gain->is_new) +- return 0; +- +- gain = ctrls->gain->val; +- +- ret = ov2680_write_reg16(sensor, OV2680_REG_GAIN_PK, gain); +- +- return 0; +-} +- +-static int ov2680_gain_get(struct ov2680_dev *sensor) +-{ +- u32 gain; +- int ret; +- +- ret = ov2680_read_reg16(sensor, OV2680_REG_GAIN_PK, &gain); +- if (ret) +- return ret; +- +- return gain; +-} +- +-static int ov2680_exposure_set(struct ov2680_dev *sensor, bool auto_exp) +-{ +- struct ov2680_ctrls *ctrls = &sensor->ctrls; +- u32 exp; +- int ret; +- +- ret = ov2680_mod_reg(sensor, OV2680_REG_R_MANUAL, BIT(0), +- auto_exp ? 0 : BIT(0)); +- if (ret < 0) +- return ret; +- +- if (auto_exp || !ctrls->exposure->is_new) +- return 0; +- +- exp = (u32)ctrls->exposure->val; +- exp <<= 4; +- +- return ov2680_write_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, exp); ++ return ov2680_write_reg16(sensor, OV2680_REG_GAIN_PK, gain); + } + +-static int ov2680_exposure_get(struct ov2680_dev *sensor) ++static int ov2680_exposure_set(struct ov2680_dev *sensor, u32 exp) + { +- int ret; +- u32 exp; +- +- ret = ov2680_read_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, &exp); +- if (ret) +- return ret; +- +- return exp >> 4; ++ return ov2680_write_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, ++ exp << 4); + } + + static int ov2680_stream_enable(struct ov2680_dev *sensor) +@@ -482,33 +417,17 @@ static int ov2680_stream_disable(struct ov2680_dev *sensor) + + static int ov2680_mode_set(struct ov2680_dev *sensor) + { +- struct ov2680_ctrls *ctrls = &sensor->ctrls; + int ret; + +- ret = ov2680_gain_set(sensor, false); +- if (ret < 0) +- return ret; +- +- ret = ov2680_exposure_set(sensor, false); ++ ret = ov2680_load_regs(sensor, sensor->current_mode); + if (ret < 0) + return ret; + +- ret = ov2680_load_regs(sensor, sensor->current_mode); ++ /* Restore value of all ctrls */ ++ ret = __v4l2_ctrl_handler_setup(&sensor->ctrls.handler); + if (ret < 0) + return ret; + +- if (ctrls->auto_gain->val) { +- ret = ov2680_gain_set(sensor, true); +- if (ret < 0) +- return ret; +- } +- +- if (ctrls->auto_exp->val == V4L2_EXPOSURE_AUTO) { +- ret = ov2680_exposure_set(sensor, true); +- if (ret < 0) +- return ret; +- } +- + sensor->mode_pending_changes = false; + + return 0; +@@ -556,7 +475,7 @@ static int ov2680_power_on(struct ov2680_dev *sensor) + ret = ov2680_write_reg(sensor, OV2680_REG_SOFT_RESET, 0x01); + if (ret != 0) 
{ + dev_err(dev, "sensor soft reset failed\n"); +- return ret; ++ goto err_disable_regulators; + } + usleep_range(1000, 2000); + } else { +@@ -566,7 +485,7 @@ static int ov2680_power_on(struct ov2680_dev *sensor) + + ret = clk_prepare_enable(sensor->xvclk); + if (ret < 0) +- return ret; ++ goto err_disable_regulators; + + sensor->is_enabled = true; + +@@ -576,6 +495,10 @@ static int ov2680_power_on(struct ov2680_dev *sensor) + ov2680_stream_disable(sensor); + + return 0; ++ ++err_disable_regulators: ++ regulator_bulk_disable(OV2680_NUM_SUPPLIES, sensor->supplies); ++ return ret; + } + + static int ov2680_s_power(struct v4l2_subdev *sd, int on) +@@ -590,15 +513,10 @@ static int ov2680_s_power(struct v4l2_subdev *sd, int on) + else + ret = ov2680_power_off(sensor); + +- mutex_unlock(&sensor->lock); +- +- if (on && ret == 0) { +- ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler); +- if (ret < 0) +- return ret; +- ++ if (on && ret == 0) + ret = ov2680_mode_restore(sensor); +- } ++ ++ mutex_unlock(&sensor->lock); + + return ret; + } +@@ -664,7 +582,6 @@ static int ov2680_get_fmt(struct v4l2_subdev *sd, + { + struct ov2680_dev *sensor = to_ov2680_dev(sd); + struct v4l2_mbus_framefmt *fmt = NULL; +- int ret = 0; + + if (format->pad != 0) + return -EINVAL; +@@ -672,22 +589,17 @@ static int ov2680_get_fmt(struct v4l2_subdev *sd, + mutex_lock(&sensor->lock); + + if (format->which == V4L2_SUBDEV_FORMAT_TRY) { +-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API + fmt = v4l2_subdev_get_try_format(&sensor->sd, sd_state, + format->pad); +-#else +- ret = -EINVAL; +-#endif + } else { + fmt = &sensor->fmt; + } + +- if (fmt) +- format->format = *fmt; ++ format->format = *fmt; + + mutex_unlock(&sensor->lock); + +- return ret; ++ return 0; + } + + static int ov2680_set_fmt(struct v4l2_subdev *sd, +@@ -695,43 +607,35 @@ static int ov2680_set_fmt(struct v4l2_subdev *sd, + struct v4l2_subdev_format *format) + { + struct ov2680_dev *sensor = to_ov2680_dev(sd); +- struct v4l2_mbus_framefmt *fmt = &format->format; +-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API + struct v4l2_mbus_framefmt *try_fmt; +-#endif + const struct ov2680_mode_info *mode; + int ret = 0; + + if (format->pad != 0) + return -EINVAL; + +- mutex_lock(&sensor->lock); +- +- if (sensor->is_streaming) { +- ret = -EBUSY; +- goto unlock; +- } +- + mode = v4l2_find_nearest_size(ov2680_mode_data, +- ARRAY_SIZE(ov2680_mode_data), width, +- height, fmt->width, fmt->height); +- if (!mode) { +- ret = -EINVAL; +- goto unlock; +- } ++ ARRAY_SIZE(ov2680_mode_data), ++ width, height, ++ format->format.width, ++ format->format.height); ++ if (!mode) ++ return -EINVAL; ++ ++ ov2680_fill_format(sensor, &format->format, mode->width, mode->height); + + if (format->which == V4L2_SUBDEV_FORMAT_TRY) { +-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API + try_fmt = v4l2_subdev_get_try_format(sd, sd_state, 0); +- format->format = *try_fmt; +-#endif +- goto unlock; ++ *try_fmt = format->format; ++ return 0; + } + +- fmt->width = mode->width; +- fmt->height = mode->height; +- fmt->code = sensor->fmt.code; +- fmt->colorspace = sensor->fmt.colorspace; ++ mutex_lock(&sensor->lock); ++ ++ if (sensor->is_streaming) { ++ ret = -EBUSY; ++ goto unlock; ++ } + + sensor->current_mode = mode; + sensor->fmt = format->format; +@@ -746,16 +650,11 @@ unlock: + static int ov2680_init_cfg(struct v4l2_subdev *sd, + struct v4l2_subdev_state *sd_state) + { +- struct v4l2_subdev_format fmt = { +- .which = sd_state ? 
V4L2_SUBDEV_FORMAT_TRY +- : V4L2_SUBDEV_FORMAT_ACTIVE, +- .format = { +- .width = 800, +- .height = 600, +- } +- }; ++ struct ov2680_dev *sensor = to_ov2680_dev(sd); + +- return ov2680_set_fmt(sd, sd_state, &fmt); ++ ov2680_fill_format(sensor, &sd_state->pads[0].try_fmt, ++ OV2680_DEFAULT_WIDTH, OV2680_DEFAULT_HEIGHT); ++ return 0; + } + + static int ov2680_enum_frame_size(struct v4l2_subdev *sd, +@@ -794,66 +693,23 @@ static int ov2680_enum_frame_interval(struct v4l2_subdev *sd, + return 0; + } + +-static int ov2680_g_volatile_ctrl(struct v4l2_ctrl *ctrl) +-{ +- struct v4l2_subdev *sd = ctrl_to_sd(ctrl); +- struct ov2680_dev *sensor = to_ov2680_dev(sd); +- struct ov2680_ctrls *ctrls = &sensor->ctrls; +- int val; +- +- if (!sensor->is_enabled) +- return 0; +- +- switch (ctrl->id) { +- case V4L2_CID_GAIN: +- val = ov2680_gain_get(sensor); +- if (val < 0) +- return val; +- ctrls->gain->val = val; +- break; +- case V4L2_CID_EXPOSURE: +- val = ov2680_exposure_get(sensor); +- if (val < 0) +- return val; +- ctrls->exposure->val = val; +- break; +- } +- +- return 0; +-} +- + static int ov2680_s_ctrl(struct v4l2_ctrl *ctrl) + { + struct v4l2_subdev *sd = ctrl_to_sd(ctrl); + struct ov2680_dev *sensor = to_ov2680_dev(sd); +- struct ov2680_ctrls *ctrls = &sensor->ctrls; + + if (!sensor->is_enabled) + return 0; + + switch (ctrl->id) { +- case V4L2_CID_AUTOGAIN: +- return ov2680_gain_set(sensor, !!ctrl->val); + case V4L2_CID_GAIN: +- return ov2680_gain_set(sensor, !!ctrls->auto_gain->val); +- case V4L2_CID_EXPOSURE_AUTO: +- return ov2680_exposure_set(sensor, !!ctrl->val); ++ return ov2680_gain_set(sensor, ctrl->val); + case V4L2_CID_EXPOSURE: +- return ov2680_exposure_set(sensor, !!ctrls->auto_exp->val); ++ return ov2680_exposure_set(sensor, ctrl->val); + case V4L2_CID_VFLIP: +- if (sensor->is_streaming) +- return -EBUSY; +- if (ctrl->val) +- return ov2680_vflip_enable(sensor); +- else +- return ov2680_vflip_disable(sensor); ++ return ov2680_set_vflip(sensor, ctrl->val); + case V4L2_CID_HFLIP: +- if (sensor->is_streaming) +- return -EBUSY; +- if (ctrl->val) +- return ov2680_hflip_enable(sensor); +- else +- return ov2680_hflip_disable(sensor); ++ return ov2680_set_hflip(sensor, ctrl->val); + case V4L2_CID_TEST_PATTERN: + return ov2680_test_pattern_set(sensor, ctrl->val); + default: +@@ -864,7 +720,6 @@ static int ov2680_s_ctrl(struct v4l2_ctrl *ctrl) + } + + static const struct v4l2_ctrl_ops ov2680_ctrl_ops = { +- .g_volatile_ctrl = ov2680_g_volatile_ctrl, + .s_ctrl = ov2680_s_ctrl, + }; + +@@ -898,11 +753,8 @@ static int ov2680_mode_init(struct ov2680_dev *sensor) + const struct ov2680_mode_info *init_mode; + + /* set initial mode */ +- sensor->fmt.code = MEDIA_BUS_FMT_SBGGR10_1X10; +- sensor->fmt.width = 800; +- sensor->fmt.height = 600; +- sensor->fmt.field = V4L2_FIELD_NONE; +- sensor->fmt.colorspace = V4L2_COLORSPACE_SRGB; ++ ov2680_fill_format(sensor, &sensor->fmt, ++ OV2680_DEFAULT_WIDTH, OV2680_DEFAULT_HEIGHT); + + sensor->frame_interval.denominator = OV2680_FRAME_RATE; + sensor->frame_interval.numerator = 1; +@@ -926,9 +778,7 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor) + v4l2_i2c_subdev_init(&sensor->sd, sensor->i2c_client, + &ov2680_subdev_ops); + +-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API + sensor->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE; +-#endif + sensor->pad.flags = MEDIA_PAD_FL_SOURCE; + sensor->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR; + +@@ -936,7 +786,7 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor) + if (ret < 0) + return ret; + +- 
v4l2_ctrl_handler_init(hdl, 7); ++ v4l2_ctrl_handler_init(hdl, 5); + + hdl->lock = &sensor->lock; + +@@ -948,16 +798,9 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor) + ARRAY_SIZE(test_pattern_menu) - 1, + 0, 0, test_pattern_menu); + +- ctrls->auto_exp = v4l2_ctrl_new_std_menu(hdl, ops, +- V4L2_CID_EXPOSURE_AUTO, +- V4L2_EXPOSURE_MANUAL, 0, +- V4L2_EXPOSURE_AUTO); +- + ctrls->exposure = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_EXPOSURE, + 0, 32767, 1, 0); + +- ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN, +- 0, 1, 1, 1); + ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN, 0, 2047, 1, 0); + + if (hdl->error) { +@@ -965,11 +808,8 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor) + goto cleanup_entity; + } + +- ctrls->gain->flags |= V4L2_CTRL_FLAG_VOLATILE; +- ctrls->exposure->flags |= V4L2_CTRL_FLAG_VOLATILE; +- +- v4l2_ctrl_auto_cluster(2, &ctrls->auto_gain, 0, true); +- v4l2_ctrl_auto_cluster(2, &ctrls->auto_exp, 1, true); ++ ctrls->vflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT; ++ ctrls->hflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT; + + sensor->sd.ctrl_handler = hdl; + +diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c +index 267f514023e72..2ee832426736d 100644 +--- a/drivers/media/i2c/ov5640.c ++++ b/drivers/media/i2c/ov5640.c +@@ -557,9 +557,7 @@ static const struct reg_value ov5640_init_setting[] = { + {0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x3000, 0x00, 0, 0}, + {0x3002, 0x1c, 0, 0}, {0x3004, 0xff, 0, 0}, {0x3006, 0xc3, 0, 0}, + {0x302e, 0x08, 0, 0}, {0x4300, 0x3f, 0, 0}, +- {0x501f, 0x00, 0, 0}, {0x4407, 0x04, 0, 0}, +- {0x440e, 0x00, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0}, +- {0x4837, 0x0a, 0, 0}, {0x3824, 0x02, 0, 0}, ++ {0x501f, 0x00, 0, 0}, {0x440e, 0x00, 0, 0}, {0x4837, 0x0a, 0, 0}, + {0x5000, 0xa7, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x5180, 0xff, 0, 0}, + {0x5181, 0xf2, 0, 0}, {0x5182, 0x00, 0, 0}, {0x5183, 0x14, 0, 0}, + {0x5184, 0x25, 0, 0}, {0x5185, 0x24, 0, 0}, {0x5186, 0x09, 0, 0}, +@@ -623,7 +621,8 @@ static const struct reg_value ov5640_setting_low_res[] = { + {0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0}, + {0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0}, + {0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, +- {0x4407, 0x04, 0, 0}, {0x5001, 0xa3, 0, 0}, ++ {0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0}, ++ {0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0}, + }; + + static const struct reg_value ov5640_setting_720P_1280_720[] = { +@@ -2442,16 +2441,13 @@ static void ov5640_power(struct ov5640_dev *sensor, bool enable) + static void ov5640_powerup_sequence(struct ov5640_dev *sensor) + { + if (sensor->pwdn_gpio) { +- gpiod_set_value_cansleep(sensor->reset_gpio, 0); ++ gpiod_set_value_cansleep(sensor->reset_gpio, 1); + + /* camera power cycle */ + ov5640_power(sensor, false); +- usleep_range(5000, 10000); ++ usleep_range(5000, 10000); /* t2 */ + ov5640_power(sensor, true); +- usleep_range(5000, 10000); +- +- gpiod_set_value_cansleep(sensor->reset_gpio, 1); +- usleep_range(1000, 2000); ++ usleep_range(1000, 2000); /* t3 */ + + gpiod_set_value_cansleep(sensor->reset_gpio, 0); + } else { +@@ -2459,7 +2455,7 @@ static void ov5640_powerup_sequence(struct ov5640_dev *sensor) + ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0, + OV5640_REG_SYS_CTRL0_SW_RST); + } +- usleep_range(20000, 25000); ++ usleep_range(20000, 25000); /* t4 */ + + /* + * software standby: allows registers programming; +@@ -2532,9 +2528,9 @@ static int ov5640_set_power_mipi(struct ov5640_dev 
*sensor, bool on) + * "ov5640_set_stream_mipi()") + * [4] = 0 : Power up MIPI HS Tx + * [3] = 0 : Power up MIPI LS Rx +- * [2] = 0 : MIPI interface disabled ++ * [2] = 1 : MIPI interface enabled + */ +- ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x40); ++ ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x44); + if (ret) + return ret; + +diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c +index 9ccc56c30d3b0..d269c541ebe4c 100644 +--- a/drivers/media/i2c/rdacm21.c ++++ b/drivers/media/i2c/rdacm21.c +@@ -351,7 +351,7 @@ static void ov10640_power_up(struct rdacm21_device *dev) + static int ov10640_check_id(struct rdacm21_device *dev) + { + unsigned int i; +- u8 val; ++ u8 val = 0; + + /* Read OV10640 ID to test communications. */ + for (i = 0; i < OV10640_PID_TIMEOUT; ++i) { +diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c +index 859f1cb2fa744..84f87c016f9b5 100644 +--- a/drivers/media/i2c/tvp5150.c ++++ b/drivers/media/i2c/tvp5150.c +@@ -2068,6 +2068,10 @@ static int tvp5150_parse_dt(struct tvp5150 *decoder, struct device_node *np) + tvpc->ent.name = devm_kasprintf(dev, GFP_KERNEL, "%s %s", + v4l2c->name, v4l2c->label ? + v4l2c->label : ""); ++ if (!tvpc->ent.name) { ++ ret = -ENOMEM; ++ goto err_free; ++ } + } + + ep_np = of_graph_get_endpoint_by_regs(np, TVP5150_PAD_VID_OUT, 0); +diff --git a/drivers/media/pci/bt8xx/dst.c b/drivers/media/pci/bt8xx/dst.c +index 3e52a51982d76..110651e478314 100644 +--- a/drivers/media/pci/bt8xx/dst.c ++++ b/drivers/media/pci/bt8xx/dst.c +@@ -1722,7 +1722,7 @@ struct dst_state *dst_attach(struct dst_state *state, struct dvb_adapter *dvb_ad + return state; /* Manu (DST is a card not a frontend) */ + } + +-EXPORT_SYMBOL(dst_attach); ++EXPORT_SYMBOL_GPL(dst_attach); + + static const struct dvb_frontend_ops dst_dvbt_ops = { + .delsys = { SYS_DVBT }, +diff --git a/drivers/media/pci/bt8xx/dst_ca.c b/drivers/media/pci/bt8xx/dst_ca.c +index 85fcdc59f0d18..571392d80ccc6 100644 +--- a/drivers/media/pci/bt8xx/dst_ca.c ++++ b/drivers/media/pci/bt8xx/dst_ca.c +@@ -668,7 +668,7 @@ struct dvb_device *dst_ca_attach(struct dst_state *dst, struct dvb_adapter *dvb_ + return NULL; + } + +-EXPORT_SYMBOL(dst_ca_attach); ++EXPORT_SYMBOL_GPL(dst_ca_attach); + + MODULE_DESCRIPTION("DST DVB-S/T/C Combo CA driver"); + MODULE_AUTHOR("Manu Abraham"); +diff --git a/drivers/media/pci/cx23885/cx23885-dvb.c b/drivers/media/pci/cx23885/cx23885-dvb.c +index 8fd5b6ef24282..7551ca4a322a4 100644 +--- a/drivers/media/pci/cx23885/cx23885-dvb.c ++++ b/drivers/media/pci/cx23885/cx23885-dvb.c +@@ -2459,16 +2459,10 @@ static int dvb_register(struct cx23885_tsport *port) + request_module("%s", info.type); + client_tuner = i2c_new_client_device(&dev->i2c_bus[1].i2c_adap, &info); + if (!i2c_client_has_driver(client_tuner)) { +- module_put(client_demod->dev.driver->owner); +- i2c_unregister_device(client_demod); +- port->i2c_client_demod = NULL; + goto frontend_detach; + } + if (!try_module_get(client_tuner->dev.driver->owner)) { + i2c_unregister_device(client_tuner); +- module_put(client_demod->dev.driver->owner); +- i2c_unregister_device(client_demod); +- port->i2c_client_demod = NULL; + goto frontend_detach; + } + port->i2c_client_tuner = client_tuner; +@@ -2505,16 +2499,10 @@ static int dvb_register(struct cx23885_tsport *port) + request_module("%s", info.type); + client_tuner = i2c_new_client_device(&dev->i2c_bus[1].i2c_adap, &info); + if (!i2c_client_has_driver(client_tuner)) { +- module_put(client_demod->dev.driver->owner); +- 
i2c_unregister_device(client_demod); +- port->i2c_client_demod = NULL; + goto frontend_detach; + } + if (!try_module_get(client_tuner->dev.driver->owner)) { + i2c_unregister_device(client_tuner); +- module_put(client_demod->dev.driver->owner); +- i2c_unregister_device(client_demod); +- port->i2c_client_demod = NULL; + goto frontend_detach; + } + port->i2c_client_tuner = client_tuner; +diff --git a/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c b/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c +index 6868a0c4fc82a..520ebd16b0c44 100644 +--- a/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c ++++ b/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c +@@ -112,7 +112,7 @@ struct dvb_frontend *ddbridge_dummy_fe_qam_attach(void) + state->frontend.demodulator_priv = state; + return &state->frontend; + } +-EXPORT_SYMBOL(ddbridge_dummy_fe_qam_attach); ++EXPORT_SYMBOL_GPL(ddbridge_dummy_fe_qam_attach); + + static const struct dvb_frontend_ops ddbridge_dummy_fe_qam_ops = { + .delsys = { SYS_DVBC_ANNEX_A }, +diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c +index c08b5a2bfc1df..dc35a87e628ec 100644 +--- a/drivers/media/platform/amphion/vdec.c ++++ b/drivers/media/platform/amphion/vdec.c +@@ -249,7 +249,8 @@ static int vdec_update_state(struct vpu_inst *inst, enum vpu_codec_state state, + vdec->state = VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE; + + if (inst->state != pre_state) +- vpu_trace(inst->dev, "[%d] %d -> %d\n", inst->id, pre_state, inst->state); ++ vpu_trace(inst->dev, "[%d] %s -> %s\n", inst->id, ++ vpu_codec_state_name(pre_state), vpu_codec_state_name(inst->state)); + + if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE) + vdec_handle_resolution_change(inst); +@@ -956,6 +957,7 @@ static int vdec_response_frame_abnormal(struct vpu_inst *inst) + { + struct vdec_t *vdec = inst->priv; + struct vpu_fs_info info; ++ int ret; + + if (!vdec->req_frame_count) + return 0; +@@ -963,7 +965,9 @@ static int vdec_response_frame_abnormal(struct vpu_inst *inst) + memset(&info, 0, sizeof(info)); + info.type = MEM_RES_FRAME; + info.tag = vdec->seq_tag + 0xf0; +- vpu_session_alloc_fs(inst, &info); ++ ret = vpu_session_alloc_fs(inst, &info); ++ if (ret) ++ return ret; + vdec->req_frame_count--; + + return 0; +@@ -994,8 +998,8 @@ static int vdec_response_frame(struct vpu_inst *inst, struct vb2_v4l2_buffer *vb + return -EINVAL; + } + +- dev_dbg(inst->dev, "[%d] state = %d, alloc fs %d, tag = 0x%x\n", +- inst->id, inst->state, vbuf->vb2_buf.index, vdec->seq_tag); ++ dev_dbg(inst->dev, "[%d] state = %s, alloc fs %d, tag = 0x%x\n", ++ inst->id, vpu_codec_state_name(inst->state), vbuf->vb2_buf.index, vdec->seq_tag); + vpu_buf = to_vpu_vb2_buffer(vbuf); + + memset(&info, 0, sizeof(info)); +@@ -1354,7 +1358,7 @@ static void vdec_abort(struct vpu_inst *inst) + struct vpu_rpc_buffer_desc desc; + int ret; + +- vpu_trace(inst->dev, "[%d] state = %d\n", inst->id, inst->state); ++ vpu_trace(inst->dev, "[%d] state = %s\n", inst->id, vpu_codec_state_name(inst->state)); + + vdec->aborting = true; + vpu_iface_add_scode(inst, SCODE_PADDING_ABORT); +@@ -1407,9 +1411,7 @@ static void vdec_release(struct vpu_inst *inst) + { + if (inst->id != VPU_INST_NULL_ID) + vpu_trace(inst->dev, "[%d]\n", inst->id); +- vpu_inst_lock(inst); + vdec_stop(inst, true); +- vpu_inst_unlock(inst); + } + + static void vdec_cleanup(struct vpu_inst *inst) +diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c +index e8cb22da938e6..1df2b35c1a240 100644 +--- 
a/drivers/media/platform/amphion/venc.c ++++ b/drivers/media/platform/amphion/venc.c +@@ -278,7 +278,7 @@ static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm + { + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc = inst->priv; +- struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe; ++ struct v4l2_fract *timeperframe; + + if (!parm) + return -EINVAL; +@@ -289,6 +289,7 @@ static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm + if (!vpu_helper_check_type(inst, parm->type)) + return -EINVAL; + ++ timeperframe = &parm->parm.capture.timeperframe; + parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME; + parm->parm.capture.readbuffers = 0; + timeperframe->numerator = venc->params.frame_rate.numerator; +@@ -301,7 +302,7 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm + { + struct vpu_inst *inst = to_inst(file); + struct venc_t *venc = inst->priv; +- struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe; ++ struct v4l2_fract *timeperframe; + unsigned long n, d; + + if (!parm) +@@ -313,6 +314,7 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm + if (!vpu_helper_check_type(inst, parm->type)) + return -EINVAL; + ++ timeperframe = &parm->parm.capture.timeperframe; + if (!timeperframe->numerator) + timeperframe->numerator = venc->params.frame_rate.numerator; + if (!timeperframe->denominator) +diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h +index 048c23c2bf4db..deb2288d42904 100644 +--- a/drivers/media/platform/amphion/vpu.h ++++ b/drivers/media/platform/amphion/vpu.h +@@ -353,6 +353,9 @@ void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow); + int vpu_core_driver_init(void); + void vpu_core_driver_exit(void); + ++const char *vpu_id_name(u32 id); ++const char *vpu_codec_state_name(enum vpu_codec_state state); ++ + extern bool debug; + #define vpu_trace(dev, fmt, arg...) 
\ + do { \ +diff --git a/drivers/media/platform/amphion/vpu_cmds.c b/drivers/media/platform/amphion/vpu_cmds.c +index fa581ba6bab2d..235b71398d403 100644 +--- a/drivers/media/platform/amphion/vpu_cmds.c ++++ b/drivers/media/platform/amphion/vpu_cmds.c +@@ -98,7 +98,7 @@ static struct vpu_cmd_t *vpu_alloc_cmd(struct vpu_inst *inst, u32 id, void *data + cmd->id = id; + ret = vpu_iface_pack_cmd(inst->core, cmd->pkt, inst->id, id, data); + if (ret) { +- dev_err(inst->dev, "iface pack cmd(%d) fail\n", id); ++ dev_err(inst->dev, "iface pack cmd %s fail\n", vpu_id_name(id)); + vfree(cmd->pkt); + vfree(cmd); + return NULL; +@@ -125,14 +125,14 @@ static int vpu_session_process_cmd(struct vpu_inst *inst, struct vpu_cmd_t *cmd) + { + int ret; + +- dev_dbg(inst->dev, "[%d]send cmd(0x%x)\n", inst->id, cmd->id); ++ dev_dbg(inst->dev, "[%d]send cmd %s\n", inst->id, vpu_id_name(cmd->id)); + vpu_iface_pre_send_cmd(inst); + ret = vpu_cmd_send(inst->core, cmd->pkt); + if (!ret) { + vpu_iface_post_send_cmd(inst); + vpu_inst_record_flow(inst, cmd->id); + } else { +- dev_err(inst->dev, "[%d] iface send cmd(0x%x) fail\n", inst->id, cmd->id); ++ dev_err(inst->dev, "[%d] iface send cmd %s fail\n", inst->id, vpu_id_name(cmd->id)); + } + + return ret; +@@ -149,7 +149,8 @@ static void vpu_process_cmd_request(struct vpu_inst *inst) + list_for_each_entry_safe(cmd, tmp, &inst->cmd_q, list) { + list_del_init(&cmd->list); + if (vpu_session_process_cmd(inst, cmd)) +- dev_err(inst->dev, "[%d] process cmd(%d) fail\n", inst->id, cmd->id); ++ dev_err(inst->dev, "[%d] process cmd %s fail\n", ++ inst->id, vpu_id_name(cmd->id)); + if (cmd->request) { + inst->pending = (void *)cmd; + break; +@@ -305,7 +306,8 @@ static void vpu_core_keep_active(struct vpu_core *core) + + dev_dbg(core->dev, "try to wake up\n"); + mutex_lock(&core->cmd_lock); +- vpu_cmd_send(core, &pkt); ++ if (vpu_cmd_send(core, &pkt)) ++ dev_err(core->dev, "fail to keep active\n"); + mutex_unlock(&core->cmd_lock); + } + +@@ -313,7 +315,7 @@ static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data) + { + unsigned long key; + int sync = false; +- int ret = -EINVAL; ++ int ret; + + if (inst->id < 0) + return -EINVAL; +@@ -339,7 +341,7 @@ static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data) + + exit: + if (ret) +- dev_err(inst->dev, "[%d] send cmd(0x%x) fail\n", inst->id, id); ++ dev_err(inst->dev, "[%d] send cmd %s fail\n", inst->id, vpu_id_name(id)); + + return ret; + } +diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c +index be80410682681..9add73b9b45f9 100644 +--- a/drivers/media/platform/amphion/vpu_core.c ++++ b/drivers/media/platform/amphion/vpu_core.c +@@ -88,6 +88,8 @@ static int vpu_core_boot_done(struct vpu_core *core) + + core->supported_instance_count = min(core->supported_instance_count, count); + } ++ if (core->supported_instance_count >= BITS_PER_TYPE(core->instance_mask)) ++ core->supported_instance_count = BITS_PER_TYPE(core->instance_mask); + core->fw_version = fw_version; + vpu_core_set_state(core, VPU_CORE_ACTIVE); + +diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c +index 260f1c4b8f8dc..f105da82d92f9 100644 +--- a/drivers/media/platform/amphion/vpu_dbg.c ++++ b/drivers/media/platform/amphion/vpu_dbg.c +@@ -50,6 +50,13 @@ static char *vpu_stat_name[] = { + [VPU_BUF_STATE_ERROR] = "error", + }; + ++static inline const char *to_vpu_stat_name(int state) ++{ ++ if (state <= VPU_BUF_STATE_ERROR) ++ return 
vpu_stat_name[state]; ++ return "unknown"; ++} ++ + static int vpu_dbg_instance(struct seq_file *s, void *data) + { + struct vpu_inst *inst = s->private; +@@ -67,7 +74,7 @@ static int vpu_dbg_instance(struct seq_file *s, void *data) + num = scnprintf(str, sizeof(str), "tgig = %d,pid = %d\n", inst->tgid, inst->pid); + if (seq_write(s, str, num)) + return 0; +- num = scnprintf(str, sizeof(str), "state = %d\n", inst->state); ++ num = scnprintf(str, sizeof(str), "state = %s\n", vpu_codec_state_name(inst->state)); + if (seq_write(s, str, num)) + return 0; + num = scnprintf(str, sizeof(str), +@@ -141,7 +148,7 @@ static int vpu_dbg_instance(struct seq_file *s, void *data) + num = scnprintf(str, sizeof(str), + "output [%2d] state = %10s, %8s\n", + i, vb2_stat_name[vb->state], +- vpu_stat_name[vpu_get_buffer_state(vbuf)]); ++ to_vpu_stat_name(vpu_get_buffer_state(vbuf))); + if (seq_write(s, str, num)) + return 0; + } +@@ -156,7 +163,7 @@ static int vpu_dbg_instance(struct seq_file *s, void *data) + num = scnprintf(str, sizeof(str), + "capture[%2d] state = %10s, %8s\n", + i, vb2_stat_name[vb->state], +- vpu_stat_name[vpu_get_buffer_state(vbuf)]); ++ to_vpu_stat_name(vpu_get_buffer_state(vbuf))); + if (seq_write(s, str, num)) + return 0; + } +@@ -188,9 +195,9 @@ static int vpu_dbg_instance(struct seq_file *s, void *data) + + if (!inst->flows[idx]) + continue; +- num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n", ++ num = scnprintf(str, sizeof(str), "\t[%s] %s\n", + inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C", +- inst->flows[idx]); ++ vpu_id_name(inst->flows[idx])); + if (seq_write(s, str, num)) { + mutex_unlock(&inst->core->cmd_lock); + return 0; +diff --git a/drivers/media/platform/amphion/vpu_helpers.c b/drivers/media/platform/amphion/vpu_helpers.c +index e9aeb3453dfcb..2e78666322f02 100644 +--- a/drivers/media/platform/amphion/vpu_helpers.c ++++ b/drivers/media/platform/amphion/vpu_helpers.c +@@ -11,6 +11,7 @@ + #include + #include + #include "vpu.h" ++#include "vpu_defs.h" + #include "vpu_core.h" + #include "vpu_rpc.h" + #include "vpu_helpers.h" +@@ -412,3 +413,63 @@ int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst) + + return -EINVAL; + } ++ ++const char *vpu_id_name(u32 id) ++{ ++ switch (id) { ++ case VPU_CMD_ID_NOOP: return "noop"; ++ case VPU_CMD_ID_CONFIGURE_CODEC: return "configure codec"; ++ case VPU_CMD_ID_START: return "start"; ++ case VPU_CMD_ID_STOP: return "stop"; ++ case VPU_CMD_ID_ABORT: return "abort"; ++ case VPU_CMD_ID_RST_BUF: return "reset buf"; ++ case VPU_CMD_ID_SNAPSHOT: return "snapshot"; ++ case VPU_CMD_ID_FIRM_RESET: return "reset firmware"; ++ case VPU_CMD_ID_UPDATE_PARAMETER: return "update parameter"; ++ case VPU_CMD_ID_FRAME_ENCODE: return "encode frame"; ++ case VPU_CMD_ID_SKIP: return "skip"; ++ case VPU_CMD_ID_FS_ALLOC: return "alloc fb"; ++ case VPU_CMD_ID_FS_RELEASE: return "release fb"; ++ case VPU_CMD_ID_TIMESTAMP: return "timestamp"; ++ case VPU_CMD_ID_DEBUG: return "debug"; ++ case VPU_MSG_ID_RESET_DONE: return "reset done"; ++ case VPU_MSG_ID_START_DONE: return "start done"; ++ case VPU_MSG_ID_STOP_DONE: return "stop done"; ++ case VPU_MSG_ID_ABORT_DONE: return "abort done"; ++ case VPU_MSG_ID_BUF_RST: return "buf reset done"; ++ case VPU_MSG_ID_MEM_REQUEST: return "mem request"; ++ case VPU_MSG_ID_PARAM_UPD_DONE: return "param upd done"; ++ case VPU_MSG_ID_FRAME_INPUT_DONE: return "frame input done"; ++ case VPU_MSG_ID_ENC_DONE: return "encode done"; ++ case VPU_MSG_ID_DEC_DONE: return "frame display"; ++ case VPU_MSG_ID_FRAME_REQ: 
return "fb request"; ++ case VPU_MSG_ID_FRAME_RELEASE: return "fb release"; ++ case VPU_MSG_ID_SEQ_HDR_FOUND: return "seq hdr found"; ++ case VPU_MSG_ID_RES_CHANGE: return "resolution change"; ++ case VPU_MSG_ID_PIC_HDR_FOUND: return "pic hdr found"; ++ case VPU_MSG_ID_PIC_DECODED: return "picture decoded"; ++ case VPU_MSG_ID_PIC_EOS: return "eos"; ++ case VPU_MSG_ID_FIFO_LOW: return "fifo low"; ++ case VPU_MSG_ID_BS_ERROR: return "bs error"; ++ case VPU_MSG_ID_UNSUPPORTED: return "unsupported"; ++ case VPU_MSG_ID_FIRMWARE_XCPT: return "exception"; ++ case VPU_MSG_ID_PIC_SKIPPED: return "skipped"; ++ } ++ return ""; ++} ++ ++const char *vpu_codec_state_name(enum vpu_codec_state state) ++{ ++ switch (state) { ++ case VPU_CODEC_STATE_DEINIT: return "initialization"; ++ case VPU_CODEC_STATE_CONFIGURED: return "configured"; ++ case VPU_CODEC_STATE_START: return "start"; ++ case VPU_CODEC_STATE_STARTED: return "started"; ++ case VPU_CODEC_STATE_ACTIVE: return "active"; ++ case VPU_CODEC_STATE_SEEK: return "seek"; ++ case VPU_CODEC_STATE_STOP: return "stop"; ++ case VPU_CODEC_STATE_DRAIN: return "drain"; ++ case VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE: return "resolution change"; ++ } ++ return ""; ++} +diff --git a/drivers/media/platform/amphion/vpu_mbox.c b/drivers/media/platform/amphion/vpu_mbox.c +index bf759eb2fd46d..b6d5b4844f672 100644 +--- a/drivers/media/platform/amphion/vpu_mbox.c ++++ b/drivers/media/platform/amphion/vpu_mbox.c +@@ -46,11 +46,10 @@ static int vpu_mbox_request_channel(struct device *dev, struct vpu_mbox *mbox) + cl->rx_callback = vpu_mbox_rx_callback; + + ch = mbox_request_channel_byname(cl, mbox->name); +- if (IS_ERR(ch)) { +- dev_err(dev, "Failed to request mbox chan %s, ret : %ld\n", +- mbox->name, PTR_ERR(ch)); +- return PTR_ERR(ch); +- } ++ if (IS_ERR(ch)) ++ return dev_err_probe(dev, PTR_ERR(ch), ++ "Failed to request mbox chan %s\n", ++ mbox->name); + + mbox->ch = ch; + return 0; +diff --git a/drivers/media/platform/amphion/vpu_msgs.c b/drivers/media/platform/amphion/vpu_msgs.c +index 92672a802b492..d0ead051f7d18 100644 +--- a/drivers/media/platform/amphion/vpu_msgs.c ++++ b/drivers/media/platform/amphion/vpu_msgs.c +@@ -32,7 +32,7 @@ static void vpu_session_handle_start_done(struct vpu_inst *inst, struct vpu_rpc_ + + static void vpu_session_handle_mem_request(struct vpu_inst *inst, struct vpu_rpc_event *pkt) + { +- struct vpu_pkt_mem_req_data req_data; ++ struct vpu_pkt_mem_req_data req_data = { 0 }; + + vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&req_data); + vpu_trace(inst->dev, "[%d] %d:%d %d:%d %d:%d\n", +@@ -80,7 +80,7 @@ static void vpu_session_handle_resolution_change(struct vpu_inst *inst, struct v + + static void vpu_session_handle_enc_frame_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt) + { +- struct vpu_enc_pic_info info; ++ struct vpu_enc_pic_info info = { 0 }; + + vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info); + dev_dbg(inst->dev, "[%d] frame id = %d, wptr = 0x%x, size = %d\n", +@@ -90,7 +90,7 @@ static void vpu_session_handle_enc_frame_done(struct vpu_inst *inst, struct vpu_ + + static void vpu_session_handle_frame_request(struct vpu_inst *inst, struct vpu_rpc_event *pkt) + { +- struct vpu_fs_info fs; ++ struct vpu_fs_info fs = { 0 }; + + vpu_iface_unpack_msg_data(inst->core, pkt, &fs); + call_void_vop(inst, event_notify, VPU_MSG_ID_FRAME_REQ, &fs); +@@ -107,7 +107,7 @@ static void vpu_session_handle_frame_release(struct vpu_inst *inst, struct vpu_r + info.type = inst->out_format.type; + call_void_vop(inst, buf_done, 
&info); + } else if (inst->core->type == VPU_CORE_TYPE_DEC) { +- struct vpu_fs_info fs; ++ struct vpu_fs_info fs = { 0 }; + + vpu_iface_unpack_msg_data(inst->core, pkt, &fs); + call_void_vop(inst, event_notify, VPU_MSG_ID_FRAME_RELEASE, &fs); +@@ -122,7 +122,7 @@ static void vpu_session_handle_input_done(struct vpu_inst *inst, struct vpu_rpc_ + + static void vpu_session_handle_pic_decoded(struct vpu_inst *inst, struct vpu_rpc_event *pkt) + { +- struct vpu_dec_pic_info info; ++ struct vpu_dec_pic_info info = { 0 }; + + vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info); + call_void_vop(inst, get_one_frame, &info); +@@ -130,7 +130,7 @@ static void vpu_session_handle_pic_decoded(struct vpu_inst *inst, struct vpu_rpc + + static void vpu_session_handle_pic_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt) + { +- struct vpu_dec_pic_info info; ++ struct vpu_dec_pic_info info = { 0 }; + struct vpu_frame_info frame; + + memset(&frame, 0, sizeof(frame)); +@@ -210,7 +210,7 @@ static int vpu_session_handle_msg(struct vpu_inst *inst, struct vpu_rpc_event *m + return -EINVAL; + + msg_id = ret; +- dev_dbg(inst->dev, "[%d] receive event(0x%x)\n", inst->id, msg_id); ++ dev_dbg(inst->dev, "[%d] receive event(%s)\n", inst->id, vpu_id_name(msg_id)); + + for (i = 0; i < ARRAY_SIZE(handlers); i++) { + if (handlers[i].id == msg_id) { +diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c +index a74953191c221..e5c8e1a753ccd 100644 +--- a/drivers/media/platform/amphion/vpu_v4l2.c ++++ b/drivers/media/platform/amphion/vpu_v4l2.c +@@ -404,6 +404,11 @@ static int vpu_vb2_queue_setup(struct vb2_queue *vq, + for (i = 0; i < cur_fmt->num_planes; i++) + psize[i] = cur_fmt->sizeimage[i]; + ++ if (V4L2_TYPE_IS_OUTPUT(vq->type) && inst->state == VPU_CODEC_STATE_SEEK) { ++ vpu_trace(inst->dev, "reinit when VIDIOC_REQBUFS(OUTPUT, 0)\n"); ++ call_void_vop(inst, release); ++ } ++ + return 0; + } + +@@ -688,9 +693,9 @@ int vpu_v4l2_close(struct file *file) + v4l2_m2m_ctx_release(inst->fh.m2m_ctx); + inst->fh.m2m_ctx = NULL; + } ++ call_void_vop(inst, release); + vpu_inst_unlock(inst); + +- call_void_vop(inst, release); + vpu_inst_unregister(inst); + vpu_inst_put(inst); + +diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c +index 3071b61946c3b..e9a4f8abd21c5 100644 +--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c ++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c +@@ -1412,6 +1412,7 @@ static int mtk_jpeg_remove(struct platform_device *pdev) + { + struct mtk_jpeg_dev *jpeg = platform_get_drvdata(pdev); + ++ cancel_delayed_work_sync(&jpeg->job_timeout_work); + pm_runtime_disable(&pdev->dev); + video_unregister_device(jpeg->vdev); + v4l2_m2m_release(jpeg->m2m_dev); +diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c +index 70b8383f7c8ec..a27a109d8d144 100644 +--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c ++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c +@@ -226,10 +226,11 @@ static struct vdec_fb *vp9_rm_from_fb_use_list(struct vdec_vp9_inst + if (fb->base_y.va == addr) { + list_move_tail(&node->list, + &inst->available_fb_node_list); +- break; ++ return fb; + } + } +- return fb; ++ ++ return NULL; + } + + static void vp9_add_to_fb_free_list(struct vdec_vp9_inst *inst, +diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c 
b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c +index 03f8d7cd8eddc..a81212c0ade9d 100644 +--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c ++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c +@@ -246,6 +246,7 @@ void vdec_msg_queue_deinit(struct vdec_msg_queue *msg_queue, + mtk_vcodec_mem_free(ctx, mem); + + kfree(lat_buf->private_data); ++ lat_buf->private_data = NULL; + } + } + +@@ -312,6 +313,7 @@ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue, + err = mtk_vcodec_mem_alloc(ctx, &msg_queue->wdma_addr); + if (err) { + mtk_v4l2_err("failed to allocate wdma_addr buf"); ++ msg_queue->wdma_addr.size = 0; + return -ENOMEM; + } + msg_queue->wdma_rptr_addr = msg_queue->wdma_addr.dma_addr; +diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c +index 2ad40b3945b0b..8fc8f46dc3908 100644 +--- a/drivers/media/platform/qcom/venus/hfi_venus.c ++++ b/drivers/media/platform/qcom/venus/hfi_venus.c +@@ -131,7 +131,6 @@ struct venus_hfi_device { + + static bool venus_pkt_debug; + int venus_fw_debug = HFI_DEBUG_MSG_ERROR | HFI_DEBUG_MSG_FATAL; +-static bool venus_sys_idle_indicator; + static bool venus_fw_low_power_mode = true; + static int venus_hw_rsp_timeout = 1000; + static bool venus_fw_coverage; +@@ -454,7 +453,6 @@ static int venus_boot_core(struct venus_hfi_device *hdev) + void __iomem *wrapper_base = hdev->core->wrapper_base; + int ret = 0; + +- writel(BIT(VIDC_CTRL_INIT_CTRL_SHIFT), cpu_cs_base + VIDC_CTRL_INIT); + if (IS_V6(hdev->core)) { + mask_val = readl(wrapper_base + WRAPPER_INTR_MASK); + mask_val &= ~(WRAPPER_INTR_MASK_A2HWD_BASK_V6 | +@@ -465,6 +463,7 @@ static int venus_boot_core(struct venus_hfi_device *hdev) + writel(mask_val, wrapper_base + WRAPPER_INTR_MASK); + writel(1, cpu_cs_base + CPU_CS_SCIACMDARG3); + ++ writel(BIT(VIDC_CTRL_INIT_CTRL_SHIFT), cpu_cs_base + VIDC_CTRL_INIT); + while (!ctrl_status && count < max_tries) { + ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0); + if ((ctrl_status & CPU_CS_SCIACMDARG0_ERROR_STATUS_MASK) == 4) { +@@ -947,17 +946,12 @@ static int venus_sys_set_default_properties(struct venus_hfi_device *hdev) + if (ret) + dev_warn(dev, "setting fw debug msg ON failed (%d)\n", ret); + +- /* +- * Idle indicator is disabled by default on some 4xx firmware versions, +- * enable it explicitly in order to make suspend functional by checking +- * WFI (wait-for-interrupt) bit. 
+- */ +- if (IS_V4(hdev->core) || IS_V6(hdev->core)) +- venus_sys_idle_indicator = true; +- +- ret = venus_sys_set_idle_message(hdev, venus_sys_idle_indicator); +- if (ret) +- dev_warn(dev, "setting idle response ON failed (%d)\n", ret); ++ /* HFI_PROPERTY_SYS_IDLE_INDICATOR is not supported beyond 8916 (HFI V1) */ ++ if (IS_V1(hdev->core)) { ++ ret = venus_sys_set_idle_message(hdev, false); ++ if (ret) ++ dev_warn(dev, "setting idle response ON failed (%d)\n", ret); ++ } + + ret = venus_sys_set_power_control(hdev, venus_fw_low_power_mode); + if (ret) +diff --git a/drivers/media/tuners/fc0011.c b/drivers/media/tuners/fc0011.c +index eaa3bbc903d7e..3d3b54be29557 100644 +--- a/drivers/media/tuners/fc0011.c ++++ b/drivers/media/tuners/fc0011.c +@@ -499,7 +499,7 @@ struct dvb_frontend *fc0011_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(fc0011_attach); ++EXPORT_SYMBOL_GPL(fc0011_attach); + + MODULE_DESCRIPTION("Fitipower FC0011 silicon tuner driver"); + MODULE_AUTHOR("Michael Buesch "); +diff --git a/drivers/media/tuners/fc0012.c b/drivers/media/tuners/fc0012.c +index 4429d5e8c5796..81e65acbdb170 100644 +--- a/drivers/media/tuners/fc0012.c ++++ b/drivers/media/tuners/fc0012.c +@@ -495,7 +495,7 @@ err: + + return fe; + } +-EXPORT_SYMBOL(fc0012_attach); ++EXPORT_SYMBOL_GPL(fc0012_attach); + + MODULE_DESCRIPTION("Fitipower FC0012 silicon tuner driver"); + MODULE_AUTHOR("Hans-Frieder Vogt "); +diff --git a/drivers/media/tuners/fc0013.c b/drivers/media/tuners/fc0013.c +index 29dd9b55ff333..1006a2798eefc 100644 +--- a/drivers/media/tuners/fc0013.c ++++ b/drivers/media/tuners/fc0013.c +@@ -608,7 +608,7 @@ struct dvb_frontend *fc0013_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(fc0013_attach); ++EXPORT_SYMBOL_GPL(fc0013_attach); + + MODULE_DESCRIPTION("Fitipower FC0013 silicon tuner driver"); + MODULE_AUTHOR("Hans-Frieder Vogt "); +diff --git a/drivers/media/tuners/max2165.c b/drivers/media/tuners/max2165.c +index 1c746bed51fee..1575ab94e1c8b 100644 +--- a/drivers/media/tuners/max2165.c ++++ b/drivers/media/tuners/max2165.c +@@ -410,7 +410,7 @@ struct dvb_frontend *max2165_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(max2165_attach); ++EXPORT_SYMBOL_GPL(max2165_attach); + + MODULE_AUTHOR("David T. L. 
Wong "); + MODULE_DESCRIPTION("Maxim MAX2165 silicon tuner driver"); +diff --git a/drivers/media/tuners/mc44s803.c b/drivers/media/tuners/mc44s803.c +index 0c9161516abdf..ed8bdf7ebd99d 100644 +--- a/drivers/media/tuners/mc44s803.c ++++ b/drivers/media/tuners/mc44s803.c +@@ -356,7 +356,7 @@ error: + kfree(priv); + return NULL; + } +-EXPORT_SYMBOL(mc44s803_attach); ++EXPORT_SYMBOL_GPL(mc44s803_attach); + + MODULE_AUTHOR("Jochen Friedrich"); + MODULE_DESCRIPTION("Freescale MC44S803 silicon tuner driver"); +diff --git a/drivers/media/tuners/mt2060.c b/drivers/media/tuners/mt2060.c +index 322c806228a5a..da7e23c2689b8 100644 +--- a/drivers/media/tuners/mt2060.c ++++ b/drivers/media/tuners/mt2060.c +@@ -440,7 +440,7 @@ struct dvb_frontend * mt2060_attach(struct dvb_frontend *fe, struct i2c_adapter + + return fe; + } +-EXPORT_SYMBOL(mt2060_attach); ++EXPORT_SYMBOL_GPL(mt2060_attach); + + static int mt2060_probe(struct i2c_client *client, + const struct i2c_device_id *id) +diff --git a/drivers/media/tuners/mt2131.c b/drivers/media/tuners/mt2131.c +index 37f50ff6c0bd2..eebc060883414 100644 +--- a/drivers/media/tuners/mt2131.c ++++ b/drivers/media/tuners/mt2131.c +@@ -274,7 +274,7 @@ struct dvb_frontend * mt2131_attach(struct dvb_frontend *fe, + fe->tuner_priv = priv; + return fe; + } +-EXPORT_SYMBOL(mt2131_attach); ++EXPORT_SYMBOL_GPL(mt2131_attach); + + MODULE_AUTHOR("Steven Toth"); + MODULE_DESCRIPTION("Microtune MT2131 silicon tuner driver"); +diff --git a/drivers/media/tuners/mt2266.c b/drivers/media/tuners/mt2266.c +index 6136f20fa9b7f..2e92885a6bcb9 100644 +--- a/drivers/media/tuners/mt2266.c ++++ b/drivers/media/tuners/mt2266.c +@@ -336,7 +336,7 @@ struct dvb_frontend * mt2266_attach(struct dvb_frontend *fe, struct i2c_adapter + mt2266_calibrate(priv); + return fe; + } +-EXPORT_SYMBOL(mt2266_attach); ++EXPORT_SYMBOL_GPL(mt2266_attach); + + MODULE_AUTHOR("Olivier DANET"); + MODULE_DESCRIPTION("Microtune MT2266 silicon tuner driver"); +diff --git a/drivers/media/tuners/mxl5005s.c b/drivers/media/tuners/mxl5005s.c +index ab4c43df9d180..2a8b0ea5d0cd3 100644 +--- a/drivers/media/tuners/mxl5005s.c ++++ b/drivers/media/tuners/mxl5005s.c +@@ -4116,7 +4116,7 @@ struct dvb_frontend *mxl5005s_attach(struct dvb_frontend *fe, + fe->tuner_priv = state; + return fe; + } +-EXPORT_SYMBOL(mxl5005s_attach); ++EXPORT_SYMBOL_GPL(mxl5005s_attach); + + MODULE_DESCRIPTION("MaxLinear MXL5005S silicon tuner driver"); + MODULE_AUTHOR("Steven Toth"); +diff --git a/drivers/media/tuners/qt1010.c b/drivers/media/tuners/qt1010.c +index 3853a3d43d4f2..60931367b82ca 100644 +--- a/drivers/media/tuners/qt1010.c ++++ b/drivers/media/tuners/qt1010.c +@@ -440,7 +440,7 @@ struct dvb_frontend * qt1010_attach(struct dvb_frontend *fe, + fe->tuner_priv = priv; + return fe; + } +-EXPORT_SYMBOL(qt1010_attach); ++EXPORT_SYMBOL_GPL(qt1010_attach); + + MODULE_DESCRIPTION("Quantek QT1010 silicon tuner driver"); + MODULE_AUTHOR("Antti Palosaari "); +diff --git a/drivers/media/tuners/tda18218.c b/drivers/media/tuners/tda18218.c +index 4ed94646116fa..7d8d84dcb2459 100644 +--- a/drivers/media/tuners/tda18218.c ++++ b/drivers/media/tuners/tda18218.c +@@ -336,7 +336,7 @@ struct dvb_frontend *tda18218_attach(struct dvb_frontend *fe, + + return fe; + } +-EXPORT_SYMBOL(tda18218_attach); ++EXPORT_SYMBOL_GPL(tda18218_attach); + + MODULE_DESCRIPTION("NXP TDA18218HN silicon tuner driver"); + MODULE_AUTHOR("Antti Palosaari "); +diff --git a/drivers/media/tuners/xc2028.c b/drivers/media/tuners/xc2028.c +index 69c2e1b99bf17..5a967edceca93 100644 +--- 
a/drivers/media/tuners/xc2028.c ++++ b/drivers/media/tuners/xc2028.c +@@ -1512,7 +1512,7 @@ fail: + return NULL; + } + +-EXPORT_SYMBOL(xc2028_attach); ++EXPORT_SYMBOL_GPL(xc2028_attach); + + MODULE_DESCRIPTION("Xceive xc2028/xc3028 tuner driver"); + MODULE_AUTHOR("Michel Ludwig "); +diff --git a/drivers/media/tuners/xc4000.c b/drivers/media/tuners/xc4000.c +index d59b4ab774302..57ded9ff3f043 100644 +--- a/drivers/media/tuners/xc4000.c ++++ b/drivers/media/tuners/xc4000.c +@@ -1742,7 +1742,7 @@ fail2: + xc4000_release(fe); + return NULL; + } +-EXPORT_SYMBOL(xc4000_attach); ++EXPORT_SYMBOL_GPL(xc4000_attach); + + MODULE_AUTHOR("Steven Toth, Davide Ferri"); + MODULE_DESCRIPTION("Xceive xc4000 silicon tuner driver"); +diff --git a/drivers/media/tuners/xc5000.c b/drivers/media/tuners/xc5000.c +index 7b7d9fe4f9453..2182e5b7b6064 100644 +--- a/drivers/media/tuners/xc5000.c ++++ b/drivers/media/tuners/xc5000.c +@@ -1460,7 +1460,7 @@ fail: + xc5000_release(fe); + return NULL; + } +-EXPORT_SYMBOL(xc5000_attach); ++EXPORT_SYMBOL_GPL(xc5000_attach); + + MODULE_AUTHOR("Steven Toth"); + MODULE_DESCRIPTION("Xceive xc5000 silicon tuner driver"); +diff --git a/drivers/media/usb/dvb-usb/m920x.c b/drivers/media/usb/dvb-usb/m920x.c +index 548199cd86f60..11f4f0455f155 100644 +--- a/drivers/media/usb/dvb-usb/m920x.c ++++ b/drivers/media/usb/dvb-usb/m920x.c +@@ -277,7 +277,6 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu + char *read = kmalloc(1, GFP_KERNEL); + if (!read) { + ret = -ENOMEM; +- kfree(read); + goto unlock; + } + +@@ -288,8 +287,10 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu + + if ((ret = m920x_read(d->udev, M9206_I2C, 0x0, + 0x20 | stop, +- read, 1)) != 0) ++ read, 1)) != 0) { ++ kfree(read); + goto unlock; ++ } + msg[i].buf[j] = read[0]; + } + +diff --git a/drivers/media/usb/go7007/go7007-i2c.c b/drivers/media/usb/go7007/go7007-i2c.c +index 38339dd2f83f7..2880370e45c8b 100644 +--- a/drivers/media/usb/go7007/go7007-i2c.c ++++ b/drivers/media/usb/go7007/go7007-i2c.c +@@ -165,8 +165,6 @@ static int go7007_i2c_master_xfer(struct i2c_adapter *adapter, + } else if (msgs[i].len == 3) { + if (msgs[i].flags & I2C_M_RD) + return -EIO; +- if (msgs[i].len != 3) +- return -EIO; + if (go7007_i2c_xfer(go, msgs[i].addr, 0, + (msgs[i].buf[0] << 8) | msgs[i].buf[1], + 0x01, &msgs[i].buf[2]) < 0) +diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c +index 640737d3b8aeb..8a39cac76c585 100644 +--- a/drivers/media/usb/siano/smsusb.c ++++ b/drivers/media/usb/siano/smsusb.c +@@ -455,12 +455,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id) + rc = smscore_register_device(¶ms, &dev->coredev, 0, mdev); + if (rc < 0) { + pr_err("smscore_register_device(...) failed, rc %d\n", rc); +- smsusb_term_device(intf); +-#ifdef CONFIG_MEDIA_CONTROLLER_DVB +- media_device_unregister(mdev); +-#endif +- kfree(mdev); +- return rc; ++ goto err_unregister_device; + } + + smscore_set_board_id(dev->coredev, board_id); +@@ -477,8 +472,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id) + rc = smsusb_start_streaming(dev); + if (rc < 0) { + pr_err("smsusb_start_streaming(...) failed\n"); +- smsusb_term_device(intf); +- return rc; ++ goto err_unregister_device; + } + + dev->state = SMSUSB_ACTIVE; +@@ -486,13 +480,20 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id) + rc = smscore_start_device(dev->coredev); + if (rc < 0) { + pr_err("smscore_start_device(...) 
failed\n"); +- smsusb_term_device(intf); +- return rc; ++ goto err_unregister_device; + } + + pr_debug("device 0x%p created\n", dev); + + return rc; ++ ++err_unregister_device: ++ smsusb_term_device(intf); ++#ifdef CONFIG_MEDIA_CONTROLLER_DVB ++ media_device_unregister(mdev); ++#endif ++ kfree(mdev); ++ return rc; + } + + static int smsusb_probe(struct usb_interface *intf, +diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c +index 3d85a8600f576..69c8b3b656860 100644 +--- a/drivers/media/v4l2-core/v4l2-fwnode.c ++++ b/drivers/media/v4l2-core/v4l2-fwnode.c +@@ -551,19 +551,29 @@ int v4l2_fwnode_parse_link(struct fwnode_handle *fwnode, + link->local_id = fwep.id; + link->local_port = fwep.port; + link->local_node = fwnode_graph_get_port_parent(fwnode); ++ if (!link->local_node) ++ return -ENOLINK; + + fwnode = fwnode_graph_get_remote_endpoint(fwnode); +- if (!fwnode) { +- fwnode_handle_put(fwnode); +- return -ENOLINK; +- } ++ if (!fwnode) ++ goto err_put_local_node; + + fwnode_graph_parse_endpoint(fwnode, &fwep); + link->remote_id = fwep.id; + link->remote_port = fwep.port; + link->remote_node = fwnode_graph_get_port_parent(fwnode); ++ if (!link->remote_node) ++ goto err_put_remote_endpoint; + + return 0; ++ ++err_put_remote_endpoint: ++ fwnode_handle_put(fwnode); ++ ++err_put_local_node: ++ fwnode_handle_put(link->local_node); ++ ++ return -ENOLINK; + } + EXPORT_SYMBOL_GPL(v4l2_fwnode_parse_link); + +diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c +index e38d0e8b8e0ed..7572c5714b469 100644 +--- a/drivers/mmc/host/renesas_sdhi_core.c ++++ b/drivers/mmc/host/renesas_sdhi_core.c +@@ -1006,6 +1006,8 @@ int renesas_sdhi_probe(struct platform_device *pdev, + host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27; + host->sdcard_irq_mask_all = TMIO_MASK_ALL_RCAR2; + host->reset = renesas_sdhi_reset; ++ } else { ++ host->sdcard_irq_mask_all = TMIO_MASK_ALL; + } + + /* Orginally registers were 16 bit apart, could be 32 or 64 nowadays */ +@@ -1102,9 +1104,7 @@ int renesas_sdhi_probe(struct platform_device *pdev, + host->ops.hs400_complete = renesas_sdhi_hs400_complete; + } + +- ret = tmio_mmc_host_probe(host); +- if (ret < 0) +- goto edisclk; ++ sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all); + + num_irqs = platform_irq_count(pdev); + if (num_irqs < 0) { +@@ -1131,6 +1131,10 @@ int renesas_sdhi_probe(struct platform_device *pdev, + goto eirq; + } + ++ ret = tmio_mmc_host_probe(host); ++ if (ret < 0) ++ goto edisclk; ++ + dev_info(&pdev->dev, "%s base at %pa, max clock rate %u MHz\n", + mmc_hostname(host->mmc), &res->start, host->mmc->f_max / 1000000); + +diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c +index 2e9c2e2d9c9f7..d8418d7fcc372 100644 +--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c ++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c +@@ -2612,6 +2612,8 @@ static int brcmnand_setup_dev(struct brcmnand_host *host) + struct nand_chip *chip = &host->chip; + const struct nand_ecc_props *requirements = + nanddev_get_ecc_requirements(&chip->base); ++ struct nand_memory_organization *memorg = ++ nanddev_get_memorg(&chip->base); + struct brcmnand_controller *ctrl = host->ctrl; + struct brcmnand_cfg *cfg = &host->hwcfg; + char msg[128]; +@@ -2633,10 +2635,11 @@ static int brcmnand_setup_dev(struct brcmnand_host *host) + if (cfg->spare_area_size > ctrl->max_oob) + cfg->spare_area_size = ctrl->max_oob; + /* +- * Set oobsize to be consistent with 
controller's spare_area_size, as +- * the rest is inaccessible. ++ * Set mtd and memorg oobsize to be consistent with controller's ++ * spare_area_size, as the rest is inaccessible. + */ + mtd->oobsize = cfg->spare_area_size * (mtd->writesize >> FC_SHIFT); ++ memorg->oobsize = mtd->oobsize; + + cfg->device_size = mtd->size; + cfg->block_size = mtd->erasesize; +diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c +index 6b2bda815b880..17786e1331e6d 100644 +--- a/drivers/mtd/nand/raw/fsmc_nand.c ++++ b/drivers/mtd/nand/raw/fsmc_nand.c +@@ -1202,9 +1202,14 @@ static int fsmc_nand_suspend(struct device *dev) + static int fsmc_nand_resume(struct device *dev) + { + struct fsmc_nand_data *host = dev_get_drvdata(dev); ++ int ret; + + if (host) { +- clk_prepare_enable(host->clk); ++ ret = clk_prepare_enable(host->clk); ++ if (ret) { ++ dev_err(dev, "failed to enable clk\n"); ++ return ret; ++ } + if (host->dev_timings) + fsmc_nand_setup(host, host->dev_timings); + nand_reset(&host->nand, 0); +diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c +index dc4d86ceee447..a9000b0ebe690 100644 +--- a/drivers/mtd/spi-nor/core.c ++++ b/drivers/mtd/spi-nor/core.c +@@ -770,21 +770,22 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1) + ret = spi_nor_read_cr(nor, &sr_cr[1]); + if (ret) + return ret; +- } else if (nor->params->quad_enable) { ++ } else if (spi_nor_get_protocol_width(nor->read_proto) == 4 && ++ spi_nor_get_protocol_width(nor->write_proto) == 4 && ++ nor->params->quad_enable) { + /* + * If the Status Register 2 Read command (35h) is not + * supported, we should at least be sure we don't + * change the value of the SR2 Quad Enable bit. + * +- * We can safely assume that when the Quad Enable method is +- * set, the value of the QE bit is one, as a consequence of the +- * nor->params->quad_enable() call. ++ * When the Quad Enable method is set and the buswidth is 4, we ++ * can safely assume that the value of the QE bit is one, as a ++ * consequence of the nor->params->quad_enable() call. + * +- * We can safely assume that the Quad Enable bit is present in +- * the Status Register 2 at BIT(1). According to the JESD216 +- * revB standard, BFPT DWORDS[15], bits 22:20, the 16-bit +- * Write Status (01h) command is available just for the cases +- * in which the QE bit is described in SR2 at BIT(1). ++ * According to the JESD216 revB standard, BFPT DWORDS[15], ++ * bits 22:20, the 16-bit Write Status (01h) command is ++ * available just for the cases in which the QE bit is ++ * described in SR2 at BIT(1). 
+ */ + sr_cr[1] = SR2_QUAD_EN_BIT1; + } else { +diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c +index 1bad1866ae462..a48220f91a2df 100644 +--- a/drivers/net/arcnet/arcnet.c ++++ b/drivers/net/arcnet/arcnet.c +@@ -468,7 +468,7 @@ static void arcnet_reply_tasklet(struct tasklet_struct *t) + + ret = sock_queue_err_skb(sk, ackskb); + if (ret) +- kfree_skb(ackskb); ++ dev_kfree_skb_irq(ackskb); + + local_irq_enable(); + }; +diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c +index 5858cbafbc965..264a0f764e011 100644 +--- a/drivers/net/can/usb/gs_usb.c ++++ b/drivers/net/can/usb/gs_usb.c +@@ -626,6 +626,9 @@ static void gs_usb_receive_bulk_callback(struct urb *urb) + } + + if (hf->flags & GS_CAN_FLAG_OVERFLOW) { ++ stats->rx_over_errors++; ++ stats->rx_errors++; ++ + skb = alloc_can_err_skb(netdev, &cf); + if (!skb) + goto resubmit_urb; +@@ -633,8 +636,6 @@ static void gs_usb_receive_bulk_callback(struct urb *urb) + cf->can_id |= CAN_ERR_CRTL; + cf->len = CAN_ERR_DLC; + cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW; +- stats->rx_over_errors++; +- stats->rx_errors++; + netif_rx(skb); + } + +diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c +index 8c492d56d2c36..dc9eea3c8ab16 100644 +--- a/drivers/net/dsa/microchip/ksz_common.c ++++ b/drivers/net/dsa/microchip/ksz_common.c +@@ -590,10 +590,9 @@ static const struct regmap_range ksz9477_valid_regs[] = { + regmap_reg_range(0x1030, 0x1030), + regmap_reg_range(0x1100, 0x1115), + regmap_reg_range(0x111a, 0x111f), +- regmap_reg_range(0x1122, 0x1127), +- regmap_reg_range(0x112a, 0x112b), +- regmap_reg_range(0x1136, 0x1139), +- regmap_reg_range(0x113e, 0x113f), ++ regmap_reg_range(0x1120, 0x112b), ++ regmap_reg_range(0x1134, 0x113b), ++ regmap_reg_range(0x113c, 0x113f), + regmap_reg_range(0x1400, 0x1401), + regmap_reg_range(0x1403, 0x1403), + regmap_reg_range(0x1410, 0x1417), +@@ -624,10 +623,9 @@ static const struct regmap_range ksz9477_valid_regs[] = { + regmap_reg_range(0x2030, 0x2030), + regmap_reg_range(0x2100, 0x2115), + regmap_reg_range(0x211a, 0x211f), +- regmap_reg_range(0x2122, 0x2127), +- regmap_reg_range(0x212a, 0x212b), +- regmap_reg_range(0x2136, 0x2139), +- regmap_reg_range(0x213e, 0x213f), ++ regmap_reg_range(0x2120, 0x212b), ++ regmap_reg_range(0x2134, 0x213b), ++ regmap_reg_range(0x213c, 0x213f), + regmap_reg_range(0x2400, 0x2401), + regmap_reg_range(0x2403, 0x2403), + regmap_reg_range(0x2410, 0x2417), +@@ -658,10 +656,9 @@ static const struct regmap_range ksz9477_valid_regs[] = { + regmap_reg_range(0x3030, 0x3030), + regmap_reg_range(0x3100, 0x3115), + regmap_reg_range(0x311a, 0x311f), +- regmap_reg_range(0x3122, 0x3127), +- regmap_reg_range(0x312a, 0x312b), +- regmap_reg_range(0x3136, 0x3139), +- regmap_reg_range(0x313e, 0x313f), ++ regmap_reg_range(0x3120, 0x312b), ++ regmap_reg_range(0x3134, 0x313b), ++ regmap_reg_range(0x313c, 0x313f), + regmap_reg_range(0x3400, 0x3401), + regmap_reg_range(0x3403, 0x3403), + regmap_reg_range(0x3410, 0x3417), +@@ -692,10 +689,9 @@ static const struct regmap_range ksz9477_valid_regs[] = { + regmap_reg_range(0x4030, 0x4030), + regmap_reg_range(0x4100, 0x4115), + regmap_reg_range(0x411a, 0x411f), +- regmap_reg_range(0x4122, 0x4127), +- regmap_reg_range(0x412a, 0x412b), +- regmap_reg_range(0x4136, 0x4139), +- regmap_reg_range(0x413e, 0x413f), ++ regmap_reg_range(0x4120, 0x412b), ++ regmap_reg_range(0x4134, 0x413b), ++ regmap_reg_range(0x413c, 0x413f), + regmap_reg_range(0x4400, 0x4401), + regmap_reg_range(0x4403, 0x4403), + 
regmap_reg_range(0x4410, 0x4417), +@@ -726,10 +722,9 @@ static const struct regmap_range ksz9477_valid_regs[] = { + regmap_reg_range(0x5030, 0x5030), + regmap_reg_range(0x5100, 0x5115), + regmap_reg_range(0x511a, 0x511f), +- regmap_reg_range(0x5122, 0x5127), +- regmap_reg_range(0x512a, 0x512b), +- regmap_reg_range(0x5136, 0x5139), +- regmap_reg_range(0x513e, 0x513f), ++ regmap_reg_range(0x5120, 0x512b), ++ regmap_reg_range(0x5134, 0x513b), ++ regmap_reg_range(0x513c, 0x513f), + regmap_reg_range(0x5400, 0x5401), + regmap_reg_range(0x5403, 0x5403), + regmap_reg_range(0x5410, 0x5417), +diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c +index 40c781695d581..7762e532c6a4f 100644 +--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c ++++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c +@@ -2104,8 +2104,11 @@ static int atl1c_tso_csum(struct atl1c_adapter *adapter, + real_len = (((unsigned char *)ip_hdr(skb) - skb->data) + + ntohs(ip_hdr(skb)->tot_len)); + +- if (real_len < skb->len) +- pskb_trim(skb, real_len); ++ if (real_len < skb->len) { ++ err = pskb_trim(skb, real_len); ++ if (err) ++ return err; ++ } + + hdr_len = skb_tcp_all_headers(skb); + if (unlikely(skb->len == hdr_len)) { +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +index 51b1690fd0459..a1783faf4fe99 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +@@ -14312,11 +14312,16 @@ static void bnx2x_io_resume(struct pci_dev *pdev) + bp->fw_seq = SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) & + DRV_MSG_SEQ_NUMBER_MASK; + +- if (netif_running(dev)) +- bnx2x_nic_load(bp, LOAD_NORMAL); ++ if (netif_running(dev)) { ++ if (bnx2x_nic_load(bp, LOAD_NORMAL)) { ++ netdev_err(bp->dev, "Error during driver initialization, try unloading/reloading the driver\n"); ++ goto done; ++ } ++ } + + netif_device_attach(dev); + ++done: + rtnl_unlock(); + } + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 6af2273f227c2..84ecd8b9be48c 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -10936,9 +10936,12 @@ int hclge_cfg_flowctrl(struct hclge_dev *hdev) + u32 rx_pause, tx_pause; + u8 flowctl; + +- if (!phydev->link || !phydev->autoneg) ++ if (!phydev->link) + return 0; + ++ if (!phydev->autoneg) ++ return hclge_mac_pause_setup_hw(hdev); ++ + local_advertising = linkmode_adv_to_lcl_adv_t(phydev->advertising); + + if (phydev->pause) +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c +index 150f146fa24fb..8b40c6b4ee53e 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c +@@ -1549,7 +1549,7 @@ static int hclge_bp_setup_hw(struct hclge_dev *hdev, u8 tc) + return 0; + } + +-static int hclge_mac_pause_setup_hw(struct hclge_dev *hdev) ++int hclge_mac_pause_setup_hw(struct hclge_dev *hdev) + { + bool tx_en, rx_en; + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h +index dd6f1fd486cf2..251e808456208 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h +@@ -242,6 +242,7 @@ int 
hclge_pfc_pause_en_cfg(struct hclge_dev *hdev, u8 tx_rx_bitmap, + u8 pfc_bitmap); + int hclge_mac_pause_en_cfg(struct hclge_dev *hdev, bool tx, bool rx); + int hclge_pause_addr_cfg(struct hclge_dev *hdev, const u8 *mac_addr); ++int hclge_mac_pause_setup_hw(struct hclge_dev *hdev); + void hclge_pfc_rx_stats_get(struct hclge_dev *hdev, u64 *stats); + void hclge_pfc_tx_stats_get(struct hclge_dev *hdev, u64 *stats); + int hclge_tm_qs_shaper_cfg(struct hclge_vport *vport, int max_tx_rate); +diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c +index 7a00d297be3a9..3f98781e74b28 100644 +--- a/drivers/net/ethernet/intel/ice/ice_main.c ++++ b/drivers/net/ethernet/intel/ice/ice_main.c +@@ -1356,6 +1356,7 @@ int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout, + static void ice_aq_check_events(struct ice_pf *pf, u16 opcode, + struct ice_rq_event_info *event) + { ++ struct ice_rq_event_info *task_ev; + struct ice_aq_task *task; + bool found = false; + +@@ -1364,15 +1365,15 @@ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode, + if (task->state || task->opcode != opcode) + continue; + +- memcpy(&task->event->desc, &event->desc, sizeof(event->desc)); +- task->event->msg_len = event->msg_len; ++ task_ev = task->event; ++ memcpy(&task_ev->desc, &event->desc, sizeof(event->desc)); ++ task_ev->msg_len = event->msg_len; + + /* Only copy the data buffer if a destination was set */ +- if (task->event->msg_buf && +- task->event->buf_len > event->buf_len) { +- memcpy(task->event->msg_buf, event->msg_buf, ++ if (task_ev->msg_buf && task_ev->buf_len >= event->buf_len) { ++ memcpy(task_ev->msg_buf, event->msg_buf, + event->buf_len); +- task->event->buf_len = event->buf_len; ++ task_ev->buf_len = event->buf_len; + } + + task->state = ICE_AQ_TASK_COMPLETE; +diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +index 772b1f566d6ed..813acd6a4b469 100644 +--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c ++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +@@ -131,6 +131,8 @@ static void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd) + case READ_TIME: + cmd_val |= GLTSYN_CMD_READ_TIME; + break; ++ case ICE_PTP_NOP: ++ break; + } + + wr32(hw, GLTSYN_CMD, cmd_val); +@@ -1200,18 +1202,18 @@ ice_ptp_read_port_capture(struct ice_hw *hw, u8 port, u64 *tx_ts, u64 *rx_ts) + } + + /** +- * ice_ptp_one_port_cmd - Prepare a single PHY port for a timer command ++ * ice_ptp_write_port_cmd_e822 - Prepare a single PHY port for a timer command + * @hw: pointer to HW struct + * @port: Port to which cmd has to be sent + * @cmd: Command to be sent to the port + * + * Prepare the requested port for an upcoming timer sync command. + * +- * Note there is no equivalent of this operation on E810, as that device +- * always handles all external PHYs internally. ++ * Do not use this function directly. If you want to configure exactly one ++ * port, use ice_ptp_one_port_cmd() instead. 
+ */ + static int +-ice_ptp_one_port_cmd(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd) ++ice_ptp_write_port_cmd_e822(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd) + { + u32 cmd_val, val; + u8 tmr_idx; +@@ -1235,6 +1237,8 @@ ice_ptp_one_port_cmd(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd) + case ADJ_TIME_AT_TIME: + cmd_val |= PHY_CMD_ADJ_TIME_AT_TIME; + break; ++ case ICE_PTP_NOP: ++ break; + } + + /* Tx case */ +@@ -1280,6 +1284,39 @@ ice_ptp_one_port_cmd(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd) + return 0; + } + ++/** ++ * ice_ptp_one_port_cmd - Prepare one port for a timer command ++ * @hw: pointer to the HW struct ++ * @configured_port: the port to configure with configured_cmd ++ * @configured_cmd: timer command to prepare on the configured_port ++ * ++ * Prepare the configured_port for the configured_cmd, and prepare all other ++ * ports for ICE_PTP_NOP. This causes the configured_port to execute the ++ * desired command while all other ports perform no operation. ++ */ ++static int ++ice_ptp_one_port_cmd(struct ice_hw *hw, u8 configured_port, ++ enum ice_ptp_tmr_cmd configured_cmd) ++{ ++ u8 port; ++ ++ for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { ++ enum ice_ptp_tmr_cmd cmd; ++ int err; ++ ++ if (port == configured_port) ++ cmd = configured_cmd; ++ else ++ cmd = ICE_PTP_NOP; ++ ++ err = ice_ptp_write_port_cmd_e822(hw, port, cmd); ++ if (err) ++ return err; ++ } ++ ++ return 0; ++} ++ + /** + * ice_ptp_port_cmd_e822 - Prepare all ports for a timer command + * @hw: pointer to the HW struct +@@ -1296,7 +1333,7 @@ ice_ptp_port_cmd_e822(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd) + for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) { + int err; + +- err = ice_ptp_one_port_cmd(hw, port, cmd); ++ err = ice_ptp_write_port_cmd_e822(hw, port, cmd); + if (err) + return err; + } +@@ -2245,6 +2282,9 @@ static int ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port) + if (err) + goto err_unlock; + ++ /* Do not perform any action on the main timer */ ++ ice_ptp_src_cmd(hw, ICE_PTP_NOP); ++ + /* Issue the sync to activate the time adjustment */ + ice_ptp_exec_tmr_cmd(hw); + +@@ -2371,6 +2411,9 @@ ice_start_phy_timer_e822(struct ice_hw *hw, u8 port, bool bypass) + if (err) + return err; + ++ /* Do not perform any action on the main timer */ ++ ice_ptp_src_cmd(hw, ICE_PTP_NOP); ++ + ice_ptp_exec_tmr_cmd(hw); + + err = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val); +@@ -2914,6 +2957,8 @@ static int ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd) + case ADJ_TIME_AT_TIME: + cmd_val = GLTSYN_CMD_ADJ_INIT_TIME; + break; ++ case ICE_PTP_NOP: ++ return 0; + } + + /* Read, modify, write */ +diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h +index 2bda64c76abc3..071f545aa85e8 100644 +--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h ++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h +@@ -9,7 +9,8 @@ enum ice_ptp_tmr_cmd { + INIT_INCVAL, + ADJ_TIME, + ADJ_TIME_AT_TIME, +- READ_TIME ++ READ_TIME, ++ ICE_PTP_NOP, + }; + + enum ice_ptp_serdes { +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index 3e0444354632d..d0ead18ec0266 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -4758,6 +4758,10 @@ void igb_configure_rx_ring(struct igb_adapter *adapter, + static void igb_set_rx_buffer_len(struct igb_adapter *adapter, + struct igb_ring *rx_ring) + { ++#if (PAGE_SIZE < 8192) ++ struct 
e1000_hw *hw = &adapter->hw; ++#endif ++ + /* set build_skb and buffer size flags */ + clear_ring_build_skb_enabled(rx_ring); + clear_ring_uses_large_buffer(rx_ring); +@@ -4768,10 +4772,9 @@ static void igb_set_rx_buffer_len(struct igb_adapter *adapter, + set_ring_build_skb_enabled(rx_ring); + + #if (PAGE_SIZE < 8192) +- if (adapter->max_frame_size <= IGB_MAX_FRAME_BUILD_SKB) +- return; +- +- set_ring_uses_large_buffer(rx_ring); ++ if (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB || ++ rd32(E1000_RCTL) & E1000_RCTL_SBP) ++ set_ring_uses_large_buffer(rx_ring); + #endif + } + +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +index 5541e284cd3f0..c85e0180d96da 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c ++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +@@ -1691,6 +1691,42 @@ exit: + return true; + } + ++static void nix_reset_tx_schedule(struct rvu *rvu, int blkaddr, ++ int lvl, int schq) ++{ ++ u64 tlx_parent = 0, tlx_schedule = 0; ++ ++ switch (lvl) { ++ case NIX_TXSCH_LVL_TL2: ++ tlx_parent = NIX_AF_TL2X_PARENT(schq); ++ tlx_schedule = NIX_AF_TL2X_SCHEDULE(schq); ++ break; ++ case NIX_TXSCH_LVL_TL3: ++ tlx_parent = NIX_AF_TL3X_PARENT(schq); ++ tlx_schedule = NIX_AF_TL3X_SCHEDULE(schq); ++ break; ++ case NIX_TXSCH_LVL_TL4: ++ tlx_parent = NIX_AF_TL4X_PARENT(schq); ++ tlx_schedule = NIX_AF_TL4X_SCHEDULE(schq); ++ break; ++ case NIX_TXSCH_LVL_MDQ: ++ /* no need to reset SMQ_CFG as HW clears this CSR ++ * on SMQ flush ++ */ ++ tlx_parent = NIX_AF_MDQX_PARENT(schq); ++ tlx_schedule = NIX_AF_MDQX_SCHEDULE(schq); ++ break; ++ default: ++ return; ++ } ++ ++ if (tlx_parent) ++ rvu_write64(rvu, blkaddr, tlx_parent, 0x0); ++ ++ if (tlx_schedule) ++ rvu_write64(rvu, blkaddr, tlx_schedule, 0x0); ++} ++ + /* Disable shaping of pkts by a scheduler queue + * at a given scheduler level. + */ +@@ -2040,6 +2076,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu, + pfvf_map[schq] = TXSCH_MAP(pcifunc, 0); + nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); + nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq); ++ nix_reset_tx_schedule(rvu, blkaddr, lvl, schq); + } + + for (idx = 0; idx < req->schq[lvl]; idx++) { +@@ -2049,6 +2086,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu, + pfvf_map[schq] = TXSCH_MAP(pcifunc, 0); + nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); + nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq); ++ nix_reset_tx_schedule(rvu, blkaddr, lvl, schq); + } + } + +@@ -2137,6 +2175,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc) + continue; + nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); + nix_clear_tx_xoff(rvu, blkaddr, lvl, schq); ++ nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq); + } + } + nix_clear_tx_xoff(rvu, blkaddr, NIX_TXSCH_LVL_TL1, +@@ -2175,6 +2214,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc) + for (schq = 0; schq < txsch->schq.max; schq++) { + if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc) + continue; ++ nix_reset_tx_schedule(rvu, blkaddr, lvl, schq); + rvu_free_rsrc(&txsch->schq, schq); + txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE); + } +@@ -2234,6 +2274,9 @@ static int nix_txschq_free_one(struct rvu *rvu, + */ + nix_clear_tx_xoff(rvu, blkaddr, lvl, schq); + ++ nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq); ++ nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq); ++ + /* Flush if it is a SMQ. 
Onus of disabling + * TL2/3 queue links before SMQ flush is on user + */ +@@ -2243,6 +2286,8 @@ static int nix_txschq_free_one(struct rvu *rvu, + goto err; + } + ++ nix_reset_tx_schedule(rvu, blkaddr, lvl, schq); ++ + /* Free the resource */ + rvu_free_rsrc(&txsch->schq, schq); + txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE); +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +index 8a41ad8ca04f1..011355e73696e 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +@@ -716,7 +716,8 @@ EXPORT_SYMBOL(otx2_smq_flush); + int otx2_txsch_alloc(struct otx2_nic *pfvf) + { + struct nix_txsch_alloc_req *req; +- int lvl; ++ struct nix_txsch_alloc_rsp *rsp; ++ int lvl, schq, rc; + + /* Get memory to put this msg */ + req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox); +@@ -726,33 +727,69 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf) + /* Request one schq per level */ + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) + req->schq[lvl] = 1; ++ rc = otx2_sync_mbox_msg(&pfvf->mbox); ++ if (rc) ++ return rc; + +- return otx2_sync_mbox_msg(&pfvf->mbox); ++ rsp = (struct nix_txsch_alloc_rsp *) ++ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); ++ if (IS_ERR(rsp)) ++ return PTR_ERR(rsp); ++ ++ /* Setup transmit scheduler list */ ++ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) ++ for (schq = 0; schq < rsp->schq[lvl]; schq++) ++ pfvf->hw.txschq_list[lvl][schq] = ++ rsp->schq_list[lvl][schq]; ++ ++ pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl; ++ ++ return 0; + } + +-int otx2_txschq_stop(struct otx2_nic *pfvf) ++void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq) + { + struct nix_txsch_free_req *free_req; +- int lvl, schq, err; ++ int err; + + mutex_lock(&pfvf->mbox.lock); +- /* Free the transmit schedulers */ ++ + free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox); + if (!free_req) { + mutex_unlock(&pfvf->mbox.lock); +- return -ENOMEM; ++ netdev_err(pfvf->netdev, ++ "Failed alloc txschq free req\n"); ++ return; + } + +- free_req->flags = TXSCHQ_FREE_ALL; ++ free_req->schq_lvl = lvl; ++ free_req->schq = schq; ++ + err = otx2_sync_mbox_msg(&pfvf->mbox); ++ if (err) { ++ netdev_err(pfvf->netdev, ++ "Failed stop txschq %d at level %d\n", schq, lvl); ++ } ++ + mutex_unlock(&pfvf->mbox.lock); ++} ++EXPORT_SYMBOL(otx2_txschq_free_one); ++ ++void otx2_txschq_stop(struct otx2_nic *pfvf) ++{ ++ int lvl, schq; ++ ++ /* free non QOS TLx nodes */ ++ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) ++ otx2_txschq_free_one(pfvf, lvl, ++ pfvf->hw.txschq_list[lvl][0]); + + /* Clear the txschq list */ + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) + pfvf->hw.txschq_list[lvl][schq] = 0; + } +- return err; ++ + } + + void otx2_sqb_flush(struct otx2_nic *pfvf) +@@ -1629,21 +1666,6 @@ void mbox_handler_cgx_fec_stats(struct otx2_nic *pfvf, + pfvf->hw.cgx_fec_uncorr_blks += rsp->fec_uncorr_blks; + } + +-void mbox_handler_nix_txsch_alloc(struct otx2_nic *pf, +- struct nix_txsch_alloc_rsp *rsp) +-{ +- int lvl, schq; +- +- /* Setup transmit scheduler list */ +- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) +- for (schq = 0; schq < rsp->schq[lvl]; schq++) +- pf->hw.txschq_list[lvl][schq] = +- rsp->schq_list[lvl][schq]; +- +- pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl; +-} +-EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc); +- + void mbox_handler_npa_lf_alloc(struct otx2_nic *pfvf, + struct 
npa_lf_alloc_rsp *rsp) + { +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +index 241016ca64d05..8a9793b06769f 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +@@ -917,7 +917,8 @@ int otx2_config_nix(struct otx2_nic *pfvf); + int otx2_config_nix_queues(struct otx2_nic *pfvf); + int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en); + int otx2_txsch_alloc(struct otx2_nic *pfvf); +-int otx2_txschq_stop(struct otx2_nic *pfvf); ++void otx2_txschq_stop(struct otx2_nic *pfvf); ++void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq); + void otx2_sqb_flush(struct otx2_nic *pfvf); + int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, + dma_addr_t *dma); +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c +index ccaf97bb1ce03..bfddbff7bcdfb 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c +@@ -70,7 +70,7 @@ static int otx2_pfc_txschq_alloc_one(struct otx2_nic *pfvf, u8 prio) + * link config level. These rest of the scheduler can be + * same as hw.txschq_list. + */ +- for (lvl = 0; lvl < pfvf->hw.txschq_link_cfg_lvl; lvl++) ++ for (lvl = 0; lvl <= pfvf->hw.txschq_link_cfg_lvl; lvl++) + req->schq[lvl] = 1; + + rc = otx2_sync_mbox_msg(&pfvf->mbox); +@@ -83,7 +83,7 @@ static int otx2_pfc_txschq_alloc_one(struct otx2_nic *pfvf, u8 prio) + return PTR_ERR(rsp); + + /* Setup transmit scheduler list */ +- for (lvl = 0; lvl < pfvf->hw.txschq_link_cfg_lvl; lvl++) { ++ for (lvl = 0; lvl <= pfvf->hw.txschq_link_cfg_lvl; lvl++) { + if (!rsp->schq[lvl]) + return -ENOSPC; + +@@ -125,19 +125,12 @@ int otx2_pfc_txschq_alloc(struct otx2_nic *pfvf) + + static int otx2_pfc_txschq_stop_one(struct otx2_nic *pfvf, u8 prio) + { +- struct nix_txsch_free_req *free_req; ++ int lvl; + +- mutex_lock(&pfvf->mbox.lock); + /* free PFC TLx nodes */ +- free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox); +- if (!free_req) { +- mutex_unlock(&pfvf->mbox.lock); +- return -ENOMEM; +- } +- +- free_req->flags = TXSCHQ_FREE_ALL; +- otx2_sync_mbox_msg(&pfvf->mbox); +- mutex_unlock(&pfvf->mbox.lock); ++ for (lvl = 0; lvl <= pfvf->hw.txschq_link_cfg_lvl; lvl++) ++ otx2_txschq_free_one(pfvf, lvl, ++ pfvf->pfc_schq_list[lvl][prio]); + + pfvf->pfc_alloc_status[prio] = false; + return 0; +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +index c236dba80ff1a..17e546d0d7e55 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +@@ -791,10 +791,6 @@ static void otx2_process_pfaf_mbox_msg(struct otx2_nic *pf, + case MBOX_MSG_NIX_LF_ALLOC: + mbox_handler_nix_lf_alloc(pf, (struct nix_lf_alloc_rsp *)msg); + break; +- case MBOX_MSG_NIX_TXSCH_ALLOC: +- mbox_handler_nix_txsch_alloc(pf, +- (struct nix_txsch_alloc_rsp *)msg); +- break; + case MBOX_MSG_NIX_BP_ENABLE: + mbox_handler_nix_bp_enable(pf, (struct nix_bp_cfg_rsp *)msg); + break; +@@ -1517,8 +1513,7 @@ err_free_nix_queues: + otx2_free_cq_res(pf); + otx2_ctx_disable(mbox, NIX_AQ_CTYPE_RQ, false); + err_free_txsch: +- if (otx2_txschq_stop(pf)) +- dev_err(pf->dev, "%s failed to stop TX schedulers\n", __func__); ++ otx2_txschq_stop(pf); + err_free_sq_ptrs: + 
otx2_sq_free_sqbs(pf); + err_free_rq_ptrs: +@@ -1553,15 +1548,13 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) + struct mbox *mbox = &pf->mbox; + struct otx2_cq_queue *cq; + struct msg_req *req; +- int qidx, err; ++ int qidx; + + /* Ensure all SQE are processed */ + otx2_sqb_flush(pf); + + /* Stop transmission */ +- err = otx2_txschq_stop(pf); +- if (err) +- dev_err(pf->dev, "RVUPF: Failed to stop/free TX schedulers\n"); ++ otx2_txschq_stop(pf); + + #ifdef CONFIG_DCB + if (pf->pfc_en) +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +index 53366dbfbf27c..f8f0c01f62a14 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +@@ -70,10 +70,6 @@ static void otx2vf_process_vfaf_mbox_msg(struct otx2_nic *vf, + case MBOX_MSG_NIX_LF_ALLOC: + mbox_handler_nix_lf_alloc(vf, (struct nix_lf_alloc_rsp *)msg); + break; +- case MBOX_MSG_NIX_TXSCH_ALLOC: +- mbox_handler_nix_txsch_alloc(vf, +- (struct nix_txsch_alloc_rsp *)msg); +- break; + case MBOX_MSG_NIX_BP_ENABLE: + mbox_handler_nix_bp_enable(vf, (struct nix_bp_cfg_rsp *)msg); + break; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c +index d219f8417d93a..dec1492da74de 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c +@@ -319,16 +319,11 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev) + pci_cfg_access_lock(sdev); + } + /* PCI link toggle */ +- err = pci_read_config_word(bridge, cap + PCI_EXP_LNKCTL, ®16); +- if (err) +- return err; +- reg16 |= PCI_EXP_LNKCTL_LD; +- err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16); ++ err = pcie_capability_set_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD); + if (err) + return err; + msleep(500); +- reg16 &= ~PCI_EXP_LNKCTL_LD; +- err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16); ++ err = pcie_capability_clear_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD); + if (err) + return err; + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c +index 70735068cf292..0fd290d776ffe 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c +@@ -405,7 +405,8 @@ mlxsw_hwmon_module_temp_label_show(struct device *dev, + container_of(attr, struct mlxsw_hwmon_attr, dev_attr); + + return sprintf(buf, "front panel %03u\n", +- mlxsw_hwmon_attr->type_index); ++ mlxsw_hwmon_attr->type_index + 1 - ++ mlxsw_hwmon_attr->mlxsw_hwmon_dev->sensor_count); + } + + static ssize_t +diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c +index f5f5f8dc3d190..3beefc167da91 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c +@@ -48,6 +48,7 @@ + #define MLXSW_I2C_MBOX_SIZE_BITS 12 + #define MLXSW_I2C_ADDR_BUF_SIZE 4 + #define MLXSW_I2C_BLK_DEF 32 ++#define MLXSW_I2C_BLK_MAX 100 + #define MLXSW_I2C_RETRY 5 + #define MLXSW_I2C_TIMEOUT_MSECS 5000 + #define MLXSW_I2C_MAX_DATA_SIZE 256 +@@ -444,7 +445,7 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size, + } else { + /* No input mailbox is case of initialization query command. 
*/ + reg_size = MLXSW_I2C_MAX_DATA_SIZE; +- num = reg_size / mlxsw_i2c->block_size; ++ num = DIV_ROUND_UP(reg_size, mlxsw_i2c->block_size); + + if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) { + dev_err(&client->dev, "Could not acquire lock"); +@@ -653,7 +654,7 @@ static int mlxsw_i2c_probe(struct i2c_client *client, + return -EOPNOTSUPP; + } + +- mlxsw_i2c->block_size = max_t(u16, MLXSW_I2C_BLK_DEF, ++ mlxsw_i2c->block_size = min_t(u16, MLXSW_I2C_BLK_MAX, + min_t(u16, quirks->max_read_len, + quirks->max_write_len)); + } else { +diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c +index cabed1b7b45ed..a9a0dca0c0305 100644 +--- a/drivers/net/ethernet/realtek/r8169_main.c ++++ b/drivers/net/ethernet/realtek/r8169_main.c +@@ -5201,13 +5201,9 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) + + /* Disable ASPM L1 as that cause random device stop working + * problems as well as full system hangs for some PCIe devices users. +- * Chips from RTL8168h partially have issues with L1.2, but seem +- * to work fine with L1 and L1.1. + */ + if (rtl_aspm_is_safe(tp)) + rc = 0; +- else if (tp->mac_version >= RTL_GIGA_MAC_VER_46) +- rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); + else + rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1); + tp->aspm_manageable = !rc; +diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c +index eaef4a15008a3..692c7f132e9f9 100644 +--- a/drivers/net/ethernet/sfc/ptp.c ++++ b/drivers/net/ethernet/sfc/ptp.c +@@ -1387,7 +1387,8 @@ static int efx_ptp_insert_multicast_filters(struct efx_nic *efx) + goto fail; + + rc = efx_ptp_insert_eth_filter(efx); +- if (rc < 0) ++ /* Not all firmware variants support this filter */ ++ if (rc < 0 && rc != -EPROTONOSUPPORT) + goto fail; + } + +diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c +index 6a7965ed63001..578f470e9fad9 100644 +--- a/drivers/net/macsec.c ++++ b/drivers/net/macsec.c +@@ -1331,8 +1331,7 @@ static struct crypto_aead *macsec_alloc_tfm(char *key, int key_len, int icv_len) + struct crypto_aead *tfm; + int ret; + +- /* Pick a sync gcm(aes) cipher to ensure order is preserved. */ +- tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC); ++ tfm = crypto_alloc_aead("gcm(aes)", 0, 0); + + if (IS_ERR(tfm)) + return tfm; +diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c +index daac293e8edec..1865e3dbdfad0 100644 +--- a/drivers/net/phy/sfp-bus.c ++++ b/drivers/net/phy/sfp-bus.c +@@ -254,6 +254,16 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id, + switch (id->base.extended_cc) { + case SFF8024_ECC_UNSPEC: + break; ++ case SFF8024_ECC_100G_25GAUI_C2M_AOC: ++ if (br_min <= 28000 && br_max >= 25000) { ++ /* 25GBASE-R, possibly with FEC */ ++ __set_bit(PHY_INTERFACE_MODE_25GBASER, interfaces); ++ /* There is currently no link mode for 25000base ++ * with unspecified range, reuse SR. 
++ */ ++ phylink_set(modes, 25000baseSR_Full); ++ } ++ break; + case SFF8024_ECC_100GBASE_SR4_25GBASE_SR: + phylink_set(modes, 100000baseSR4_Full); + phylink_set(modes, 25000baseSR_Full); +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 68829a5a93d3e..4fb981b8732ef 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -1422,6 +1422,7 @@ static const struct usb_device_id products[] = { + {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */ + {QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */ + {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */ ++ {QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)}, /* Quectel EM05GV2 */ + {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */ + {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */ + {QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/ +diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c +index 728d607289c36..522691ba4c5d2 100644 +--- a/drivers/net/wireless/ath/ath10k/pci.c ++++ b/drivers/net/wireless/ath/ath10k/pci.c +@@ -1963,8 +1963,9 @@ static int ath10k_pci_hif_start(struct ath10k *ar) + ath10k_pci_irq_enable(ar); + ath10k_pci_rx_post(ar); + +- pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL, +- ar_pci->link_ctl); ++ pcie_capability_clear_and_set_word(ar_pci->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_ASPMC, ++ ar_pci->link_ctl & PCI_EXP_LNKCTL_ASPMC); + + return 0; + } +@@ -2821,8 +2822,8 @@ static int ath10k_pci_hif_power_up(struct ath10k *ar, + + pcie_capability_read_word(ar_pci->pdev, PCI_EXP_LNKCTL, + &ar_pci->link_ctl); +- pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL, +- ar_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC); ++ pcie_capability_clear_word(ar_pci->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_ASPMC); + + /* + * Bring the target up cleanly. 
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c +index 3c6005ab9a717..3953ebd551bf8 100644 +--- a/drivers/net/wireless/ath/ath11k/pci.c ++++ b/drivers/net/wireless/ath/ath11k/pci.c +@@ -582,8 +582,8 @@ static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci) + u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1)); + + /* disable L0s and L1 */ +- pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, +- ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC); ++ pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_ASPMC); + + set_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags); + } +@@ -591,8 +591,10 @@ static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci) + static void ath11k_pci_aspm_restore(struct ath11k_pci *ab_pci) + { + if (test_and_clear_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags)) +- pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, +- ab_pci->link_ctl); ++ pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_ASPMC, ++ ab_pci->link_ctl & ++ PCI_EXP_LNKCTL_ASPMC); + } + + static int ath11k_pci_power_up(struct ath11k_base *ab) +diff --git a/drivers/net/wireless/ath/ath6kl/Makefile b/drivers/net/wireless/ath/ath6kl/Makefile +index a75bfa9fd1cfd..dc2b3b46781e1 100644 +--- a/drivers/net/wireless/ath/ath6kl/Makefile ++++ b/drivers/net/wireless/ath/ath6kl/Makefile +@@ -36,11 +36,6 @@ ath6kl_core-y += wmi.o + ath6kl_core-y += core.o + ath6kl_core-y += recovery.o + +-# FIXME: temporarily silence -Wdangling-pointer on non W=1+ builds +-ifndef KBUILD_EXTRA_WARN +-CFLAGS_htc_mbox.o += $(call cc-disable-warning, dangling-pointer) +-endif +- + ath6kl_core-$(CONFIG_NL80211_TESTMODE) += testmode.o + ath6kl_core-$(CONFIG_ATH6KL_TRACING) += trace.o + +diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c +index b3ed65e5c4da8..c55aab01fff5d 100644 +--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c ++++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c +@@ -491,7 +491,7 @@ int ath9k_htc_init_debug(struct ath_hw *ah) + + priv->debug.debugfs_phy = debugfs_create_dir(KBUILD_MODNAME, + priv->hw->wiphy->debugfsdir); +- if (!priv->debug.debugfs_phy) ++ if (IS_ERR(priv->debug.debugfs_phy)) + return -ENOMEM; + + ath9k_cmn_spectral_init_debug(&priv->spec_priv, priv->debug.debugfs_phy); +diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c +index d652c647d56b5..1476b42b52a91 100644 +--- a/drivers/net/wireless/ath/ath9k/wmi.c ++++ b/drivers/net/wireless/ath/ath9k/wmi.c +@@ -242,10 +242,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb, + spin_unlock_irqrestore(&wmi->wmi_lock, flags); + goto free_skb; + } +- spin_unlock_irqrestore(&wmi->wmi_lock, flags); + + /* WMI command response */ + ath9k_wmi_rsp_callback(wmi, skb); ++ spin_unlock_irqrestore(&wmi->wmi_lock, flags); + + free_skb: + kfree_skb(skb); +@@ -283,7 +283,8 @@ int ath9k_wmi_connect(struct htc_target *htc, struct wmi *wmi, + + static int ath9k_wmi_cmd_issue(struct wmi *wmi, + struct sk_buff *skb, +- enum wmi_cmd_id cmd, u16 len) ++ enum wmi_cmd_id cmd, u16 len, ++ u8 *rsp_buf, u32 rsp_len) + { + struct wmi_cmd_hdr *hdr; + unsigned long flags; +@@ -293,6 +294,11 @@ static int ath9k_wmi_cmd_issue(struct wmi *wmi, + hdr->seq_no = cpu_to_be16(++wmi->tx_seq_id); + + spin_lock_irqsave(&wmi->wmi_lock, flags); ++ ++ /* record the rsp buffer and length */ ++ wmi->cmd_rsp_buf = rsp_buf; ++ wmi->cmd_rsp_len = rsp_len; ++ + wmi->last_seq_id = 
wmi->tx_seq_id; + spin_unlock_irqrestore(&wmi->wmi_lock, flags); + +@@ -308,8 +314,8 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id, + struct ath_common *common = ath9k_hw_common(ah); + u16 headroom = sizeof(struct htc_frame_hdr) + + sizeof(struct wmi_cmd_hdr); ++ unsigned long time_left, flags; + struct sk_buff *skb; +- unsigned long time_left; + int ret = 0; + + if (ah->ah_flags & AH_UNPLUGGED) +@@ -333,11 +339,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id, + goto out; + } + +- /* record the rsp buffer and length */ +- wmi->cmd_rsp_buf = rsp_buf; +- wmi->cmd_rsp_len = rsp_len; +- +- ret = ath9k_wmi_cmd_issue(wmi, skb, cmd_id, cmd_len); ++ ret = ath9k_wmi_cmd_issue(wmi, skb, cmd_id, cmd_len, rsp_buf, rsp_len); + if (ret) + goto out; + +@@ -345,7 +347,9 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id, + if (!time_left) { + ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n", + wmi_cmd_to_name(cmd_id)); ++ spin_lock_irqsave(&wmi->wmi_lock, flags); + wmi->last_seq_id = 0; ++ spin_unlock_irqrestore(&wmi->wmi_lock, flags); + mutex_unlock(&wmi->op_mutex); + return -ETIMEDOUT; + } +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h +index f518e025d6e46..a8d88aedc4227 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h +@@ -383,7 +383,12 @@ struct brcmf_scan_params_le { + * fixed parameter portion is assumed, otherwise + * ssid in the fixed portion is ignored + */ +- __le16 channel_list[1]; /* list of chanspecs */ ++ union { ++ __le16 padding; /* Reserve space for at least 1 entry for abort ++ * which uses an on stack brcmf_scan_params_le ++ */ ++ DECLARE_FLEX_ARRAY(__le16, channel_list); /* chanspecs */ ++ }; + }; + + struct brcmf_scan_results { +diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c +index bda53cb91f376..63f232c723374 100644 +--- a/drivers/net/wireless/marvell/mwifiex/debugfs.c ++++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c +@@ -253,8 +253,11 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf, + if (!p) + return -ENOMEM; + +- if (!priv || !priv->hist_data) +- return -EFAULT; ++ if (!priv || !priv->hist_data) { ++ ret = -EFAULT; ++ goto free_and_exit; ++ } ++ + phist_data = priv->hist_data; + + p += sprintf(p, "\n" +@@ -309,6 +312,8 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf, + ret = simple_read_from_buffer(ubuf, count, ppos, (char *)page, + (unsigned long)p - page); + ++free_and_exit: ++ free_page(page); + return ret; + } + +diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c +index 9a698a16a8f38..6697132ecc977 100644 +--- a/drivers/net/wireless/marvell/mwifiex/pcie.c ++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c +@@ -189,6 +189,8 @@ static int mwifiex_pcie_probe_of(struct device *dev) + } + + static void mwifiex_pcie_work(struct work_struct *work); ++static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter); ++static int mwifiex_pcie_delete_evtbd_ring(struct mwifiex_adapter *adapter); + + static int + mwifiex_map_pci_memory(struct mwifiex_adapter *adapter, struct sk_buff *skb, +@@ -792,14 +794,15 @@ static int mwifiex_init_rxq_ring(struct mwifiex_adapter *adapter) + if (!skb) { + mwifiex_dbg(adapter, ERROR, + "Unable to allocate skb for RX ring.\n"); +- kfree(card->rxbd_ring_vbase); + 
return -ENOMEM; + } + + if (mwifiex_map_pci_memory(adapter, skb, + MWIFIEX_RX_DATA_BUF_SIZE, +- DMA_FROM_DEVICE)) +- return -1; ++ DMA_FROM_DEVICE)) { ++ kfree_skb(skb); ++ return -ENOMEM; ++ } + + buf_pa = MWIFIEX_SKB_DMA_ADDR(skb); + +@@ -849,7 +852,6 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter) + if (!skb) { + mwifiex_dbg(adapter, ERROR, + "Unable to allocate skb for EVENT buf.\n"); +- kfree(card->evtbd_ring_vbase); + return -ENOMEM; + } + skb_put(skb, MAX_EVENT_SIZE); +@@ -857,8 +859,7 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter) + if (mwifiex_map_pci_memory(adapter, skb, MAX_EVENT_SIZE, + DMA_FROM_DEVICE)) { + kfree_skb(skb); +- kfree(card->evtbd_ring_vbase); +- return -1; ++ return -ENOMEM; + } + + buf_pa = MWIFIEX_SKB_DMA_ADDR(skb); +@@ -1058,6 +1059,7 @@ static int mwifiex_pcie_delete_txbd_ring(struct mwifiex_adapter *adapter) + */ + static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter) + { ++ int ret; + struct pcie_service_card *card = adapter->card; + const struct mwifiex_pcie_card_reg *reg = card->pcie.reg; + +@@ -1096,7 +1098,10 @@ static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter) + (u32)((u64)card->rxbd_ring_pbase >> 32), + card->rxbd_ring_size); + +- return mwifiex_init_rxq_ring(adapter); ++ ret = mwifiex_init_rxq_ring(adapter); ++ if (ret) ++ mwifiex_pcie_delete_rxbd_ring(adapter); ++ return ret; + } + + /* +@@ -1127,6 +1132,7 @@ static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter) + */ + static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter) + { ++ int ret; + struct pcie_service_card *card = adapter->card; + const struct mwifiex_pcie_card_reg *reg = card->pcie.reg; + +@@ -1161,7 +1167,10 @@ static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter) + (u32)((u64)card->evtbd_ring_pbase >> 32), + card->evtbd_ring_size); + +- return mwifiex_pcie_init_evt_ring(adapter); ++ ret = mwifiex_pcie_init_evt_ring(adapter); ++ if (ret) ++ mwifiex_pcie_delete_evtbd_ring(adapter); ++ return ret; + } + + /* +diff --git a/drivers/net/wireless/marvell/mwifiex/sta_rx.c b/drivers/net/wireless/marvell/mwifiex/sta_rx.c +index 13659b02ba882..65420ad674167 100644 +--- a/drivers/net/wireless/marvell/mwifiex/sta_rx.c ++++ b/drivers/net/wireless/marvell/mwifiex/sta_rx.c +@@ -86,6 +86,15 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv, + rx_pkt_len = le16_to_cpu(local_rx_pd->rx_pkt_length); + rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_off; + ++ if (sizeof(*rx_pkt_hdr) + rx_pkt_off > skb->len) { ++ mwifiex_dbg(priv->adapter, ERROR, ++ "wrong rx packet offset: len=%d, rx_pkt_off=%d\n", ++ skb->len, rx_pkt_off); ++ priv->stats.rx_dropped++; ++ dev_kfree_skb_any(skb); ++ return -1; ++ } ++ + if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header, + sizeof(bridge_tunnel_header))) || + (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header, +@@ -194,7 +203,8 @@ int mwifiex_process_sta_rx_packet(struct mwifiex_private *priv, + + rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_offset; + +- if ((rx_pkt_offset + rx_pkt_length) > (u16) skb->len) { ++ if ((rx_pkt_offset + rx_pkt_length) > skb->len || ++ sizeof(rx_pkt_hdr->eth803_hdr) + rx_pkt_offset > skb->len) { + mwifiex_dbg(adapter, ERROR, + "wrong rx packet: len=%d, rx_pkt_offset=%d, rx_pkt_length=%d\n", + skb->len, rx_pkt_offset, rx_pkt_length); +diff --git a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c +index e495f7eaea033..b8b9a0fcb19cd 
100644 +--- a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c ++++ b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c +@@ -103,6 +103,16 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv, + return; + } + ++ if (sizeof(*rx_pkt_hdr) + ++ le16_to_cpu(uap_rx_pd->rx_pkt_offset) > skb->len) { ++ mwifiex_dbg(adapter, ERROR, ++ "wrong rx packet offset: len=%d,rx_pkt_offset=%d\n", ++ skb->len, le16_to_cpu(uap_rx_pd->rx_pkt_offset)); ++ priv->stats.rx_dropped++; ++ dev_kfree_skb_any(skb); ++ return; ++ } ++ + if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header, + sizeof(bridge_tunnel_header))) || + (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header, +@@ -243,7 +253,15 @@ int mwifiex_handle_uap_rx_forward(struct mwifiex_private *priv, + + if (is_multicast_ether_addr(ra)) { + skb_uap = skb_copy(skb, GFP_ATOMIC); +- mwifiex_uap_queue_bridged_pkt(priv, skb_uap); ++ if (likely(skb_uap)) { ++ mwifiex_uap_queue_bridged_pkt(priv, skb_uap); ++ } else { ++ mwifiex_dbg(adapter, ERROR, ++ "failed to copy skb for uAP\n"); ++ priv->stats.rx_dropped++; ++ dev_kfree_skb_any(skb); ++ return -1; ++ } + } else { + if (mwifiex_get_sta_entry(priv, ra)) { + /* Requeue Intra-BSS packet */ +@@ -367,6 +385,16 @@ int mwifiex_process_uap_rx_packet(struct mwifiex_private *priv, + rx_pkt_type = le16_to_cpu(uap_rx_pd->rx_pkt_type); + rx_pkt_hdr = (void *)uap_rx_pd + le16_to_cpu(uap_rx_pd->rx_pkt_offset); + ++ if (le16_to_cpu(uap_rx_pd->rx_pkt_offset) + ++ sizeof(rx_pkt_hdr->eth803_hdr) > skb->len) { ++ mwifiex_dbg(adapter, ERROR, ++ "wrong rx packet for struct ethhdr: len=%d, offset=%d\n", ++ skb->len, le16_to_cpu(uap_rx_pd->rx_pkt_offset)); ++ priv->stats.rx_dropped++; ++ dev_kfree_skb_any(skb); ++ return 0; ++ } ++ + ether_addr_copy(ta, rx_pkt_hdr->eth803_hdr.h_source); + + if ((le16_to_cpu(uap_rx_pd->rx_pkt_offset) + +diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c +index 94c2d219835da..745b1d925b217 100644 +--- a/drivers/net/wireless/marvell/mwifiex/util.c ++++ b/drivers/net/wireless/marvell/mwifiex/util.c +@@ -393,11 +393,15 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv, + } + + rx_pd = (struct rxpd *)skb->data; ++ pkt_len = le16_to_cpu(rx_pd->rx_pkt_length); ++ if (pkt_len < sizeof(struct ieee80211_hdr) + sizeof(pkt_len)) { ++ mwifiex_dbg(priv->adapter, ERROR, "invalid rx_pkt_length"); ++ return -1; ++ } + + skb_pull(skb, le16_to_cpu(rx_pd->rx_pkt_offset)); + skb_pull(skb, sizeof(pkt_len)); +- +- pkt_len = le16_to_cpu(rx_pd->rx_pkt_length); ++ pkt_len -= sizeof(pkt_len); + + ieee_hdr = (void *)skb->data; + if (ieee80211_is_mgmt(ieee_hdr->frame_control)) { +@@ -410,7 +414,7 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv, + skb->data + sizeof(struct ieee80211_hdr), + pkt_len - sizeof(struct ieee80211_hdr)); + +- pkt_len -= ETH_ALEN + sizeof(pkt_len); ++ pkt_len -= ETH_ALEN; + rx_pd->rx_pkt_length = cpu_to_le16(pkt_len); + + cfg80211_rx_mgmt(&priv->wdev, priv->roc_cfg.chan.center_freq, +diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c +index bda26bd62412e..3280843ea8566 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c +@@ -455,7 +455,8 @@ static int mt7915_config(struct ieee80211_hw *hw, u32 changed) + ieee80211_wake_queues(hw); + } + +- if (changed & IEEE80211_CONF_CHANGE_POWER) { ++ if (changed & (IEEE80211_CONF_CHANGE_POWER | ++ IEEE80211_CONF_CHANGE_CHANNEL)) { + ret = 
mt7915_mcu_set_txpower_sku(phy); + if (ret) + return ret; +diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c +index 4ad66b3443838..c997b8d3ea590 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c ++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c +@@ -80,7 +80,8 @@ mt7921_init_wiphy(struct ieee80211_hw *hw) + wiphy->max_sched_scan_ssids = MT76_CONNAC_MAX_SCHED_SCAN_SSID; + wiphy->max_match_sets = MT76_CONNAC_MAX_SCAN_MATCH; + wiphy->max_sched_scan_reqs = 1; +- wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH; ++ wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH | ++ WIPHY_FLAG_SPLIT_SCAN_6GHZ; + wiphy->reg_notifier = mt7921_regd_notifier; + + wiphy->features |= NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR | +diff --git a/drivers/net/wireless/mediatek/mt76/testmode.c b/drivers/net/wireless/mediatek/mt76/testmode.c +index 0accc71a91c9a..4644dace9bb34 100644 +--- a/drivers/net/wireless/mediatek/mt76/testmode.c ++++ b/drivers/net/wireless/mediatek/mt76/testmode.c +@@ -8,6 +8,7 @@ const struct nla_policy mt76_tm_policy[NUM_MT76_TM_ATTRS] = { + [MT76_TM_ATTR_RESET] = { .type = NLA_FLAG }, + [MT76_TM_ATTR_STATE] = { .type = NLA_U8 }, + [MT76_TM_ATTR_TX_COUNT] = { .type = NLA_U32 }, ++ [MT76_TM_ATTR_TX_LENGTH] = { .type = NLA_U32 }, + [MT76_TM_ATTR_TX_RATE_MODE] = { .type = NLA_U8 }, + [MT76_TM_ATTR_TX_RATE_NSS] = { .type = NLA_U8 }, + [MT76_TM_ATTR_TX_RATE_IDX] = { .type = NLA_U8 }, +diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c +index ec0af903961f0..3a8fe60d0bb7b 100644 +--- a/drivers/net/wireless/realtek/rtw89/debug.c ++++ b/drivers/net/wireless/realtek/rtw89/debug.c +@@ -2302,12 +2302,14 @@ static ssize_t rtw89_debug_priv_btc_manual_set(struct file *filp, + struct rtw89_dev *rtwdev = debugfs_priv->rtwdev; + struct rtw89_btc *btc = &rtwdev->btc; + bool btc_manual; ++ int ret; + +- if (kstrtobool_from_user(user_buf, count, &btc_manual)) +- goto out; ++ ret = kstrtobool_from_user(user_buf, count, &btc_manual); ++ if (ret) ++ return ret; + + btc->ctrl.manual = btc_manual; +-out: ++ + return count; + } + +diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c +index 2abd2235bbcab..9532108d2dce1 100644 +--- a/drivers/ntb/ntb_transport.c ++++ b/drivers/ntb/ntb_transport.c +@@ -909,7 +909,7 @@ static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw, + return 0; + } + +-static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp) ++static void ntb_qp_link_context_reset(struct ntb_transport_qp *qp) + { + qp->link_is_up = false; + qp->active = false; +@@ -932,6 +932,13 @@ static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp) + qp->tx_async = 0; + } + ++static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp) ++{ ++ ntb_qp_link_context_reset(qp); ++ if (qp->remote_rx_info) ++ qp->remote_rx_info->entry = qp->rx_max_entry - 1; ++} ++ + static void ntb_qp_link_cleanup(struct ntb_transport_qp *qp) + { + struct ntb_transport_ctx *nt = qp->transport; +@@ -1174,7 +1181,7 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt, + qp->ndev = nt->ndev; + qp->client_ready = false; + qp->event_handler = NULL; +- ntb_qp_link_down_reset(qp); ++ ntb_qp_link_context_reset(qp); + + if (mw_num < qp_count % mw_count) + num_qps_mw = qp_count / mw_count + 1; +@@ -2276,9 +2283,13 @@ int ntb_transport_tx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data, + struct ntb_queue_entry *entry; + int rc; + +- if (!qp || !qp->link_is_up || !len) ++ if 
(!qp || !len) + return -EINVAL; + ++ /* If the qp link is down already, just ignore. */ ++ if (!qp->link_is_up) ++ return 0; ++ + entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q); + if (!entry) { + qp->tx_err_no_buf++; +@@ -2418,7 +2429,7 @@ unsigned int ntb_transport_tx_free_entry(struct ntb_transport_qp *qp) + unsigned int head = qp->tx_index; + unsigned int tail = qp->remote_rx_info->entry; + +- return tail > head ? tail - head : qp->tx_max_entry + tail - head; ++ return tail >= head ? tail - head : qp->tx_max_entry + tail - head; + } + EXPORT_SYMBOL_GPL(ntb_transport_tx_free_entry); + +diff --git a/drivers/nvdimm/nd_perf.c b/drivers/nvdimm/nd_perf.c +index 433bbb68ae641..2b6dc80d8fb5b 100644 +--- a/drivers/nvdimm/nd_perf.c ++++ b/drivers/nvdimm/nd_perf.c +@@ -308,8 +308,8 @@ int register_nvdimm_pmu(struct nvdimm_pmu *nd_pmu, struct platform_device *pdev) + + rc = perf_pmu_register(&nd_pmu->pmu, nd_pmu->pmu.name, -1); + if (rc) { +- kfree(nd_pmu->pmu.attr_groups); + nvdimm_pmu_free_hotplug_memory(nd_pmu); ++ kfree(nd_pmu->pmu.attr_groups); + return rc; + } + +@@ -324,6 +324,7 @@ void unregister_nvdimm_pmu(struct nvdimm_pmu *nd_pmu) + { + perf_pmu_unregister(&nd_pmu->pmu); + nvdimm_pmu_free_hotplug_memory(nd_pmu); ++ kfree(nd_pmu->pmu.attr_groups); + kfree(nd_pmu); + } + EXPORT_SYMBOL_GPL(unregister_nvdimm_pmu); +diff --git a/drivers/of/dynamic.c b/drivers/of/dynamic.c +index 4e436f2d13aeb..95501b77ef314 100644 +--- a/drivers/of/dynamic.c ++++ b/drivers/of/dynamic.c +@@ -225,6 +225,7 @@ static void __of_attach_node(struct device_node *np) + np->sibling = np->parent->child; + np->parent->child = np; + of_node_clear_flag(np, OF_DETACHED); ++ np->fwnode.flags |= FWNODE_FLAG_NOT_DEVICE; + } + + /** +diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c +index 5289975bad708..4402871b5c0c0 100644 +--- a/drivers/of/overlay.c ++++ b/drivers/of/overlay.c +@@ -752,8 +752,6 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs) + if (!of_node_is_root(ovcs->overlay_root)) + pr_debug("%s() ovcs->overlay_root is not root\n", __func__); + +- of_changeset_init(&ovcs->cset); +- + cnt = 0; + + /* fragment nodes */ +@@ -1013,6 +1011,7 @@ int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size, + + INIT_LIST_HEAD(&ovcs->ovcs_list); + list_add_tail(&ovcs->ovcs_list, &ovcs_list); ++ of_changeset_init(&ovcs->cset); + + /* + * Must create permanent copy of FDT because of_fdt_unflatten_tree() +diff --git a/drivers/of/platform.c b/drivers/of/platform.c +index e181c3f50f1da..bf96862cb7003 100644 +--- a/drivers/of/platform.c ++++ b/drivers/of/platform.c +@@ -741,6 +741,11 @@ static int of_platform_notify(struct notifier_block *nb, + if (of_node_check_flag(rd->dn, OF_POPULATED)) + return NOTIFY_OK; + ++ /* ++ * Clear the flag before adding the device so that fw_devlink ++ * doesn't skip adding consumers to this device. 
++ */ ++ rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; + /* pdev_parent may be NULL when no bus platform device */ + pdev_parent = of_find_device_by_node(rd->dn->parent); + pdev = of_platform_device_create(rd->dn, NULL, +diff --git a/drivers/of/property.c b/drivers/of/property.c +index 134cfc980b70b..b636777e6f7c8 100644 +--- a/drivers/of/property.c ++++ b/drivers/of/property.c +@@ -1062,20 +1062,6 @@ of_fwnode_device_get_match_data(const struct fwnode_handle *fwnode, + return of_device_get_match_data(dev); + } + +-static bool of_is_ancestor_of(struct device_node *test_ancestor, +- struct device_node *child) +-{ +- of_node_get(child); +- while (child) { +- if (child == test_ancestor) { +- of_node_put(child); +- return true; +- } +- child = of_get_next_parent(child); +- } +- return false; +-} +- + static struct device_node *of_get_compat_node(struct device_node *np) + { + of_node_get(np); +@@ -1106,71 +1092,27 @@ static struct device_node *of_get_compat_node_parent(struct device_node *np) + return node; + } + +-/** +- * of_link_to_phandle - Add fwnode link to supplier from supplier phandle +- * @con_np: consumer device tree node +- * @sup_np: supplier device tree node +- * +- * Given a phandle to a supplier device tree node (@sup_np), this function +- * finds the device that owns the supplier device tree node and creates a +- * device link from @dev consumer device to the supplier device. This function +- * doesn't create device links for invalid scenarios such as trying to create a +- * link with a parent device as the consumer of its child device. In such +- * cases, it returns an error. +- * +- * Returns: +- * - 0 if fwnode link successfully created to supplier +- * - -EINVAL if the supplier link is invalid and should not be created +- * - -ENODEV if struct device will never be create for supplier +- */ +-static int of_link_to_phandle(struct device_node *con_np, ++static void of_link_to_phandle(struct device_node *con_np, + struct device_node *sup_np) + { +- struct device *sup_dev; +- struct device_node *tmp_np = sup_np; ++ struct device_node *tmp_np = of_node_get(sup_np); + +- /* +- * Find the device node that contains the supplier phandle. It may be +- * @sup_np or it may be an ancestor of @sup_np. +- */ +- sup_np = of_get_compat_node(sup_np); +- if (!sup_np) { +- pr_debug("Not linking %pOFP to %pOFP - No device\n", +- con_np, tmp_np); +- return -ENODEV; +- } ++ /* Check that sup_np and its ancestors are available. */ ++ while (tmp_np) { ++ if (of_fwnode_handle(tmp_np)->dev) { ++ of_node_put(tmp_np); ++ break; ++ } + +- /* +- * Don't allow linking a device node as a consumer of one of its +- * descendant nodes. By definition, a child node can't be a functional +- * dependency for the parent node. +- */ +- if (of_is_ancestor_of(con_np, sup_np)) { +- pr_debug("Not linking %pOFP to %pOFP - is descendant\n", +- con_np, sup_np); +- of_node_put(sup_np); +- return -EINVAL; +- } ++ if (!of_device_is_available(tmp_np)) { ++ of_node_put(tmp_np); ++ return; ++ } + +- /* +- * Don't create links to "early devices" that won't have struct devices +- * created for them. 
+- */ +- sup_dev = get_dev_from_fwnode(&sup_np->fwnode); +- if (!sup_dev && +- (of_node_check_flag(sup_np, OF_POPULATED) || +- sup_np->fwnode.flags & FWNODE_FLAG_NOT_DEVICE)) { +- pr_debug("Not linking %pOFP to %pOFP - No struct device\n", +- con_np, sup_np); +- of_node_put(sup_np); +- return -ENODEV; ++ tmp_np = of_get_next_parent(tmp_np); + } +- put_device(sup_dev); + + fwnode_link_add(of_fwnode_handle(con_np), of_fwnode_handle(sup_np)); +- of_node_put(sup_np); +- +- return 0; + } + + /** +@@ -1324,6 +1266,7 @@ DEFINE_SIMPLE_PROP(pwms, "pwms", "#pwm-cells") + DEFINE_SIMPLE_PROP(resets, "resets", "#reset-cells") + DEFINE_SIMPLE_PROP(leds, "leds", NULL) + DEFINE_SIMPLE_PROP(backlight, "backlight", NULL) ++DEFINE_SIMPLE_PROP(panel, "panel", NULL) + DEFINE_SUFFIX_PROP(regulators, "-supply", NULL) + DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells") + +@@ -1412,6 +1355,7 @@ static const struct supplier_bindings of_supplier_bindings[] = { + { .parse_prop = parse_resets, }, + { .parse_prop = parse_leds, }, + { .parse_prop = parse_backlight, }, ++ { .parse_prop = parse_panel, }, + { .parse_prop = parse_gpio_compat, }, + { .parse_prop = parse_interrupts, }, + { .parse_prop = parse_regulators, }, +diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c +index 9be6ed47a1ce4..edd2342598e49 100644 +--- a/drivers/of/unittest.c ++++ b/drivers/of/unittest.c +@@ -70,7 +70,7 @@ static void __init of_unittest_find_node_by_name(void) + + np = of_find_node_by_path("/testcase-data"); + name = kasprintf(GFP_KERNEL, "%pOF", np); +- unittest(np && !strcmp("/testcase-data", name), ++ unittest(np && name && !strcmp("/testcase-data", name), + "find /testcase-data failed\n"); + of_node_put(np); + kfree(name); +@@ -81,14 +81,14 @@ static void __init of_unittest_find_node_by_name(void) + + np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a"); + name = kasprintf(GFP_KERNEL, "%pOF", np); +- unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name), ++ unittest(np && name && !strcmp("/testcase-data/phandle-tests/consumer-a", name), + "find /testcase-data/phandle-tests/consumer-a failed\n"); + of_node_put(np); + kfree(name); + + np = of_find_node_by_path("testcase-alias"); + name = kasprintf(GFP_KERNEL, "%pOF", np); +- unittest(np && !strcmp("/testcase-data", name), ++ unittest(np && name && !strcmp("/testcase-data", name), + "find testcase-alias failed\n"); + of_node_put(np); + kfree(name); +@@ -99,7 +99,7 @@ static void __init of_unittest_find_node_by_name(void) + + np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a"); + name = kasprintf(GFP_KERNEL, "%pOF", np); +- unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name), ++ unittest(np && name && !strcmp("/testcase-data/phandle-tests/consumer-a", name), + "find testcase-alias/phandle-tests/consumer-a failed\n"); + of_node_put(np); + kfree(name); +@@ -1379,6 +1379,8 @@ static void attach_node_and_children(struct device_node *np) + const char *full_name; + + full_name = kasprintf(GFP_KERNEL, "%pOF", np); ++ if (!full_name) ++ return; + + if (!strcmp(full_name, "/__local_fixups__") || + !strcmp(full_name, "/__fixups__")) { +@@ -2060,7 +2062,7 @@ static int __init of_unittest_apply_revert_overlay_check(int overlay_nr, + of_unittest_untrack_overlay(save_ovcs_id); + + /* unittest device must be again in before state */ +- if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) { ++ if (of_unittest_device_exists(unittest_nr, ovtype) != before) { + unittest(0, "%s with device @\"%s\" %s\n", + 
overlay_name_from_nr(overlay_nr), + unittest_path(unittest_nr, ovtype), +diff --git a/drivers/opp/core.c b/drivers/opp/core.c +index d707214069ca9..f0d70ecc0271b 100644 +--- a/drivers/opp/core.c ++++ b/drivers/opp/core.c +@@ -2372,7 +2372,7 @@ static int _opp_attach_genpd(struct opp_table *opp_table, struct device *dev, + + virt_dev = dev_pm_domain_attach_by_name(dev, *name); + if (IS_ERR_OR_NULL(virt_dev)) { +- ret = PTR_ERR(virt_dev) ? : -ENODEV; ++ ret = virt_dev ? PTR_ERR(virt_dev) : -ENODEV; + dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret); + goto err; + } +diff --git a/drivers/pci/access.c b/drivers/pci/access.c +index 708c7529647fd..3d20f9c51efe7 100644 +--- a/drivers/pci/access.c ++++ b/drivers/pci/access.c +@@ -491,8 +491,8 @@ int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val) + } + EXPORT_SYMBOL(pcie_capability_write_dword); + +-int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos, +- u16 clear, u16 set) ++int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos, ++ u16 clear, u16 set) + { + int ret; + u16 val; +@@ -506,7 +506,21 @@ int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos, + + return ret; + } +-EXPORT_SYMBOL(pcie_capability_clear_and_set_word); ++EXPORT_SYMBOL(pcie_capability_clear_and_set_word_unlocked); ++ ++int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos, ++ u16 clear, u16 set) ++{ ++ unsigned long flags; ++ int ret; ++ ++ spin_lock_irqsave(&dev->pcie_cap_lock, flags); ++ ret = pcie_capability_clear_and_set_word_unlocked(dev, pos, clear, set); ++ spin_unlock_irqrestore(&dev->pcie_cap_lock, flags); ++ ++ return ret; ++} ++EXPORT_SYMBOL(pcie_capability_clear_and_set_word_locked); + + int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos, + u32 clear, u32 set) +diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c +index 6d0d1b759ca24..d4c566c1c8725 100644 +--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c ++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c +@@ -410,7 +410,7 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci) + /* Gate Master AXI clock to MHI bus during L1SS */ + val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); + val &= ~PARF_MSTR_AXI_CLK_EN; +- val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); ++ writel_relaxed(val, pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); + + dw_pcie_ep_init_notify(&pcie_ep->pci.ep); + +diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c +index 528e73ccfa43e..2241029537a03 100644 +--- a/drivers/pci/controller/dwc/pcie-tegra194.c ++++ b/drivers/pci/controller/dwc/pcie-tegra194.c +@@ -879,11 +879,6 @@ static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp) + pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, + PCI_CAP_ID_EXP); + +- val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL); +- val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD; +- val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B; +- dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16); +- + val = dw_pcie_readl_dbi(pci, PCI_IO_BASE); + val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8); + dw_pcie_writel_dbi(pci, PCI_IO_BASE, val); +@@ -1872,11 +1867,6 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie) + pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, + PCI_CAP_ID_EXP); + +- val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL); +- val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD; 
+- val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B; +- dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16); +- + /* Clear Slot Clock Configuration bit if SRNS configuration */ + if (pcie->enable_srns) { + val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + +diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c +index 3351863352d36..9693bab59bf7c 100644 +--- a/drivers/pci/controller/pci-hyperv.c ++++ b/drivers/pci/controller/pci-hyperv.c +@@ -3930,6 +3930,9 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg) + struct msi_desc *entry; + int ret = 0; + ++ if (!pdev->msi_enabled && !pdev->msix_enabled) ++ return 0; ++ + msi_lock_descs(&pdev->dev); + msi_for_each_desc(entry, &pdev->dev, MSI_DESC_ASSOCIATED) { + irq_data = irq_get_irq_data(entry->irq); +diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c +index 66f37e403a09c..2340dab6cd5bd 100644 +--- a/drivers/pci/controller/pcie-apple.c ++++ b/drivers/pci/controller/pcie-apple.c +@@ -783,6 +783,10 @@ static int apple_pcie_init(struct pci_config_window *cfg) + cfg->priv = pcie; + INIT_LIST_HEAD(&pcie->ports); + ++ ret = apple_msi_init(pcie); ++ if (ret) ++ return ret; ++ + for_each_child_of_node(dev->of_node, of_port) { + ret = apple_pcie_setup_port(pcie, of_port); + if (ret) { +@@ -792,7 +796,7 @@ static int apple_pcie_init(struct pci_config_window *cfg) + } + } + +- return apple_msi_init(pcie); ++ return 0; + } + + static int apple_pcie_probe(struct platform_device *pdev) +diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c +index 7263d175b5adb..5ba101efd9326 100644 +--- a/drivers/pci/controller/pcie-microchip-host.c ++++ b/drivers/pci/controller/pcie-microchip-host.c +@@ -167,12 +167,12 @@ + #define EVENT_PCIE_DLUP_EXIT 2 + #define EVENT_SEC_TX_RAM_SEC_ERR 3 + #define EVENT_SEC_RX_RAM_SEC_ERR 4 +-#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 5 +-#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 6 ++#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 5 ++#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 6 + #define EVENT_DED_TX_RAM_DED_ERR 7 + #define EVENT_DED_RX_RAM_DED_ERR 8 +-#define EVENT_DED_AXI2PCIE_RAM_DED_ERR 9 +-#define EVENT_DED_PCIE2AXI_RAM_DED_ERR 10 ++#define EVENT_DED_PCIE2AXI_RAM_DED_ERR 9 ++#define EVENT_DED_AXI2PCIE_RAM_DED_ERR 10 + #define EVENT_LOCAL_DMA_END_ENGINE_0 11 + #define EVENT_LOCAL_DMA_END_ENGINE_1 12 + #define EVENT_LOCAL_DMA_ERROR_ENGINE_0 13 +diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h +index fe0333778fd93..6111de35f84ca 100644 +--- a/drivers/pci/controller/pcie-rockchip.h ++++ b/drivers/pci/controller/pcie-rockchip.h +@@ -158,7 +158,9 @@ + #define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274) + #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20) + +-#define PCIE_ADDR_MASK 0xffffff00 ++#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3 ++#define MIN_AXI_ADDR_BITS_PASSED 8 ++#define PCIE_ADDR_MASK GENMASK_ULL(63, MIN_AXI_ADDR_BITS_PASSED) + #define PCIE_CORE_AXI_CONF_BASE 0xc00000 + #define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0) + #define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f +@@ -185,8 +187,6 @@ + #define AXI_WRAPPER_TYPE1_CFG 0xb + #define AXI_WRAPPER_NOR_MSG 0xc + +-#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3 +-#define MIN_AXI_ADDR_BITS_PASSED 8 + #define PCIE_RC_SEND_PME_OFF 0x11960 + #define ROCKCHIP_VENDOR_ID 0x1d87 + #define PCIE_LINK_IS_L2(x) \ +diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c +index e5e9b287b9766..c1776f82b7fce 
100644 +--- a/drivers/pci/doe.c ++++ b/drivers/pci/doe.c +@@ -223,8 +223,8 @@ static int pci_doe_recv_resp(struct pci_doe_mb *doe_mb, struct pci_doe_task *tas + static void signal_task_complete(struct pci_doe_task *task, int rv) + { + task->rv = rv; +- task->complete(task); + destroy_work_on_stack(&task->work); ++ task->complete(task); + } + + static void signal_task_abort(struct pci_doe_task *task, int rv) +diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c +index 112c8f401ac4e..358f077284cbe 100644 +--- a/drivers/pci/hotplug/pciehp_hpc.c ++++ b/drivers/pci/hotplug/pciehp_hpc.c +@@ -332,17 +332,11 @@ int pciehp_check_link_status(struct controller *ctrl) + static int __pciehp_link_set(struct controller *ctrl, bool enable) + { + struct pci_dev *pdev = ctrl_dev(ctrl); +- u16 lnk_ctrl; + +- pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl); ++ pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_LD, ++ enable ? 0 : PCI_EXP_LNKCTL_LD); + +- if (enable) +- lnk_ctrl &= ~PCI_EXP_LNKCTL_LD; +- else +- lnk_ctrl |= PCI_EXP_LNKCTL_LD; +- +- pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl); +- ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl); + return 0; + } + +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c +index ba38fc47d35e9..dd0d9d9bc5097 100644 +--- a/drivers/pci/pci-sysfs.c ++++ b/drivers/pci/pci-sysfs.c +@@ -756,6 +756,13 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj, + if (ret) + return ret; + ++ if (resource_is_exclusive(&dev->driver_exclusive_resource, off, ++ count)) { ++ pci_warn_once(dev, "%s: Unexpected write to kernel-exclusive config offset %llx", ++ current->comm, off); ++ add_taint(TAINT_USER, LOCKDEP_STILL_OK); ++ } ++ + if (off > dev->cfg_size) + return 0; + if (off + count > dev->cfg_size) { +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index 88c4372499825..835e9ea14b3a1 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -1193,6 +1193,10 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout) + * + * On success, return 0 or 1, depending on whether or not it is necessary to + * restore the device's BARs subsequently (1 is returned in that case). ++ * ++ * On failure, return a negative error code. Always return failure if @dev ++ * lacks a Power Management Capability, even if the platform was able to ++ * put the device in D0 via non-PCI means. 
+ */ + int pci_power_up(struct pci_dev *dev) + { +@@ -1209,9 +1213,6 @@ int pci_power_up(struct pci_dev *dev) + else + dev->current_state = state; + +- if (state == PCI_D0) +- return 0; +- + return -EIO; + } + +@@ -1269,8 +1270,12 @@ static int pci_set_full_power_state(struct pci_dev *dev) + int ret; + + ret = pci_power_up(dev); +- if (ret < 0) ++ if (ret < 0) { ++ if (dev->current_state == PCI_D0) ++ return 0; ++ + return ret; ++ } + + pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); + dev->current_state = pmcsr & PCI_PM_CTRL_STATE_MASK; +diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c +index 07166a4ec27ad..7e89cdbd446fc 100644 +--- a/drivers/pci/pcie/aspm.c ++++ b/drivers/pci/pcie/aspm.c +@@ -250,7 +250,7 @@ static int pcie_retrain_link(struct pcie_link_state *link) + static void pcie_aspm_configure_common_clock(struct pcie_link_state *link) + { + int same_clock = 1; +- u16 reg16, parent_reg, child_reg[8]; ++ u16 reg16, ccc, parent_old_ccc, child_old_ccc[8]; + struct pci_dev *child, *parent = link->pdev; + struct pci_bus *linkbus = parent->subordinate; + /* +@@ -272,6 +272,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link) + + /* Port might be already in common clock mode */ + pcie_capability_read_word(parent, PCI_EXP_LNKCTL, ®16); ++ parent_old_ccc = reg16 & PCI_EXP_LNKCTL_CCC; + if (same_clock && (reg16 & PCI_EXP_LNKCTL_CCC)) { + bool consistent = true; + +@@ -288,34 +289,29 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link) + pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n"); + } + ++ ccc = same_clock ? PCI_EXP_LNKCTL_CCC : 0; + /* Configure downstream component, all functions */ + list_for_each_entry(child, &linkbus->devices, bus_list) { + pcie_capability_read_word(child, PCI_EXP_LNKCTL, ®16); +- child_reg[PCI_FUNC(child->devfn)] = reg16; +- if (same_clock) +- reg16 |= PCI_EXP_LNKCTL_CCC; +- else +- reg16 &= ~PCI_EXP_LNKCTL_CCC; +- pcie_capability_write_word(child, PCI_EXP_LNKCTL, reg16); ++ child_old_ccc[PCI_FUNC(child->devfn)] = reg16 & PCI_EXP_LNKCTL_CCC; ++ pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_CCC, ccc); + } + + /* Configure upstream component */ +- pcie_capability_read_word(parent, PCI_EXP_LNKCTL, ®16); +- parent_reg = reg16; +- if (same_clock) +- reg16 |= PCI_EXP_LNKCTL_CCC; +- else +- reg16 &= ~PCI_EXP_LNKCTL_CCC; +- pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); ++ pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_CCC, ccc); + + if (pcie_retrain_link(link)) { + + /* Training failed. 
Restore common clock configurations */ + pci_err(parent, "ASPM: Could not configure common clock\n"); + list_for_each_entry(child, &linkbus->devices, bus_list) +- pcie_capability_write_word(child, PCI_EXP_LNKCTL, +- child_reg[PCI_FUNC(child->devfn)]); +- pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg); ++ pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_CCC, ++ child_old_ccc[PCI_FUNC(child->devfn)]); ++ pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL, ++ PCI_EXP_LNKCTL_CCC, parent_old_ccc); + } + } + +diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c +index 7170516298b0b..0945f50fe94ff 100644 +--- a/drivers/pci/probe.c ++++ b/drivers/pci/probe.c +@@ -996,6 +996,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge) + res = window->res; + if (!res->flags && !res->start && !res->end) { + release_resource(res); ++ resource_list_destroy_entry(window); + continue; + } + +@@ -2306,6 +2307,13 @@ struct pci_dev *pci_alloc_dev(struct pci_bus *bus) + INIT_LIST_HEAD(&dev->bus_list); + dev->dev.type = &pci_dev_type; + dev->bus = pci_bus_get(bus); ++ dev->driver_exclusive_resource = (struct resource) { ++ .name = "PCI Exclusive", ++ .start = 0, ++ .end = -1, ++ }; ++ ++ spin_lock_init(&dev->pcie_cap_lock); + #ifdef CONFIG_PCI_MSI + raw_spin_lock_init(&dev->msi_lock); + #endif +diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c +index 8e058e08fe810..cd4ce2b4906d1 100644 +--- a/drivers/perf/fsl_imx8_ddr_perf.c ++++ b/drivers/perf/fsl_imx8_ddr_perf.c +@@ -102,6 +102,7 @@ struct ddr_pmu { + const struct fsl_ddr_devtype_data *devtype_data; + int irq; + int id; ++ int active_counter; + }; + + static ssize_t ddr_perf_identifier_show(struct device *dev, +@@ -496,6 +497,10 @@ static void ddr_perf_event_start(struct perf_event *event, int flags) + + ddr_perf_counter_enable(pmu, event->attr.config, counter, true); + ++ if (!pmu->active_counter++) ++ ddr_perf_counter_enable(pmu, EVENT_CYCLES_ID, ++ EVENT_CYCLES_COUNTER, true); ++ + hwc->state = 0; + } + +@@ -550,6 +555,10 @@ static void ddr_perf_event_stop(struct perf_event *event, int flags) + ddr_perf_counter_enable(pmu, event->attr.config, counter, false); + ddr_perf_event_update(event); + ++ if (!--pmu->active_counter) ++ ddr_perf_counter_enable(pmu, EVENT_CYCLES_ID, ++ EVENT_CYCLES_COUNTER, false); ++ + hwc->state |= PERF_HES_STOPPED; + } + +@@ -568,25 +577,10 @@ static void ddr_perf_event_del(struct perf_event *event, int flags) + + static void ddr_perf_pmu_enable(struct pmu *pmu) + { +- struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu); +- +- /* enable cycle counter if cycle is not active event list */ +- if (ddr_pmu->events[EVENT_CYCLES_COUNTER] == NULL) +- ddr_perf_counter_enable(ddr_pmu, +- EVENT_CYCLES_ID, +- EVENT_CYCLES_COUNTER, +- true); + } + + static void ddr_perf_pmu_disable(struct pmu *pmu) + { +- struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu); +- +- if (ddr_pmu->events[EVENT_CYCLES_COUNTER] == NULL) +- ddr_perf_counter_enable(ddr_pmu, +- EVENT_CYCLES_ID, +- EVENT_CYCLES_COUNTER, +- false); + } + + static int ddr_perf_init(struct ddr_pmu *pmu, void __iomem *base, +diff --git a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c +index 6170f8fd118e2..d0319bee01c0f 100644 +--- a/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c ++++ b/drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c +@@ -214,8 +214,7 @@ static int __maybe_unused qcom_snps_hsphy_runtime_suspend(struct device *dev) + if (!hsphy->phy_initialized) + return 0; + 
+- qcom_snps_hsphy_suspend(hsphy); +- return 0; ++ return qcom_snps_hsphy_suspend(hsphy); + } + + static int __maybe_unused qcom_snps_hsphy_runtime_resume(struct device *dev) +@@ -225,8 +224,7 @@ static int __maybe_unused qcom_snps_hsphy_runtime_resume(struct device *dev) + if (!hsphy->phy_initialized) + return 0; + +- qcom_snps_hsphy_resume(hsphy); +- return 0; ++ return qcom_snps_hsphy_resume(hsphy); + } + + static int qcom_snps_hsphy_set_mode(struct phy *phy, enum phy_mode mode, +diff --git a/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c b/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c +index 80acca4e9e146..2556caf475c0c 100644 +--- a/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c ++++ b/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c +@@ -745,10 +745,12 @@ unsigned long inno_hdmi_phy_rk3328_clk_recalc_rate(struct clk_hw *hw, + do_div(vco, (nd * (no_a == 1 ? no_b : no_a) * no_d * 2)); + } + +- inno->pixclock = vco; +- dev_dbg(inno->dev, "%s rate %lu\n", __func__, inno->pixclock); ++ inno->pixclock = DIV_ROUND_CLOSEST((unsigned long)vco, 1000) * 1000; + +- return vco; ++ dev_dbg(inno->dev, "%s rate %lu vco %llu\n", ++ __func__, inno->pixclock, vco); ++ ++ return inno->pixclock; + } + + static long inno_hdmi_phy_rk3328_clk_round_rate(struct clk_hw *hw, +@@ -790,8 +792,8 @@ static int inno_hdmi_phy_rk3328_clk_set_rate(struct clk_hw *hw, + RK3328_PRE_PLL_POWER_DOWN); + + /* Configure pre-pll */ +- inno_update_bits(inno, 0xa0, RK3228_PCLK_VCO_DIV_5_MASK, +- RK3228_PCLK_VCO_DIV_5(cfg->vco_div_5_en)); ++ inno_update_bits(inno, 0xa0, RK3328_PCLK_VCO_DIV_5_MASK, ++ RK3328_PCLK_VCO_DIV_5(cfg->vco_div_5_en)); + inno_write(inno, 0xa1, RK3328_PRE_PLL_PRE_DIV(cfg->prediv)); + + val = RK3328_SPREAD_SPECTRUM_MOD_DISABLE; +@@ -1021,9 +1023,10 @@ inno_hdmi_phy_rk3328_power_on(struct inno_hdmi_phy *inno, + + inno_write(inno, 0xac, RK3328_POST_PLL_FB_DIV_7_0(cfg->fbdiv)); + if (cfg->postdiv == 1) { +- inno_write(inno, 0xaa, RK3328_POST_PLL_REFCLK_SEL_TMDS); + inno_write(inno, 0xab, RK3328_POST_PLL_FB_DIV_8(cfg->fbdiv) | + RK3328_POST_PLL_PRE_DIV(cfg->prediv)); ++ inno_write(inno, 0xaa, RK3328_POST_PLL_REFCLK_SEL_TMDS | ++ RK3328_POST_PLL_POWER_DOWN); + } else { + v = (cfg->postdiv / 2) - 1; + v &= RK3328_POST_PLL_POST_DIV_MASK; +@@ -1031,7 +1034,8 @@ inno_hdmi_phy_rk3328_power_on(struct inno_hdmi_phy *inno, + inno_write(inno, 0xab, RK3328_POST_PLL_FB_DIV_8(cfg->fbdiv) | + RK3328_POST_PLL_PRE_DIV(cfg->prediv)); + inno_write(inno, 0xaa, RK3328_POST_PLL_POST_DIV_ENABLE | +- RK3328_POST_PLL_REFCLK_SEL_TMDS); ++ RK3328_POST_PLL_REFCLK_SEL_TMDS | ++ RK3328_POST_PLL_POWER_DOWN); + } + + for (v = 0; v < 14; v++) +diff --git a/drivers/pinctrl/pinctrl-mcp23s08_spi.c b/drivers/pinctrl/pinctrl-mcp23s08_spi.c +index 9ae10318f6f35..ea059b9c5542e 100644 +--- a/drivers/pinctrl/pinctrl-mcp23s08_spi.c ++++ b/drivers/pinctrl/pinctrl-mcp23s08_spi.c +@@ -91,18 +91,28 @@ static int mcp23s08_spi_regmap_init(struct mcp23s08 *mcp, struct device *dev, + mcp->reg_shift = 0; + mcp->chip.ngpio = 8; + mcp->chip.label = devm_kasprintf(dev, GFP_KERNEL, "mcp23s08.%d", addr); ++ if (!mcp->chip.label) ++ return -ENOMEM; + + config = &mcp23x08_regmap; + name = devm_kasprintf(dev, GFP_KERNEL, "%d", addr); ++ if (!name) ++ return -ENOMEM; ++ + break; + + case MCP_TYPE_S17: + mcp->reg_shift = 1; + mcp->chip.ngpio = 16; + mcp->chip.label = devm_kasprintf(dev, GFP_KERNEL, "mcp23s17.%d", addr); ++ if (!mcp->chip.label) ++ return -ENOMEM; + + config = &mcp23x17_regmap; + name = devm_kasprintf(dev, GFP_KERNEL, "%d", addr); ++ if (!name) ++ return 
-ENOMEM; ++ + break; + + case MCP_TYPE_S18: +diff --git a/drivers/platform/chrome/chromeos_acpi.c b/drivers/platform/chrome/chromeos_acpi.c +index 50d8a4d4352d6..1312aaaa8750b 100644 +--- a/drivers/platform/chrome/chromeos_acpi.c ++++ b/drivers/platform/chrome/chromeos_acpi.c +@@ -90,7 +90,36 @@ static int chromeos_acpi_handle_package(struct device *dev, union acpi_object *o + case ACPI_TYPE_STRING: + return sysfs_emit(buf, "%s\n", element->string.pointer); + case ACPI_TYPE_BUFFER: +- return sysfs_emit(buf, "%s\n", element->buffer.pointer); ++ { ++ int i, r, at, room_left; ++ const int byte_per_line = 16; ++ ++ at = 0; ++ room_left = PAGE_SIZE - 1; ++ for (i = 0; i < element->buffer.length && room_left; i += byte_per_line) { ++ r = hex_dump_to_buffer(element->buffer.pointer + i, ++ element->buffer.length - i, ++ byte_per_line, 1, buf + at, room_left, ++ false); ++ if (r > room_left) ++ goto truncating; ++ at += r; ++ room_left -= r; ++ ++ r = sysfs_emit_at(buf, at, "\n"); ++ if (!r) ++ goto truncating; ++ at += r; ++ room_left -= r; ++ } ++ ++ buf[at] = 0; ++ return at; ++truncating: ++ dev_info_once(dev, "truncating sysfs content for %s\n", name); ++ sysfs_emit_at(buf, PAGE_SIZE - 4, "..\n"); ++ return PAGE_SIZE - 1; ++ } + default: + dev_err(dev, "element type %d not supported\n", element->type); + return -EINVAL; +diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c +index b2e19f30a928b..d31fe7eed38df 100644 +--- a/drivers/platform/mellanox/mlxbf-tmfifo.c ++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c +@@ -868,6 +868,7 @@ static bool mlxbf_tmfifo_virtio_notify(struct virtqueue *vq) + tm_vdev = fifo->vdev[VIRTIO_ID_CONSOLE]; + mlxbf_tmfifo_console_output(tm_vdev, vring); + spin_unlock_irqrestore(&fifo->spin_lock[0], flags); ++ set_bit(MLXBF_TM_TX_LWM_IRQ, &fifo->pend_events); + } else if (test_and_set_bit(MLXBF_TM_TX_LWM_IRQ, + &fifo->pend_events)) { + return true; +diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c +index 8a38cd94a605d..d10c097380c56 100644 +--- a/drivers/platform/x86/amd/pmf/core.c ++++ b/drivers/platform/x86/amd/pmf/core.c +@@ -322,7 +322,8 @@ static void amd_pmf_init_features(struct amd_pmf_dev *dev) + + static void amd_pmf_deinit_features(struct amd_pmf_dev *dev) + { +- if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) { ++ if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR) || ++ is_apmf_func_supported(dev, APMF_FUNC_OS_POWER_SLIDER_UPDATE)) { + power_supply_unreg_notifier(&dev->pwr_src_notifier); + amd_pmf_deinit_sps(dev); + } +diff --git a/drivers/platform/x86/amd/pmf/sps.c b/drivers/platform/x86/amd/pmf/sps.c +index fd448844de206..b2cf62937227c 100644 +--- a/drivers/platform/x86/amd/pmf/sps.c ++++ b/drivers/platform/x86/amd/pmf/sps.c +@@ -121,7 +121,8 @@ int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf) + + int amd_pmf_power_slider_update_event(struct amd_pmf_dev *dev) + { +- u8 mode, flag = 0; ++ u8 flag = 0; ++ int mode; + int src; + + mode = amd_pmf_get_pprof_modes(dev); +diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c +index 02bf286924183..36effe04c6f33 100644 +--- a/drivers/platform/x86/asus-wmi.c ++++ b/drivers/platform/x86/asus-wmi.c +@@ -738,13 +738,23 @@ static ssize_t kbd_rgb_mode_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) + { +- u32 cmd, mode, r, g, b, speed; ++ u32 cmd, mode, r, g, b, speed; + int err; + + if (sscanf(buf, "%d %d %d %d %d %d", &cmd, &mode, &r, 
&g, &b, &speed) != 6) + return -EINVAL; + +- cmd = !!cmd; ++ /* B3 is set and B4 is save to BIOS */ ++ switch (cmd) { ++ case 0: ++ cmd = 0xb3; ++ break; ++ case 1: ++ cmd = 0xb4; ++ break; ++ default: ++ return -EINVAL; ++ } + + /* These are the known usable modes across all TUF/ROG */ + if (mode >= 12 || mode == 9) +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +index 0a6411a8a104c..b2406a595be9a 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +@@ -396,6 +396,7 @@ static int init_bios_attributes(int attr_type, const char *guid) + struct kobject *attr_name_kobj; //individual attribute names + union acpi_object *obj = NULL; + union acpi_object *elements; ++ struct kobject *duplicate; + struct kset *tmp_set; + int min_elements; + +@@ -454,9 +455,11 @@ static int init_bios_attributes(int attr_type, const char *guid) + else + tmp_set = wmi_priv.main_dir_kset; + +- if (kset_find_obj(tmp_set, elements[ATTR_NAME].string.pointer)) { +- pr_debug("duplicate attribute name found - %s\n", +- elements[ATTR_NAME].string.pointer); ++ duplicate = kset_find_obj(tmp_set, elements[ATTR_NAME].string.pointer); ++ if (duplicate) { ++ pr_debug("Duplicate attribute name found - %s\n", ++ elements[ATTR_NAME].string.pointer); ++ kobject_put(duplicate); + goto nextobj; + } + +diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c +index b85050e4a0d65..ae5daecff1771 100644 +--- a/drivers/platform/x86/huawei-wmi.c ++++ b/drivers/platform/x86/huawei-wmi.c +@@ -86,6 +86,8 @@ static const struct key_entry huawei_wmi_keymap[] = { + { KE_IGNORE, 0x293, { KEY_KBDILLUMTOGGLE } }, + { KE_IGNORE, 0x294, { KEY_KBDILLUMUP } }, + { KE_IGNORE, 0x295, { KEY_KBDILLUMUP } }, ++ // Ignore Ambient Light Sensoring ++ { KE_KEY, 0x2c1, { KEY_RESERVED } }, + { KE_END, 0 } + }; + +diff --git a/drivers/platform/x86/intel/hid.c b/drivers/platform/x86/intel/hid.c +index b6313ecd190c0..b96ef0eb82aff 100644 +--- a/drivers/platform/x86/intel/hid.c ++++ b/drivers/platform/x86/intel/hid.c +@@ -131,6 +131,12 @@ static const struct dmi_system_id dmi_vgbs_allow_list[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go"), + }, + }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "HP"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite Dragonfly G2 Notebook PC"), ++ }, ++ }, + { } + }; + +@@ -601,7 +607,7 @@ static bool button_array_present(struct platform_device *device) + static int intel_hid_probe(struct platform_device *device) + { + acpi_handle handle = ACPI_HANDLE(&device->dev); +- unsigned long long mode; ++ unsigned long long mode, dummy; + struct intel_hid_priv *priv; + acpi_status status; + int err; +@@ -666,18 +672,15 @@ static int intel_hid_probe(struct platform_device *device) + if (err) + goto err_remove_notify; + +- if (priv->array) { +- unsigned long long dummy; ++ intel_button_array_enable(&device->dev, true); + +- intel_button_array_enable(&device->dev, true); +- +- /* Call button load method to enable HID power button */ +- if (!intel_hid_evaluate_method(handle, INTEL_HID_DSM_BTNL_FN, +- &dummy)) { +- dev_warn(&device->dev, +- "failed to enable HID power button\n"); +- } +- } ++ /* ++ * Call button load method to enable HID power button ++ * Always do this since it activates events on some devices without ++ * a button array too. 
++ */ ++ if (!intel_hid_evaluate_method(handle, INTEL_HID_DSM_BTNL_FN, &dummy)) ++ dev_warn(&device->dev, "failed to enable HID power button\n"); + + device_init_wakeup(&device->dev, true); + /* +diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c +index 3cbb92b6c5215..f6290221d139d 100644 +--- a/drivers/platform/x86/think-lmi.c ++++ b/drivers/platform/x86/think-lmi.c +@@ -719,12 +719,12 @@ static ssize_t cert_to_password_store(struct kobject *kobj, + /* Format: 'Password,Signature' */ + auth_str = kasprintf(GFP_KERNEL, "%s,%s", passwd, setting->signature); + if (!auth_str) { +- kfree(passwd); ++ kfree_sensitive(passwd); + return -ENOMEM; + } + ret = tlmi_simple_call(LENOVO_CERT_TO_PASSWORD_GUID, auth_str); + kfree(auth_str); +- kfree(passwd); ++ kfree_sensitive(passwd); + + return ret ?: count; + } +diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c +index 67e7664efb0dc..cb8f65c1d4e3b 100644 +--- a/drivers/rpmsg/qcom_glink_native.c ++++ b/drivers/rpmsg/qcom_glink_native.c +@@ -224,6 +224,10 @@ static struct glink_channel *qcom_glink_alloc_channel(struct qcom_glink *glink, + + channel->glink = glink; + channel->name = kstrdup(name, GFP_KERNEL); ++ if (!channel->name) { ++ kfree(channel); ++ return ERR_PTR(-ENOMEM); ++ } + + init_completion(&channel->open_req); + init_completion(&channel->open_ack); +diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c +index bce3422d85640..04d9b1d4b1ba9 100644 +--- a/drivers/s390/block/dasd.c ++++ b/drivers/s390/block/dasd.c +@@ -2926,41 +2926,32 @@ static void _dasd_wake_block_flush_cb(struct dasd_ccw_req *cqr, void *data) + * Requeue a request back to the block request queue + * only works for block requests + */ +-static int _dasd_requeue_request(struct dasd_ccw_req *cqr) ++static void _dasd_requeue_request(struct dasd_ccw_req *cqr) + { +- struct dasd_block *block = cqr->block; + struct request *req; + +- if (!block) +- return -EINVAL; + /* + * If the request is an ERP request there is nothing to requeue. + * This will be done with the remaining original request. + */ + if (cqr->refers) +- return 0; ++ return; + spin_lock_irq(&cqr->dq->lock); + req = (struct request *) cqr->callback_data; + blk_mq_requeue_request(req, true); + spin_unlock_irq(&cqr->dq->lock); + +- return 0; ++ return; + } + +-/* +- * Go through all request on the dasd_block request queue, cancel them +- * on the respective dasd_device, and return them to the generic +- * block layer. +- */ +-static int dasd_flush_block_queue(struct dasd_block *block) ++static int _dasd_requests_to_flushqueue(struct dasd_block *block, ++ struct list_head *flush_queue) + { + struct dasd_ccw_req *cqr, *n; +- int rc, i; +- struct list_head flush_queue; + unsigned long flags; ++ int rc, i; + +- INIT_LIST_HEAD(&flush_queue); +- spin_lock_bh(&block->queue_lock); ++ spin_lock_irqsave(&block->queue_lock, flags); + rc = 0; + restart: + list_for_each_entry_safe(cqr, n, &block->ccw_queue, blocklist) { +@@ -2975,13 +2966,32 @@ restart: + * is returned from the dasd_device layer. 
+ */ + cqr->callback = _dasd_wake_block_flush_cb; +- for (i = 0; cqr != NULL; cqr = cqr->refers, i++) +- list_move_tail(&cqr->blocklist, &flush_queue); ++ for (i = 0; cqr; cqr = cqr->refers, i++) ++ list_move_tail(&cqr->blocklist, flush_queue); + if (i > 1) + /* moved more than one request - need to restart */ + goto restart; + } +- spin_unlock_bh(&block->queue_lock); ++ spin_unlock_irqrestore(&block->queue_lock, flags); ++ ++ return rc; ++} ++ ++/* ++ * Go through all request on the dasd_block request queue, cancel them ++ * on the respective dasd_device, and return them to the generic ++ * block layer. ++ */ ++static int dasd_flush_block_queue(struct dasd_block *block) ++{ ++ struct dasd_ccw_req *cqr, *n; ++ struct list_head flush_queue; ++ unsigned long flags; ++ int rc; ++ ++ INIT_LIST_HEAD(&flush_queue); ++ rc = _dasd_requests_to_flushqueue(block, &flush_queue); ++ + /* Now call the callback function of flushed requests */ + restart_cb: + list_for_each_entry_safe(cqr, n, &flush_queue, blocklist) { +@@ -3864,75 +3874,36 @@ EXPORT_SYMBOL_GPL(dasd_generic_space_avail); + */ + int dasd_generic_requeue_all_requests(struct dasd_device *device) + { ++ struct dasd_block *block = device->block; + struct list_head requeue_queue; + struct dasd_ccw_req *cqr, *n; +- struct dasd_ccw_req *refers; + int rc; + +- INIT_LIST_HEAD(&requeue_queue); +- spin_lock_irq(get_ccwdev_lock(device->cdev)); +- rc = 0; +- list_for_each_entry_safe(cqr, n, &device->ccw_queue, devlist) { +- /* Check status and move request to flush_queue */ +- if (cqr->status == DASD_CQR_IN_IO) { +- rc = device->discipline->term_IO(cqr); +- if (rc) { +- /* unable to terminate requeust */ +- dev_err(&device->cdev->dev, +- "Unable to terminate request %p " +- "on suspend\n", cqr); +- spin_unlock_irq(get_ccwdev_lock(device->cdev)); +- dasd_put_device(device); +- return rc; +- } +- } +- list_move_tail(&cqr->devlist, &requeue_queue); +- } +- spin_unlock_irq(get_ccwdev_lock(device->cdev)); +- +- list_for_each_entry_safe(cqr, n, &requeue_queue, devlist) { +- wait_event(dasd_flush_wq, +- (cqr->status != DASD_CQR_CLEAR_PENDING)); ++ if (!block) ++ return 0; + +- /* +- * requeue requests to blocklayer will only work +- * for block device requests +- */ +- if (_dasd_requeue_request(cqr)) +- continue; ++ INIT_LIST_HEAD(&requeue_queue); ++ rc = _dasd_requests_to_flushqueue(block, &requeue_queue); + +- /* remove requests from device and block queue */ +- list_del_init(&cqr->devlist); +- while (cqr->refers != NULL) { +- refers = cqr->refers; +- /* remove the request from the block queue */ +- list_del(&cqr->blocklist); +- /* free the finished erp request */ +- dasd_free_erp_request(cqr, cqr->memdev); +- cqr = refers; ++ /* Now call the callback function of flushed requests */ ++restart_cb: ++ list_for_each_entry_safe(cqr, n, &requeue_queue, blocklist) { ++ wait_event(dasd_flush_wq, (cqr->status < DASD_CQR_QUEUED)); ++ /* Process finished ERP request. 
*/ ++ if (cqr->refers) { ++ spin_lock_bh(&block->queue_lock); ++ __dasd_process_erp(block->base, cqr); ++ spin_unlock_bh(&block->queue_lock); ++ /* restart list_for_xx loop since dasd_process_erp ++ * might remove multiple elements ++ */ ++ goto restart_cb; + } +- +- /* +- * _dasd_requeue_request already checked for a valid +- * blockdevice, no need to check again +- * all erp requests (cqr->refers) have a cqr->block +- * pointer copy from the original cqr +- */ ++ _dasd_requeue_request(cqr); + list_del_init(&cqr->blocklist); + cqr->block->base->discipline->free_cp( + cqr, (struct request *) cqr->callback_data); + } +- +- /* +- * if requests remain then they are internal request +- * and go back to the device queue +- */ +- if (!list_empty(&requeue_queue)) { +- /* move freeze_queue to start of the ccw_queue */ +- spin_lock_irq(get_ccwdev_lock(device->cdev)); +- list_splice_tail(&requeue_queue, &device->ccw_queue); +- spin_unlock_irq(get_ccwdev_lock(device->cdev)); +- } + dasd_schedule_device_bh(device); + return rc; + } +diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c +index d030fe2e29643..91cb9d52a4250 100644 +--- a/drivers/s390/block/dasd_3990_erp.c ++++ b/drivers/s390/block/dasd_3990_erp.c +@@ -2441,7 +2441,7 @@ static struct dasd_ccw_req *dasd_3990_erp_add_erp(struct dasd_ccw_req *cqr) + erp->block = cqr->block; + erp->magic = cqr->magic; + erp->expires = cqr->expires; +- erp->retries = 256; ++ erp->retries = device->default_retries; + erp->buildclk = get_tod_clock(); + erp->status = DASD_CQR_FILLED; + +diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c +index df17f0f9cb0fc..b2a4c34330573 100644 +--- a/drivers/s390/block/dasd_devmap.c ++++ b/drivers/s390/block/dasd_devmap.c +@@ -1377,16 +1377,12 @@ static ssize_t dasd_vendor_show(struct device *dev, + + static DEVICE_ATTR(vendor, 0444, dasd_vendor_show, NULL); + +-#define UID_STRLEN ( /* vendor */ 3 + 1 + /* serial */ 14 + 1 +\ +- /* SSID */ 4 + 1 + /* unit addr */ 2 + 1 +\ +- /* vduit */ 32 + 1) +- + static ssize_t + dasd_uid_show(struct device *dev, struct device_attribute *attr, char *buf) + { ++ char uid_string[DASD_UID_STRLEN]; + struct dasd_device *device; + struct dasd_uid uid; +- char uid_string[UID_STRLEN]; + char ua_string[3]; + + device = dasd_device_from_cdev(to_ccwdev(dev)); +diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c +index 792e5d245bc38..c5619751a0658 100644 +--- a/drivers/s390/block/dasd_eckd.c ++++ b/drivers/s390/block/dasd_eckd.c +@@ -1079,12 +1079,12 @@ static void dasd_eckd_get_uid_string(struct dasd_conf *conf, + + create_uid(conf, &uid); + if (strlen(uid.vduit) > 0) +- snprintf(print_uid, sizeof(*print_uid), ++ snprintf(print_uid, DASD_UID_STRLEN, + "%s.%s.%04x.%02x.%s", + uid.vendor, uid.serial, uid.ssid, + uid.real_unit_addr, uid.vduit); + else +- snprintf(print_uid, sizeof(*print_uid), ++ snprintf(print_uid, DASD_UID_STRLEN, + "%s.%s.%04x.%02x", + uid.vendor, uid.serial, uid.ssid, + uid.real_unit_addr); +@@ -1093,8 +1093,8 @@ static void dasd_eckd_get_uid_string(struct dasd_conf *conf, + static int dasd_eckd_check_cabling(struct dasd_device *device, + void *conf_data, __u8 lpm) + { ++ char print_path_uid[DASD_UID_STRLEN], print_device_uid[DASD_UID_STRLEN]; + struct dasd_eckd_private *private = device->private; +- char print_path_uid[60], print_device_uid[60]; + struct dasd_conf path_conf; + + path_conf.data = conf_data; +@@ -1293,9 +1293,9 @@ static void dasd_eckd_path_available_action(struct dasd_device 
*device, + __u8 path_rcd_buf[DASD_ECKD_RCD_DATA_SIZE]; + __u8 lpm, opm, npm, ppm, epm, hpfpm, cablepm; + struct dasd_conf_data *conf_data; ++ char print_uid[DASD_UID_STRLEN]; + struct dasd_conf path_conf; + unsigned long flags; +- char print_uid[60]; + int rc, pos; + + opm = 0; +@@ -5856,8 +5856,8 @@ static void dasd_eckd_dump_sense(struct dasd_device *device, + static int dasd_eckd_reload_device(struct dasd_device *device) + { + struct dasd_eckd_private *private = device->private; ++ char print_uid[DASD_UID_STRLEN]; + int rc, old_base; +- char print_uid[60]; + struct dasd_uid uid; + unsigned long flags; + +diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h +index 97adc8a7ae6b1..f50932518f83a 100644 +--- a/drivers/s390/block/dasd_int.h ++++ b/drivers/s390/block/dasd_int.h +@@ -259,6 +259,10 @@ struct dasd_uid { + char vduit[33]; + }; + ++#define DASD_UID_STRLEN ( /* vendor */ 3 + 1 + /* serial */ 14 + 1 + \ ++ /* SSID */ 4 + 1 + /* unit addr */ 2 + 1 + \ ++ /* vduit */ 32 + 1) ++ + /* + * PPRC Status data + */ +diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c +index c0f85ffb2b62d..735ee0ca4a13b 100644 +--- a/drivers/s390/block/dcssblk.c ++++ b/drivers/s390/block/dcssblk.c +@@ -411,6 +411,7 @@ removeseg: + } + list_del(&dev_info->lh); + ++ dax_remove_host(dev_info->gd); + kill_dax(dev_info->dax_dev); + put_dax(dev_info->dax_dev); + del_gendisk(dev_info->gd); +@@ -706,9 +707,9 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char + goto out; + + out_dax_host: ++ put_device(&dev_info->dev); + dax_remove_host(dev_info->gd); + out_dax: +- put_device(&dev_info->dev); + kill_dax(dev_info->dax_dev); + put_dax(dev_info->dax_dev); + put_dev: +@@ -788,6 +789,7 @@ dcssblk_remove_store(struct device *dev, struct device_attribute *attr, const ch + } + + list_del(&dev_info->lh); ++ dax_remove_host(dev_info->gd); + kill_dax(dev_info->dax_dev); + put_dax(dev_info->dax_dev); + del_gendisk(dev_info->gd); +diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c +index a8def50c149bd..2b92ec20ed68e 100644 +--- a/drivers/s390/crypto/pkey_api.c ++++ b/drivers/s390/crypto/pkey_api.c +@@ -565,6 +565,11 @@ static int pkey_genseckey2(const struct pkey_apqn *apqns, size_t nr_apqns, + if (*keybufsize < MINEP11AESKEYBLOBSIZE) + return -EINVAL; + break; ++ case PKEY_TYPE_EP11_AES: ++ if (*keybufsize < (sizeof(struct ep11kblob_header) + ++ MINEP11AESKEYBLOBSIZE)) ++ return -EINVAL; ++ break; + default: + return -EINVAL; + } +@@ -581,9 +586,10 @@ static int pkey_genseckey2(const struct pkey_apqn *apqns, size_t nr_apqns, + for (i = 0, rc = -ENODEV; i < nr_apqns; i++) { + card = apqns[i].card; + dom = apqns[i].domain; +- if (ktype == PKEY_TYPE_EP11) { ++ if (ktype == PKEY_TYPE_EP11 || ++ ktype == PKEY_TYPE_EP11_AES) { + rc = ep11_genaeskey(card, dom, ksize, kflags, +- keybuf, keybufsize); ++ keybuf, keybufsize, ktype); + } else if (ktype == PKEY_TYPE_CCA_DATA) { + rc = cca_genseckey(card, dom, ksize, keybuf); + *keybufsize = (rc ? 
0 : SECKEYBLOBSIZE); +@@ -747,7 +753,7 @@ static int pkey_verifykey2(const u8 *key, size_t keylen, + if (ktype) + *ktype = PKEY_TYPE_EP11; + if (ksize) +- *ksize = kb->head.keybitlen; ++ *ksize = kb->head.bitlen; + + rc = ep11_findcard2(&_apqns, &_nr_apqns, *cardnr, *domain, + ZCRYPT_CEX7, EP11_API_V, kb->wkvp); +@@ -1313,7 +1319,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd, + apqns = _copy_apqns_from_user(kgs.apqns, kgs.apqn_entries); + if (IS_ERR(apqns)) + return PTR_ERR(apqns); +- kkey = kmalloc(klen, GFP_KERNEL); ++ kkey = kzalloc(klen, GFP_KERNEL); + if (!kkey) { + kfree(apqns); + return -ENOMEM; +@@ -1941,7 +1947,7 @@ static struct attribute_group ccacipher_attr_group = { + * (i.e. off != 0 or count < key blob size) -EINVAL is returned. + * This function and the sysfs attributes using it provide EP11 key blobs + * padded to the upper limit of MAXEP11AESKEYBLOBSIZE which is currently +- * 320 bytes. ++ * 336 bytes. + */ + static ssize_t pkey_ep11_aes_attr_read(enum pkey_key_size keybits, + bool is_xts, char *buf, loff_t off, +@@ -1969,7 +1975,8 @@ static ssize_t pkey_ep11_aes_attr_read(enum pkey_key_size keybits, + for (i = 0, rc = -ENODEV; i < nr_apqns; i++) { + card = apqns[i] >> 16; + dom = apqns[i] & 0xFFFF; +- rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize); ++ rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize, ++ PKEY_TYPE_EP11_AES); + if (rc == 0) + break; + } +@@ -1979,7 +1986,8 @@ static ssize_t pkey_ep11_aes_attr_read(enum pkey_key_size keybits, + if (is_xts) { + keysize = MAXEP11AESKEYBLOBSIZE; + buf += MAXEP11AESKEYBLOBSIZE; +- rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize); ++ rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize, ++ PKEY_TYPE_EP11_AES); + if (rc == 0) + return 2 * MAXEP11AESKEYBLOBSIZE; + } +diff --git a/drivers/s390/crypto/zcrypt_ep11misc.c b/drivers/s390/crypto/zcrypt_ep11misc.c +index b1c29017be5bc..20bbeec1a1a22 100644 +--- a/drivers/s390/crypto/zcrypt_ep11misc.c ++++ b/drivers/s390/crypto/zcrypt_ep11misc.c +@@ -113,6 +113,50 @@ static void __exit card_cache_free(void) + spin_unlock_bh(&card_list_lock); + } + ++static int ep11_kb_split(const u8 *kb, size_t kblen, u32 kbver, ++ struct ep11kblob_header **kbhdr, size_t *kbhdrsize, ++ u8 **kbpl, size_t *kbplsize) ++{ ++ struct ep11kblob_header *hdr = NULL; ++ size_t hdrsize, plsize = 0; ++ int rc = -EINVAL; ++ u8 *pl = NULL; ++ ++ if (kblen < sizeof(struct ep11kblob_header)) ++ goto out; ++ hdr = (struct ep11kblob_header *)kb; ++ ++ switch (kbver) { ++ case TOKVER_EP11_AES: ++ /* header overlays the payload */ ++ hdrsize = 0; ++ break; ++ case TOKVER_EP11_ECC_WITH_HEADER: ++ case TOKVER_EP11_AES_WITH_HEADER: ++ /* payload starts after the header */ ++ hdrsize = sizeof(struct ep11kblob_header); ++ break; ++ default: ++ goto out; ++ } ++ ++ plsize = kblen - hdrsize; ++ pl = (u8 *)kb + hdrsize; ++ ++ if (kbhdr) ++ *kbhdr = hdr; ++ if (kbhdrsize) ++ *kbhdrsize = hdrsize; ++ if (kbpl) ++ *kbpl = pl; ++ if (kbplsize) ++ *kbplsize = plsize; ++ ++ rc = 0; ++out: ++ return rc; ++} ++ + /* + * Simple check if the key blob is a valid EP11 AES key blob with header. 
+ */ +@@ -664,8 +708,9 @@ EXPORT_SYMBOL(ep11_get_domain_info); + */ + #define KEY_ATTR_DEFAULTS 0x00200c00 + +-int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, +- u8 *keybuf, size_t *keybufsize) ++static int _ep11_genaeskey(u16 card, u16 domain, ++ u32 keybitsize, u32 keygenflags, ++ u8 *keybuf, size_t *keybufsize) + { + struct keygen_req_pl { + struct pl_head head; +@@ -701,7 +746,6 @@ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, + struct ep11_cprb *req = NULL, *rep = NULL; + struct ep11_target_dev target; + struct ep11_urb *urb = NULL; +- struct ep11keyblob *kb; + int api, rc = -ENOMEM; + + switch (keybitsize) { +@@ -780,14 +824,9 @@ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, + goto out; + } + +- /* copy key blob and set header values */ ++ /* copy key blob */ + memcpy(keybuf, rep_pl->data, rep_pl->data_len); + *keybufsize = rep_pl->data_len; +- kb = (struct ep11keyblob *)keybuf; +- kb->head.type = TOKTYPE_NON_CCA; +- kb->head.len = rep_pl->data_len; +- kb->head.version = TOKVER_EP11_AES; +- kb->head.keybitlen = keybitsize; + + out: + kfree(req); +@@ -795,6 +834,43 @@ out: + kfree(urb); + return rc; + } ++ ++int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, ++ u8 *keybuf, size_t *keybufsize, u32 keybufver) ++{ ++ struct ep11kblob_header *hdr; ++ size_t hdr_size, pl_size; ++ u8 *pl; ++ int rc; ++ ++ switch (keybufver) { ++ case TOKVER_EP11_AES: ++ case TOKVER_EP11_AES_WITH_HEADER: ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ rc = ep11_kb_split(keybuf, *keybufsize, keybufver, ++ &hdr, &hdr_size, &pl, &pl_size); ++ if (rc) ++ return rc; ++ ++ rc = _ep11_genaeskey(card, domain, keybitsize, keygenflags, ++ pl, &pl_size); ++ if (rc) ++ return rc; ++ ++ *keybufsize = hdr_size + pl_size; ++ ++ /* update header information */ ++ hdr->type = TOKTYPE_NON_CCA; ++ hdr->len = *keybufsize; ++ hdr->version = keybufver; ++ hdr->bitlen = keybitsize; ++ ++ return 0; ++} + EXPORT_SYMBOL(ep11_genaeskey); + + static int ep11_cryptsingle(u16 card, u16 domain, +@@ -1055,7 +1131,7 @@ static int ep11_unwrapkey(u16 card, u16 domain, + kb->head.type = TOKTYPE_NON_CCA; + kb->head.len = rep_pl->data_len; + kb->head.version = TOKVER_EP11_AES; +- kb->head.keybitlen = keybitsize; ++ kb->head.bitlen = keybitsize; + + out: + kfree(req); +@@ -1201,7 +1277,6 @@ int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, + const u8 *clrkey, u8 *keybuf, size_t *keybufsize) + { + int rc; +- struct ep11keyblob *kb; + u8 encbuf[64], *kek = NULL; + size_t clrkeylen, keklen, encbuflen = sizeof(encbuf); + +@@ -1223,17 +1298,15 @@ int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, + } + + /* Step 1: generate AES 256 bit random kek key */ +- rc = ep11_genaeskey(card, domain, 256, +- 0x00006c00, /* EN/DECRYPT, WRAP/UNWRAP */ +- kek, &keklen); ++ rc = _ep11_genaeskey(card, domain, 256, ++ 0x00006c00, /* EN/DECRYPT, WRAP/UNWRAP */ ++ kek, &keklen); + if (rc) { + DEBUG_ERR( + "%s generate kek key failed, rc=%d\n", + __func__, rc); + goto out; + } +- kb = (struct ep11keyblob *)kek; +- memset(&kb->head, 0, sizeof(kb->head)); + + /* Step 2: encrypt clear key value with the kek key */ + rc = ep11_cryptsingle(card, domain, 0, 0, def_iv, kek, keklen, +diff --git a/drivers/s390/crypto/zcrypt_ep11misc.h b/drivers/s390/crypto/zcrypt_ep11misc.h +index 07445041869fe..ed328c354bade 100644 +--- a/drivers/s390/crypto/zcrypt_ep11misc.h ++++ b/drivers/s390/crypto/zcrypt_ep11misc.h +@@ -29,14 +29,7 
@@ struct ep11keyblob { + union { + u8 session[32]; + /* only used for PKEY_TYPE_EP11: */ +- struct { +- u8 type; /* 0x00 (TOKTYPE_NON_CCA) */ +- u8 res0; /* unused */ +- u16 len; /* total length in bytes of this blob */ +- u8 version; /* 0x03 (TOKVER_EP11_AES) */ +- u8 res1; /* unused */ +- u16 keybitlen; /* clear key bit len, 0 for unknown */ +- } head; ++ struct ep11kblob_header head; + }; + u8 wkvp[16]; /* wrapping key verification pattern */ + u64 attr; /* boolean key attributes */ +@@ -114,7 +107,7 @@ int ep11_get_domain_info(u16 card, u16 domain, struct ep11_domain_info *info); + * Generate (random) EP11 AES secure key. + */ + int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, +- u8 *keybuf, size_t *keybufsize); ++ u8 *keybuf, size_t *keybufsize, u32 keybufver); + + /* + * Generate EP11 AES secure key with given clear key value. +diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h +index 5e115e8b2ba46..7c6efde75da66 100644 +--- a/drivers/scsi/aacraid/aacraid.h ++++ b/drivers/scsi/aacraid/aacraid.h +@@ -1678,6 +1678,7 @@ struct aac_dev + u32 handle_pci_error; + bool init_reset; + u8 soft_reset_support; ++ u8 use_map_queue; + }; + + #define aac_adapter_interrupt(dev) \ +diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c +index deb32c9f4b3e6..3f062e4013ab6 100644 +--- a/drivers/scsi/aacraid/commsup.c ++++ b/drivers/scsi/aacraid/commsup.c +@@ -223,8 +223,12 @@ int aac_fib_setup(struct aac_dev * dev) + struct fib *aac_fib_alloc_tag(struct aac_dev *dev, struct scsi_cmnd *scmd) + { + struct fib *fibptr; ++ u32 blk_tag; ++ int i; + +- fibptr = &dev->fibs[scsi_cmd_to_rq(scmd)->tag]; ++ blk_tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd)); ++ i = blk_mq_unique_tag_to_tag(blk_tag); ++ fibptr = &dev->fibs[i]; + /* + * Null out fields that depend on being zero at the start of + * each I/O +diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c +index 5ba5c18b77b46..bff49b8ab057d 100644 +--- a/drivers/scsi/aacraid/linit.c ++++ b/drivers/scsi/aacraid/linit.c +@@ -19,6 +19,7 @@ + + #include + #include ++#include + #include + #include + #include +@@ -505,6 +506,15 @@ common_config: + return 0; + } + ++static void aac_map_queues(struct Scsi_Host *shost) ++{ ++ struct aac_dev *aac = (struct aac_dev *)shost->hostdata; ++ ++ blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT], ++ aac->pdev, 0); ++ aac->use_map_queue = true; ++} ++ + /** + * aac_change_queue_depth - alter queue depths + * @sdev: SCSI device we are considering +@@ -1489,6 +1499,7 @@ static struct scsi_host_template aac_driver_template = { + .bios_param = aac_biosparm, + .shost_groups = aac_host_groups, + .slave_configure = aac_slave_configure, ++ .map_queues = aac_map_queues, + .change_queue_depth = aac_change_queue_depth, + .sdev_groups = aac_dev_groups, + .eh_abort_handler = aac_eh_abort, +@@ -1776,6 +1787,8 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) + shost->max_lun = AAC_MAX_LUN; + + pci_set_drvdata(pdev, shost); ++ shost->nr_hw_queues = aac->max_msix; ++ shost->host_tagset = 1; + + error = scsi_add_host(shost, &pdev->dev); + if (error) +@@ -1908,6 +1921,7 @@ static void aac_remove_one(struct pci_dev *pdev) + struct aac_dev *aac = (struct aac_dev *)shost->hostdata; + + aac_cancel_rescan_worker(aac); ++ aac->use_map_queue = false; + scsi_remove_host(shost); + + __aac_shutdown(aac); +diff --git a/drivers/scsi/aacraid/src.c b/drivers/scsi/aacraid/src.c +index 11ef58204e96f..61949f3741886 100644 +--- 
a/drivers/scsi/aacraid/src.c ++++ b/drivers/scsi/aacraid/src.c +@@ -493,6 +493,10 @@ static int aac_src_deliver_message(struct fib *fib) + #endif + + u16 vector_no; ++ struct scsi_cmnd *scmd; ++ u32 blk_tag; ++ struct Scsi_Host *shost = dev->scsi_host_ptr; ++ struct blk_mq_queue_map *qmap; + + atomic_inc(&q->numpending); + +@@ -505,8 +509,25 @@ static int aac_src_deliver_message(struct fib *fib) + if ((dev->comm_interface == AAC_COMM_MESSAGE_TYPE3) + && dev->sa_firmware) + vector_no = aac_get_vector(dev); +- else +- vector_no = fib->vector_no; ++ else { ++ if (!fib->vector_no || !fib->callback_data) { ++ if (shost && dev->use_map_queue) { ++ qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT]; ++ vector_no = qmap->mq_map[raw_smp_processor_id()]; ++ } ++ /* ++ * We hardcode the vector_no for ++ * reserved commands as a valid shost is ++ * absent during the init ++ */ ++ else ++ vector_no = 0; ++ } else { ++ scmd = (struct scsi_cmnd *)fib->callback_data; ++ blk_tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd)); ++ vector_no = blk_mq_unique_tag_to_hwq(blk_tag); ++ } ++ } + + if (native_hba) { + if (fib->flags & FIB_CONTEXT_FLAG_NATIVE_HBA_TMF) { +diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c +index 8aeaddc93b167..8d374ae863ba2 100644 +--- a/drivers/scsi/be2iscsi/be_iscsi.c ++++ b/drivers/scsi/be2iscsi/be_iscsi.c +@@ -450,6 +450,10 @@ int beiscsi_iface_set_param(struct Scsi_Host *shost, + } + + nla_for_each_attr(attrib, data, dt_len, rm_len) { ++ /* ignore nla_type as it is never used */ ++ if (nla_len(attrib) < sizeof(*iface_param)) ++ return -EINVAL; ++ + iface_param = nla_data(attrib); + + if (iface_param->param_type != ISCSI_NET_PARAM) +diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c +index ddc048069af25..8a4124e7d2043 100644 +--- a/drivers/scsi/fcoe/fcoe_ctlr.c ++++ b/drivers/scsi/fcoe/fcoe_ctlr.c +@@ -319,16 +319,17 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip) + { + struct fcoe_fcf *sel; + struct fcoe_fcf *fcf; ++ unsigned long flags; + + mutex_lock(&fip->ctlr_mutex); +- spin_lock_bh(&fip->ctlr_lock); ++ spin_lock_irqsave(&fip->ctlr_lock, flags); + + kfree_skb(fip->flogi_req); + fip->flogi_req = NULL; + list_for_each_entry(fcf, &fip->fcfs, list) + fcf->flogi_sent = 0; + +- spin_unlock_bh(&fip->ctlr_lock); ++ spin_unlock_irqrestore(&fip->ctlr_lock, flags); + sel = fip->sel_fcf; + + if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr)) +@@ -699,6 +700,7 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport, + { + struct fc_frame *fp; + struct fc_frame_header *fh; ++ unsigned long flags; + u16 old_xid; + u8 op; + u8 mac[ETH_ALEN]; +@@ -732,11 +734,11 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport, + op = FIP_DT_FLOGI; + if (fip->mode == FIP_MODE_VN2VN) + break; +- spin_lock_bh(&fip->ctlr_lock); ++ spin_lock_irqsave(&fip->ctlr_lock, flags); + kfree_skb(fip->flogi_req); + fip->flogi_req = skb; + fip->flogi_req_send = 1; +- spin_unlock_bh(&fip->ctlr_lock); ++ spin_unlock_irqrestore(&fip->ctlr_lock, flags); + schedule_work(&fip->timer_work); + return -EINPROGRESS; + case ELS_FDISC: +@@ -1705,10 +1707,11 @@ static int fcoe_ctlr_flogi_send_locked(struct fcoe_ctlr *fip) + static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip) + { + struct fcoe_fcf *fcf; ++ unsigned long flags; + int error; + + mutex_lock(&fip->ctlr_mutex); +- spin_lock_bh(&fip->ctlr_lock); ++ spin_lock_irqsave(&fip->ctlr_lock, flags); + LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n"); + fcf = 
fcoe_ctlr_select(fip); + if (!fcf || fcf->flogi_sent) { +@@ -1719,7 +1722,7 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip) + fcoe_ctlr_solicit(fip, NULL); + error = fcoe_ctlr_flogi_send_locked(fip); + } +- spin_unlock_bh(&fip->ctlr_lock); ++ spin_unlock_irqrestore(&fip->ctlr_lock, flags); + mutex_unlock(&fip->ctlr_mutex); + return error; + } +@@ -1736,8 +1739,9 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip) + static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip) + { + struct fcoe_fcf *fcf; ++ unsigned long flags; + +- spin_lock_bh(&fip->ctlr_lock); ++ spin_lock_irqsave(&fip->ctlr_lock, flags); + fcf = fip->sel_fcf; + if (!fcf || !fip->flogi_req_send) + goto unlock; +@@ -1764,7 +1768,7 @@ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip) + } else /* XXX */ + LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n"); + unlock: +- spin_unlock_bh(&fip->ctlr_lock); ++ spin_unlock_irqrestore(&fip->ctlr_lock, flags); + } + + /** +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +index 02575d81afca2..50697672146ad 100644 +--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c ++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +@@ -2026,6 +2026,11 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba, + u16 dma_tx_err_type = le16_to_cpu(err_record->dma_tx_err_type); + u16 sipc_rx_err_type = le16_to_cpu(err_record->sipc_rx_err_type); + u32 dma_rx_err_type = le32_to_cpu(err_record->dma_rx_err_type); ++ struct hisi_sas_complete_v2_hdr *complete_queue = ++ hisi_hba->complete_hdr[slot->cmplt_queue]; ++ struct hisi_sas_complete_v2_hdr *complete_hdr = ++ &complete_queue[slot->cmplt_queue_slot]; ++ u32 dw0 = le32_to_cpu(complete_hdr->dw0); + int error = -1; + + if (err_phase == 1) { +@@ -2310,7 +2315,8 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba, + break; + } + } +- hisi_sas_sata_done(task, slot); ++ if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) ++ hisi_sas_sata_done(task, slot); + } + break; + default: +@@ -2443,7 +2449,8 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba, + case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: + { + ts->stat = SAS_SAM_STAT_GOOD; +- hisi_sas_sata_done(task, slot); ++ if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) ++ hisi_sas_sata_done(task, slot); + break; + } + default: +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +index e8a3511040af2..c0e74d768716d 100644 +--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c ++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +@@ -2163,6 +2163,7 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, + u32 trans_tx_fail_type = le32_to_cpu(record->trans_tx_fail_type); + u16 sipc_rx_err_type = le16_to_cpu(record->sipc_rx_err_type); + u32 dw3 = le32_to_cpu(complete_hdr->dw3); ++ u32 dw0 = le32_to_cpu(complete_hdr->dw0); + + switch (task->task_proto) { + case SAS_PROTOCOL_SSP: +@@ -2172,8 +2173,8 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, + * but I/O information has been written to the host memory, we examine + * response IU. 
+ */ +- if (!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_GOOD_MSK) && +- (complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)) ++ if (!(dw0 & CMPLT_HDR_RSPNS_GOOD_MSK) && ++ (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)) + return false; + + ts->residual = trans_tx_fail_type; +@@ -2189,7 +2190,7 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, + case SAS_PROTOCOL_SATA: + case SAS_PROTOCOL_STP: + case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: +- if ((complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) && ++ if ((dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) && + (sipc_rx_err_type & RX_FIS_STATUS_ERR_MSK)) { + ts->stat = SAS_PROTO_RESPONSE; + } else if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) { +@@ -2202,7 +2203,8 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, + ts->stat = SAS_OPEN_REJECT; + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; + } +- hisi_sas_sata_done(task, slot); ++ if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) ++ hisi_sas_sata_done(task, slot); + break; + case SAS_PROTOCOL_SMP: + ts->stat = SAS_SAM_STAT_CHECK_CONDITION; +@@ -2329,7 +2331,8 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba, + case SAS_PROTOCOL_STP: + case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: + ts->stat = SAS_SAM_STAT_GOOD; +- hisi_sas_sata_done(task, slot); ++ if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) ++ hisi_sas_sata_done(task, slot); + break; + default: + ts->stat = SAS_SAM_STAT_CHECK_CONDITION; +diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c +index 45a2fd6584d16..8b825364baade 100644 +--- a/drivers/scsi/hosts.c ++++ b/drivers/scsi/hosts.c +@@ -535,7 +535,7 @@ EXPORT_SYMBOL(scsi_host_alloc); + static int __scsi_host_match(struct device *dev, const void *data) + { + struct Scsi_Host *p; +- const unsigned short *hostnum = data; ++ const unsigned int *hostnum = data; + + p = class_to_shost(dev); + return p->host_no == *hostnum; +@@ -552,7 +552,7 @@ static int __scsi_host_match(struct device *dev, const void *data) + * that scsi_host_get() took. The put_device() below dropped + * the reference from class_find_device(). 
+ **/ +-struct Scsi_Host *scsi_host_lookup(unsigned short hostnum) ++struct Scsi_Host *scsi_host_lookup(unsigned int hostnum) + { + struct device *cdev; + struct Scsi_Host *shost = NULL; +diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c +index 852b025e2fecf..b54fafb486e06 100644 +--- a/drivers/scsi/lpfc/lpfc_bsg.c ++++ b/drivers/scsi/lpfc/lpfc_bsg.c +@@ -889,7 +889,7 @@ lpfc_bsg_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + struct lpfc_iocbq *piocbq) + { + uint32_t evt_req_id = 0; +- uint32_t cmd; ++ u16 cmd; + struct lpfc_dmabuf *dmabuf = NULL; + struct lpfc_bsg_event *evt; + struct event_data *evt_dat = NULL; +@@ -915,7 +915,7 @@ lpfc_bsg_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + + ct_req = (struct lpfc_sli_ct_request *)bdeBuf1->virt; + evt_req_id = ct_req->FsType; +- cmd = ct_req->CommandResponse.bits.CmdRsp; ++ cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp); + + spin_lock_irqsave(&phba->ct_ev_lock, flags); + list_for_each_entry(evt, &phba->ct_ev_waiters, node) { +@@ -3186,8 +3186,8 @@ lpfc_bsg_diag_loopback_run(struct bsg_job *job) + ctreq->RevisionId.bits.InId = 0; + ctreq->FsType = SLI_CT_ELX_LOOPBACK; + ctreq->FsSubType = 0; +- ctreq->CommandResponse.bits.CmdRsp = ELX_LOOPBACK_DATA; +- ctreq->CommandResponse.bits.Size = size; ++ ctreq->CommandResponse.bits.CmdRsp = cpu_to_be16(ELX_LOOPBACK_DATA); ++ ctreq->CommandResponse.bits.Size = cpu_to_be16(size); + segment_offset = ELX_LOOPBACK_HEADER_SZ; + } else + segment_offset = 0; +diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c +index 7a1563564df7f..7aac9fc719675 100644 +--- a/drivers/scsi/lpfc/lpfc_scsi.c ++++ b/drivers/scsi/lpfc/lpfc_scsi.c +@@ -109,8 +109,6 @@ lpfc_sli4_set_rsp_sgl_last(struct lpfc_hba *phba, + } + } + +-#define LPFC_INVALID_REFTAG ((u32)-1) +- + /** + * lpfc_rampdown_queue_depth - Post RAMP_DOWN_QUEUE event to worker thread + * @phba: The Hba for which this call is being executed. 
+@@ -978,8 +976,6 @@ lpfc_bg_err_inject(struct lpfc_hba *phba, struct scsi_cmnd *sc, + + sgpe = scsi_prot_sglist(sc); + lba = scsi_prot_ref_tag(sc); +- if (lba == LPFC_INVALID_REFTAG) +- return 0; + + /* First check if we need to match the LBA */ + if (phba->lpfc_injerr_lba != LPFC_INJERR_LBA_OFF) { +@@ -1560,8 +1556,6 @@ lpfc_bg_setup_bpl(struct lpfc_hba *phba, struct scsi_cmnd *sc, + + /* extract some info from the scsi command for pde*/ + reftag = scsi_prot_ref_tag(sc); +- if (reftag == LPFC_INVALID_REFTAG) +- goto out; + + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); +@@ -1723,8 +1717,6 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc, + /* extract some info from the scsi command */ + blksize = scsi_prot_interval(sc); + reftag = scsi_prot_ref_tag(sc); +- if (reftag == LPFC_INVALID_REFTAG) +- goto out; + + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); +@@ -1954,8 +1946,6 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc, + + /* extract some info from the scsi command for pde*/ + reftag = scsi_prot_ref_tag(sc); +- if (reftag == LPFC_INVALID_REFTAG) +- goto out; + + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); +@@ -2155,8 +2145,6 @@ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc, + /* extract some info from the scsi command */ + blksize = scsi_prot_interval(sc); + reftag = scsi_prot_ref_tag(sc); +- if (reftag == LPFC_INVALID_REFTAG) +- goto out; + + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); +@@ -2748,8 +2736,6 @@ lpfc_calc_bg_err(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd) + + src = (struct scsi_dif_tuple *)sg_virt(sgpe); + start_ref_tag = scsi_prot_ref_tag(cmd); +- if (start_ref_tag == LPFC_INVALID_REFTAG) +- goto out; + start_app_tag = src->app_tag; + len = sgpe->length; + while (src && protsegcnt) { +@@ -3495,11 +3481,11 @@ err: + scsi_cmnd->sc_data_direction); + + lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, +- "9084 Cannot setup S/G List for HBA" +- "IO segs %d/%d SGL %d SCSI %d: %d %d\n", ++ "9084 Cannot setup S/G List for HBA " ++ "IO segs %d/%d SGL %d SCSI %d: %d %d %d\n", + lpfc_cmd->seg_cnt, lpfc_cmd->prot_seg_cnt, + phba->cfg_total_seg_cnt, phba->cfg_sg_seg_cnt, +- prot_group_type, num_sge); ++ prot_group_type, num_sge, ret); + + lpfc_cmd->seg_cnt = 0; + lpfc_cmd->prot_seg_cnt = 0; +diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c +index 14ae0a9c5d3d8..2093888f154e0 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_base.c ++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c +@@ -139,6 +139,9 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc); + static void + _base_clear_outstanding_commands(struct MPT3SAS_ADAPTER *ioc); + ++static u32 ++_base_readl_ext_retry(const volatile void __iomem *addr); ++ + /** + * mpt3sas_base_check_cmd_timeout - Function + * to check timeout and command termination due +@@ -214,6 +217,20 @@ _base_readl_aero(const volatile void __iomem *addr) + return ret_val; + } + ++static u32 ++_base_readl_ext_retry(const volatile void __iomem *addr) ++{ ++ u32 i, ret_val; ++ ++ for (i = 0 ; i < 30 ; i++) { ++ ret_val = readl(addr); ++ if (ret_val == 0) ++ continue; ++ } ++ ++ return ret_val; ++} ++ + static inline u32 + _base_readl(const volatile void __iomem *addr) + { +@@ -941,7 +958,7 @@ mpt3sas_halt_firmware(struct MPT3SAS_ADAPTER *ioc) + + dump_stack(); + +- doorbell = 
ioc->base_readl(&ioc->chip->Doorbell); ++ doorbell = ioc->base_readl_ext_retry(&ioc->chip->Doorbell); + if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) { + mpt3sas_print_fault_code(ioc, doorbell & + MPI2_DOORBELL_DATA_MASK); +@@ -6697,7 +6714,7 @@ mpt3sas_base_get_iocstate(struct MPT3SAS_ADAPTER *ioc, int cooked) + { + u32 s, sc; + +- s = ioc->base_readl(&ioc->chip->Doorbell); ++ s = ioc->base_readl_ext_retry(&ioc->chip->Doorbell); + sc = s & MPI2_IOC_STATE_MASK; + return cooked ? sc : s; + } +@@ -6842,7 +6859,7 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout) + __func__, count, timeout)); + return 0; + } else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) { +- doorbell = ioc->base_readl(&ioc->chip->Doorbell); ++ doorbell = ioc->base_readl_ext_retry(&ioc->chip->Doorbell); + if ((doorbell & MPI2_IOC_STATE_MASK) == + MPI2_IOC_STATE_FAULT) { + mpt3sas_print_fault_code(ioc, doorbell); +@@ -6882,7 +6899,7 @@ _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout) + count = 0; + cntdn = 1000 * timeout; + do { +- doorbell_reg = ioc->base_readl(&ioc->chip->Doorbell); ++ doorbell_reg = ioc->base_readl_ext_retry(&ioc->chip->Doorbell); + if (!(doorbell_reg & MPI2_DOORBELL_USED)) { + dhsprintk(ioc, + ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n", +@@ -7030,7 +7047,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, + __le32 *mfp; + + /* make sure doorbell is not in use */ +- if ((ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) { ++ if ((ioc->base_readl_ext_retry(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) { + ioc_err(ioc, "doorbell is in use (line=%d)\n", __LINE__); + return -EFAULT; + } +@@ -7079,7 +7096,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, + } + + /* read the first two 16-bits, it gives the total length of the reply */ +- reply[0] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell) ++ reply[0] = le16_to_cpu(ioc->base_readl_ext_retry(&ioc->chip->Doorbell) + & MPI2_DOORBELL_DATA_MASK); + writel(0, &ioc->chip->HostInterruptStatus); + if ((_base_wait_for_doorbell_int(ioc, 5))) { +@@ -7087,7 +7104,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, + __LINE__); + return -EFAULT; + } +- reply[1] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell) ++ reply[1] = le16_to_cpu(ioc->base_readl_ext_retry(&ioc->chip->Doorbell) + & MPI2_DOORBELL_DATA_MASK); + writel(0, &ioc->chip->HostInterruptStatus); + +@@ -7098,10 +7115,10 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, + return -EFAULT; + } + if (i >= reply_bytes/2) /* overflow case */ +- ioc->base_readl(&ioc->chip->Doorbell); ++ ioc->base_readl_ext_retry(&ioc->chip->Doorbell); + else + reply[i] = le16_to_cpu( +- ioc->base_readl(&ioc->chip->Doorbell) ++ ioc->base_readl_ext_retry(&ioc->chip->Doorbell) + & MPI2_DOORBELL_DATA_MASK); + writel(0, &ioc->chip->HostInterruptStatus); + } +@@ -7960,7 +7977,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc) + goto out; + } + +- host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic); ++ host_diagnostic = ioc->base_readl_ext_retry(&ioc->chip->HostDiagnostic); + drsprintk(ioc, + ioc_info(ioc, "wrote magic sequence: count(%d), host_diagnostic(0x%08x)\n", + count, host_diagnostic)); +@@ -7980,7 +7997,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc) + for (count = 0; count < (300000000 / + MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC); count++) { + +- host_diagnostic = 
ioc->base_readl(&ioc->chip->HostDiagnostic); ++ host_diagnostic = ioc->base_readl_ext_retry(&ioc->chip->HostDiagnostic); + + if (host_diagnostic == 0xFFFFFFFF) { + ioc_info(ioc, +@@ -8370,10 +8387,13 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc) + ioc->rdpq_array_enable_assigned = 0; + ioc->use_32bit_dma = false; + ioc->dma_mask = 64; +- if (ioc->is_aero_ioc) ++ if (ioc->is_aero_ioc) { + ioc->base_readl = &_base_readl_aero; +- else ++ ioc->base_readl_ext_retry = &_base_readl_ext_retry; ++ } else { + ioc->base_readl = &_base_readl; ++ ioc->base_readl_ext_retry = &_base_readl; ++ } + r = mpt3sas_base_map_resources(ioc); + if (r) + goto out_free_resources; +diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h +index 05364aa15ecdb..10055c7e4a9f7 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_base.h ++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h +@@ -1618,6 +1618,7 @@ struct MPT3SAS_ADAPTER { + u8 diag_trigger_active; + u8 atomic_desc_capable; + BASE_READ_REG base_readl; ++ BASE_READ_REG base_readl_ext_retry; + struct SL_WH_MASTER_TRIGGER_T diag_trigger_master; + struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event; + struct SL_WH_SCSI_TRIGGERS_T diag_trigger_scsi; +diff --git a/drivers/scsi/qedf/qedf_dbg.h b/drivers/scsi/qedf/qedf_dbg.h +index f4d81127239eb..5ec2b817c694a 100644 +--- a/drivers/scsi/qedf/qedf_dbg.h ++++ b/drivers/scsi/qedf/qedf_dbg.h +@@ -59,6 +59,8 @@ extern uint qedf_debug; + #define QEDF_LOG_NOTICE 0x40000000 /* Notice logs */ + #define QEDF_LOG_WARN 0x80000000 /* Warning logs */ + ++#define QEDF_DEBUGFS_LOG_LEN (2 * PAGE_SIZE) ++ + /* Debug context structure */ + struct qedf_dbg_ctx { + unsigned int host_no; +diff --git a/drivers/scsi/qedf/qedf_debugfs.c b/drivers/scsi/qedf/qedf_debugfs.c +index a3ed681c8ce3f..451fd236bfd05 100644 +--- a/drivers/scsi/qedf/qedf_debugfs.c ++++ b/drivers/scsi/qedf/qedf_debugfs.c +@@ -8,6 +8,7 @@ + #include + #include + #include ++#include + + #include "qedf.h" + #include "qedf_dbg.h" +@@ -98,7 +99,9 @@ static ssize_t + qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count, + loff_t *ppos) + { ++ ssize_t ret; + size_t cnt = 0; ++ char *cbuf; + int id; + struct qedf_fastpath *fp = NULL; + struct qedf_dbg_ctx *qedf_dbg = +@@ -108,19 +111,25 @@ qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count, + + QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n"); + +- cnt = sprintf(buffer, "\nFastpath I/O completions\n\n"); ++ cbuf = vmalloc(QEDF_DEBUGFS_LOG_LEN); ++ if (!cbuf) ++ return 0; ++ ++ cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt, "\nFastpath I/O completions\n\n"); + + for (id = 0; id < qedf->num_queues; id++) { + fp = &(qedf->fp_array[id]); + if (fp->sb_id == QEDF_SB_ID_NULL) + continue; +- cnt += sprintf((buffer + cnt), "#%d: %lu\n", id, +- fp->completions); ++ cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt, ++ "#%d: %lu\n", id, fp->completions); + } + +- cnt = min_t(int, count, cnt - *ppos); +- *ppos += cnt; +- return cnt; ++ ret = simple_read_from_buffer(buffer, count, ppos, cbuf, cnt); ++ ++ vfree(cbuf); ++ ++ return ret; + } + + static ssize_t +@@ -138,15 +147,14 @@ qedf_dbg_debug_cmd_read(struct file *filp, char __user *buffer, size_t count, + loff_t *ppos) + { + int cnt; ++ char cbuf[32]; + struct qedf_dbg_ctx *qedf_dbg = + (struct qedf_dbg_ctx *)filp->private_data; + + QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "debug mask=0x%x\n", qedf_debug); +- cnt = sprintf(buffer, "debug mask = 0x%x\n", qedf_debug); ++ cnt = scnprintf(cbuf, sizeof(cbuf), 
"debug mask = 0x%x\n", qedf_debug); + +- cnt = min_t(int, count, cnt - *ppos); +- *ppos += cnt; +- return cnt; ++ return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt); + } + + static ssize_t +@@ -185,18 +193,17 @@ qedf_dbg_stop_io_on_error_cmd_read(struct file *filp, char __user *buffer, + size_t count, loff_t *ppos) + { + int cnt; ++ char cbuf[7]; + struct qedf_dbg_ctx *qedf_dbg = + (struct qedf_dbg_ctx *)filp->private_data; + struct qedf_ctx *qedf = container_of(qedf_dbg, + struct qedf_ctx, dbg_ctx); + + QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n"); +- cnt = sprintf(buffer, "%s\n", ++ cnt = scnprintf(cbuf, sizeof(cbuf), "%s\n", + qedf->stop_io_on_error ? "true" : "false"); + +- cnt = min_t(int, count, cnt - *ppos); +- *ppos += cnt; +- return cnt; ++ return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt); + } + + static ssize_t +diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c +index 9fd68d362698f..2ee109fb65616 100644 +--- a/drivers/scsi/qedi/qedi_main.c ++++ b/drivers/scsi/qedi/qedi_main.c +@@ -1977,8 +1977,9 @@ static int qedi_cpu_offline(unsigned int cpu) + struct qedi_percpu_s *p = this_cpu_ptr(&qedi_percpu); + struct qedi_work *work, *tmp; + struct task_struct *thread; ++ unsigned long flags; + +- spin_lock_bh(&p->p_work_lock); ++ spin_lock_irqsave(&p->p_work_lock, flags); + thread = p->iothread; + p->iothread = NULL; + +@@ -1989,7 +1990,7 @@ static int qedi_cpu_offline(unsigned int cpu) + kfree(work); + } + +- spin_unlock_bh(&p->p_work_lock); ++ spin_unlock_irqrestore(&p->p_work_lock, flags); + if (thread) + kthread_stop(thread); + return 0; +diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c +index b597c782b95ee..30bbf33e3a6aa 100644 +--- a/drivers/scsi/qla2xxx/qla_init.c ++++ b/drivers/scsi/qla2xxx/qla_init.c +@@ -5571,7 +5571,7 @@ static void qla_get_login_template(scsi_qla_host_t *vha) + __be32 *q; + + memset(ha->init_cb, 0, ha->init_cb_size); +- sz = min_t(int, sizeof(struct fc_els_csp), ha->init_cb_size); ++ sz = min_t(int, sizeof(struct fc_els_flogi), ha->init_cb_size); + rval = qla24xx_get_port_login_templ(vha, ha->init_cb_dma, + ha->init_cb, sz); + if (rval != QLA_SUCCESS) { +diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c +index 9e849f6b0d0f7..3f2f9734ee42e 100644 +--- a/drivers/scsi/qla4xxx/ql4_os.c ++++ b/drivers/scsi/qla4xxx/ql4_os.c +@@ -968,6 +968,11 @@ static int qla4xxx_set_chap_entry(struct Scsi_Host *shost, void *data, int len) + memset(&chap_rec, 0, sizeof(chap_rec)); + + nla_for_each_attr(attr, data, len, rem) { ++ if (nla_len(attr) < sizeof(*param_info)) { ++ rc = -EINVAL; ++ goto exit_set_chap; ++ } ++ + param_info = nla_data(attr); + + switch (param_info->param) { +@@ -2750,6 +2755,11 @@ qla4xxx_iface_set_param(struct Scsi_Host *shost, void *data, uint32_t len) + } + + nla_for_each_attr(attr, data, len, rem) { ++ if (nla_len(attr) < sizeof(*iface_param)) { ++ rval = -EINVAL; ++ goto exit_init_fw_cb; ++ } ++ + iface_param = nla_data(attr); + + if (iface_param->param_type == ISCSI_NET_PARAM) { +@@ -8104,6 +8114,11 @@ qla4xxx_sysfs_ddb_set_param(struct iscsi_bus_flash_session *fnode_sess, + + memset((void *)&chap_tbl, 0, sizeof(chap_tbl)); + nla_for_each_attr(attr, data, len, rem) { ++ if (nla_len(attr) < sizeof(*fnode_param)) { ++ rc = -EINVAL; ++ goto exit_set_param; ++ } ++ + fnode_param = nla_data(attr); + + switch (fnode_param->param) { +diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c +index 
bf834e72595a3..49dbcd67579aa 100644 +--- a/drivers/scsi/scsi_transport_iscsi.c ++++ b/drivers/scsi/scsi_transport_iscsi.c +@@ -3013,14 +3013,15 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev + } + + static int +-iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev) ++iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen) + { + char *data = (char*)ev + sizeof(*ev); + struct iscsi_cls_conn *conn; + struct iscsi_cls_session *session; + int err = 0, value = 0, state; + +- if (ev->u.set_param.len > PAGE_SIZE) ++ if (ev->u.set_param.len > rlen || ++ ev->u.set_param.len > PAGE_SIZE) + return -EINVAL; + + session = iscsi_session_lookup(ev->u.set_param.sid); +@@ -3028,6 +3029,10 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev) + if (!conn || !session) + return -EINVAL; + ++ /* data will be regarded as NULL-ended string, do length check */ ++ if (strlen(data) > ev->u.set_param.len) ++ return -EINVAL; ++ + switch (ev->u.set_param.param) { + case ISCSI_PARAM_SESS_RECOVERY_TMO: + sscanf(data, "%d", &value); +@@ -3117,7 +3122,7 @@ put_ep: + + static int + iscsi_if_transport_ep(struct iscsi_transport *transport, +- struct iscsi_uevent *ev, int msg_type) ++ struct iscsi_uevent *ev, int msg_type, u32 rlen) + { + struct iscsi_endpoint *ep; + int rc = 0; +@@ -3125,7 +3130,10 @@ iscsi_if_transport_ep(struct iscsi_transport *transport, + switch (msg_type) { + case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST: + case ISCSI_UEVENT_TRANSPORT_EP_CONNECT: +- rc = iscsi_if_ep_connect(transport, ev, msg_type); ++ if (rlen < sizeof(struct sockaddr)) ++ rc = -EINVAL; ++ else ++ rc = iscsi_if_ep_connect(transport, ev, msg_type); + break; + case ISCSI_UEVENT_TRANSPORT_EP_POLL: + if (!transport->ep_poll) +@@ -3149,12 +3157,15 @@ iscsi_if_transport_ep(struct iscsi_transport *transport, + + static int + iscsi_tgt_dscvr(struct iscsi_transport *transport, +- struct iscsi_uevent *ev) ++ struct iscsi_uevent *ev, u32 rlen) + { + struct Scsi_Host *shost; + struct sockaddr *dst_addr; + int err; + ++ if (rlen < sizeof(*dst_addr)) ++ return -EINVAL; ++ + if (!transport->tgt_dscvr) + return -EINVAL; + +@@ -3175,7 +3186,7 @@ iscsi_tgt_dscvr(struct iscsi_transport *transport, + + static int + iscsi_set_host_param(struct iscsi_transport *transport, +- struct iscsi_uevent *ev) ++ struct iscsi_uevent *ev, u32 rlen) + { + char *data = (char*)ev + sizeof(*ev); + struct Scsi_Host *shost; +@@ -3184,7 +3195,8 @@ iscsi_set_host_param(struct iscsi_transport *transport, + if (!transport->set_host_param) + return -ENOSYS; + +- if (ev->u.set_host_param.len > PAGE_SIZE) ++ if (ev->u.set_host_param.len > rlen || ++ ev->u.set_host_param.len > PAGE_SIZE) + return -EINVAL; + + shost = scsi_host_lookup(ev->u.set_host_param.host_no); +@@ -3194,6 +3206,10 @@ iscsi_set_host_param(struct iscsi_transport *transport, + return -ENODEV; + } + ++ /* see similar check in iscsi_if_set_param() */ ++ if (strlen(data) > ev->u.set_host_param.len) ++ return -EINVAL; ++ + err = transport->set_host_param(shost, ev->u.set_host_param.param, + data, ev->u.set_host_param.len); + scsi_host_put(shost); +@@ -3201,12 +3217,15 @@ iscsi_set_host_param(struct iscsi_transport *transport, + } + + static int +-iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev) ++iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen) + { + struct Scsi_Host *shost; + struct iscsi_path *params; + int err; + ++ if (rlen < 
sizeof(*params)) ++ return -EINVAL; ++ + if (!transport->set_path) + return -ENOSYS; + +@@ -3266,12 +3285,15 @@ iscsi_set_iface_params(struct iscsi_transport *transport, + } + + static int +-iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev) ++iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen) + { + struct Scsi_Host *shost; + struct sockaddr *dst_addr; + int err; + ++ if (rlen < sizeof(*dst_addr)) ++ return -EINVAL; ++ + if (!transport->send_ping) + return -ENOSYS; + +@@ -3769,13 +3791,12 @@ exit_host_stats: + } + + static int iscsi_if_transport_conn(struct iscsi_transport *transport, +- struct nlmsghdr *nlh) ++ struct nlmsghdr *nlh, u32 pdu_len) + { + struct iscsi_uevent *ev = nlmsg_data(nlh); + struct iscsi_cls_session *session; + struct iscsi_cls_conn *conn = NULL; + struct iscsi_endpoint *ep; +- uint32_t pdu_len; + int err = 0; + + switch (nlh->nlmsg_type) { +@@ -3860,8 +3881,6 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport, + + break; + case ISCSI_UEVENT_SEND_PDU: +- pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev); +- + if ((ev->u.send_pdu.hdr_size > pdu_len) || + (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) { + err = -EINVAL; +@@ -3891,6 +3910,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + struct iscsi_internal *priv; + struct iscsi_cls_session *session; + struct iscsi_endpoint *ep = NULL; ++ u32 rlen; + + if (!netlink_capable(skb, CAP_SYS_ADMIN)) + return -EPERM; +@@ -3910,6 +3930,13 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + + portid = NETLINK_CB(skb).portid; + ++ /* ++ * Even though the remaining payload may not be regarded as nlattr, ++ * (like address or something else), calculate the remaining length ++ * here to ease following length checks. 
++ */ ++ rlen = nlmsg_attrlen(nlh, sizeof(*ev)); ++ + switch (nlh->nlmsg_type) { + case ISCSI_UEVENT_CREATE_SESSION: + err = iscsi_if_create_session(priv, ep, ev, +@@ -3966,7 +3993,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + err = -EINVAL; + break; + case ISCSI_UEVENT_SET_PARAM: +- err = iscsi_set_param(transport, ev); ++ err = iscsi_if_set_param(transport, ev, rlen); + break; + case ISCSI_UEVENT_CREATE_CONN: + case ISCSI_UEVENT_DESTROY_CONN: +@@ -3974,7 +4001,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + case ISCSI_UEVENT_START_CONN: + case ISCSI_UEVENT_BIND_CONN: + case ISCSI_UEVENT_SEND_PDU: +- err = iscsi_if_transport_conn(transport, nlh); ++ err = iscsi_if_transport_conn(transport, nlh, rlen); + break; + case ISCSI_UEVENT_GET_STATS: + err = iscsi_if_get_stats(transport, nlh); +@@ -3983,23 +4010,22 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + case ISCSI_UEVENT_TRANSPORT_EP_POLL: + case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT: + case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST: +- err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type); ++ err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type, rlen); + break; + case ISCSI_UEVENT_TGT_DSCVR: +- err = iscsi_tgt_dscvr(transport, ev); ++ err = iscsi_tgt_dscvr(transport, ev, rlen); + break; + case ISCSI_UEVENT_SET_HOST_PARAM: +- err = iscsi_set_host_param(transport, ev); ++ err = iscsi_set_host_param(transport, ev, rlen); + break; + case ISCSI_UEVENT_PATH_UPDATE: +- err = iscsi_set_path(transport, ev); ++ err = iscsi_set_path(transport, ev, rlen); + break; + case ISCSI_UEVENT_SET_IFACE_PARAMS: +- err = iscsi_set_iface_params(transport, ev, +- nlmsg_attrlen(nlh, sizeof(*ev))); ++ err = iscsi_set_iface_params(transport, ev, rlen); + break; + case ISCSI_UEVENT_PING: +- err = iscsi_send_ping(transport, ev); ++ err = iscsi_send_ping(transport, ev, rlen); + break; + case ISCSI_UEVENT_GET_CHAP: + err = iscsi_get_chap(transport, nlh); +@@ -4008,13 +4034,10 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + err = iscsi_delete_chap(transport, ev); + break; + case ISCSI_UEVENT_SET_FLASHNODE_PARAMS: +- err = iscsi_set_flashnode_param(transport, ev, +- nlmsg_attrlen(nlh, +- sizeof(*ev))); ++ err = iscsi_set_flashnode_param(transport, ev, rlen); + break; + case ISCSI_UEVENT_NEW_FLASHNODE: +- err = iscsi_new_flashnode(transport, ev, +- nlmsg_attrlen(nlh, sizeof(*ev))); ++ err = iscsi_new_flashnode(transport, ev, rlen); + break; + case ISCSI_UEVENT_DEL_FLASHNODE: + err = iscsi_del_flashnode(transport, ev); +@@ -4029,8 +4052,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) + err = iscsi_logout_flashnode_sid(transport, ev); + break; + case ISCSI_UEVENT_SET_CHAP: +- err = iscsi_set_chap(transport, ev, +- nlmsg_attrlen(nlh, sizeof(*ev))); ++ err = iscsi_set_chap(transport, ev, rlen); + break; + case ISCSI_UEVENT_GET_HOST_STATS: + err = iscsi_get_host_stats(transport, nlh); +diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c +index 83d09c2009280..7a1dc5c7c49ee 100644 +--- a/drivers/scsi/storvsc_drv.c ++++ b/drivers/scsi/storvsc_drv.c +@@ -1568,6 +1568,8 @@ static int storvsc_device_configure(struct scsi_device *sdevice) + { + blk_queue_rq_timeout(sdevice->request_queue, (storvsc_timeout * HZ)); + ++ /* storvsc devices don't support MAINTENANCE_IN SCSI cmd */ ++ sdevice->no_report_opcodes = 1; + sdevice->no_write_same = 1; + + /* +diff --git 
a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c +index c92d26b73e6fc..27c668eac9647 100644 +--- a/drivers/soc/qcom/ocmem.c ++++ b/drivers/soc/qcom/ocmem.c +@@ -76,8 +76,12 @@ struct ocmem { + #define OCMEM_REG_GFX_MPU_START 0x00001004 + #define OCMEM_REG_GFX_MPU_END 0x00001008 + +-#define OCMEM_HW_PROFILE_NUM_PORTS(val) FIELD_PREP(0x0000000f, (val)) +-#define OCMEM_HW_PROFILE_NUM_MACROS(val) FIELD_PREP(0x00003f00, (val)) ++#define OCMEM_HW_VERSION_MAJOR(val) FIELD_GET(GENMASK(31, 28), val) ++#define OCMEM_HW_VERSION_MINOR(val) FIELD_GET(GENMASK(27, 16), val) ++#define OCMEM_HW_VERSION_STEP(val) FIELD_GET(GENMASK(15, 0), val) ++ ++#define OCMEM_HW_PROFILE_NUM_PORTS(val) FIELD_GET(0x0000000f, (val)) ++#define OCMEM_HW_PROFILE_NUM_MACROS(val) FIELD_GET(0x00003f00, (val)) + + #define OCMEM_HW_PROFILE_LAST_REGN_HALFSIZE 0x00010000 + #define OCMEM_HW_PROFILE_INTERLEAVING 0x00020000 +@@ -355,6 +359,12 @@ static int ocmem_dev_probe(struct platform_device *pdev) + } + } + ++ reg = ocmem_read(ocmem, OCMEM_REG_HW_VERSION); ++ dev_dbg(dev, "OCMEM hardware version: %lu.%lu.%lu\n", ++ OCMEM_HW_VERSION_MAJOR(reg), ++ OCMEM_HW_VERSION_MINOR(reg), ++ OCMEM_HW_VERSION_STEP(reg)); ++ + reg = ocmem_read(ocmem, OCMEM_REG_HW_PROFILE); + ocmem->num_ports = OCMEM_HW_PROFILE_NUM_PORTS(reg); + ocmem->num_macros = OCMEM_HW_PROFILE_NUM_MACROS(reg); +diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c +index 4f163d62942c1..af8d90efd91fa 100644 +--- a/drivers/soc/qcom/smem.c ++++ b/drivers/soc/qcom/smem.c +@@ -723,7 +723,7 @@ EXPORT_SYMBOL(qcom_smem_get_free_space); + + static bool addr_in_range(void __iomem *base, size_t size, void *addr) + { +- return base && (addr >= base && addr < base + size); ++ return base && ((void __iomem *)addr >= base && (void __iomem *)addr < base + size); + } + + /** +diff --git a/drivers/spi/spi-tegra20-sflash.c b/drivers/spi/spi-tegra20-sflash.c +index 220ee08c4a06c..d4bebb4314172 100644 +--- a/drivers/spi/spi-tegra20-sflash.c ++++ b/drivers/spi/spi-tegra20-sflash.c +@@ -455,7 +455,11 @@ static int tegra_sflash_probe(struct platform_device *pdev) + goto exit_free_master; + } + +- tsd->irq = platform_get_irq(pdev, 0); ++ ret = platform_get_irq(pdev, 0); ++ if (ret < 0) ++ goto exit_free_master; ++ tsd->irq = ret; ++ + ret = request_irq(tsd->irq, tegra_sflash_isr, 0, + dev_name(&pdev->dev), tsd); + if (ret < 0) { +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c +index 5f9aedd1f0b65..151fef199c380 100644 +--- a/drivers/spi/spi.c ++++ b/drivers/spi/spi.c +@@ -4370,6 +4370,11 @@ static int of_spi_notify(struct notifier_block *nb, unsigned long action, + return NOTIFY_OK; + } + ++ /* ++ * Clear the flag before adding the device so that fw_devlink ++ * doesn't skip adding consumers to this device. 
++ */ ++ rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; + spi = of_register_spi_device(ctlr, rd->dn); + put_device(&ctlr->dev); + +diff --git a/drivers/staging/fbtft/fb_ili9341.c b/drivers/staging/fbtft/fb_ili9341.c +index 9ccd0823c3ab3..47e72b87d76d9 100644 +--- a/drivers/staging/fbtft/fb_ili9341.c ++++ b/drivers/staging/fbtft/fb_ili9341.c +@@ -145,7 +145,7 @@ static struct fbtft_display display = { + }, + }; + +-FBTFT_REGISTER_DRIVER(DRVNAME, "ilitek,ili9341", &display); ++FBTFT_REGISTER_SPI_DRIVER(DRVNAME, "ilitek", "ili9341", &display); + + MODULE_ALIAS("spi:" DRVNAME); + MODULE_ALIAS("platform:" DRVNAME); +diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c +index 82806f198074a..a9bd1e71ea487 100644 +--- a/drivers/staging/media/rkvdec/rkvdec.c ++++ b/drivers/staging/media/rkvdec/rkvdec.c +@@ -120,7 +120,7 @@ static const struct rkvdec_coded_fmt_desc rkvdec_coded_fmts[] = { + .max_width = 4096, + .step_width = 16, + .min_height = 48, +- .max_height = 2304, ++ .max_height = 2560, + .step_height = 16, + }, + .ctrls = &rkvdec_h264_ctrls, +diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c +index aacba30bc10c1..762d1990180bf 100644 +--- a/drivers/thermal/thermal_of.c ++++ b/drivers/thermal/thermal_of.c +@@ -409,13 +409,13 @@ static int __thermal_of_unbind(struct device_node *map_np, int index, int trip_i + ret = of_parse_phandle_with_args(map_np, "cooling-device", "#cooling-cells", + index, &cooling_spec); + +- of_node_put(cooling_spec.np); +- + if (ret < 0) { + pr_err("Invalid cooling-device entry\n"); + return ret; + } + ++ of_node_put(cooling_spec.np); ++ + if (cooling_spec.args_count < 2) { + pr_err("wrong reference to cooling device, missing limits\n"); + return -EINVAL; +@@ -442,13 +442,13 @@ static int __thermal_of_bind(struct device_node *map_np, int index, int trip_id, + ret = of_parse_phandle_with_args(map_np, "cooling-device", "#cooling-cells", + index, &cooling_spec); + +- of_node_put(cooling_spec.np); +- + if (ret < 0) { + pr_err("Invalid cooling-device entry\n"); + return ret; + } + ++ of_node_put(cooling_spec.np); ++ + if (cooling_spec.args_count < 2) { + pr_err("wrong reference to cooling device, missing limits\n"); + return -EINVAL; +diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c +index 8411a0f312db0..21145eb8f2a9c 100644 +--- a/drivers/tty/serial/sc16is7xx.c ++++ b/drivers/tty/serial/sc16is7xx.c +@@ -236,7 +236,8 @@ + + /* IOControl register bits (Only 750/760) */ + #define SC16IS7XX_IOCONTROL_LATCH_BIT (1 << 0) /* Enable input latching */ +-#define SC16IS7XX_IOCONTROL_MODEM_BIT (1 << 1) /* Enable GPIO[7:4] as modem pins */ ++#define SC16IS7XX_IOCONTROL_MODEM_A_BIT (1 << 1) /* Enable GPIO[7:4] as modem A pins */ ++#define SC16IS7XX_IOCONTROL_MODEM_B_BIT (1 << 2) /* Enable GPIO[3:0] as modem B pins */ + #define SC16IS7XX_IOCONTROL_SRESET_BIT (1 << 3) /* Software Reset */ + + /* EFCR register bits */ +@@ -301,12 +302,12 @@ + /* Misc definitions */ + #define SC16IS7XX_FIFO_SIZE (64) + #define SC16IS7XX_REG_SHIFT 2 ++#define SC16IS7XX_GPIOS_PER_BANK 4 + + struct sc16is7xx_devtype { + char name[10]; + int nr_gpio; + int nr_uart; +- int has_mctrl; + }; + + #define SC16IS7XX_RECONF_MD (1 << 0) +@@ -336,7 +337,9 @@ struct sc16is7xx_port { + struct clk *clk; + #ifdef CONFIG_GPIOLIB + struct gpio_chip gpio; ++ unsigned long gpio_valid_mask; + #endif ++ u8 mctrl_mask; + unsigned char buf[SC16IS7XX_FIFO_SIZE]; + struct kthread_worker kworker; + struct task_struct *kworker_task; +@@ -447,35 
+450,30 @@ static const struct sc16is7xx_devtype sc16is74x_devtype = { + .name = "SC16IS74X", + .nr_gpio = 0, + .nr_uart = 1, +- .has_mctrl = 0, + }; + + static const struct sc16is7xx_devtype sc16is750_devtype = { + .name = "SC16IS750", +- .nr_gpio = 4, ++ .nr_gpio = 8, + .nr_uart = 1, +- .has_mctrl = 1, + }; + + static const struct sc16is7xx_devtype sc16is752_devtype = { + .name = "SC16IS752", +- .nr_gpio = 0, ++ .nr_gpio = 8, + .nr_uart = 2, +- .has_mctrl = 1, + }; + + static const struct sc16is7xx_devtype sc16is760_devtype = { + .name = "SC16IS760", +- .nr_gpio = 4, ++ .nr_gpio = 8, + .nr_uart = 1, +- .has_mctrl = 1, + }; + + static const struct sc16is7xx_devtype sc16is762_devtype = { + .name = "SC16IS762", +- .nr_gpio = 0, ++ .nr_gpio = 8, + .nr_uart = 2, +- .has_mctrl = 1, + }; + + static bool sc16is7xx_regmap_volatile(struct device *dev, unsigned int reg) +@@ -1360,8 +1358,98 @@ static int sc16is7xx_gpio_direction_output(struct gpio_chip *chip, + + return 0; + } ++ ++static int sc16is7xx_gpio_init_valid_mask(struct gpio_chip *chip, ++ unsigned long *valid_mask, ++ unsigned int ngpios) ++{ ++ struct sc16is7xx_port *s = gpiochip_get_data(chip); ++ ++ *valid_mask = s->gpio_valid_mask; ++ ++ return 0; ++} ++ ++static int sc16is7xx_setup_gpio_chip(struct sc16is7xx_port *s) ++{ ++ struct device *dev = s->p[0].port.dev; ++ ++ if (!s->devtype->nr_gpio) ++ return 0; ++ ++ switch (s->mctrl_mask) { ++ case 0: ++ s->gpio_valid_mask = GENMASK(7, 0); ++ break; ++ case SC16IS7XX_IOCONTROL_MODEM_A_BIT: ++ s->gpio_valid_mask = GENMASK(3, 0); ++ break; ++ case SC16IS7XX_IOCONTROL_MODEM_B_BIT: ++ s->gpio_valid_mask = GENMASK(7, 4); ++ break; ++ default: ++ break; ++ } ++ ++ if (s->gpio_valid_mask == 0) ++ return 0; ++ ++ s->gpio.owner = THIS_MODULE; ++ s->gpio.parent = dev; ++ s->gpio.label = dev_name(dev); ++ s->gpio.init_valid_mask = sc16is7xx_gpio_init_valid_mask; ++ s->gpio.direction_input = sc16is7xx_gpio_direction_input; ++ s->gpio.get = sc16is7xx_gpio_get; ++ s->gpio.direction_output = sc16is7xx_gpio_direction_output; ++ s->gpio.set = sc16is7xx_gpio_set; ++ s->gpio.base = -1; ++ s->gpio.ngpio = s->devtype->nr_gpio; ++ s->gpio.can_sleep = 1; ++ ++ return gpiochip_add_data(&s->gpio, s); ++} + #endif + ++/* ++ * Configure ports designated to operate as modem control lines. 
++ */ ++static int sc16is7xx_setup_mctrl_ports(struct sc16is7xx_port *s) ++{ ++ int i; ++ int ret; ++ int count; ++ u32 mctrl_port[2]; ++ struct device *dev = s->p[0].port.dev; ++ ++ count = device_property_count_u32(dev, "nxp,modem-control-line-ports"); ++ if (count < 0 || count > ARRAY_SIZE(mctrl_port)) ++ return 0; ++ ++ ret = device_property_read_u32_array(dev, "nxp,modem-control-line-ports", ++ mctrl_port, count); ++ if (ret) ++ return ret; ++ ++ s->mctrl_mask = 0; ++ ++ for (i = 0; i < count; i++) { ++ /* Use GPIO lines as modem control lines */ ++ if (mctrl_port[i] == 0) ++ s->mctrl_mask |= SC16IS7XX_IOCONTROL_MODEM_A_BIT; ++ else if (mctrl_port[i] == 1) ++ s->mctrl_mask |= SC16IS7XX_IOCONTROL_MODEM_B_BIT; ++ } ++ ++ if (s->mctrl_mask) ++ regmap_update_bits( ++ s->regmap, ++ SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT, ++ SC16IS7XX_IOCONTROL_MODEM_A_BIT | ++ SC16IS7XX_IOCONTROL_MODEM_B_BIT, s->mctrl_mask); ++ ++ return 0; ++} ++ + static const struct serial_rs485 sc16is7xx_rs485_supported = { + .flags = SER_RS485_ENABLED | SER_RS485_RTS_AFTER_SEND, + .delay_rts_before_send = 1, +@@ -1474,12 +1562,6 @@ static int sc16is7xx_probe(struct device *dev, + SC16IS7XX_EFCR_RXDISABLE_BIT | + SC16IS7XX_EFCR_TXDISABLE_BIT); + +- /* Use GPIO lines as modem status registers */ +- if (devtype->has_mctrl) +- sc16is7xx_port_write(&s->p[i].port, +- SC16IS7XX_IOCONTROL_REG, +- SC16IS7XX_IOCONTROL_MODEM_BIT); +- + /* Initialize kthread work structs */ + kthread_init_work(&s->p[i].tx_work, sc16is7xx_tx_proc); + kthread_init_work(&s->p[i].reg_work, sc16is7xx_reg_proc); +@@ -1517,23 +1599,14 @@ static int sc16is7xx_probe(struct device *dev, + s->p[u].irda_mode = true; + } + ++ ret = sc16is7xx_setup_mctrl_ports(s); ++ if (ret) ++ goto out_ports; ++ + #ifdef CONFIG_GPIOLIB +- if (devtype->nr_gpio) { +- /* Setup GPIO cotroller */ +- s->gpio.owner = THIS_MODULE; +- s->gpio.parent = dev; +- s->gpio.label = dev_name(dev); +- s->gpio.direction_input = sc16is7xx_gpio_direction_input; +- s->gpio.get = sc16is7xx_gpio_get; +- s->gpio.direction_output = sc16is7xx_gpio_direction_output; +- s->gpio.set = sc16is7xx_gpio_set; +- s->gpio.base = -1; +- s->gpio.ngpio = devtype->nr_gpio; +- s->gpio.can_sleep = 1; +- ret = gpiochip_add_data(&s->gpio, s); +- if (ret) +- goto out_thread; +- } ++ ret = sc16is7xx_setup_gpio_chip(s); ++ if (ret) ++ goto out_ports; + #endif + + /* +@@ -1556,10 +1629,8 @@ static int sc16is7xx_probe(struct device *dev, + return 0; + + #ifdef CONFIG_GPIOLIB +- if (devtype->nr_gpio) ++ if (s->gpio_valid_mask) + gpiochip_remove(&s->gpio); +- +-out_thread: + #endif + + out_ports: +@@ -1582,7 +1653,7 @@ static void sc16is7xx_remove(struct device *dev) + int i; + + #ifdef CONFIG_GPIOLIB +- if (s->devtype->nr_gpio) ++ if (s->gpio_valid_mask) + gpiochip_remove(&s->gpio); + #endif + +diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c +index c08360212aa20..7aa2b5b67001d 100644 +--- a/drivers/tty/serial/serial-tegra.c ++++ b/drivers/tty/serial/serial-tegra.c +@@ -999,7 +999,11 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup) + tup->ier_shadow = 0; + tup->current_baud = 0; + +- clk_prepare_enable(tup->uart_clk); ++ ret = clk_prepare_enable(tup->uart_clk); ++ if (ret) { ++ dev_err(tup->uport.dev, "could not enable clk\n"); ++ return ret; ++ } + + /* Reset the UART controller to clear all previous status.*/ + reset_control_assert(tup->rst); +diff --git a/drivers/tty/serial/sprd_serial.c b/drivers/tty/serial/sprd_serial.c +index 342a879676315..9c7f71993e945 100644 
+--- a/drivers/tty/serial/sprd_serial.c ++++ b/drivers/tty/serial/sprd_serial.c +@@ -367,7 +367,7 @@ static void sprd_rx_free_buf(struct sprd_uart_port *sp) + if (sp->rx_dma.virt) + dma_free_coherent(sp->port.dev, SPRD_UART_RX_SIZE, + sp->rx_dma.virt, sp->rx_dma.phys_addr); +- ++ sp->rx_dma.virt = NULL; + } + + static int sprd_rx_dma_config(struct uart_port *port, u32 burst) +@@ -1132,7 +1132,7 @@ static bool sprd_uart_is_console(struct uart_port *uport) + static int sprd_clk_init(struct uart_port *uport) + { + struct clk *clk_uart, *clk_parent; +- struct sprd_uart_port *u = sprd_port[uport->line]; ++ struct sprd_uart_port *u = container_of(uport, struct sprd_uart_port, port); + + clk_uart = devm_clk_get(uport->dev, "uart"); + if (IS_ERR(clk_uart)) { +@@ -1175,22 +1175,22 @@ static int sprd_probe(struct platform_device *pdev) + { + struct resource *res; + struct uart_port *up; ++ struct sprd_uart_port *sport; + int irq; + int index; + int ret; + + index = of_alias_get_id(pdev->dev.of_node, "serial"); +- if (index < 0 || index >= ARRAY_SIZE(sprd_port)) { ++ if (index < 0 || index >= UART_NR_MAX) { + dev_err(&pdev->dev, "got a wrong serial alias id %d\n", index); + return -EINVAL; + } + +- sprd_port[index] = devm_kzalloc(&pdev->dev, sizeof(*sprd_port[index]), +- GFP_KERNEL); +- if (!sprd_port[index]) ++ sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL); ++ if (!sport) + return -ENOMEM; + +- up = &sprd_port[index]->port; ++ up = &sport->port; + up->dev = &pdev->dev; + up->line = index; + up->type = PORT_SPRD; +@@ -1221,7 +1221,7 @@ static int sprd_probe(struct platform_device *pdev) + * Allocate one dma buffer to prepare for receive transfer, in case + * memory allocation failure at runtime. + */ +- ret = sprd_rx_alloc_buf(sprd_port[index]); ++ ret = sprd_rx_alloc_buf(sport); + if (ret) + return ret; + +@@ -1229,17 +1229,27 @@ static int sprd_probe(struct platform_device *pdev) + ret = uart_register_driver(&sprd_uart_driver); + if (ret < 0) { + pr_err("Failed to register SPRD-UART driver\n"); +- return ret; ++ goto free_rx_buf; + } + } ++ + sprd_ports_num++; ++ sprd_port[index] = sport; + + ret = uart_add_one_port(&sprd_uart_driver, up); + if (ret) +- sprd_remove(pdev); ++ goto clean_port; + + platform_set_drvdata(pdev, up); + ++ return 0; ++ ++clean_port: ++ sprd_port[index] = NULL; ++ if (--sprd_ports_num == 0) ++ uart_unregister_driver(&sprd_uart_driver); ++free_rx_buf: ++ sprd_rx_free_buf(sport); + return ret; + } + +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index 977bd4b9dd0b4..36437d39b93c8 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -8830,9 +8830,11 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba, + for (retries = 3; retries > 0; --retries) { + ret = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, &sshdr, + HZ, 0, 0, RQF_PM, NULL); +- if (!scsi_status_is_check_condition(ret) || +- !scsi_sense_valid(&sshdr) || +- sshdr.sense_key != UNIT_ATTENTION) ++ /* ++ * scsi_execute() only returns a negative value if the request ++ * queue is dying. 
++ */ ++ if (ret <= 0) + break; + } + if (ret) { +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index 8300baedafd20..6af0a31ff1475 100644 +--- a/drivers/usb/core/hcd.c ++++ b/drivers/usb/core/hcd.c +@@ -983,6 +983,7 @@ static int register_root_hub(struct usb_hcd *hcd) + { + struct device *parent_dev = hcd->self.controller; + struct usb_device *usb_dev = hcd->self.root_hub; ++ struct usb_device_descriptor *descr; + const int devnum = 1; + int retval; + +@@ -994,13 +995,16 @@ static int register_root_hub(struct usb_hcd *hcd) + mutex_lock(&usb_bus_idr_lock); + + usb_dev->ep0.desc.wMaxPacketSize = cpu_to_le16(64); +- retval = usb_get_device_descriptor(usb_dev, USB_DT_DEVICE_SIZE); +- if (retval != sizeof usb_dev->descriptor) { ++ descr = usb_get_device_descriptor(usb_dev); ++ if (IS_ERR(descr)) { ++ retval = PTR_ERR(descr); + mutex_unlock(&usb_bus_idr_lock); + dev_dbg (parent_dev, "can't read %s device descriptor %d\n", + dev_name(&usb_dev->dev), retval); +- return (retval < 0) ? retval : -EMSGSIZE; ++ return retval; + } ++ usb_dev->descriptor = *descr; ++ kfree(descr); + + if (le16_to_cpu(usb_dev->descriptor.bcdUSB) >= 0x0201) { + retval = usb_get_bos_descriptor(usb_dev); +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 1abe43ddb75f0..0069a24bd216c 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -2656,12 +2656,17 @@ int usb_authorize_device(struct usb_device *usb_dev) + } + + if (usb_dev->wusb) { +- result = usb_get_device_descriptor(usb_dev, sizeof(usb_dev->descriptor)); +- if (result < 0) { ++ struct usb_device_descriptor *descr; ++ ++ descr = usb_get_device_descriptor(usb_dev); ++ if (IS_ERR(descr)) { ++ result = PTR_ERR(descr); + dev_err(&usb_dev->dev, "can't re-read device descriptor for " + "authorization: %d\n", result); + goto error_device_descriptor; + } ++ usb_dev->descriptor = *descr; ++ kfree(descr); + } + + usb_dev->authorized = 1; +@@ -4661,6 +4666,67 @@ static int hub_enable_device(struct usb_device *udev) + return hcd->driver->enable_device(hcd, udev); + } + ++/* ++ * Get the bMaxPacketSize0 value during initialization by reading the ++ * device's device descriptor. Since we don't already know this value, ++ * the transfer is unsafe and it ignores I/O errors, only testing for ++ * reasonable received values. ++ * ++ * For "old scheme" initialization, size will be 8 so we read just the ++ * start of the device descriptor, which should work okay regardless of ++ * the actual bMaxPacketSize0 value. For "new scheme" initialization, ++ * size will be 64 (and buf will point to a sufficiently large buffer), ++ * which might not be kosher according to the USB spec but it's what ++ * Windows does and what many devices expect. ++ * ++ * Returns: bMaxPacketSize0 or a negative error code. ++ */ ++static int get_bMaxPacketSize0(struct usb_device *udev, ++ struct usb_device_descriptor *buf, int size, bool first_time) ++{ ++ int i, rc; ++ ++ /* ++ * Retry on all errors; some devices are flakey. ++ * 255 is for WUSB devices, we actually need to use ++ * 512 (WUSB1.0[4.8.1]). 
++ */ ++ for (i = 0; i < GET_MAXPACKET0_TRIES; ++i) { ++ /* Start with invalid values in case the transfer fails */ ++ buf->bDescriptorType = buf->bMaxPacketSize0 = 0; ++ rc = usb_control_msg(udev, usb_rcvaddr0pipe(), ++ USB_REQ_GET_DESCRIPTOR, USB_DIR_IN, ++ USB_DT_DEVICE << 8, 0, ++ buf, size, ++ initial_descriptor_timeout); ++ switch (buf->bMaxPacketSize0) { ++ case 8: case 16: case 32: case 64: case 9: ++ if (buf->bDescriptorType == USB_DT_DEVICE) { ++ rc = buf->bMaxPacketSize0; ++ break; ++ } ++ fallthrough; ++ default: ++ if (rc >= 0) ++ rc = -EPROTO; ++ break; ++ } ++ ++ /* ++ * Some devices time out if they are powered on ++ * when already connected. They need a second ++ * reset, so return early. But only on the first ++ * attempt, lest we get into a time-out/reset loop. ++ */ ++ if (rc > 0 || (rc == -ETIMEDOUT && first_time && ++ udev->speed > USB_SPEED_FULL)) ++ break; ++ } ++ return rc; ++} ++ ++#define GET_DESCRIPTOR_BUFSIZE 64 ++ + /* Reset device, (re)assign address, get device descriptor. + * Device connection must be stable, no more debouncing needed. + * Returns device in USB_STATE_ADDRESS, except on error. +@@ -4670,10 +4736,17 @@ static int hub_enable_device(struct usb_device *udev) + * the port lock. For a newly detected device that is not accessible + * through any global pointers, it's not necessary to lock the device, + * but it is still necessary to lock the port. ++ * ++ * For a newly detected device, @dev_descr must be NULL. The device ++ * descriptor retrieved from the device will then be stored in ++ * @udev->descriptor. For an already existing device, @dev_descr ++ * must be non-NULL. The device descriptor will be stored there, ++ * not in @udev->descriptor, because descriptors for registered ++ * devices are meant to be immutable. + */ + static int + hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, +- int retry_counter) ++ int retry_counter, struct usb_device_descriptor *dev_descr) + { + struct usb_device *hdev = hub->hdev; + struct usb_hcd *hcd = bus_to_hcd(hdev->bus); +@@ -4685,6 +4758,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + int devnum = udev->devnum; + const char *driver_name; + bool do_new_scheme; ++ const bool initial = !dev_descr; ++ int maxp0; ++ struct usb_device_descriptor *buf, *descr; ++ ++ buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO); ++ if (!buf) ++ return -ENOMEM; + + /* root hub ports have a slightly longer reset period + * (from USB 2.0 spec, section 7.1.7.5) +@@ -4717,32 +4797,34 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + } + oldspeed = udev->speed; + +- /* USB 2.0 section 5.5.3 talks about ep0 maxpacket ... +- * it's fixed size except for full speed devices. +- * For Wireless USB devices, ep0 max packet is always 512 (tho +- * reported as 0xff in the device descriptor). WUSB1.0[4.8.1]. +- */ +- switch (udev->speed) { +- case USB_SPEED_SUPER_PLUS: +- case USB_SPEED_SUPER: +- case USB_SPEED_WIRELESS: /* fixed at 512 */ +- udev->ep0.desc.wMaxPacketSize = cpu_to_le16(512); +- break; +- case USB_SPEED_HIGH: /* fixed at 64 */ +- udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64); +- break; +- case USB_SPEED_FULL: /* 8, 16, 32, or 64 */ +- /* to determine the ep0 maxpacket size, try to read +- * the device descriptor to get bMaxPacketSize0 and +- * then correct our initial guess. ++ if (initial) { ++ /* USB 2.0 section 5.5.3 talks about ep0 maxpacket ... ++ * it's fixed size except for full speed devices. 
++ * For Wireless USB devices, ep0 max packet is always 512 (tho ++ * reported as 0xff in the device descriptor). WUSB1.0[4.8.1]. + */ +- udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64); +- break; +- case USB_SPEED_LOW: /* fixed at 8 */ +- udev->ep0.desc.wMaxPacketSize = cpu_to_le16(8); +- break; +- default: +- goto fail; ++ switch (udev->speed) { ++ case USB_SPEED_SUPER_PLUS: ++ case USB_SPEED_SUPER: ++ case USB_SPEED_WIRELESS: /* fixed at 512 */ ++ udev->ep0.desc.wMaxPacketSize = cpu_to_le16(512); ++ break; ++ case USB_SPEED_HIGH: /* fixed at 64 */ ++ udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64); ++ break; ++ case USB_SPEED_FULL: /* 8, 16, 32, or 64 */ ++ /* to determine the ep0 maxpacket size, try to read ++ * the device descriptor to get bMaxPacketSize0 and ++ * then correct our initial guess. ++ */ ++ udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64); ++ break; ++ case USB_SPEED_LOW: /* fixed at 8 */ ++ udev->ep0.desc.wMaxPacketSize = cpu_to_le16(8); ++ break; ++ default: ++ goto fail; ++ } + } + + if (udev->speed == USB_SPEED_WIRELESS) +@@ -4765,22 +4847,24 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + if (udev->speed < USB_SPEED_SUPER) + dev_info(&udev->dev, + "%s %s USB device number %d using %s\n", +- (udev->config) ? "reset" : "new", speed, ++ (initial ? "new" : "reset"), speed, + devnum, driver_name); + +- /* Set up TT records, if needed */ +- if (hdev->tt) { +- udev->tt = hdev->tt; +- udev->ttport = hdev->ttport; +- } else if (udev->speed != USB_SPEED_HIGH +- && hdev->speed == USB_SPEED_HIGH) { +- if (!hub->tt.hub) { +- dev_err(&udev->dev, "parent hub has no TT\n"); +- retval = -EINVAL; +- goto fail; ++ if (initial) { ++ /* Set up TT records, if needed */ ++ if (hdev->tt) { ++ udev->tt = hdev->tt; ++ udev->ttport = hdev->ttport; ++ } else if (udev->speed != USB_SPEED_HIGH ++ && hdev->speed == USB_SPEED_HIGH) { ++ if (!hub->tt.hub) { ++ dev_err(&udev->dev, "parent hub has no TT\n"); ++ retval = -EINVAL; ++ goto fail; ++ } ++ udev->tt = &hub->tt; ++ udev->ttport = port1; + } +- udev->tt = &hub->tt; +- udev->ttport = port1; + } + + /* Why interleave GET_DESCRIPTOR and SET_ADDRESS this way? +@@ -4799,9 +4883,6 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + + for (retries = 0; retries < GET_DESCRIPTOR_TRIES; (++retries, msleep(100))) { + if (do_new_scheme) { +- struct usb_device_descriptor *buf; +- int r = 0; +- + retval = hub_enable_device(udev); + if (retval < 0) { + dev_err(&udev->dev, +@@ -4810,52 +4891,14 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + goto fail; + } + +-#define GET_DESCRIPTOR_BUFSIZE 64 +- buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO); +- if (!buf) { +- retval = -ENOMEM; +- continue; +- } +- +- /* Retry on all errors; some devices are flakey. +- * 255 is for WUSB devices, we actually need to use +- * 512 (WUSB1.0[4.8.1]). +- */ +- for (operations = 0; operations < GET_MAXPACKET0_TRIES; +- ++operations) { +- buf->bMaxPacketSize0 = 0; +- r = usb_control_msg(udev, usb_rcvaddr0pipe(), +- USB_REQ_GET_DESCRIPTOR, USB_DIR_IN, +- USB_DT_DEVICE << 8, 0, +- buf, GET_DESCRIPTOR_BUFSIZE, +- initial_descriptor_timeout); +- switch (buf->bMaxPacketSize0) { +- case 8: case 16: case 32: case 64: case 255: +- if (buf->bDescriptorType == +- USB_DT_DEVICE) { +- r = 0; +- break; +- } +- fallthrough; +- default: +- if (r == 0) +- r = -EPROTO; +- break; +- } +- /* +- * Some devices time out if they are powered on +- * when already connected. They need a second +- * reset. 
But only on the first attempt, +- * lest we get into a time out/reset loop +- */ +- if (r == 0 || (r == -ETIMEDOUT && +- retries == 0 && +- udev->speed > USB_SPEED_FULL)) +- break; ++ maxp0 = get_bMaxPacketSize0(udev, buf, ++ GET_DESCRIPTOR_BUFSIZE, retries == 0); ++ if (maxp0 > 0 && !initial && ++ maxp0 != udev->descriptor.bMaxPacketSize0) { ++ dev_err(&udev->dev, "device reset changed ep0 maxpacket size!\n"); ++ retval = -ENODEV; ++ goto fail; + } +- udev->descriptor.bMaxPacketSize0 = +- buf->bMaxPacketSize0; +- kfree(buf); + + retval = hub_port_reset(hub, port1, udev, delay, false); + if (retval < 0) /* error or disconnect */ +@@ -4866,14 +4909,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + retval = -ENODEV; + goto fail; + } +- if (r) { +- if (r != -ENODEV) ++ if (maxp0 < 0) { ++ if (maxp0 != -ENODEV) + dev_err(&udev->dev, "device descriptor read/64, error %d\n", +- r); +- retval = -EMSGSIZE; ++ maxp0); ++ retval = maxp0; + continue; + } +-#undef GET_DESCRIPTOR_BUFSIZE + } + + /* +@@ -4919,18 +4961,22 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + break; + } + +- retval = usb_get_device_descriptor(udev, 8); +- if (retval < 8) { ++ /* !do_new_scheme || wusb */ ++ maxp0 = get_bMaxPacketSize0(udev, buf, 8, retries == 0); ++ if (maxp0 < 0) { ++ retval = maxp0; + if (retval != -ENODEV) + dev_err(&udev->dev, + "device descriptor read/8, error %d\n", + retval); +- if (retval >= 0) +- retval = -EMSGSIZE; + } else { + u32 delay; + +- retval = 0; ++ if (!initial && maxp0 != udev->descriptor.bMaxPacketSize0) { ++ dev_err(&udev->dev, "device reset changed ep0 maxpacket size!\n"); ++ retval = -ENODEV; ++ goto fail; ++ } + + delay = udev->parent->hub_delay; + udev->hub_delay = min_t(u32, delay, +@@ -4949,48 +4995,61 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + goto fail; + + /* +- * Some superspeed devices have finished the link training process +- * and attached to a superspeed hub port, but the device descriptor +- * got from those devices show they aren't superspeed devices. Warm +- * reset the port attached by the devices can fix them. ++ * Check the ep0 maxpacket guess and correct it if necessary. ++ * maxp0 is the value stored in the device descriptor; ++ * i is the value it encodes (logarithmic for SuperSpeed or greater). 
+ */ +- if ((udev->speed >= USB_SPEED_SUPER) && +- (le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) { +- dev_err(&udev->dev, "got a wrong device descriptor, " +- "warm reset device\n"); +- hub_port_reset(hub, port1, udev, +- HUB_BH_RESET_TIME, true); +- retval = -EINVAL; +- goto fail; +- } +- +- if (udev->descriptor.bMaxPacketSize0 == 0xff || +- udev->speed >= USB_SPEED_SUPER) +- i = 512; +- else +- i = udev->descriptor.bMaxPacketSize0; +- if (usb_endpoint_maxp(&udev->ep0.desc) != i) { +- if (udev->speed == USB_SPEED_LOW || +- !(i == 8 || i == 16 || i == 32 || i == 64)) { +- dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", i); +- retval = -EMSGSIZE; +- goto fail; +- } ++ i = maxp0; ++ if (udev->speed >= USB_SPEED_SUPER) { ++ if (maxp0 <= 16) ++ i = 1 << maxp0; ++ else ++ i = 0; /* Invalid */ ++ } ++ if (usb_endpoint_maxp(&udev->ep0.desc) == i) { ++ ; /* Initial ep0 maxpacket guess is right */ ++ } else if ((udev->speed == USB_SPEED_FULL || ++ udev->speed == USB_SPEED_HIGH) && ++ (i == 8 || i == 16 || i == 32 || i == 64)) { ++ /* Initial guess is wrong; use the descriptor's value */ + if (udev->speed == USB_SPEED_FULL) + dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i); + else + dev_warn(&udev->dev, "Using ep0 maxpacket: %d\n", i); + udev->ep0.desc.wMaxPacketSize = cpu_to_le16(i); + usb_ep0_reinit(udev); ++ } else { ++ /* Initial guess is wrong and descriptor's value is invalid */ ++ dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", maxp0); ++ retval = -EMSGSIZE; ++ goto fail; + } + +- retval = usb_get_device_descriptor(udev, USB_DT_DEVICE_SIZE); +- if (retval < (signed)sizeof(udev->descriptor)) { ++ descr = usb_get_device_descriptor(udev); ++ if (IS_ERR(descr)) { ++ retval = PTR_ERR(descr); + if (retval != -ENODEV) + dev_err(&udev->dev, "device descriptor read/all, error %d\n", + retval); +- if (retval >= 0) +- retval = -ENOMSG; ++ goto fail; ++ } ++ if (initial) ++ udev->descriptor = *descr; ++ else ++ *dev_descr = *descr; ++ kfree(descr); ++ ++ /* ++ * Some superspeed devices have finished the link training process ++ * and attached to a superspeed hub port, but the device descriptor ++ * got from those devices show they aren't superspeed devices. Warm ++ * reset the port attached by the devices can fix them. 
++ */ ++ if ((udev->speed >= USB_SPEED_SUPER) && ++ (le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) { ++ dev_err(&udev->dev, "got a wrong device descriptor, warm reset device\n"); ++ hub_port_reset(hub, port1, udev, HUB_BH_RESET_TIME, true); ++ retval = -EINVAL; + goto fail; + } + +@@ -5016,6 +5075,7 @@ fail: + hub_port_disable(hub, port1, 0); + update_devnum(udev, devnum); /* for disconnect processing */ + } ++ kfree(buf); + return retval; + } + +@@ -5096,7 +5156,7 @@ hub_power_remaining(struct usb_hub *hub) + + + static int descriptors_changed(struct usb_device *udev, +- struct usb_device_descriptor *old_device_descriptor, ++ struct usb_device_descriptor *new_device_descriptor, + struct usb_host_bos *old_bos) + { + int changed = 0; +@@ -5107,8 +5167,8 @@ static int descriptors_changed(struct usb_device *udev, + int length; + char *buf; + +- if (memcmp(&udev->descriptor, old_device_descriptor, +- sizeof(*old_device_descriptor)) != 0) ++ if (memcmp(&udev->descriptor, new_device_descriptor, ++ sizeof(*new_device_descriptor)) != 0) + return 1; + + if ((old_bos && !udev->bos) || (!old_bos && udev->bos)) +@@ -5281,7 +5341,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus, + } + + /* reset (non-USB 3.0 devices) and get descriptor */ +- status = hub_port_init(hub, udev, port1, i); ++ status = hub_port_init(hub, udev, port1, i, NULL); + if (status < 0) + goto loop; + +@@ -5428,9 +5488,8 @@ static void hub_port_connect_change(struct usb_hub *hub, int port1, + { + struct usb_port *port_dev = hub->ports[port1 - 1]; + struct usb_device *udev = port_dev->child; +- struct usb_device_descriptor descriptor; ++ struct usb_device_descriptor *descr; + int status = -ENODEV; +- int retval; + + dev_dbg(&port_dev->dev, "status %04x, change %04x, %s\n", portstatus, + portchange, portspeed(hub, portstatus)); +@@ -5457,23 +5516,20 @@ static void hub_port_connect_change(struct usb_hub *hub, int port1, + * changed device descriptors before resuscitating the + * device. + */ +- descriptor = udev->descriptor; +- retval = usb_get_device_descriptor(udev, +- sizeof(udev->descriptor)); +- if (retval < 0) { ++ descr = usb_get_device_descriptor(udev); ++ if (IS_ERR(descr)) { + dev_dbg(&udev->dev, +- "can't read device descriptor %d\n", +- retval); ++ "can't read device descriptor %ld\n", ++ PTR_ERR(descr)); + } else { +- if (descriptors_changed(udev, &descriptor, ++ if (descriptors_changed(udev, descr, + udev->bos)) { + dev_dbg(&udev->dev, + "device descriptor has changed\n"); +- /* for disconnect() calls */ +- udev->descriptor = descriptor; + } else { + status = 0; /* Nothing to do */ + } ++ kfree(descr); + } + #ifdef CONFIG_PM + } else if (udev->state == USB_STATE_SUSPENDED && +@@ -5911,7 +5967,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev) + struct usb_device *parent_hdev = udev->parent; + struct usb_hub *parent_hub; + struct usb_hcd *hcd = bus_to_hcd(udev->bus); +- struct usb_device_descriptor descriptor = udev->descriptor; ++ struct usb_device_descriptor descriptor; + struct usb_host_bos *bos; + int i, j, ret = 0; + int port1 = udev->portnum; +@@ -5943,7 +5999,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev) + /* ep0 maxpacket size may change; let the HCD know about it. + * Other endpoints will be handled by re-enumeration. 
*/ + usb_ep0_reinit(udev); +- ret = hub_port_init(parent_hub, udev, port1, i); ++ ret = hub_port_init(parent_hub, udev, port1, i, &descriptor); + if (ret >= 0 || ret == -ENOTCONN || ret == -ENODEV) + break; + } +@@ -5955,7 +6011,6 @@ static int usb_reset_and_verify_device(struct usb_device *udev) + /* Device might have changed firmware (DFU or similar) */ + if (descriptors_changed(udev, &descriptor, bos)) { + dev_info(&udev->dev, "device firmware changed\n"); +- udev->descriptor = descriptor; /* for disconnect() calls */ + goto re_enumerate; + } + +diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c +index 4d59d927ae3e3..1673e5d089263 100644 +--- a/drivers/usb/core/message.c ++++ b/drivers/usb/core/message.c +@@ -1039,40 +1039,35 @@ char *usb_cache_string(struct usb_device *udev, int index) + } + + /* +- * usb_get_device_descriptor - (re)reads the device descriptor (usbcore) +- * @dev: the device whose device descriptor is being updated +- * @size: how much of the descriptor to read ++ * usb_get_device_descriptor - read the device descriptor ++ * @udev: the device whose device descriptor should be read + * + * Context: task context, might sleep. + * +- * Updates the copy of the device descriptor stored in the device structure, +- * which dedicates space for this purpose. +- * + * Not exported, only for use by the core. If drivers really want to read + * the device descriptor directly, they can call usb_get_descriptor() with + * type = USB_DT_DEVICE and index = 0. + * +- * This call is synchronous, and may not be used in an interrupt context. +- * +- * Return: The number of bytes received on success, or else the status code +- * returned by the underlying usb_control_msg() call. ++ * Returns: a pointer to a dynamically allocated usb_device_descriptor ++ * structure (which the caller must deallocate), or an ERR_PTR value. 
+ */ +-int usb_get_device_descriptor(struct usb_device *dev, unsigned int size) ++struct usb_device_descriptor *usb_get_device_descriptor(struct usb_device *udev) + { + struct usb_device_descriptor *desc; + int ret; + +- if (size > sizeof(*desc)) +- return -EINVAL; + desc = kmalloc(sizeof(*desc), GFP_NOIO); + if (!desc) +- return -ENOMEM; ++ return ERR_PTR(-ENOMEM); ++ ++ ret = usb_get_descriptor(udev, USB_DT_DEVICE, 0, desc, sizeof(*desc)); ++ if (ret == sizeof(*desc)) ++ return desc; + +- ret = usb_get_descriptor(dev, USB_DT_DEVICE, 0, desc, size); + if (ret >= 0) +- memcpy(&dev->descriptor, desc, size); ++ ret = -EMSGSIZE; + kfree(desc); +- return ret; ++ return ERR_PTR(ret); + } + + /* +diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h +index 82538daac8b89..3bb2e1db42b5d 100644 +--- a/drivers/usb/core/usb.h ++++ b/drivers/usb/core/usb.h +@@ -42,8 +42,8 @@ extern bool usb_endpoint_is_ignored(struct usb_device *udev, + struct usb_endpoint_descriptor *epd); + extern int usb_remove_device(struct usb_device *udev); + +-extern int usb_get_device_descriptor(struct usb_device *dev, +- unsigned int size); ++extern struct usb_device_descriptor *usb_get_device_descriptor( ++ struct usb_device *udev); + extern int usb_set_isoch_delay(struct usb_device *dev); + extern int usb_get_bos_descriptor(struct usb_device *dev); + extern void usb_release_bos_descriptor(struct usb_device *dev); +diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c +index 3abf7f586e2af..7b9a4cf9b100c 100644 +--- a/drivers/usb/gadget/function/f_mass_storage.c ++++ b/drivers/usb/gadget/function/f_mass_storage.c +@@ -926,7 +926,7 @@ static void invalidate_sub(struct fsg_lun *curlun) + { + struct file *filp = curlun->filp; + struct inode *inode = file_inode(filp); +- unsigned long rc; ++ unsigned long __maybe_unused rc; + + rc = invalidate_mapping_pages(inode->i_mapping, 0, -1); + VLDBG(curlun, "invalidate_mapping_pages -> %ld\n", rc); +diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c +index 316e9cc3987be..1c0c61e8ba696 100644 +--- a/drivers/usb/gadget/udc/core.c ++++ b/drivers/usb/gadget/udc/core.c +@@ -40,6 +40,7 @@ static struct bus_type gadget_bus_type; + * @allow_connect: Indicates whether UDC is allowed to be pulled up. + * Set/cleared by gadget_(un)bind_driver() after gadget driver is bound or + * unbound. ++ * @vbus_work: work routine to handle VBUS status change notifications. + * @connect_lock: protects udc->started, gadget->connect, + * gadget->allow_connect and gadget->deactivate. 
The routines + * usb_gadget_connect_locked(), usb_gadget_disconnect_locked(), +diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c +index d2836ef5d15c7..9299df53eb9df 100644 +--- a/drivers/usb/phy/phy-mxs-usb.c ++++ b/drivers/usb/phy/phy-mxs-usb.c +@@ -388,14 +388,8 @@ static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect) + + static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy) + { +- void __iomem *base = mxs_phy->phy.io_priv; +- u32 phyctrl = readl(base + HW_USBPHY_CTRL); +- +- if (IS_ENABLED(CONFIG_USB_OTG) && +- !(phyctrl & BM_USBPHY_CTRL_OTG_ID_VALUE)) +- return true; +- +- return false; ++ return IS_ENABLED(CONFIG_USB_OTG) && ++ mxs_phy->phy.last_event == USB_EVENT_ID; + } + + static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on) +diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c +index 31c2a3130cadb..69442a8135856 100644 +--- a/drivers/usb/typec/bus.c ++++ b/drivers/usb/typec/bus.c +@@ -154,12 +154,20 @@ EXPORT_SYMBOL_GPL(typec_altmode_exit); + * + * Notifies the partner of @adev about Attention command. + */ +-void typec_altmode_attention(struct typec_altmode *adev, u32 vdo) ++int typec_altmode_attention(struct typec_altmode *adev, u32 vdo) + { +- struct typec_altmode *pdev = &to_altmode(adev)->partner->adev; ++ struct altmode *partner = to_altmode(adev)->partner; ++ struct typec_altmode *pdev; ++ ++ if (!partner) ++ return -ENODEV; ++ ++ pdev = &partner->adev; + + if (pdev->ops && pdev->ops->attention) + pdev->ops->attention(pdev, vdo); ++ ++ return 0; + } + EXPORT_SYMBOL_GPL(typec_altmode_attention); + +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c +index 5f45b82dd1914..ad4d0314d27fa 100644 +--- a/drivers/usb/typec/tcpm/tcpm.c ++++ b/drivers/usb/typec/tcpm/tcpm.c +@@ -1871,7 +1871,8 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port, + } + break; + case ADEV_ATTENTION: +- typec_altmode_attention(adev, p[1]); ++ if (typec_altmode_attention(adev, p[1])) ++ tcpm_log(port, "typec_altmode_attention no port partner altmode"); + break; + } + } +@@ -3929,6 +3930,29 @@ static enum typec_cc_status tcpm_pwr_opmode_to_rp(enum typec_pwr_opmode opmode) + } + } + ++static void tcpm_set_initial_svdm_version(struct tcpm_port *port) ++{ ++ switch (port->negotiated_rev) { ++ case PD_REV30: ++ break; ++ /* ++ * 6.4.4.2.3 Structured VDM Version ++ * 2.0 states "At this time, there is only one version (1.0) defined. ++ * This field Shall be set to zero to indicate Version 1.0." ++ * 3.0 states "This field Shall be set to 01b to indicate Version 2.0." ++ * To ensure that we follow the Power Delivery revision we are currently ++ * operating on, downgrade the SVDM version to the highest one supported ++ * by the Power Delivery revision. ++ */ ++ case PD_REV20: ++ typec_partner_set_svdm_version(port->partner, SVDM_VER_1_0); ++ break; ++ default: ++ typec_partner_set_svdm_version(port->partner, SVDM_VER_1_0); ++ break; ++ } ++} ++ + static void run_state_machine(struct tcpm_port *port) + { + int ret; +@@ -4153,10 +4177,12 @@ static void run_state_machine(struct tcpm_port *port) + * For now, this driver only supports SOP for DISCOVER_IDENTITY, thus using + * port->explicit_contract to decide whether to send the command. 
+ */ +- if (port->explicit_contract) ++ if (port->explicit_contract) { ++ tcpm_set_initial_svdm_version(port); + mod_send_discover_delayed_work(port, 0); +- else ++ } else { + port->send_discover = false; ++ } + + /* + * 6.3.5 +@@ -4439,10 +4465,12 @@ static void run_state_machine(struct tcpm_port *port) + * For now, this driver only supports SOP for DISCOVER_IDENTITY, thus using + * port->explicit_contract. + */ +- if (port->explicit_contract) ++ if (port->explicit_contract) { ++ tcpm_set_initial_svdm_version(port); + mod_send_discover_delayed_work(port, 0); +- else ++ } else { + port->send_discover = false; ++ } + + power_supply_changed(port->psy); + break; +diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c +index 009ba186652ac..18a2dbbc77799 100644 +--- a/drivers/vfio/vfio_iommu_type1.c ++++ b/drivers/vfio/vfio_iommu_type1.c +@@ -2822,7 +2822,7 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu, + static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu, + struct vfio_info_cap *caps) + { +- struct vfio_iommu_type1_info_cap_migration cap_mig; ++ struct vfio_iommu_type1_info_cap_migration cap_mig = {}; + + cap_mig.header.id = VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION; + cap_mig.header.version = 1; +diff --git a/drivers/video/backlight/bd6107.c b/drivers/video/backlight/bd6107.c +index a506872d43963..94fe628dd88c0 100644 +--- a/drivers/video/backlight/bd6107.c ++++ b/drivers/video/backlight/bd6107.c +@@ -104,7 +104,7 @@ static int bd6107_backlight_check_fb(struct backlight_device *backlight, + { + struct bd6107 *bd = bl_get_data(backlight); + +- return bd->pdata->fbdev == NULL || bd->pdata->fbdev == info->dev; ++ return bd->pdata->fbdev == NULL || bd->pdata->fbdev == info->device; + } + + static const struct backlight_ops bd6107_backlight_ops = { +diff --git a/drivers/video/backlight/gpio_backlight.c b/drivers/video/backlight/gpio_backlight.c +index 6f78d928f054a..5c5c99f7979e3 100644 +--- a/drivers/video/backlight/gpio_backlight.c ++++ b/drivers/video/backlight/gpio_backlight.c +@@ -35,7 +35,7 @@ static int gpio_backlight_check_fb(struct backlight_device *bl, + { + struct gpio_backlight *gbl = bl_get_data(bl); + +- return gbl->fbdev == NULL || gbl->fbdev == info->dev; ++ return gbl->fbdev == NULL || gbl->fbdev == info->device; + } + + static const struct backlight_ops gpio_backlight_ops = { +diff --git a/drivers/video/backlight/lv5207lp.c b/drivers/video/backlight/lv5207lp.c +index 767b800d79faf..8a027a5ea552b 100644 +--- a/drivers/video/backlight/lv5207lp.c ++++ b/drivers/video/backlight/lv5207lp.c +@@ -67,7 +67,7 @@ static int lv5207lp_backlight_check_fb(struct backlight_device *backlight, + { + struct lv5207lp *lv = bl_get_data(backlight); + +- return lv->pdata->fbdev == NULL || lv->pdata->fbdev == info->dev; ++ return lv->pdata->fbdev == NULL || lv->pdata->fbdev == info->device; + } + + static const struct backlight_ops lv5207lp_backlight_ops = { +diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c +index 90d514c141794..7d320f799ca1e 100644 +--- a/drivers/virtio/virtio_ring.c ++++ b/drivers/virtio/virtio_ring.c +@@ -1449,7 +1449,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq, + } + } + +- if (i < head) ++ if (i <= head) + vq->packed.avail_wrap_counter ^= 1; + + /* We're using some buffers from the free list. 
*/ +diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c +index 739e7d55c9e3d..1bf5c51a4c23b 100644 +--- a/fs/dlm/plock.c ++++ b/fs/dlm/plock.c +@@ -455,7 +455,8 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count, + } + } else { + list_for_each_entry(iter, &recv_list, list) { +- if (!iter->info.wait) { ++ if (!iter->info.wait && ++ iter->info.fsid == info.fsid) { + op = iter; + break; + } +@@ -467,8 +468,7 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count, + if (info.wait) + WARN_ON(op->info.optype != DLM_PLOCK_OP_LOCK); + else +- WARN_ON(op->info.fsid != info.fsid || +- op->info.number != info.number || ++ WARN_ON(op->info.number != info.number || + op->info.owner != info.owner || + op->info.optype != info.optype); + +diff --git a/fs/eventfd.c b/fs/eventfd.c +index 249ca6c0b7843..4a60ea932e3d9 100644 +--- a/fs/eventfd.c ++++ b/fs/eventfd.c +@@ -189,7 +189,7 @@ void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt) + { + lockdep_assert_held(&ctx->wqh.lock); + +- *cnt = (ctx->flags & EFD_SEMAPHORE) ? 1 : ctx->count; ++ *cnt = ((ctx->flags & EFD_SEMAPHORE) && ctx->count) ? 1 : ctx->count; + ctx->count -= *cnt; + } + EXPORT_SYMBOL_GPL(eventfd_ctx_do_read); +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c +index 88ed64ebae3e7..016925b1a0908 100644 +--- a/fs/ext4/mballoc.c ++++ b/fs/ext4/mballoc.c +@@ -966,8 +966,9 @@ static inline int should_optimize_scan(struct ext4_allocation_context *ac) + * Return next linear group for allocation. If linear traversal should not be + * performed, this function just returns the same group + */ +-static int +-next_linear_group(struct ext4_allocation_context *ac, int group, int ngroups) ++static ext4_group_t ++next_linear_group(struct ext4_allocation_context *ac, ext4_group_t group, ++ ext4_group_t ngroups) + { + if (!should_optimize_scan(ac)) + goto inc_and_return; +@@ -2401,7 +2402,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac, + + BUG_ON(cr < 0 || cr >= 4); + +- if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp)) ++ if (unlikely(!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp))) + return false; + + free = grp->bb_free; +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index 0e1aeb9cb4a7c..6a08fc31a66de 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -2799,6 +2799,7 @@ static int ext4_add_nondir(handle_t *handle, + return err; + } + drop_nlink(inode); ++ ext4_mark_inode_dirty(handle, inode); + ext4_orphan_add(handle, inode); + unlock_new_inode(inode); + return err; +@@ -3436,6 +3437,7 @@ retry: + + err_drop_inode: + clear_nlink(inode); ++ ext4_mark_inode_dirty(handle, inode); + ext4_orphan_add(handle, inode); + unlock_new_inode(inode); + if (handle) +@@ -4021,6 +4023,7 @@ end_rename: + ext4_resetent(handle, &old, + old.inode->i_ino, old_file_type); + drop_nlink(whiteout); ++ ext4_mark_inode_dirty(handle, whiteout); + ext4_orphan_add(handle, whiteout); + } + unlock_new_inode(whiteout); +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 4d1e48c676fab..c2b7d09238941 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -4453,7 +4453,8 @@ static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi) + static inline bool f2fs_may_compress(struct inode *inode) + { + if (IS_SWAPFILE(inode) || f2fs_is_pinned_file(inode) || +- f2fs_is_atomic_file(inode) || f2fs_has_inline_data(inode)) ++ f2fs_is_atomic_file(inode) || f2fs_has_inline_data(inode) || ++ f2fs_is_mmap_file(inode)) + return false; + return S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode); + } +diff --git 
a/fs/f2fs/file.c b/fs/f2fs/file.c +index 7b94f047cbf79..746c71716bead 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -530,7 +530,11 @@ static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma) + + file_accessed(file); + vma->vm_ops = &f2fs_file_vm_ops; ++ ++ f2fs_down_read(&F2FS_I(inode)->i_sem); + set_inode_flag(inode, FI_MMAP_FILE); ++ f2fs_up_read(&F2FS_I(inode)->i_sem); ++ + return 0; + } + +@@ -1927,12 +1931,19 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask) + int err = f2fs_convert_inline_inode(inode); + if (err) + return err; +- if (!f2fs_may_compress(inode)) +- return -EINVAL; +- if (S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode)) ++ ++ f2fs_down_write(&F2FS_I(inode)->i_sem); ++ if (!f2fs_may_compress(inode) || ++ (S_ISREG(inode->i_mode) && ++ F2FS_HAS_BLOCKS(inode))) { ++ f2fs_up_write(&F2FS_I(inode)->i_sem); + return -EINVAL; +- if (set_compress_context(inode)) +- return -EOPNOTSUPP; ++ } ++ err = set_compress_context(inode); ++ f2fs_up_write(&F2FS_I(inode)->i_sem); ++ ++ if (err) ++ return err; + } + } + +@@ -3958,6 +3969,7 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg) + file_start_write(filp); + inode_lock(inode); + ++ f2fs_down_write(&F2FS_I(inode)->i_sem); + if (f2fs_is_mmap_file(inode) || get_dirty_pages(inode)) { + ret = -EBUSY; + goto out; +@@ -3977,6 +3989,7 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg) + f2fs_warn(sbi, "compression algorithm is successfully set, " + "but current kernel doesn't support this algorithm."); + out: ++ f2fs_up_write(&F2FS_I(inode)->i_sem); + inode_unlock(inode); + file_end_write(filp); + +diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c +index aab3b8b3ab0a7..1fc7760499f10 100644 +--- a/fs/f2fs/inode.c ++++ b/fs/f2fs/inode.c +@@ -397,6 +397,12 @@ static int do_read_inode(struct inode *inode) + fi->i_inline_xattr_size = 0; + } + ++ if (!sanity_check_inode(inode, node_page)) { ++ f2fs_put_page(node_page, 1); ++ f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); ++ return -EFSCORRUPTED; ++ } ++ + /* check data exist */ + if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode)) + __recover_inline_status(inode, node_page); +@@ -459,12 +465,6 @@ static int do_read_inode(struct inode *inode) + /* Need all the flag bits */ + f2fs_init_read_extent_tree(inode, node_page); + +- if (!sanity_check_inode(inode, node_page)) { +- f2fs_put_page(node_page, 1); +- f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); +- return -EFSCORRUPTED; +- } +- + if (!sanity_check_extent_cache(inode)) { + f2fs_put_page(node_page, 1); + f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index b6dad389fa144..2046f633fe57a 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -858,11 +858,6 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount) + if (!name) + return -ENOMEM; + if (!strcmp(name, "adaptive")) { +- if (f2fs_sb_has_blkzoned(sbi)) { +- f2fs_warn(sbi, "adaptive mode is not allowed with zoned block device feature"); +- kfree(name); +- return -EINVAL; +- } + F2FS_OPTION(sbi).fs_mode = FS_MODE_ADAPTIVE; + } else if (!strcmp(name, "lfs")) { + F2FS_OPTION(sbi).fs_mode = FS_MODE_LFS; +@@ -1285,19 +1280,23 @@ default_check: + * zone alignment optimization. This is optional for host-aware + * devices, but mandatory for host-managed zoned block devices. 
+ */ +-#ifndef CONFIG_BLK_DEV_ZONED +- if (f2fs_sb_has_blkzoned(sbi)) { +- f2fs_err(sbi, "Zoned block device support is not enabled"); +- return -EINVAL; +- } +-#endif + if (f2fs_sb_has_blkzoned(sbi)) { ++#ifdef CONFIG_BLK_DEV_ZONED + if (F2FS_OPTION(sbi).discard_unit != + DISCARD_UNIT_SECTION) { + f2fs_info(sbi, "Zoned block device doesn't need small discard, set discard_unit=section by default"); + F2FS_OPTION(sbi).discard_unit = + DISCARD_UNIT_SECTION; + } ++ ++ if (F2FS_OPTION(sbi).fs_mode != FS_MODE_LFS) { ++ f2fs_info(sbi, "Only lfs mode is allowed with zoned block device feature"); ++ return -EINVAL; ++ } ++#else ++ f2fs_err(sbi, "Zoned block device support is not enabled"); ++ return -EINVAL; ++#endif + } + + #ifdef CONFIG_F2FS_FS_COMPRESSION +diff --git a/fs/fs_context.c b/fs/fs_context.c +index 851214d1d013d..375023e40161d 100644 +--- a/fs/fs_context.c ++++ b/fs/fs_context.c +@@ -315,10 +315,31 @@ struct fs_context *fs_context_for_reconfigure(struct dentry *dentry, + } + EXPORT_SYMBOL(fs_context_for_reconfigure); + ++/** ++ * fs_context_for_submount: allocate a new fs_context for a submount ++ * @type: file_system_type of the new context ++ * @reference: reference dentry from which to copy relevant info ++ * ++ * Allocate a new fs_context suitable for a submount. This also ensures that ++ * the fc->security object is inherited from @reference (if needed). ++ */ + struct fs_context *fs_context_for_submount(struct file_system_type *type, + struct dentry *reference) + { +- return alloc_fs_context(type, reference, 0, 0, FS_CONTEXT_FOR_SUBMOUNT); ++ struct fs_context *fc; ++ int ret; ++ ++ fc = alloc_fs_context(type, reference, 0, 0, FS_CONTEXT_FOR_SUBMOUNT); ++ if (IS_ERR(fc)) ++ return fc; ++ ++ ret = security_fs_context_submount(fc, reference->d_sb); ++ if (ret) { ++ put_fs_context(fc); ++ return ERR_PTR(ret); ++ } ++ ++ return fc; + } + EXPORT_SYMBOL(fs_context_for_submount); + +diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c +index 91ee0b308e13d..a0a4d8de82cad 100644 +--- a/fs/iomap/buffered-io.c ++++ b/fs/iomap/buffered-io.c +@@ -488,11 +488,6 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len) + WARN_ON_ONCE(folio_test_writeback(folio)); + folio_cancel_dirty(folio); + iomap_page_release(folio); +- } else if (folio_test_large(folio)) { +- /* Must release the iop so the page can be split */ +- WARN_ON_ONCE(!folio_test_uptodate(folio) && +- folio_test_dirty(folio)); +- iomap_page_release(folio); + } + } + EXPORT_SYMBOL_GPL(iomap_invalidate_folio); +diff --git a/fs/jfs/jfs_extent.c b/fs/jfs/jfs_extent.c +index ae99a7e232eeb..a82751e6c47f9 100644 +--- a/fs/jfs/jfs_extent.c ++++ b/fs/jfs/jfs_extent.c +@@ -311,6 +311,11 @@ extBalloc(struct inode *ip, s64 hint, s64 * nblocks, s64 * blkno) + * blocks in the map. in that case, we'll start off with the + * maximum free. + */ ++ ++ /* give up if no space left */ ++ if (bmp->db_maxfreebud == -1) ++ return -ENOSPC; ++ + max = (s64) 1 << bmp->db_maxfreebud; + if (*nblocks >= max && *nblocks > nbperpage) + nb = nblks = (max > nbperpage) ? 
max : nbperpage; +diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c +index 1d9488cf05348..87a0f207df0b9 100644 +--- a/fs/lockd/mon.c ++++ b/fs/lockd/mon.c +@@ -276,6 +276,9 @@ static struct nsm_handle *nsm_create_handle(const struct sockaddr *sap, + { + struct nsm_handle *new; + ++ if (!hostname) ++ return NULL; ++ + new = kzalloc(sizeof(*new) + hostname_len + 1, GFP_KERNEL); + if (unlikely(new == NULL)) + return NULL; +diff --git a/fs/namei.c b/fs/namei.c +index 5b3865ad9d052..4248647f1ab24 100644 +--- a/fs/namei.c ++++ b/fs/namei.c +@@ -2859,7 +2859,7 @@ int path_pts(struct path *path) + dput(path->dentry); + path->dentry = parent; + child = d_hash_and_lookup(parent, &this); +- if (!child) ++ if (IS_ERR_OR_NULL(child)) + return -ENOENT; + + path->dentry = child; +diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c +index fea5f8821da5e..ce2ea62397972 100644 +--- a/fs/nfs/blocklayout/dev.c ++++ b/fs/nfs/blocklayout/dev.c +@@ -402,7 +402,7 @@ bl_parse_concat(struct nfs_server *server, struct pnfs_block_dev *d, + int ret, i; + + d->children = kcalloc(v->concat.volumes_count, +- sizeof(struct pnfs_block_dev), GFP_KERNEL); ++ sizeof(struct pnfs_block_dev), gfp_mask); + if (!d->children) + return -ENOMEM; + +@@ -431,7 +431,7 @@ bl_parse_stripe(struct nfs_server *server, struct pnfs_block_dev *d, + int ret, i; + + d->children = kcalloc(v->stripe.volumes_count, +- sizeof(struct pnfs_block_dev), GFP_KERNEL); ++ sizeof(struct pnfs_block_dev), gfp_mask); + if (!d->children) + return -ENOMEM; + +diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h +index ae7d4a8c728c2..4b07a0508f9d8 100644 +--- a/fs/nfs/internal.h ++++ b/fs/nfs/internal.h +@@ -484,6 +484,7 @@ struct nfs_pgio_completion_ops; + extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, + struct inode *inode, bool force_mds, + const struct nfs_pgio_completion_ops *compl_ops); ++extern bool nfs_read_alloc_scratch(struct nfs_pgio_header *hdr, size_t size); + extern void nfs_read_prepare(struct rpc_task *task, void *calldata); + extern void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio); + +diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c +index 05c3b4b2b3dd8..c190938142960 100644 +--- a/fs/nfs/nfs2xdr.c ++++ b/fs/nfs/nfs2xdr.c +@@ -949,7 +949,7 @@ int nfs2_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry, + + error = decode_filename_inline(xdr, &entry->name, &entry->len); + if (unlikely(error)) +- return -EAGAIN; ++ return error == -ENAMETOOLONG ? -ENAMETOOLONG : -EAGAIN; + + /* + * The type (size and byte order) of nfscookie isn't defined in +diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c +index 3b0b650c9c5ab..60f032be805ae 100644 +--- a/fs/nfs/nfs3xdr.c ++++ b/fs/nfs/nfs3xdr.c +@@ -1991,7 +1991,7 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry, + + error = decode_inline_filename3(xdr, &entry->name, &entry->len); + if (unlikely(error)) +- return -EAGAIN; ++ return error == -ENAMETOOLONG ? -ENAMETOOLONG : -EAGAIN; + + error = decode_cookie3(xdr, &new_cookie); + if (unlikely(error)) +diff --git a/fs/nfs/nfs42.h b/fs/nfs/nfs42.h +index 0fe5aacbcfdf1..b59876b01a1e3 100644 +--- a/fs/nfs/nfs42.h ++++ b/fs/nfs/nfs42.h +@@ -13,6 +13,7 @@ + * more? Need to consider not to pre-alloc too much for a compound. 
+ */ + #define PNFS_LAYOUTSTATS_MAXDEV (4) ++#define READ_PLUS_SCRATCH_SIZE (16) + + /* nfs4.2proc.c */ + #ifdef CONFIG_NFS_V4_2 +diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c +index 7c33bba179d2f..d903ea10410c2 100644 +--- a/fs/nfs/nfs42proc.c ++++ b/fs/nfs/nfs42proc.c +@@ -470,8 +470,9 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src, + continue; + } + break; +- } else if (err == -NFS4ERR_OFFLOAD_NO_REQS && !args.sync) { +- args.sync = true; ++ } else if (err == -NFS4ERR_OFFLOAD_NO_REQS && ++ args.sync != res.synchronous) { ++ args.sync = res.synchronous; + dst_exception.retry = 1; + continue; + } else if ((err == -ESTALE || +diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c +index 2fd465cab631d..20aa5e746497d 100644 +--- a/fs/nfs/nfs42xdr.c ++++ b/fs/nfs/nfs42xdr.c +@@ -47,13 +47,20 @@ + #define decode_deallocate_maxsz (op_decode_hdr_maxsz) + #define encode_read_plus_maxsz (op_encode_hdr_maxsz + \ + encode_stateid_maxsz + 3) +-#define NFS42_READ_PLUS_SEGMENT_SIZE (1 /* data_content4 */ + \ ++#define NFS42_READ_PLUS_DATA_SEGMENT_SIZE \ ++ (1 /* data_content4 */ + \ ++ 2 /* data_info4.di_offset */ + \ ++ 1 /* data_info4.di_length */) ++#define NFS42_READ_PLUS_HOLE_SEGMENT_SIZE \ ++ (1 /* data_content4 */ + \ + 2 /* data_info4.di_offset */ + \ + 2 /* data_info4.di_length */) ++#define READ_PLUS_SEGMENT_SIZE_DIFF (NFS42_READ_PLUS_HOLE_SEGMENT_SIZE - \ ++ NFS42_READ_PLUS_DATA_SEGMENT_SIZE) + #define decode_read_plus_maxsz (op_decode_hdr_maxsz + \ + 1 /* rpr_eof */ + \ + 1 /* rpr_contents count */ + \ +- 2 * NFS42_READ_PLUS_SEGMENT_SIZE) ++ NFS42_READ_PLUS_HOLE_SEGMENT_SIZE) + #define encode_seek_maxsz (op_encode_hdr_maxsz + \ + encode_stateid_maxsz + \ + 2 /* offset */ + \ +@@ -780,8 +787,8 @@ static void nfs4_xdr_enc_read_plus(struct rpc_rqst *req, + encode_putfh(xdr, args->fh, &hdr); + encode_read_plus(xdr, args, &hdr); + +- rpc_prepare_reply_pages(req, args->pages, args->pgbase, +- args->count, hdr.replen); ++ rpc_prepare_reply_pages(req, args->pages, args->pgbase, args->count, ++ hdr.replen - READ_PLUS_SEGMENT_SIZE_DIFF); + encode_nops(&hdr); + } + +@@ -1121,7 +1128,6 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res) + uint32_t segments; + struct read_plus_segment *segs; + int status, i; +- char scratch_buf[16]; + __be32 *p; + + status = decode_op_hdr(xdr, OP_READ_PLUS); +@@ -1136,14 +1142,12 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res) + res->eof = be32_to_cpup(p++); + segments = be32_to_cpup(p++); + if (segments == 0) +- return status; ++ return 0; + + segs = kmalloc_array(segments, sizeof(*segs), GFP_KERNEL); + if (!segs) + return -ENOMEM; + +- xdr_set_scratch_buffer(xdr, &scratch_buf, sizeof(scratch_buf)); +- status = -EIO; + for (i = 0; i < segments; i++) { + status = decode_read_plus_segment(xdr, &segs[i]); + if (status < 0) +@@ -1347,6 +1351,8 @@ static int nfs4_xdr_dec_read_plus(struct rpc_rqst *rqstp, + struct compound_hdr hdr; + int status; + ++ xdr_set_scratch_buffer(xdr, res->scratch, READ_PLUS_SCRATCH_SIZE); ++ + status = decode_compound_hdr(xdr, &hdr); + if (status) + goto out; +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 1044305e77996..2dec0fed1ba16 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -5459,17 +5459,21 @@ static int nfs4_read_done(struct rpc_task *task, struct nfs_pgio_header *hdr) + } + + #if defined CONFIG_NFS_V4_2 && defined CONFIG_NFS_V4_2_READ_PLUS +-static void nfs42_read_plus_support(struct nfs_pgio_header *hdr, ++static bool 
nfs42_read_plus_support(struct nfs_pgio_header *hdr, + struct rpc_message *msg) + { + /* Note: We don't use READ_PLUS with pNFS yet */ +- if (nfs_server_capable(hdr->inode, NFS_CAP_READ_PLUS) && !hdr->ds_clp) ++ if (nfs_server_capable(hdr->inode, NFS_CAP_READ_PLUS) && !hdr->ds_clp) { + msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ_PLUS]; ++ return nfs_read_alloc_scratch(hdr, READ_PLUS_SCRATCH_SIZE); ++ } ++ return false; + } + #else +-static void nfs42_read_plus_support(struct nfs_pgio_header *hdr, ++static bool nfs42_read_plus_support(struct nfs_pgio_header *hdr, + struct rpc_message *msg) + { ++ return false; + } + #endif /* CONFIG_NFS_V4_2 */ + +@@ -5479,8 +5483,8 @@ static void nfs4_proc_read_setup(struct nfs_pgio_header *hdr, + hdr->timestamp = jiffies; + if (!hdr->pgio_done_cb) + hdr->pgio_done_cb = nfs4_read_done_cb; +- msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ]; +- nfs42_read_plus_support(hdr, msg); ++ if (!nfs42_read_plus_support(hdr, msg)) ++ msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ]; + nfs4_init_sequence(&hdr->args.seq_args, &hdr->res.seq_res, 0, 0); + } + +diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c +index 5d035dd2d7bf0..47a8da3f5c9ff 100644 +--- a/fs/nfs/pnfs_nfs.c ++++ b/fs/nfs/pnfs_nfs.c +@@ -943,7 +943,7 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv, + * Test this address for session trunking and + * add as an alias + */ +- xprtdata.cred = nfs4_get_clid_cred(clp), ++ xprtdata.cred = nfs4_get_clid_cred(clp); + rpc_clnt_add_xprt(clp->cl_rpcclient, &xprt_args, + rpc_clnt_setup_test_and_add_xprt, + &rpcdata); +diff --git a/fs/nfs/read.c b/fs/nfs/read.c +index cd970ce62786b..6aad42fbf797a 100644 +--- a/fs/nfs/read.c ++++ b/fs/nfs/read.c +@@ -47,6 +47,8 @@ static struct nfs_pgio_header *nfs_readhdr_alloc(void) + + static void nfs_readhdr_free(struct nfs_pgio_header *rhdr) + { ++ if (rhdr->res.scratch != NULL) ++ kfree(rhdr->res.scratch); + kmem_cache_free(nfs_rdata_cachep, rhdr); + } + +@@ -109,6 +111,14 @@ void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio) + } + EXPORT_SYMBOL_GPL(nfs_pageio_reset_read_mds); + ++bool nfs_read_alloc_scratch(struct nfs_pgio_header *hdr, size_t size) ++{ ++ WARN_ON(hdr->res.scratch != NULL); ++ hdr->res.scratch = kmalloc(size, GFP_KERNEL); ++ return hdr->res.scratch != NULL; ++} ++EXPORT_SYMBOL_GPL(nfs_read_alloc_scratch); ++ + static void nfs_readpage_release(struct nfs_page *req, int error) + { + struct inode *inode = d_inode(nfs_req_openctx(req)->dentry); +diff --git a/fs/nfsd/blocklayoutxdr.c b/fs/nfsd/blocklayoutxdr.c +index 442543304930b..2455dc8be18a8 100644 +--- a/fs/nfsd/blocklayoutxdr.c ++++ b/fs/nfsd/blocklayoutxdr.c +@@ -82,6 +82,15 @@ nfsd4_block_encode_getdeviceinfo(struct xdr_stream *xdr, + int len = sizeof(__be32), ret, i; + __be32 *p; + ++ /* ++ * See paragraph 5 of RFC 8881 S18.40.3. ++ */ ++ if (!gdp->gd_maxcount) { ++ if (xdr_stream_encode_u32(xdr, 0) != XDR_UNIT) ++ return nfserr_resource; ++ return nfs_ok; ++ } ++ + p = xdr_reserve_space(xdr, len + sizeof(__be32)); + if (!p) + return nfserr_resource; +diff --git a/fs/nfsd/flexfilelayoutxdr.c b/fs/nfsd/flexfilelayoutxdr.c +index e81d2a5cf381e..bb205328e043d 100644 +--- a/fs/nfsd/flexfilelayoutxdr.c ++++ b/fs/nfsd/flexfilelayoutxdr.c +@@ -85,6 +85,15 @@ nfsd4_ff_encode_getdeviceinfo(struct xdr_stream *xdr, + int addr_len; + __be32 *p; + ++ /* ++ * See paragraph 5 of RFC 8881 S18.40.3. 
++ */ ++ if (!gdp->gd_maxcount) { ++ if (xdr_stream_encode_u32(xdr, 0) != XDR_UNIT) ++ return nfserr_resource; ++ return nfs_ok; ++ } ++ + /* len + padding for two strings */ + addr_len = 16 + da->netaddr.netid_len + da->netaddr.addr_len; + ver_len = 20; +diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c +index 8f90a87ee9ca0..89a579be042e5 100644 +--- a/fs/nfsd/nfs4xdr.c ++++ b/fs/nfsd/nfs4xdr.c +@@ -4571,20 +4571,17 @@ nfsd4_encode_getdeviceinfo(struct nfsd4_compoundres *resp, __be32 nfserr, + + *p++ = cpu_to_be32(gdev->gd_layout_type); + +- /* If maxcount is 0 then just update notifications */ +- if (gdev->gd_maxcount != 0) { +- ops = nfsd4_layout_ops[gdev->gd_layout_type]; +- nfserr = ops->encode_getdeviceinfo(xdr, gdev); +- if (nfserr) { +- /* +- * We don't bother to burden the layout drivers with +- * enforcing gd_maxcount, just tell the client to +- * come back with a bigger buffer if it's not enough. +- */ +- if (xdr->buf->len + 4 > gdev->gd_maxcount) +- goto toosmall; +- return nfserr; +- } ++ ops = nfsd4_layout_ops[gdev->gd_layout_type]; ++ nfserr = ops->encode_getdeviceinfo(xdr, gdev); ++ if (nfserr) { ++ /* ++ * We don't bother to burden the layout drivers with ++ * enforcing gd_maxcount, just tell the client to ++ * come back with a bigger buffer if it's not enough. ++ */ ++ if (xdr->buf->len + 4 > gdev->gd_maxcount) ++ goto toosmall; ++ return nfserr; + } + + if (gdev->gd_notify_types) { +diff --git a/fs/nls/nls_base.c b/fs/nls/nls_base.c +index 52ccd34b1e792..a026dbd3593f6 100644 +--- a/fs/nls/nls_base.c ++++ b/fs/nls/nls_base.c +@@ -272,7 +272,7 @@ int unregister_nls(struct nls_table * nls) + return -EINVAL; + } + +-static struct nls_table *find_nls(char *charset) ++static struct nls_table *find_nls(const char *charset) + { + struct nls_table *nls; + spin_lock(&nls_lock); +@@ -288,7 +288,7 @@ static struct nls_table *find_nls(char *charset) + return nls; + } + +-struct nls_table *load_nls(char *charset) ++struct nls_table *load_nls(const char *charset) + { + return try_then_request_module(find_nls(charset), "nls_%s", charset); + } +diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c +index 1c7ac433667df..04a8505bd97af 100644 +--- a/fs/ocfs2/namei.c ++++ b/fs/ocfs2/namei.c +@@ -1535,6 +1535,10 @@ static int ocfs2_rename(struct user_namespace *mnt_userns, + status = ocfs2_add_entry(handle, new_dentry, old_inode, + OCFS2_I(old_inode)->ip_blkno, + new_dir_bh, &target_insert); ++ if (status < 0) { ++ mlog_errno(status); ++ goto bail; ++ } + } + + old_inode->i_ctime = current_time(old_inode); +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c +index 51eec4a8e82b2..08d3a1f34ac6c 100644 +--- a/fs/overlayfs/super.c ++++ b/fs/overlayfs/super.c +@@ -2155,7 +2155,7 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent) + ovl_trusted_xattr_handlers; + sb->s_fs_info = ofs; + sb->s_flags |= SB_POSIXACL; +- sb->s_iflags |= SB_I_SKIP_SYNC; ++ sb->s_iflags |= SB_I_SKIP_SYNC | SB_I_IMA_UNVERIFIABLE_SIGNATURE; + + err = -ENOMEM; + root_dentry = ovl_get_root(sb, upperpath.dentry, oe); +diff --git a/fs/proc/base.c b/fs/proc/base.c +index 9e479d7d202b1..74442e01793f3 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -3581,7 +3581,8 @@ static int proc_tid_comm_permission(struct user_namespace *mnt_userns, + } + + static const struct inode_operations proc_tid_comm_inode_operations = { +- .permission = proc_tid_comm_permission, ++ .setattr = proc_setattr, ++ .permission = proc_tid_comm_permission, + }; + + /* +diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c 
+index 2384de1c2d187..1e755d093d921 100644 +--- a/fs/pstore/ram_core.c ++++ b/fs/pstore/ram_core.c +@@ -518,7 +518,7 @@ static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig, + sig ^= PERSISTENT_RAM_SIG; + + if (prz->buffer->sig == sig) { +- if (buffer_size(prz) == 0) { ++ if (buffer_size(prz) == 0 && buffer_start(prz) == 0) { + pr_debug("found existing empty buffer\n"); + return 0; + } +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index 46dca88d89c36..53b65c5300fde 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -225,13 +225,22 @@ static void put_quota_format(struct quota_format_type *fmt) + + /* + * Dquot List Management: +- * The quota code uses four lists for dquot management: the inuse_list, +- * free_dquots, dqi_dirty_list, and dquot_hash[] array. A single dquot +- * structure may be on some of those lists, depending on its current state. ++ * The quota code uses five lists for dquot management: the inuse_list, ++ * releasing_dquots, free_dquots, dqi_dirty_list, and dquot_hash[] array. ++ * A single dquot structure may be on some of those lists, depending on ++ * its current state. + * + * All dquots are placed to the end of inuse_list when first created, and this + * list is used for invalidate operation, which must look at every dquot. + * ++ * When the last reference of a dquot will be dropped, the dquot will be ++ * added to releasing_dquots. We'd then queue work item which would call ++ * synchronize_srcu() and after that perform the final cleanup of all the ++ * dquots on the list. Both releasing_dquots and free_dquots use the ++ * dq_free list_head in the dquot struct. When a dquot is removed from ++ * releasing_dquots, a reference count is always subtracted, and if ++ * dq_count == 0 at that point, the dquot will be added to the free_dquots. ++ * + * Unused dquots (dq_count == 0) are added to the free_dquots list when freed, + * and this list is searched whenever we need an available dquot. 
Dquots are + * removed from the list as soon as they are used again, and +@@ -250,6 +259,7 @@ static void put_quota_format(struct quota_format_type *fmt) + + static LIST_HEAD(inuse_list); + static LIST_HEAD(free_dquots); ++static LIST_HEAD(releasing_dquots); + static unsigned int dq_hash_bits, dq_hash_mask; + static struct hlist_head *dquot_hash; + +@@ -260,6 +270,9 @@ static qsize_t inode_get_rsv_space(struct inode *inode); + static qsize_t __inode_get_rsv_space(struct inode *inode); + static int __dquot_initialize(struct inode *inode, int type); + ++static void quota_release_workfn(struct work_struct *work); ++static DECLARE_DELAYED_WORK(quota_release_work, quota_release_workfn); ++ + static inline unsigned int + hashfn(const struct super_block *sb, struct kqid qid) + { +@@ -305,12 +318,18 @@ static inline void put_dquot_last(struct dquot *dquot) + dqstats_inc(DQST_FREE_DQUOTS); + } + ++static inline void put_releasing_dquots(struct dquot *dquot) ++{ ++ list_add_tail(&dquot->dq_free, &releasing_dquots); ++} ++ + static inline void remove_free_dquot(struct dquot *dquot) + { + if (list_empty(&dquot->dq_free)) + return; + list_del_init(&dquot->dq_free); +- dqstats_dec(DQST_FREE_DQUOTS); ++ if (!atomic_read(&dquot->dq_count)) ++ dqstats_dec(DQST_FREE_DQUOTS); + } + + static inline void put_inuse(struct dquot *dquot) +@@ -336,6 +355,11 @@ static void wait_on_dquot(struct dquot *dquot) + mutex_unlock(&dquot->dq_lock); + } + ++static inline int dquot_active(struct dquot *dquot) ++{ ++ return test_bit(DQ_ACTIVE_B, &dquot->dq_flags); ++} ++ + static inline int dquot_dirty(struct dquot *dquot) + { + return test_bit(DQ_MOD_B, &dquot->dq_flags); +@@ -351,14 +375,14 @@ int dquot_mark_dquot_dirty(struct dquot *dquot) + { + int ret = 1; + +- if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) ++ if (!dquot_active(dquot)) + return 0; + + if (sb_dqopt(dquot->dq_sb)->flags & DQUOT_NOLIST_DIRTY) + return test_and_set_bit(DQ_MOD_B, &dquot->dq_flags); + + /* If quota is dirty already, we don't have to acquire dq_list_lock */ +- if (test_bit(DQ_MOD_B, &dquot->dq_flags)) ++ if (dquot_dirty(dquot)) + return 1; + + spin_lock(&dq_list_lock); +@@ -440,7 +464,7 @@ int dquot_acquire(struct dquot *dquot) + smp_mb__before_atomic(); + set_bit(DQ_READ_B, &dquot->dq_flags); + /* Instantiate dquot if needed */ +- if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && !dquot->dq_off) { ++ if (!dquot_active(dquot) && !dquot->dq_off) { + ret = dqopt->ops[dquot->dq_id.type]->commit_dqblk(dquot); + /* Write the info if needed */ + if (info_dirty(&dqopt->info[dquot->dq_id.type])) { +@@ -482,7 +506,7 @@ int dquot_commit(struct dquot *dquot) + goto out_lock; + /* Inactive dquot can be only if there was error during read/init + * => we have better not writing it */ +- if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) ++ if (dquot_active(dquot)) + ret = dqopt->ops[dquot->dq_id.type]->commit_dqblk(dquot); + else + ret = -EIO; +@@ -547,6 +571,8 @@ static void invalidate_dquots(struct super_block *sb, int type) + struct dquot *dquot, *tmp; + + restart: ++ flush_delayed_work("a_release_work); ++ + spin_lock(&dq_list_lock); + list_for_each_entry_safe(dquot, tmp, &inuse_list, dq_inuse) { + if (dquot->dq_sb != sb) +@@ -555,6 +581,12 @@ restart: + continue; + /* Wait for dquot users */ + if (atomic_read(&dquot->dq_count)) { ++ /* dquot in releasing_dquots, flush and retry */ ++ if (!list_empty(&dquot->dq_free)) { ++ spin_unlock(&dq_list_lock); ++ goto restart; ++ } ++ + atomic_inc(&dquot->dq_count); + spin_unlock(&dq_list_lock); + /* +@@ -597,7 +629,7 @@ 
int dquot_scan_active(struct super_block *sb, + + spin_lock(&dq_list_lock); + list_for_each_entry(dquot, &inuse_list, dq_inuse) { +- if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) ++ if (!dquot_active(dquot)) + continue; + if (dquot->dq_sb != sb) + continue; +@@ -612,7 +644,7 @@ int dquot_scan_active(struct super_block *sb, + * outstanding call and recheck the DQ_ACTIVE_B after that. + */ + wait_on_dquot(dquot); +- if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) { ++ if (dquot_active(dquot)) { + ret = fn(dquot, priv); + if (ret < 0) + goto out; +@@ -628,6 +660,18 @@ out: + } + EXPORT_SYMBOL(dquot_scan_active); + ++static inline int dquot_write_dquot(struct dquot *dquot) ++{ ++ int ret = dquot->dq_sb->dq_op->write_dquot(dquot); ++ if (ret < 0) { ++ quota_error(dquot->dq_sb, "Can't write quota structure " ++ "(error %d). Quota may get out of sync!", ret); ++ /* Clear dirty bit anyway to avoid infinite loop. */ ++ clear_dquot_dirty(dquot); ++ } ++ return ret; ++} ++ + /* Write all dquot structures to quota files */ + int dquot_writeback_dquots(struct super_block *sb, int type) + { +@@ -651,23 +695,16 @@ int dquot_writeback_dquots(struct super_block *sb, int type) + dquot = list_first_entry(&dirty, struct dquot, + dq_dirty); + +- WARN_ON(!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)); ++ WARN_ON(!dquot_active(dquot)); + + /* Now we have active dquot from which someone is + * holding reference so we can safely just increase + * use count */ + dqgrab(dquot); + spin_unlock(&dq_list_lock); +- err = sb->dq_op->write_dquot(dquot); +- if (err) { +- /* +- * Clear dirty bit anyway to avoid infinite +- * loop here. +- */ +- clear_dquot_dirty(dquot); +- if (!ret) +- ret = err; +- } ++ err = dquot_write_dquot(dquot); ++ if (err && !ret) ++ ret = err; + dqput(dquot); + spin_lock(&dq_list_lock); + } +@@ -760,13 +797,54 @@ static struct shrinker dqcache_shrinker = { + .seeks = DEFAULT_SEEKS, + }; + ++/* ++ * Safely release dquot and put reference to dquot. ++ */ ++static void quota_release_workfn(struct work_struct *work) ++{ ++ struct dquot *dquot; ++ struct list_head rls_head; ++ ++ spin_lock(&dq_list_lock); ++ /* Exchange the list head to avoid livelock. */ ++ list_replace_init(&releasing_dquots, &rls_head); ++ spin_unlock(&dq_list_lock); ++ ++restart: ++ synchronize_srcu(&dquot_srcu); ++ spin_lock(&dq_list_lock); ++ while (!list_empty(&rls_head)) { ++ dquot = list_first_entry(&rls_head, struct dquot, dq_free); ++ /* Dquot got used again? */ ++ if (atomic_read(&dquot->dq_count) > 1) { ++ remove_free_dquot(dquot); ++ atomic_dec(&dquot->dq_count); ++ continue; ++ } ++ if (dquot_dirty(dquot)) { ++ spin_unlock(&dq_list_lock); ++ /* Commit dquot before releasing */ ++ dquot_write_dquot(dquot); ++ goto restart; ++ } ++ if (dquot_active(dquot)) { ++ spin_unlock(&dq_list_lock); ++ dquot->dq_sb->dq_op->release_dquot(dquot); ++ goto restart; ++ } ++ /* Dquot is inactive and clean, now move it to free list */ ++ remove_free_dquot(dquot); ++ atomic_dec(&dquot->dq_count); ++ put_dquot_last(dquot); ++ } ++ spin_unlock(&dq_list_lock); ++} ++ + /* + * Put reference to dquot + */ + void dqput(struct dquot *dquot) + { +- int ret; +- + if (!dquot) + return; + #ifdef CONFIG_QUOTA_DEBUG +@@ -778,7 +856,7 @@ void dqput(struct dquot *dquot) + } + #endif + dqstats_inc(DQST_DROPS); +-we_slept: ++ + spin_lock(&dq_list_lock); + if (atomic_read(&dquot->dq_count) > 1) { + /* We have more than one user... nothing to do */ +@@ -790,35 +868,15 @@ we_slept: + spin_unlock(&dq_list_lock); + return; + } ++ + /* Need to release dquot? 
*/ +- if (dquot_dirty(dquot)) { +- spin_unlock(&dq_list_lock); +- /* Commit dquot before releasing */ +- ret = dquot->dq_sb->dq_op->write_dquot(dquot); +- if (ret < 0) { +- quota_error(dquot->dq_sb, "Can't write quota structure" +- " (error %d). Quota may get out of sync!", +- ret); +- /* +- * We clear dirty bit anyway, so that we avoid +- * infinite loop here +- */ +- clear_dquot_dirty(dquot); +- } +- goto we_slept; +- } +- if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) { +- spin_unlock(&dq_list_lock); +- dquot->dq_sb->dq_op->release_dquot(dquot); +- goto we_slept; +- } +- atomic_dec(&dquot->dq_count); + #ifdef CONFIG_QUOTA_DEBUG + /* sanity check */ + BUG_ON(!list_empty(&dquot->dq_free)); + #endif +- put_dquot_last(dquot); ++ put_releasing_dquots(dquot); + spin_unlock(&dq_list_lock); ++ queue_delayed_work(system_unbound_wq, "a_release_work, 1); + } + EXPORT_SYMBOL(dqput); + +@@ -908,7 +966,7 @@ we_slept: + * already finished or it will be canceled due to dq_count > 1 test */ + wait_on_dquot(dquot); + /* Read the dquot / allocate space in quota file */ +- if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) { ++ if (!dquot_active(dquot)) { + int err; + + err = sb->dq_op->acquire_dquot(dquot); +@@ -1425,7 +1483,7 @@ static int info_bdq_free(struct dquot *dquot, qsize_t space) + return QUOTA_NL_NOWARN; + } + +-static int dquot_active(const struct inode *inode) ++static int inode_quota_active(const struct inode *inode) + { + struct super_block *sb = inode->i_sb; + +@@ -1448,7 +1506,7 @@ static int __dquot_initialize(struct inode *inode, int type) + qsize_t rsv; + int ret = 0; + +- if (!dquot_active(inode)) ++ if (!inode_quota_active(inode)) + return 0; + + dquots = i_dquot(inode); +@@ -1556,7 +1614,7 @@ bool dquot_initialize_needed(struct inode *inode) + struct dquot **dquots; + int i; + +- if (!dquot_active(inode)) ++ if (!inode_quota_active(inode)) + return false; + + dquots = i_dquot(inode); +@@ -1667,7 +1725,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags) + int reserve = flags & DQUOT_SPACE_RESERVE; + struct dquot **dquots; + +- if (!dquot_active(inode)) { ++ if (!inode_quota_active(inode)) { + if (reserve) { + spin_lock(&inode->i_lock); + *inode_reserved_space(inode) += number; +@@ -1737,7 +1795,7 @@ int dquot_alloc_inode(struct inode *inode) + struct dquot_warn warn[MAXQUOTAS]; + struct dquot * const *dquots; + +- if (!dquot_active(inode)) ++ if (!inode_quota_active(inode)) + return 0; + for (cnt = 0; cnt < MAXQUOTAS; cnt++) + warn[cnt].w_type = QUOTA_NL_NOWARN; +@@ -1780,7 +1838,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number) + struct dquot **dquots; + int cnt, index; + +- if (!dquot_active(inode)) { ++ if (!inode_quota_active(inode)) { + spin_lock(&inode->i_lock); + *inode_reserved_space(inode) -= number; + __inode_add_bytes(inode, number); +@@ -1822,7 +1880,7 @@ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number) + struct dquot **dquots; + int cnt, index; + +- if (!dquot_active(inode)) { ++ if (!inode_quota_active(inode)) { + spin_lock(&inode->i_lock); + *inode_reserved_space(inode) += number; + __inode_sub_bytes(inode, number); +@@ -1866,7 +1924,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags) + struct dquot **dquots; + int reserve = flags & DQUOT_SPACE_RESERVE, index; + +- if (!dquot_active(inode)) { ++ if (!inode_quota_active(inode)) { + if (reserve) { + spin_lock(&inode->i_lock); + *inode_reserved_space(inode) -= number; +@@ -1921,7 +1979,7 @@ void dquot_free_inode(struct inode *inode) + 
struct dquot * const *dquots; + int index; + +- if (!dquot_active(inode)) ++ if (!inode_quota_active(inode)) + return; + + dquots = i_dquot(inode); +@@ -2093,7 +2151,7 @@ int dquot_transfer(struct user_namespace *mnt_userns, struct inode *inode, + struct super_block *sb = inode->i_sb; + int ret; + +- if (!dquot_active(inode)) ++ if (!inode_quota_active(inode)) + return 0; + + if (i_uid_needs_update(mnt_userns, iattr, inode)) { +diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c +index 9f62da7471c9e..eb81b4170cb51 100644 +--- a/fs/reiserfs/journal.c ++++ b/fs/reiserfs/journal.c +@@ -2326,7 +2326,7 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev, + int i, j; + + bh = __getblk(dev, block, bufsize); +- if (buffer_uptodate(bh)) ++ if (!bh || buffer_uptodate(bh)) + return (bh); + + if (block + BUFNR > max_block) { +@@ -2336,6 +2336,8 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev, + j = 1; + for (i = 1; i < blocks; i++) { + bh = __getblk(dev, block + i, bufsize); ++ if (!bh) ++ break; + if (buffer_uptodate(bh)) { + brelse(bh); + break; +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index a37afbb7e399f..4a092cc5a3936 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -970,43 +970,6 @@ release_iface(struct kref *ref) + kfree(iface); + } + +-/* +- * compare two interfaces a and b +- * return 0 if everything matches. +- * return 1 if a has higher link speed, or rdma capable, or rss capable +- * return -1 otherwise. +- */ +-static inline int +-iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) +-{ +- int cmp_ret = 0; +- +- WARN_ON(!a || !b); +- if (a->speed == b->speed) { +- if (a->rdma_capable == b->rdma_capable) { +- if (a->rss_capable == b->rss_capable) { +- cmp_ret = memcmp(&a->sockaddr, &b->sockaddr, +- sizeof(a->sockaddr)); +- if (!cmp_ret) +- return 0; +- else if (cmp_ret > 0) +- return 1; +- else +- return -1; +- } else if (a->rss_capable > b->rss_capable) +- return 1; +- else +- return -1; +- } else if (a->rdma_capable > b->rdma_capable) +- return 1; +- else +- return -1; +- } else if (a->speed > b->speed) +- return 1; +- else +- return -1; +-} +- + struct cifs_chan { + unsigned int in_reconnect : 1; /* if session setup in progress for this channel */ + struct TCP_Server_Info *server; +diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h +index 98513f5af3f96..a914b88ca51a1 100644 +--- a/fs/smb/client/cifsproto.h ++++ b/fs/smb/client/cifsproto.h +@@ -85,6 +85,7 @@ extern int cifs_handle_standard(struct TCP_Server_Info *server, + struct mid_q_entry *mid); + extern int smb3_parse_devname(const char *devname, struct smb3_fs_context *ctx); + extern int smb3_parse_opt(const char *options, const char *key, char **val); ++extern int cifs_ipaddr_cmp(struct sockaddr *srcaddr, struct sockaddr *rhs); + extern bool cifs_match_ipaddr(struct sockaddr *srcaddr, struct sockaddr *rhs); + extern int cifs_discard_remaining_data(struct TCP_Server_Info *server); + extern int cifs_call_async(struct TCP_Server_Info *server, +diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c +index cbe08948baf4a..9cd282960c0bb 100644 +--- a/fs/smb/client/connect.c ++++ b/fs/smb/client/connect.c +@@ -1343,6 +1343,56 @@ next_pdu: + module_put_and_kthread_exit(0); + } + ++int ++cifs_ipaddr_cmp(struct sockaddr *srcaddr, struct sockaddr *rhs) ++{ ++ struct sockaddr_in *saddr4 = (struct sockaddr_in *)srcaddr; ++ struct sockaddr_in *vaddr4 = (struct sockaddr_in *)rhs; ++ struct sockaddr_in6 *saddr6 
= (struct sockaddr_in6 *)srcaddr; ++ struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)rhs; ++ ++ switch (srcaddr->sa_family) { ++ case AF_UNSPEC: ++ switch (rhs->sa_family) { ++ case AF_UNSPEC: ++ return 0; ++ case AF_INET: ++ case AF_INET6: ++ return 1; ++ default: ++ return -1; ++ } ++ case AF_INET: { ++ switch (rhs->sa_family) { ++ case AF_UNSPEC: ++ return -1; ++ case AF_INET: ++ return memcmp(saddr4, vaddr4, ++ sizeof(struct sockaddr_in)); ++ case AF_INET6: ++ return 1; ++ default: ++ return -1; ++ } ++ } ++ case AF_INET6: { ++ switch (rhs->sa_family) { ++ case AF_UNSPEC: ++ case AF_INET: ++ return -1; ++ case AF_INET6: ++ return memcmp(saddr6, ++ vaddr6, ++ sizeof(struct sockaddr_in6)); ++ default: ++ return -1; ++ } ++ } ++ default: ++ return -1; /* don't expect to be here */ ++ } ++} ++ + /* + * Returns true if srcaddr isn't specified and rhs isn't specified, or + * if srcaddr is specified and matches the IP address of the rhs argument +diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c +index e6a191a7499e8..6b020d80bb949 100644 +--- a/fs/smb/client/smb2ops.c ++++ b/fs/smb/client/smb2ops.c +@@ -34,6 +34,8 @@ static int + change_conf(struct TCP_Server_Info *server) + { + server->credits += server->echo_credits + server->oplock_credits; ++ if (server->credits > server->max_credits) ++ server->credits = server->max_credits; + server->oplock_credits = server->echo_credits = 0; + switch (server->credits) { + case 0: +@@ -511,6 +513,43 @@ smb3_negotiate_rsize(struct cifs_tcon *tcon, struct smb3_fs_context *ctx) + return rsize; + } + ++/* ++ * compare two interfaces a and b ++ * return 0 if everything matches. ++ * return 1 if a is rdma capable, or rss capable, or has higher link speed ++ * return -1 otherwise. ++ */ ++static int ++iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) ++{ ++ int cmp_ret = 0; ++ ++ WARN_ON(!a || !b); ++ if (a->rdma_capable == b->rdma_capable) { ++ if (a->rss_capable == b->rss_capable) { ++ if (a->speed == b->speed) { ++ cmp_ret = cifs_ipaddr_cmp((struct sockaddr *) &a->sockaddr, ++ (struct sockaddr *) &b->sockaddr); ++ if (!cmp_ret) ++ return 0; ++ else if (cmp_ret > 0) ++ return 1; ++ else ++ return -1; ++ } else if (a->speed > b->speed) ++ return 1; ++ else ++ return -1; ++ } else if (a->rss_capable > b->rss_capable) ++ return 1; ++ else ++ return -1; ++ } else if (a->rdma_capable > b->rdma_capable) ++ return 1; ++ else ++ return -1; ++} ++ + static int + parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf, + size_t buf_len, struct cifs_ses *ses, bool in_mount) +diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c +index ba46156e32680..ae17d78f6ba17 100644 +--- a/fs/smb/client/smb2pdu.c ++++ b/fs/smb/client/smb2pdu.c +@@ -1312,7 +1312,12 @@ SMB2_sess_alloc_buffer(struct SMB2_sess_data *sess_data) + } + + /* enough to enable echos and oplocks and one max size write */ +- req->hdr.CreditRequest = cpu_to_le16(130); ++ if (server->credits >= server->max_credits) ++ req->hdr.CreditRequest = cpu_to_le16(0); ++ else ++ req->hdr.CreditRequest = cpu_to_le16( ++ min_t(int, server->max_credits - ++ server->credits, 130)); + + /* only one of SMB2 signing flags may be set in SMB2 request */ + if (server->sign) +@@ -1907,7 +1912,12 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree, + rqst.rq_nvec = 2; + + /* Need 64 for max size write so ask for more in case not there yet */ +- req->hdr.CreditRequest = cpu_to_le16(64); ++ if (server->credits >= server->max_credits) ++ 
req->hdr.CreditRequest = cpu_to_le16(0); ++ else ++ req->hdr.CreditRequest = cpu_to_le16( ++ min_t(int, server->max_credits - ++ server->credits, 64)); + + rc = cifs_send_recv(xid, ses, server, + &rqst, &resp_buftype, flags, &rsp_iov); +@@ -4291,6 +4301,7 @@ smb2_async_readv(struct cifs_readdata *rdata) + struct TCP_Server_Info *server; + struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + unsigned int total_len; ++ int credit_request; + + cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", + __func__, rdata->offset, rdata->bytes); +@@ -4322,7 +4333,13 @@ smb2_async_readv(struct cifs_readdata *rdata) + if (rdata->credits.value > 0) { + shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes, + SMB2_MAX_BUFFER_SIZE)); +- shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8); ++ credit_request = le16_to_cpu(shdr->CreditCharge) + 8; ++ if (server->credits >= server->max_credits) ++ shdr->CreditRequest = cpu_to_le16(0); ++ else ++ shdr->CreditRequest = cpu_to_le16( ++ min_t(int, server->max_credits - ++ server->credits, credit_request)); + + rc = adjust_credits(server, &rdata->credits, rdata->bytes); + if (rc) +@@ -4532,6 +4549,7 @@ smb2_async_writev(struct cifs_writedata *wdata, + unsigned int total_len; + struct cifs_io_parms _io_parms; + struct cifs_io_parms *io_parms = NULL; ++ int credit_request; + + if (!wdata->server) + server = wdata->server = cifs_pick_channel(tcon->ses); +@@ -4649,7 +4667,13 @@ smb2_async_writev(struct cifs_writedata *wdata, + if (wdata->credits.value > 0) { + shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes, + SMB2_MAX_BUFFER_SIZE)); +- shdr->CreditRequest = cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 8); ++ credit_request = le16_to_cpu(shdr->CreditCharge) + 8; ++ if (server->credits >= server->max_credits) ++ shdr->CreditRequest = cpu_to_le16(0); ++ else ++ shdr->CreditRequest = cpu_to_le16( ++ min_t(int, server->max_credits - ++ server->credits, credit_request)); + + rc = adjust_credits(server, &wdata->credits, io_parms->length); + if (rc) +diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c +index 847ee62afb8a1..9804cabe72a84 100644 +--- a/fs/smb/server/server.c ++++ b/fs/smb/server/server.c +@@ -286,6 +286,7 @@ static void handle_ksmbd_work(struct work_struct *wk) + static int queue_ksmbd_work(struct ksmbd_conn *conn) + { + struct ksmbd_work *work; ++ int err; + + work = ksmbd_alloc_work_struct(); + if (!work) { +@@ -297,7 +298,11 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn) + work->request_buf = conn->request_buf; + conn->request_buf = NULL; + +- ksmbd_init_smb_server(work); ++ err = ksmbd_init_smb_server(work); ++ if (err) { ++ ksmbd_free_work_struct(work); ++ return 0; ++ } + + ksmbd_conn_enqueue_request(work); + atomic_inc(&conn->r_count); +diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c +index 9b621fd993bb7..f6fd5cf976a50 100644 +--- a/fs/smb/server/smb2pdu.c ++++ b/fs/smb/server/smb2pdu.c +@@ -86,9 +86,9 @@ struct channel *lookup_chann_list(struct ksmbd_session *sess, struct ksmbd_conn + */ + int smb2_get_ksmbd_tcon(struct ksmbd_work *work) + { +- struct smb2_hdr *req_hdr = smb2_get_msg(work->request_buf); ++ struct smb2_hdr *req_hdr = ksmbd_req_buf_next(work); + unsigned int cmd = le16_to_cpu(req_hdr->Command); +- int tree_id; ++ unsigned int tree_id; + + if (cmd == SMB2_TREE_CONNECT_HE || + cmd == SMB2_CANCEL_HE || +@@ -113,7 +113,7 @@ int smb2_get_ksmbd_tcon(struct ksmbd_work *work) + pr_err("The first operation in the compound does not have tcon\n"); + return -EINVAL; + } +- if 
(work->tcon->id != tree_id) { ++ if (tree_id != UINT_MAX && work->tcon->id != tree_id) { + pr_err("tree id(%u) is different with id(%u) in first operation\n", + tree_id, work->tcon->id); + return -EINVAL; +@@ -565,9 +565,9 @@ int smb2_allocate_rsp_buf(struct ksmbd_work *work) + */ + int smb2_check_user_session(struct ksmbd_work *work) + { +- struct smb2_hdr *req_hdr = smb2_get_msg(work->request_buf); ++ struct smb2_hdr *req_hdr = ksmbd_req_buf_next(work); + struct ksmbd_conn *conn = work->conn; +- unsigned int cmd = conn->ops->get_cmd_val(work); ++ unsigned int cmd = le16_to_cpu(req_hdr->Command); + unsigned long long sess_id; + + /* +@@ -593,7 +593,7 @@ int smb2_check_user_session(struct ksmbd_work *work) + pr_err("The first operation in the compound does not have sess\n"); + return -EINVAL; + } +- if (work->sess->id != sess_id) { ++ if (sess_id != ULLONG_MAX && work->sess->id != sess_id) { + pr_err("session id(%llu) is different with the first operation(%lld)\n", + sess_id, work->sess->id); + return -EINVAL; +@@ -6314,6 +6314,11 @@ int smb2_read(struct ksmbd_work *work) + unsigned int max_read_size = conn->vals->max_read_size; + + WORK_BUFFERS(work, req, rsp); ++ if (work->next_smb2_rcv_hdr_off) { ++ work->send_no_response = 1; ++ err = -EOPNOTSUPP; ++ goto out; ++ } + + if (test_share_config_flag(work->tcon->share_conf, + KSMBD_SHARE_FLAG_PIPE)) { +@@ -8713,7 +8718,8 @@ int smb3_decrypt_req(struct ksmbd_work *work) + struct smb2_transform_hdr *tr_hdr = smb2_get_msg(buf); + int rc = 0; + +- if (buf_data_size < sizeof(struct smb2_hdr)) { ++ if (pdu_length < sizeof(struct smb2_transform_hdr) || ++ buf_data_size < sizeof(struct smb2_hdr)) { + pr_err("Transform message is too small (%u)\n", + pdu_length); + return -ECONNABORTED; +diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c +index d937e2f45c829..a4421d9458d90 100644 +--- a/fs/smb/server/smb_common.c ++++ b/fs/smb/server/smb_common.c +@@ -388,26 +388,29 @@ static struct smb_version_cmds smb1_server_cmds[1] = { + [SMB_COM_NEGOTIATE_EX] = { .proc = smb1_negotiate, }, + }; + +-static void init_smb1_server(struct ksmbd_conn *conn) ++static int init_smb1_server(struct ksmbd_conn *conn) + { + conn->ops = &smb1_server_ops; + conn->cmds = smb1_server_cmds; + conn->max_cmds = ARRAY_SIZE(smb1_server_cmds); ++ return 0; + } + +-void ksmbd_init_smb_server(struct ksmbd_work *work) ++int ksmbd_init_smb_server(struct ksmbd_work *work) + { + struct ksmbd_conn *conn = work->conn; + __le32 proto; + +- if (conn->need_neg == false) +- return; +- + proto = *(__le32 *)((struct smb_hdr *)work->request_buf)->Protocol; ++ if (conn->need_neg == false) { ++ if (proto == SMB1_PROTO_NUMBER) ++ return -EINVAL; ++ return 0; ++ } ++ + if (proto == SMB1_PROTO_NUMBER) +- init_smb1_server(conn); +- else +- init_smb3_11_server(conn); ++ return init_smb1_server(conn); ++ return init_smb3_11_server(conn); + } + + int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level, +diff --git a/fs/smb/server/smb_common.h b/fs/smb/server/smb_common.h +index e63d2a4f466b5..1cbb492cdefec 100644 +--- a/fs/smb/server/smb_common.h ++++ b/fs/smb/server/smb_common.h +@@ -427,7 +427,7 @@ bool ksmbd_smb_request(struct ksmbd_conn *conn); + + int ksmbd_lookup_dialect_by_id(__le16 *cli_dialects, __le16 dialects_count); + +-void ksmbd_init_smb_server(struct ksmbd_work *work); ++int ksmbd_init_smb_server(struct ksmbd_work *work); + + struct ksmbd_kstat; + int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, +diff --git a/fs/udf/balloc.c 
b/fs/udf/balloc.c +index 8e597db4d9710..f416b7fe092fc 100644 +--- a/fs/udf/balloc.c ++++ b/fs/udf/balloc.c +@@ -36,18 +36,41 @@ static int read_block_bitmap(struct super_block *sb, + unsigned long bitmap_nr) + { + struct buffer_head *bh = NULL; +- int retval = 0; ++ int i; ++ int max_bits, off, count; + struct kernel_lb_addr loc; + + loc.logicalBlockNum = bitmap->s_extPosition; + loc.partitionReferenceNum = UDF_SB(sb)->s_partition; + + bh = udf_tread(sb, udf_get_lb_pblock(sb, &loc, block)); ++ bitmap->s_block_bitmap[bitmap_nr] = bh; + if (!bh) +- retval = -EIO; ++ return -EIO; + +- bitmap->s_block_bitmap[bitmap_nr] = bh; +- return retval; ++ /* Check consistency of Space Bitmap buffer. */ ++ max_bits = sb->s_blocksize * 8; ++ if (!bitmap_nr) { ++ off = sizeof(struct spaceBitmapDesc) << 3; ++ count = min(max_bits - off, bitmap->s_nr_groups); ++ } else { ++ /* ++ * Rough check if bitmap number is too big to have any bitmap ++ * blocks reserved. ++ */ ++ if (bitmap_nr > ++ (bitmap->s_nr_groups >> (sb->s_blocksize_bits + 3)) + 2) ++ return 0; ++ off = 0; ++ count = bitmap->s_nr_groups - bitmap_nr * max_bits + ++ (sizeof(struct spaceBitmapDesc) << 3); ++ count = min(count, max_bits); ++ } ++ ++ for (i = 0; i < count; i++) ++ if (udf_test_bit(i + off, bh->b_data)) ++ return -EFSCORRUPTED; ++ return 0; + } + + static int __load_block_bitmap(struct super_block *sb, +diff --git a/fs/udf/inode.c b/fs/udf/inode.c +index a4e875b61f895..b574c2a9ce7ba 100644 +--- a/fs/udf/inode.c ++++ b/fs/udf/inode.c +@@ -57,15 +57,15 @@ static int udf_update_inode(struct inode *, int); + static int udf_sync_inode(struct inode *inode); + static int udf_alloc_i_data(struct inode *inode, size_t size); + static sector_t inode_getblk(struct inode *, sector_t, int *, int *); +-static int8_t udf_insert_aext(struct inode *, struct extent_position, +- struct kernel_lb_addr, uint32_t); ++static int udf_insert_aext(struct inode *, struct extent_position, ++ struct kernel_lb_addr, uint32_t); + static void udf_split_extents(struct inode *, int *, int, udf_pblk_t, + struct kernel_long_ad *, int *); + static void udf_prealloc_extents(struct inode *, int, int, + struct kernel_long_ad *, int *); + static void udf_merge_extents(struct inode *, struct kernel_long_ad *, int *); +-static void udf_update_extents(struct inode *, struct kernel_long_ad *, int, +- int, struct extent_position *); ++static int udf_update_extents(struct inode *, struct kernel_long_ad *, int, ++ int, struct extent_position *); + static int udf_get_block(struct inode *, sector_t, struct buffer_head *, int); + + static void __udf_clear_extent_cache(struct inode *inode) +@@ -696,7 +696,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, + struct kernel_lb_addr eloc, tmpeloc; + int c = 1; + loff_t lbcount = 0, b_off = 0; +- udf_pblk_t newblocknum, newblock; ++ udf_pblk_t newblocknum, newblock = 0; + sector_t offset = 0; + int8_t etype; + struct udf_inode_info *iinfo = UDF_I(inode); +@@ -799,7 +799,6 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, + ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len); + if (ret < 0) { + *err = ret; +- newblock = 0; + goto out_free; + } + c = 0; +@@ -862,7 +861,6 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, + goal, err); + if (!newblocknum) { + *err = -ENOSPC; +- newblock = 0; + goto out_free; + } + if (isBeyondEOF) +@@ -888,7 +886,9 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, + /* write back the new extents, inserting new extents if the 
new number + * of extents is greater than the old number, and deleting extents if + * the new number of extents is less than the old number */ +- udf_update_extents(inode, laarr, startnum, endnum, &prev_epos); ++ *err = udf_update_extents(inode, laarr, startnum, endnum, &prev_epos); ++ if (*err < 0) ++ goto out_free; + + newblock = udf_get_pblock(inode->i_sb, newblocknum, + iinfo->i_location.partitionReferenceNum, 0); +@@ -1156,21 +1156,30 @@ static void udf_merge_extents(struct inode *inode, struct kernel_long_ad *laarr, + } + } + +-static void udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr, +- int startnum, int endnum, +- struct extent_position *epos) ++static int udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr, ++ int startnum, int endnum, ++ struct extent_position *epos) + { + int start = 0, i; + struct kernel_lb_addr tmploc; + uint32_t tmplen; ++ int err; + + if (startnum > endnum) { + for (i = 0; i < (startnum - endnum); i++) + udf_delete_aext(inode, *epos); + } else if (startnum < endnum) { + for (i = 0; i < (endnum - startnum); i++) { +- udf_insert_aext(inode, *epos, laarr[i].extLocation, +- laarr[i].extLength); ++ err = udf_insert_aext(inode, *epos, ++ laarr[i].extLocation, ++ laarr[i].extLength); ++ /* ++ * If we fail here, we are likely corrupting the extent ++ * list and leaking blocks. At least stop early to ++ * limit the damage. ++ */ ++ if (err < 0) ++ return err; + udf_next_aext(inode, epos, &laarr[i].extLocation, + &laarr[i].extLength, 1); + start++; +@@ -1182,6 +1191,7 @@ static void udf_update_extents(struct inode *inode, struct kernel_long_ad *laarr + udf_write_aext(inode, epos, &laarr[i].extLocation, + laarr[i].extLength, 1); + } ++ return 0; + } + + struct buffer_head *udf_bread(struct inode *inode, udf_pblk_t block, +@@ -2210,12 +2220,13 @@ int8_t udf_current_aext(struct inode *inode, struct extent_position *epos, + return etype; + } + +-static int8_t udf_insert_aext(struct inode *inode, struct extent_position epos, +- struct kernel_lb_addr neloc, uint32_t nelen) ++static int udf_insert_aext(struct inode *inode, struct extent_position epos, ++ struct kernel_lb_addr neloc, uint32_t nelen) + { + struct kernel_lb_addr oeloc; + uint32_t oelen; + int8_t etype; ++ int err; + + if (epos.bh) + get_bh(epos.bh); +@@ -2225,10 +2236,10 @@ static int8_t udf_insert_aext(struct inode *inode, struct extent_position epos, + neloc = oeloc; + nelen = (etype << 30) | oelen; + } +- udf_add_aext(inode, &epos, &neloc, nelen, 1); ++ err = udf_add_aext(inode, &epos, &neloc, nelen, 1); + brelse(epos.bh); + +- return (nelen >> 30); ++ return err; + } + + int8_t udf_delete_aext(struct inode *inode, struct extent_position epos) +diff --git a/fs/verity/signature.c b/fs/verity/signature.c +index 143a530a80088..b59de03055e1e 100644 +--- a/fs/verity/signature.c ++++ b/fs/verity/signature.c +@@ -54,6 +54,22 @@ int fsverity_verify_signature(const struct fsverity_info *vi, + return 0; + } + ++ if (fsverity_keyring->keys.nr_leaves_on_tree == 0) { ++ /* ++ * The ".fs-verity" keyring is empty, due to builtin signatures ++ * being supported by the kernel but not actually being used. ++ * In this case, verify_pkcs7_signature() would always return an ++ * error, usually ENOKEY. It could also be EBADMSG if the ++ * PKCS#7 is malformed, but that isn't very important to ++ * distinguish. So, just skip to ENOKEY to avoid the attack ++ * surface of the PKCS#7 parser, which would otherwise be ++ * reachable by any task able to execute FS_IOC_ENABLE_VERITY. 
++ */ ++ fsverity_err(inode, ++ "fs-verity keyring is empty, rejecting signed file!"); ++ return -ENOKEY; ++ } ++ + d = kzalloc(sizeof(*d) + hash_alg->digest_size, GFP_KERNEL); + if (!d) + return -ENOMEM; +diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h +index 224b860647083..939a3196bf002 100644 +--- a/include/crypto/algapi.h ++++ b/include/crypto/algapi.h +@@ -12,6 +12,7 @@ + #include + #include + #include ++#include + + #include + +@@ -60,6 +61,8 @@ struct crypto_instance { + struct crypto_spawn *spawns; + }; + ++ struct work_struct free_work; ++ + void *__ctx[] CRYPTO_MINALIGN_ATTR; + }; + +diff --git a/include/dt-bindings/clock/qcom,gcc-sc8280xp.h b/include/dt-bindings/clock/qcom,gcc-sc8280xp.h +index cb2fb638825ca..8454915917849 100644 +--- a/include/dt-bindings/clock/qcom,gcc-sc8280xp.h ++++ b/include/dt-bindings/clock/qcom,gcc-sc8280xp.h +@@ -492,5 +492,17 @@ + #define USB30_MP_GDSC 9 + #define USB30_PRIM_GDSC 10 + #define USB30_SEC_GDSC 11 ++#define EMAC_0_GDSC 12 ++#define EMAC_1_GDSC 13 ++#define USB4_1_GDSC 14 ++#define USB4_GDSC 15 ++#define HLOS1_VOTE_MMNOC_MMU_TBU_HF0_GDSC 16 ++#define HLOS1_VOTE_MMNOC_MMU_TBU_HF1_GDSC 17 ++#define HLOS1_VOTE_MMNOC_MMU_TBU_SF0_GDSC 18 ++#define HLOS1_VOTE_MMNOC_MMU_TBU_SF1_GDSC 19 ++#define HLOS1_VOTE_TURING_MMU_TBU0_GDSC 20 ++#define HLOS1_VOTE_TURING_MMU_TBU1_GDSC 21 ++#define HLOS1_VOTE_TURING_MMU_TBU2_GDSC 22 ++#define HLOS1_VOTE_TURING_MMU_TBU3_GDSC 23 + + #endif +diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h +index 14dc461b0e829..255701e1251b4 100644 +--- a/include/linux/arm_sdei.h ++++ b/include/linux/arm_sdei.h +@@ -47,10 +47,12 @@ int sdei_unregister_ghes(struct ghes *ghes); + int sdei_mask_local_cpu(void); + int sdei_unmask_local_cpu(void); + void __init sdei_init(void); ++void sdei_handler_abort(void); + #else + static inline int sdei_mask_local_cpu(void) { return 0; } + static inline int sdei_unmask_local_cpu(void) { return 0; } + static inline void sdei_init(void) { } ++static inline void sdei_handler_abort(void) { } + #endif /* CONFIG_ARM_SDE_INTERFACE */ + + +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index 427e79ac72194..57674b3c58774 100644 +--- a/include/linux/blkdev.h ++++ b/include/linux/blkdev.h +@@ -565,6 +565,7 @@ struct request_queue { + #define QUEUE_FLAG_NOXMERGES 9 /* No extended merges */ + #define QUEUE_FLAG_ADD_RANDOM 10 /* Contributes to random pool */ + #define QUEUE_FLAG_SAME_FORCE 12 /* force complete on same CPU */ ++#define QUEUE_FLAG_HW_WC 18 /* Write back caching supported */ + #define QUEUE_FLAG_INIT_DONE 14 /* queue is initialized */ + #define QUEUE_FLAG_STABLE_WRITES 15 /* don't modify blks until WB is done */ + #define QUEUE_FLAG_POLL 16 /* IO polling enabled if set */ +diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h +index 267cd06b54a01..aefb06373720f 100644 +--- a/include/linux/clk-provider.h ++++ b/include/linux/clk-provider.h +@@ -1361,7 +1361,13 @@ struct clk_hw_onecell_data { + struct clk_hw *hws[]; + }; + +-#define CLK_OF_DECLARE(name, compat, fn) OF_DECLARE_1(clk, name, compat, fn) ++#define CLK_OF_DECLARE(name, compat, fn) \ ++ static void __init __##name##_of_clk_init_declare(struct device_node *np) \ ++ { \ ++ fn(np); \ ++ fwnode_dev_initialized(of_fwnode_handle(np), true); \ ++ } \ ++ OF_DECLARE_1(clk, name, compat, __##name##_of_clk_init_declare) + + /* + * Use this macro when you have a driver that requires two initialization +diff --git a/include/linux/hid.h b/include/linux/hid.h +index 
0a1ccc68e798a..784dd6b6046eb 100644 +--- a/include/linux/hid.h ++++ b/include/linux/hid.h +@@ -357,6 +357,7 @@ struct hid_item { + #define HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP BIT(18) + #define HID_QUIRK_HAVE_SPECIAL_DRIVER BIT(19) + #define HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE BIT(20) ++#define HID_QUIRK_NOINVERT BIT(21) + #define HID_QUIRK_FULLSPEED_INTERVAL BIT(28) + #define HID_QUIRK_NO_INIT_REPORTS BIT(29) + #define HID_QUIRK_NO_IGNORE BIT(30) +diff --git a/include/linux/if_arp.h b/include/linux/if_arp.h +index 1ed52441972f9..10a1e81434cb9 100644 +--- a/include/linux/if_arp.h ++++ b/include/linux/if_arp.h +@@ -53,6 +53,10 @@ static inline bool dev_is_mac_header_xmit(const struct net_device *dev) + case ARPHRD_NONE: + case ARPHRD_RAWIP: + case ARPHRD_PIMREG: ++ /* PPP adds its l2 header automatically in ppp_start_xmit(). ++ * This makes it look like an l3 device to __bpf_redirect() and tcf_mirred_init(). ++ */ ++ case ARPHRD_PPP: + return false; + default: + return true; +diff --git a/include/linux/ioport.h b/include/linux/ioport.h +index 27642ca15d932..4ae3c541ea6f4 100644 +--- a/include/linux/ioport.h ++++ b/include/linux/ioport.h +@@ -318,6 +318,8 @@ extern void __devm_release_region(struct device *dev, struct resource *parent, + resource_size_t start, resource_size_t n); + extern int iomem_map_sanity_check(resource_size_t addr, unsigned long size); + extern bool iomem_is_exclusive(u64 addr); ++extern bool resource_is_exclusive(struct resource *resource, u64 addr, ++ resource_size_t size); + + extern int + walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages, +diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h +index 73f5c120def88..2a36f3218b510 100644 +--- a/include/linux/kernfs.h ++++ b/include/linux/kernfs.h +@@ -550,6 +550,10 @@ static inline int kernfs_setattr(struct kernfs_node *kn, + const struct iattr *iattr) + { return -ENOSYS; } + ++static inline __poll_t kernfs_generic_poll(struct kernfs_open_file *of, ++ struct poll_table_struct *pt) ++{ return -ENOSYS; } ++ + static inline void kernfs_notify(struct kernfs_node *kn) { } + + static inline int kernfs_xattr_get(struct kernfs_node *kn, const char *name, +diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h +index ec119da1d89b4..4a97a6db9bcec 100644 +--- a/include/linux/lsm_hook_defs.h ++++ b/include/linux/lsm_hook_defs.h +@@ -54,6 +54,7 @@ LSM_HOOK(int, 0, bprm_creds_from_file, struct linux_binprm *bprm, struct file *f + LSM_HOOK(int, 0, bprm_check_security, struct linux_binprm *bprm) + LSM_HOOK(void, LSM_RET_VOID, bprm_committing_creds, struct linux_binprm *bprm) + LSM_HOOK(void, LSM_RET_VOID, bprm_committed_creds, struct linux_binprm *bprm) ++LSM_HOOK(int, 0, fs_context_submount, struct fs_context *fc, struct super_block *reference) + LSM_HOOK(int, 0, fs_context_dup, struct fs_context *fc, + struct fs_context *src_sc) + LSM_HOOK(int, -ENOPARAM, fs_context_parse_param, struct fs_context *fc, +diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h +index e039763029563..099521835cd14 100644 +--- a/include/linux/memcontrol.h ++++ b/include/linux/memcontrol.h +@@ -283,6 +283,11 @@ struct mem_cgroup { + atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS]; + atomic_long_t memory_events_local[MEMCG_NR_MEMORY_EVENTS]; + ++ /* ++ * Hint of reclaim pressure for socket memroy management. Note ++ * that this indicator should NOT be used in legacy cgroup mode ++ * where socket memory is accounted/charged separately. 
++ */ + unsigned long socket_pressure; + + /* Legacy tcp memory accounting */ +@@ -1704,8 +1709,8 @@ void mem_cgroup_sk_alloc(struct sock *sk); + void mem_cgroup_sk_free(struct sock *sk); + static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) + { +- if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_pressure) +- return true; ++ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) ++ return !!memcg->tcpmem_pressure; + do { + if (time_before(jiffies, READ_ONCE(memcg->socket_pressure))) + return true; +diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h +index e86cf6642d212..2fd973d188c47 100644 +--- a/include/linux/nfs_xdr.h ++++ b/include/linux/nfs_xdr.h +@@ -670,6 +670,7 @@ struct nfs_pgio_res { + struct { + unsigned int replen; /* used by read */ + int eof; /* used by read */ ++ void * scratch; /* used by read */ + }; + struct { + struct nfs_writeverf * verf; /* used by write */ +diff --git a/include/linux/nls.h b/include/linux/nls.h +index 499e486b3722d..e0bf8367b274a 100644 +--- a/include/linux/nls.h ++++ b/include/linux/nls.h +@@ -47,7 +47,7 @@ enum utf16_endian { + /* nls_base.c */ + extern int __register_nls(struct nls_table *, struct module *); + extern int unregister_nls(struct nls_table *); +-extern struct nls_table *load_nls(char *); ++extern struct nls_table *load_nls(const char *charset); + extern void unload_nls(struct nls_table *); + extern struct nls_table *load_nls_default(void); + #define register_nls(nls) __register_nls((nls), THIS_MODULE) +diff --git a/include/linux/pci.h b/include/linux/pci.h +index 9f617ffdb863f..eccaf1abea79d 100644 +--- a/include/linux/pci.h ++++ b/include/linux/pci.h +@@ -409,6 +409,7 @@ struct pci_dev { + */ + unsigned int irq; + struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */ ++ struct resource driver_exclusive_resource; /* driver exclusive resource ranges */ + + bool match_driver; /* Skip attaching driver */ + +@@ -465,6 +466,7 @@ struct pci_dev { + pci_dev_flags_t dev_flags; + atomic_t enable_cnt; /* pci_enable_device has been called */ + ++ spinlock_t pcie_cap_lock; /* Protects RMW ops in capability accessors */ + u32 saved_config_space[16]; /* Config space saved at suspend time */ + struct hlist_head saved_cap_space; + int rom_attr_enabled; /* Display of ROM attribute enabled? */ +@@ -1208,11 +1210,40 @@ int pcie_capability_read_word(struct pci_dev *dev, int pos, u16 *val); + int pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val); + int pcie_capability_write_word(struct pci_dev *dev, int pos, u16 val); + int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val); +-int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos, +- u16 clear, u16 set); ++int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos, ++ u16 clear, u16 set); ++int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos, ++ u16 clear, u16 set); + int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos, + u32 clear, u32 set); + ++/** ++ * pcie_capability_clear_and_set_word - RMW accessor for PCI Express Capability Registers ++ * @dev: PCI device structure of the PCI Express device ++ * @pos: PCI Express Capability Register ++ * @clear: Clear bitmask ++ * @set: Set bitmask ++ * ++ * Perform a Read-Modify-Write (RMW) operation using @clear and @set ++ * bitmasks on PCI Express Capability Register at @pos. 
Certain PCI Express ++ * Capability Registers are accessed concurrently in RMW fashion, hence ++ * require locking which is handled transparently to the caller. ++ */ ++static inline int pcie_capability_clear_and_set_word(struct pci_dev *dev, ++ int pos, ++ u16 clear, u16 set) ++{ ++ switch (pos) { ++ case PCI_EXP_LNKCTL: ++ case PCI_EXP_RTCTL: ++ return pcie_capability_clear_and_set_word_locked(dev, pos, ++ clear, set); ++ default: ++ return pcie_capability_clear_and_set_word_unlocked(dev, pos, ++ clear, set); ++ } ++} ++ + static inline int pcie_capability_set_word(struct pci_dev *dev, int pos, + u16 set) + { +@@ -1408,6 +1439,21 @@ int pci_request_selected_regions(struct pci_dev *, int, const char *); + int pci_request_selected_regions_exclusive(struct pci_dev *, int, const char *); + void pci_release_selected_regions(struct pci_dev *, int); + ++static inline __must_check struct resource * ++pci_request_config_region_exclusive(struct pci_dev *pdev, unsigned int offset, ++ unsigned int len, const char *name) ++{ ++ return __request_region(&pdev->driver_exclusive_resource, offset, len, ++ name, IORESOURCE_EXCLUSIVE); ++} ++ ++static inline void pci_release_config_region(struct pci_dev *pdev, ++ unsigned int offset, ++ unsigned int len) ++{ ++ __release_region(&pdev->driver_exclusive_resource, offset, len); ++} ++ + /* drivers/pci/bus.c */ + void pci_add_resource(struct list_head *resources, struct resource *res); + void pci_add_resource_offset(struct list_head *resources, struct resource *res, +@@ -2487,6 +2533,7 @@ void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type); + #define pci_crit(pdev, fmt, arg...) dev_crit(&(pdev)->dev, fmt, ##arg) + #define pci_err(pdev, fmt, arg...) dev_err(&(pdev)->dev, fmt, ##arg) + #define pci_warn(pdev, fmt, arg...) dev_warn(&(pdev)->dev, fmt, ##arg) ++#define pci_warn_once(pdev, fmt, arg...) dev_warn_once(&(pdev)->dev, fmt, ##arg) + #define pci_notice(pdev, fmt, arg...) dev_notice(&(pdev)->dev, fmt, ##arg) + #define pci_info(pdev, fmt, arg...) dev_info(&(pdev)->dev, fmt, ##arg) + #define pci_dbg(pdev, fmt, arg...) 
dev_dbg(&(pdev)->dev, fmt, ##arg) +diff --git a/include/linux/security.h b/include/linux/security.h +index ca1b7109c0dbb..a6c97cc57caa0 100644 +--- a/include/linux/security.h ++++ b/include/linux/security.h +@@ -293,6 +293,7 @@ int security_bprm_creds_from_file(struct linux_binprm *bprm, struct file *file); + int security_bprm_check(struct linux_binprm *bprm); + void security_bprm_committing_creds(struct linux_binprm *bprm); + void security_bprm_committed_creds(struct linux_binprm *bprm); ++int security_fs_context_submount(struct fs_context *fc, struct super_block *reference); + int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc); + int security_fs_context_parse_param(struct fs_context *fc, struct fs_parameter *param); + int security_sb_alloc(struct super_block *sb); +@@ -625,6 +626,11 @@ static inline void security_bprm_committed_creds(struct linux_binprm *bprm) + { + } + ++static inline int security_fs_context_submount(struct fs_context *fc, ++ struct super_block *reference) ++{ ++ return 0; ++} + static inline int security_fs_context_dup(struct fs_context *fc, + struct fs_context *src_fc) + { +diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h +index 04c59f8d801f1..422f4ca656cf9 100644 +--- a/include/linux/trace_events.h ++++ b/include/linux/trace_events.h +@@ -863,7 +863,8 @@ extern int perf_uprobe_init(struct perf_event *event, + extern void perf_uprobe_destroy(struct perf_event *event); + extern int bpf_get_uprobe_info(const struct perf_event *event, + u32 *fd_type, const char **filename, +- u64 *probe_offset, bool perf_type_tracepoint); ++ u64 *probe_offset, u64 *probe_addr, ++ bool perf_type_tracepoint); + #endif + extern int ftrace_profile_set_filter(struct perf_event *event, int event_id, + char *filter_str); +diff --git a/include/linux/usb/typec_altmode.h b/include/linux/usb/typec_altmode.h +index 350d49012659b..28aeef8f9e7b5 100644 +--- a/include/linux/usb/typec_altmode.h ++++ b/include/linux/usb/typec_altmode.h +@@ -67,7 +67,7 @@ struct typec_altmode_ops { + + int typec_altmode_enter(struct typec_altmode *altmode, u32 *vdo); + int typec_altmode_exit(struct typec_altmode *altmode); +-void typec_altmode_attention(struct typec_altmode *altmode, u32 vdo); ++int typec_altmode_attention(struct typec_altmode *altmode, u32 vdo); + int typec_altmode_vdm(struct typec_altmode *altmode, + const u32 header, const u32 *vdo, int count); + int typec_altmode_notify(struct typec_altmode *altmode, unsigned long conf, +diff --git a/include/media/cec.h b/include/media/cec.h +index abee41ae02d0e..9c007f83569aa 100644 +--- a/include/media/cec.h ++++ b/include/media/cec.h +@@ -113,22 +113,25 @@ struct cec_fh { + #define CEC_FREE_TIME_TO_USEC(ft) ((ft) * 2400) + + struct cec_adap_ops { +- /* Low-level callbacks */ ++ /* Low-level callbacks, called with adap->lock held */ + int (*adap_enable)(struct cec_adapter *adap, bool enable); + int (*adap_monitor_all_enable)(struct cec_adapter *adap, bool enable); + int (*adap_monitor_pin_enable)(struct cec_adapter *adap, bool enable); + int (*adap_log_addr)(struct cec_adapter *adap, u8 logical_addr); +- void (*adap_configured)(struct cec_adapter *adap, bool configured); ++ void (*adap_unconfigured)(struct cec_adapter *adap); + int (*adap_transmit)(struct cec_adapter *adap, u8 attempts, + u32 signal_free_time, struct cec_msg *msg); ++ void (*adap_nb_transmit_canceled)(struct cec_adapter *adap, ++ const struct cec_msg *msg); + void (*adap_status)(struct cec_adapter *adap, struct seq_file *file); + void 
(*adap_free)(struct cec_adapter *adap); + +- /* Error injection callbacks */ ++ /* Error injection callbacks, called without adap->lock held */ + int (*error_inj_show)(struct cec_adapter *adap, struct seq_file *sf); + bool (*error_inj_parse_line)(struct cec_adapter *adap, char *line); + +- /* High-level CEC message callback */ ++ /* High-level CEC message callback, called without adap->lock held */ ++ void (*configured)(struct cec_adapter *adap); + int (*received)(struct cec_adapter *adap, struct cec_msg *msg); + }; + +diff --git a/include/net/lwtunnel.h b/include/net/lwtunnel.h +index 6f15e6fa154e6..53bd2d02a4f0d 100644 +--- a/include/net/lwtunnel.h ++++ b/include/net/lwtunnel.h +@@ -16,9 +16,12 @@ + #define LWTUNNEL_STATE_INPUT_REDIRECT BIT(1) + #define LWTUNNEL_STATE_XMIT_REDIRECT BIT(2) + ++/* LWTUNNEL_XMIT_CONTINUE should be distinguishable from dst_output return ++ * values (NET_XMIT_xxx and NETDEV_TX_xxx in linux/netdevice.h) for safety. ++ */ + enum { + LWTUNNEL_XMIT_DONE, +- LWTUNNEL_XMIT_CONTINUE, ++ LWTUNNEL_XMIT_CONTINUE = 0x100, + }; + + +diff --git a/include/net/mac80211.h b/include/net/mac80211.h +index 8a338c33118f9..43173204d6d5e 100644 +--- a/include/net/mac80211.h ++++ b/include/net/mac80211.h +@@ -1141,9 +1141,11 @@ struct ieee80211_tx_info { + u8 ampdu_ack_len; + u8 ampdu_len; + u8 antenna; ++ u8 pad; + u16 tx_time; + u8 flags; +- void *status_driver_data[18 / sizeof(void *)]; ++ u8 pad2; ++ void *status_driver_data[16 / sizeof(void *)]; + } status; + struct { + struct ieee80211_tx_rate driver_rates[ +diff --git a/include/net/tcp.h b/include/net/tcp.h +index e9c8f88f47696..5fd69f2342a44 100644 +--- a/include/net/tcp.h ++++ b/include/net/tcp.h +@@ -355,7 +355,6 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos, + struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp, + bool force_schedule); + +-void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks); + static inline void tcp_dec_quickack_mode(struct sock *sk, + const unsigned int pkts) + { +diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h +index fcf25f1642a3a..d27d9fb7174c8 100644 +--- a/include/scsi/scsi_host.h ++++ b/include/scsi/scsi_host.h +@@ -757,7 +757,7 @@ extern void scsi_remove_host(struct Scsi_Host *); + extern struct Scsi_Host *scsi_host_get(struct Scsi_Host *); + extern int scsi_host_busy(struct Scsi_Host *shost); + extern void scsi_host_put(struct Scsi_Host *t); +-extern struct Scsi_Host *scsi_host_lookup(unsigned short); ++extern struct Scsi_Host *scsi_host_lookup(unsigned int hostnum); + extern const char *scsi_host_state_name(enum scsi_host_state); + extern void scsi_host_complete_all_commands(struct Scsi_Host *shost, + enum scsi_host_status status); +diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h +index ee2dcfb3d6602..d7f7c04a6e0c1 100644 +--- a/include/uapi/linux/sync_file.h ++++ b/include/uapi/linux/sync_file.h +@@ -52,7 +52,7 @@ struct sync_fence_info { + * @name: name of fence + * @status: status of fence. 
1: signaled 0:active <0:error + * @flags: sync_file_info flags +- * @num_fences number of fences in the sync_file ++ * @num_fences: number of fences in the sync_file + * @pad: padding for 64-bit alignment, should always be zero + * @sync_fence_info: pointer to array of structs sync_fence_info with all + * fences in the sync_file +diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h +index b5e7d082b8adf..d4a4e3cab3c2a 100644 +--- a/include/uapi/linux/v4l2-controls.h ++++ b/include/uapi/linux/v4l2-controls.h +@@ -2411,6 +2411,9 @@ struct v4l2_ctrl_hevc_slice_params { + * @poc_st_curr_after: provides the index of the short term after references + * in DPB array + * @poc_lt_curr: provides the index of the long term references in DPB array ++ * @num_delta_pocs_of_ref_rps_idx: same as the derived value NumDeltaPocs[RefRpsIdx], ++ * can be used to parse the RPS data in slice headers ++ * instead of skipping it with @short_term_ref_pic_set_size. + * @reserved: padding field. Should be zeroed by applications. + * @dpb: the decoded picture buffer, for meta-data about reference frames + * @flags: see V4L2_HEVC_DECODE_PARAM_FLAG_{} +@@ -2426,7 +2429,8 @@ struct v4l2_ctrl_hevc_decode_params { + __u8 poc_st_curr_before[V4L2_HEVC_DPB_ENTRIES_NUM_MAX]; + __u8 poc_st_curr_after[V4L2_HEVC_DPB_ENTRIES_NUM_MAX]; + __u8 poc_lt_curr[V4L2_HEVC_DPB_ENTRIES_NUM_MAX]; +- __u8 reserved[4]; ++ __u8 num_delta_pocs_of_ref_rps_idx; ++ __u8 reserved[3]; + struct v4l2_hevc_dpb_entry dpb[V4L2_HEVC_DPB_ENTRIES_NUM_MAX]; + __u64 flags; + }; +diff --git a/init/Kconfig b/init/Kconfig +index 2028ed4d50f5b..de255842f5d09 100644 +--- a/init/Kconfig ++++ b/init/Kconfig +@@ -627,6 +627,7 @@ config TASK_IO_ACCOUNTING + + config PSI + bool "Pressure stall information tracking" ++ select KERNFS + help + Collect metrics that indicate how overcommitted the CPU, memory, + and IO capacity are in the system. +diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c +index b0e47fe1eb4bb..6d455e2428b90 100644 +--- a/io_uring/io_uring.c ++++ b/io_uring/io_uring.c +@@ -1457,6 +1457,9 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min) + break; + nr_events += ret; + ret = 0; ++ ++ if (task_sigpending(current)) ++ return -EINTR; + } while (nr_events < min && !need_resched()); + + return ret; +@@ -2240,7 +2243,9 @@ static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx) + } + + /* drop invalid entries */ ++ spin_lock(&ctx->completion_lock); + ctx->cq_extra--; ++ spin_unlock(&ctx->completion_lock); + WRITE_ONCE(ctx->rings->sq_dropped, + READ_ONCE(ctx->rings->sq_dropped) + 1); + return NULL; +diff --git a/kernel/auditsc.c b/kernel/auditsc.c +index 9f8c05228d6d6..a2240f54fc224 100644 +--- a/kernel/auditsc.c ++++ b/kernel/auditsc.c +@@ -2456,6 +2456,8 @@ void __audit_inode_child(struct inode *parent, + } + } + ++ cond_resched(); ++ + /* is there a matching child entry? */ + list_for_each_entry(n, &context->names_list, list) { + /* can only match entries that have a name */ +diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c +index fb78bb26786fc..7582ec4fd4131 100644 +--- a/kernel/bpf/btf.c ++++ b/kernel/bpf/btf.c +@@ -5788,7 +5788,7 @@ error: + * that also allows using an array of int as a scratch + * space. e.g. skb->cb[]. 
+ */ +- if (off + size > mtrue_end) { ++ if (off + size > mtrue_end && !(*flag & PTR_UNTRUSTED)) { + bpf_log(log, + "access beyond the end of member %s (mend:%u) in struct %s with off %u size %u\n", + mname, mtrue_end, tname, off, size); +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 3c414e0ac819e..3052680201e57 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -10401,6 +10401,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + return -EINVAL; + } + ++ /* check src2 operand */ ++ err = check_reg_arg(env, insn->dst_reg, SRC_OP); ++ if (err) ++ return err; ++ ++ dst_reg = ®s[insn->dst_reg]; + if (BPF_SRC(insn->code) == BPF_X) { + if (insn->imm != 0) { + verbose(env, "BPF_JMP/JMP32 uses reserved fields\n"); +@@ -10412,12 +10418,13 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + if (err) + return err; + +- if (is_pointer_value(env, insn->src_reg)) { ++ src_reg = ®s[insn->src_reg]; ++ if (!(reg_is_pkt_pointer_any(dst_reg) && reg_is_pkt_pointer_any(src_reg)) && ++ is_pointer_value(env, insn->src_reg)) { + verbose(env, "R%d pointer comparison prohibited\n", + insn->src_reg); + return -EACCES; + } +- src_reg = ®s[insn->src_reg]; + } else { + if (insn->src_reg != BPF_REG_0) { + verbose(env, "BPF_JMP/JMP32 uses reserved fields\n"); +@@ -10425,12 +10432,6 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + } + } + +- /* check src2 operand */ +- err = check_reg_arg(env, insn->dst_reg, SRC_OP); +- if (err) +- return err; +- +- dst_reg = ®s[insn->dst_reg]; + is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32; + + if (BPF_SRC(insn->code) == BPF_K) { +diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c +index db3e05b6b4dd2..79e6a5d4c29a1 100644 +--- a/kernel/cgroup/cpuset.c ++++ b/kernel/cgroup/cpuset.c +@@ -1606,11 +1606,16 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp, + } + + /* +- * Skip the whole subtree if the cpumask remains the same +- * and has no partition root state and force flag not set. ++ * Skip the whole subtree if ++ * 1) the cpumask remains the same, ++ * 2) has no partition root state, ++ * 3) force flag not set, and ++ * 4) for v2 load balance state same as its parent. + */ + if (!cp->partition_root_state && !force && +- cpumask_equal(tmp->new_cpus, cp->effective_cpus)) { ++ cpumask_equal(tmp->new_cpus, cp->effective_cpus) && ++ (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) || ++ (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) { + pos_css = css_rightmost_descendant(pos_css); + continue; + } +@@ -1693,6 +1698,20 @@ update_parent_subparts: + + update_tasks_cpumask(cp, tmp->new_cpus); + ++ /* ++ * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE ++ * from parent if current cpuset isn't a valid partition root ++ * and their load balance states differ. ++ */ ++ if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) && ++ !is_partition_valid(cp) && ++ (is_sched_load_balance(parent) != is_sched_load_balance(cp))) { ++ if (is_sched_load_balance(parent)) ++ set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags); ++ else ++ clear_bit(CS_SCHED_LOAD_BALANCE, &cp->flags); ++ } ++ + /* + * On legacy hierarchy, if the effective cpumask of any non- + * empty cpuset is changed, we need to rebuild sched domains. 
+@@ -3213,6 +3232,14 @@ static int cpuset_css_online(struct cgroup_subsys_state *css) + cs->use_parent_ecpus = true; + parent->child_ecpus_count++; + } ++ ++ /* ++ * For v2, clear CS_SCHED_LOAD_BALANCE if parent is isolated ++ */ ++ if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) && ++ !is_sched_load_balance(parent)) ++ clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags); ++ + spin_unlock_irq(&callback_lock); + + if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags)) +diff --git a/kernel/cgroup/namespace.c b/kernel/cgroup/namespace.c +index 0d5c29879a50b..144a464e45c66 100644 +--- a/kernel/cgroup/namespace.c ++++ b/kernel/cgroup/namespace.c +@@ -149,9 +149,3 @@ const struct proc_ns_operations cgroupns_operations = { + .install = cgroupns_install, + .owner = cgroupns_owner, + }; +- +-static __init int cgroup_namespaces_init(void) +-{ +- return 0; +-} +-subsys_initcall(cgroup_namespaces_init); +diff --git a/kernel/cpu.c b/kernel/cpu.c +index 98a7a7b1471b7..f8eb1825f704f 100644 +--- a/kernel/cpu.c ++++ b/kernel/cpu.c +@@ -1215,8 +1215,22 @@ out: + return ret; + } + ++struct cpu_down_work { ++ unsigned int cpu; ++ enum cpuhp_state target; ++}; ++ ++static long __cpu_down_maps_locked(void *arg) ++{ ++ struct cpu_down_work *work = arg; ++ ++ return _cpu_down(work->cpu, 0, work->target); ++} ++ + static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target) + { ++ struct cpu_down_work work = { .cpu = cpu, .target = target, }; ++ + /* + * If the platform does not support hotplug, report it explicitly to + * differentiate it from a transient offlining failure. +@@ -1225,7 +1239,15 @@ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target) + return -EOPNOTSUPP; + if (cpu_hotplug_disabled) + return -EBUSY; +- return _cpu_down(cpu, 0, target); ++ ++ /* ++ * Ensure that the control task does not run on the to be offlined ++ * CPU to prevent a deadlock against cfs_b->period_timer. 
++ */ ++ cpu = cpumask_any_but(cpu_online_mask, cpu); ++ if (cpu >= nr_cpu_ids) ++ return -EBUSY; ++ return work_on_cpu(cpu, __cpu_down_maps_locked, &work); + } + + static int cpu_down(unsigned int cpu, enum cpuhp_state target) +diff --git a/kernel/kprobes.c b/kernel/kprobes.c +index 00e177de91ccd..3da9726232ff9 100644 +--- a/kernel/kprobes.c ++++ b/kernel/kprobes.c +@@ -1545,6 +1545,17 @@ static int check_ftrace_location(struct kprobe *p) + return 0; + } + ++static bool is_cfi_preamble_symbol(unsigned long addr) ++{ ++ char symbuf[KSYM_NAME_LEN]; ++ ++ if (lookup_symbol_name(addr, symbuf)) ++ return false; ++ ++ return str_has_prefix("__cfi_", symbuf) || ++ str_has_prefix("__pfx_", symbuf); ++} ++ + static int check_kprobe_address_safe(struct kprobe *p, + struct module **probed_mod) + { +@@ -1563,7 +1574,8 @@ static int check_kprobe_address_safe(struct kprobe *p, + within_kprobe_blacklist((unsigned long) p->addr) || + jump_label_text_reserved(p->addr, p->addr) || + static_call_text_reserved(p->addr, p->addr) || +- find_bug((unsigned long)p->addr)) { ++ find_bug((unsigned long)p->addr) || ++ is_cfi_preamble_symbol((unsigned long)p->addr)) { + ret = -EINVAL; + goto out; + } +diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c +index 2b7b6ddab4f70..0bbcd1344f218 100644 +--- a/kernel/printk/printk_ringbuffer.c ++++ b/kernel/printk/printk_ringbuffer.c +@@ -1735,7 +1735,7 @@ static bool copy_data(struct prb_data_ring *data_ring, + if (!buf || !buf_size) + return true; + +- data_size = min_t(u16, buf_size, len); ++ data_size = min_t(unsigned int, buf_size, len); + + memcpy(&buf[0], data, data_size); /* LMM(copy_data:A) */ + return true; +diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c +index d49a9d66e0000..3a93c53f615f0 100644 +--- a/kernel/rcu/refscale.c ++++ b/kernel/rcu/refscale.c +@@ -867,12 +867,11 @@ ref_scale_init(void) + VERBOSE_SCALEOUT("Starting %d reader threads", nreaders); + + for (i = 0; i < nreaders; i++) { ++ init_waitqueue_head(&reader_tasks[i].wq); + firsterr = torture_create_kthread(ref_scale_reader, (void *)i, + reader_tasks[i].task); + if (torture_init_error(firsterr)) + goto unwind; +- +- init_waitqueue_head(&(reader_tasks[i].wq)); + } + + // Main Task +diff --git a/kernel/resource.c b/kernel/resource.c +index 1aeeededdd4c8..8f52f88009652 100644 +--- a/kernel/resource.c ++++ b/kernel/resource.c +@@ -1693,18 +1693,15 @@ static int strict_iomem_checks; + * + * Returns true if exclusive to the kernel, otherwise returns false. 
+ */ +-bool iomem_is_exclusive(u64 addr) ++bool resource_is_exclusive(struct resource *root, u64 addr, resource_size_t size) + { + const unsigned int exclusive_system_ram = IORESOURCE_SYSTEM_RAM | + IORESOURCE_EXCLUSIVE; + bool skip_children = false, err = false; +- int size = PAGE_SIZE; + struct resource *p; + +- addr = addr & PAGE_MASK; +- + read_lock(&resource_lock); +- for_each_resource(&iomem_resource, p, skip_children) { ++ for_each_resource(root, p, skip_children) { + if (p->start >= addr + size) + break; + if (p->end < addr) { +@@ -1743,6 +1740,12 @@ bool iomem_is_exclusive(u64 addr) + return err; + } + ++bool iomem_is_exclusive(u64 addr) ++{ ++ return resource_is_exclusive(&iomem_resource, addr & PAGE_MASK, ++ PAGE_SIZE); ++} ++ + struct resource_entry *resource_list_create_entry(struct resource *res, + size_t extra_size) + { +diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c +index 4f5796dd26a56..576eb2f51f043 100644 +--- a/kernel/sched/rt.c ++++ b/kernel/sched/rt.c +@@ -25,7 +25,7 @@ unsigned int sysctl_sched_rt_period = 1000000; + int sysctl_sched_rt_runtime = 950000; + + #ifdef CONFIG_SYSCTL +-static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE; ++static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC * RR_TIMESLICE) / HZ; + static int sched_rt_handler(struct ctl_table *table, int write, void *buffer, + size_t *lenp, loff_t *ppos); + static int sched_rr_handler(struct ctl_table *table, int write, void *buffer, +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c +index 1ad89eec2a55f..798e1841d2863 100644 +--- a/kernel/time/tick-sched.c ++++ b/kernel/time/tick-sched.c +@@ -1050,7 +1050,7 @@ static bool report_idle_softirq(void) + return false; + + /* On RT, softirqs handling may be waiting on some lock */ +- if (!local_bh_blocked()) ++ if (local_bh_blocked()) + return false; + + pr_warn("NOHZ tick-stop error: local softirq work is pending, handler #%02x!!!\n", +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c +index ad04390883ada..9fc5db194027b 100644 +--- a/kernel/trace/bpf_trace.c ++++ b/kernel/trace/bpf_trace.c +@@ -2390,7 +2390,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id, + #ifdef CONFIG_UPROBE_EVENTS + if (flags & TRACE_EVENT_FL_UPROBE) + err = bpf_get_uprobe_info(event, fd_type, buf, +- probe_offset, ++ probe_offset, probe_addr, + event->attr.type == PERF_TYPE_TRACEPOINT); + #endif + } +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 1a87cb70f1eb5..54ccdca395311 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -6616,10 +6616,36 @@ tracing_max_lat_write(struct file *filp, const char __user *ubuf, + + #endif + ++static int open_pipe_on_cpu(struct trace_array *tr, int cpu) ++{ ++ if (cpu == RING_BUFFER_ALL_CPUS) { ++ if (cpumask_empty(tr->pipe_cpumask)) { ++ cpumask_setall(tr->pipe_cpumask); ++ return 0; ++ } ++ } else if (!cpumask_test_cpu(cpu, tr->pipe_cpumask)) { ++ cpumask_set_cpu(cpu, tr->pipe_cpumask); ++ return 0; ++ } ++ return -EBUSY; ++} ++ ++static void close_pipe_on_cpu(struct trace_array *tr, int cpu) ++{ ++ if (cpu == RING_BUFFER_ALL_CPUS) { ++ WARN_ON(!cpumask_full(tr->pipe_cpumask)); ++ cpumask_clear(tr->pipe_cpumask); ++ } else { ++ WARN_ON(!cpumask_test_cpu(cpu, tr->pipe_cpumask)); ++ cpumask_clear_cpu(cpu, tr->pipe_cpumask); ++ } ++} ++ + static int tracing_open_pipe(struct inode *inode, struct file *filp) + { + struct trace_array *tr = inode->i_private; + struct trace_iterator *iter; ++ int cpu; + int ret; + + ret = 
tracing_check_open_get_tr(tr); +@@ -6627,13 +6653,16 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp) + return ret; + + mutex_lock(&trace_types_lock); ++ cpu = tracing_get_cpu(inode); ++ ret = open_pipe_on_cpu(tr, cpu); ++ if (ret) ++ goto fail_pipe_on_cpu; + + /* create a buffer to store the information to pass to userspace */ + iter = kzalloc(sizeof(*iter), GFP_KERNEL); + if (!iter) { + ret = -ENOMEM; +- __trace_array_put(tr); +- goto out; ++ goto fail_alloc_iter; + } + + trace_seq_init(&iter->seq); +@@ -6656,7 +6685,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp) + + iter->tr = tr; + iter->array_buffer = &tr->array_buffer; +- iter->cpu_file = tracing_get_cpu(inode); ++ iter->cpu_file = cpu; + mutex_init(&iter->mutex); + filp->private_data = iter; + +@@ -6666,12 +6695,15 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp) + nonseekable_open(inode, filp); + + tr->trace_ref++; +-out: ++ + mutex_unlock(&trace_types_lock); + return ret; + + fail: + kfree(iter); ++fail_alloc_iter: ++ close_pipe_on_cpu(tr, cpu); ++fail_pipe_on_cpu: + __trace_array_put(tr); + mutex_unlock(&trace_types_lock); + return ret; +@@ -6688,7 +6720,7 @@ static int tracing_release_pipe(struct inode *inode, struct file *file) + + if (iter->trace->pipe_close) + iter->trace->pipe_close(iter); +- ++ close_pipe_on_cpu(tr, iter->cpu_file); + mutex_unlock(&trace_types_lock); + + free_cpumask_var(iter->started); +@@ -7484,6 +7516,11 @@ out: + return ret; + } + ++static void tracing_swap_cpu_buffer(void *tr) ++{ ++ update_max_tr_single((struct trace_array *)tr, current, smp_processor_id()); ++} ++ + static ssize_t + tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt, + loff_t *ppos) +@@ -7542,13 +7579,15 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt, + ret = tracing_alloc_snapshot_instance(tr); + if (ret < 0) + break; +- local_irq_disable(); + /* Now, we're going to swap */ +- if (iter->cpu_file == RING_BUFFER_ALL_CPUS) ++ if (iter->cpu_file == RING_BUFFER_ALL_CPUS) { ++ local_irq_disable(); + update_max_tr(tr, current, smp_processor_id(), NULL); +- else +- update_max_tr_single(tr, current, iter->cpu_file); +- local_irq_enable(); ++ local_irq_enable(); ++ } else { ++ smp_call_function_single(iter->cpu_file, tracing_swap_cpu_buffer, ++ (void *)tr, 1); ++ } + break; + default: + if (tr->allocated_snapshot) { +@@ -9356,6 +9395,9 @@ static struct trace_array *trace_array_create(const char *name) + if (!alloc_cpumask_var(&tr->tracing_cpumask, GFP_KERNEL)) + goto out_free_tr; + ++ if (!zalloc_cpumask_var(&tr->pipe_cpumask, GFP_KERNEL)) ++ goto out_free_tr; ++ + tr->trace_flags = global_trace.trace_flags & ~ZEROED_TRACE_FLAGS; + + cpumask_copy(tr->tracing_cpumask, cpu_all_mask); +@@ -9397,6 +9439,7 @@ static struct trace_array *trace_array_create(const char *name) + out_free_tr: + ftrace_free_ftrace_ops(tr); + free_trace_buffers(tr); ++ free_cpumask_var(tr->pipe_cpumask); + free_cpumask_var(tr->tracing_cpumask); + kfree(tr->name); + kfree(tr); +@@ -9499,6 +9542,7 @@ static int __remove_instance(struct trace_array *tr) + } + kfree(tr->topts); + ++ free_cpumask_var(tr->pipe_cpumask); + free_cpumask_var(tr->tracing_cpumask); + kfree(tr->name); + kfree(tr); +@@ -10223,12 +10267,14 @@ __init static int tracer_alloc_buffers(void) + if (trace_create_savedcmd() < 0) + goto out_free_temp_buffer; + ++ if (!zalloc_cpumask_var(&global_trace.pipe_cpumask, GFP_KERNEL)) ++ goto out_free_savedcmd; ++ + /* TODO: make the 
number of buffers hot pluggable with CPUS */ + if (allocate_trace_buffers(&global_trace, ring_buf_size) < 0) { + MEM_FAIL(1, "tracer: failed to allocate ring buffer!\n"); +- goto out_free_savedcmd; ++ goto out_free_pipe_cpumask; + } +- + if (global_trace.buffer_disabled) + tracing_off(); + +@@ -10281,6 +10327,8 @@ __init static int tracer_alloc_buffers(void) + + return 0; + ++out_free_pipe_cpumask: ++ free_cpumask_var(global_trace.pipe_cpumask); + out_free_savedcmd: + free_saved_cmdlines_buffer(savedcmd); + out_free_temp_buffer: +diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h +index 3d3505286aa7f..dbb86b0dd3b7b 100644 +--- a/kernel/trace/trace.h ++++ b/kernel/trace/trace.h +@@ -366,6 +366,8 @@ struct trace_array { + struct list_head events; + struct trace_event_file *trace_marker_file; + cpumask_var_t tracing_cpumask; /* only trace on set CPUs */ ++ /* one per_cpu trace_pipe can be opened by only one user */ ++ cpumask_var_t pipe_cpumask; + int ref; + int trace_ref; + #ifdef CONFIG_FUNCTION_TRACER +diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c +index 2f37a6e68aa9f..b791524a6536a 100644 +--- a/kernel/trace/trace_hwlat.c ++++ b/kernel/trace/trace_hwlat.c +@@ -635,7 +635,7 @@ static int s_mode_show(struct seq_file *s, void *v) + else + seq_printf(s, "%s", thread_mode_str[mode]); + +- if (mode != MODE_MAX) ++ if (mode < MODE_MAX - 1) /* if mode is any but last */ + seq_puts(s, " "); + + return 0; +diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c +index 2ac06a642863a..127c78aec17db 100644 +--- a/kernel/trace/trace_uprobe.c ++++ b/kernel/trace/trace_uprobe.c +@@ -1418,7 +1418,7 @@ static void uretprobe_perf_func(struct trace_uprobe *tu, unsigned long func, + + int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type, + const char **filename, u64 *probe_offset, +- bool perf_type_tracepoint) ++ u64 *probe_addr, bool perf_type_tracepoint) + { + const char *pevent = trace_event_name(event->tp_event); + const char *group = event->tp_event->class->system; +@@ -1435,6 +1435,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type, + : BPF_FD_TYPE_UPROBE; + *filename = tu->filename; + *probe_offset = tu->offset; ++ *probe_addr = 0; + return 0; + } + #endif /* CONFIG_PERF_EVENTS */ +diff --git a/lib/xarray.c b/lib/xarray.c +index ea9ce1f0b3863..e9bd29826e8b0 100644 +--- a/lib/xarray.c ++++ b/lib/xarray.c +@@ -204,7 +204,7 @@ static void *xas_descend(struct xa_state *xas, struct xa_node *node) + void *entry = xa_entry(xas->xa, node, offset); + + xas->xa_node = node; +- if (xa_is_sibling(entry)) { ++ while (xa_is_sibling(entry)) { + offset = xa_to_sibling(entry); + entry = xa_entry(xas->xa, node, offset); + if (node->shift && xa_is_node(entry)) +diff --git a/mm/shmem.c b/mm/shmem.c +index 10365ced5b1fc..806741bbe4a68 100644 +--- a/mm/shmem.c ++++ b/mm/shmem.c +@@ -3485,6 +3485,8 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param) + unsigned long long size; + char *rest; + int opt; ++ kuid_t kuid; ++ kgid_t kgid; + + opt = fs_parse(fc, shmem_fs_parameters, param, &result); + if (opt < 0) +@@ -3520,14 +3522,32 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param) + ctx->mode = result.uint_32 & 07777; + break; + case Opt_uid: +- ctx->uid = make_kuid(current_user_ns(), result.uint_32); +- if (!uid_valid(ctx->uid)) ++ kuid = make_kuid(current_user_ns(), result.uint_32); ++ if (!uid_valid(kuid)) + goto bad_value; ++ ++ /* ++ * The requested uid must be representable in the 
++ * filesystem's idmapping. ++ */ ++ if (!kuid_has_mapping(fc->user_ns, kuid)) ++ goto bad_value; ++ ++ ctx->uid = kuid; + break; + case Opt_gid: +- ctx->gid = make_kgid(current_user_ns(), result.uint_32); +- if (!gid_valid(ctx->gid)) ++ kgid = make_kgid(current_user_ns(), result.uint_32); ++ if (!gid_valid(kgid)) + goto bad_value; ++ ++ /* ++ * The requested gid must be representable in the ++ * filesystem's idmapping. ++ */ ++ if (!kgid_has_mapping(fc->user_ns, kgid)) ++ goto bad_value; ++ ++ ctx->gid = kgid; + break; + case Opt_huge: + ctx->huge = result.uint_32; +diff --git a/mm/util.c b/mm/util.c +index 12984e76767eb..ce3bb17c97b9d 100644 +--- a/mm/util.c ++++ b/mm/util.c +@@ -1127,7 +1127,9 @@ void mem_dump_obj(void *object) + if (vmalloc_dump_obj(object)) + return; + +- if (virt_addr_valid(object)) ++ if (is_vmalloc_addr(object)) ++ type = "vmalloc memory"; ++ else if (virt_addr_valid(object)) + type = "non-slab/vmalloc memory"; + else if (object == NULL) + type = "NULL pointer"; +diff --git a/mm/vmalloc.c b/mm/vmalloc.c +index 80bd104a4d42e..67a10a04df041 100644 +--- a/mm/vmalloc.c ++++ b/mm/vmalloc.c +@@ -4041,14 +4041,32 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms) + #ifdef CONFIG_PRINTK + bool vmalloc_dump_obj(void *object) + { +- struct vm_struct *vm; + void *objp = (void *)PAGE_ALIGN((unsigned long)object); ++ const void *caller; ++ struct vm_struct *vm; ++ struct vmap_area *va; ++ unsigned long addr; ++ unsigned int nr_pages; ++ ++ if (!spin_trylock(&vmap_area_lock)) ++ return false; ++ va = __find_vmap_area((unsigned long)objp, &vmap_area_root); ++ if (!va) { ++ spin_unlock(&vmap_area_lock); ++ return false; ++ } + +- vm = find_vm_area(objp); +- if (!vm) ++ vm = va->vm; ++ if (!vm) { ++ spin_unlock(&vmap_area_lock); + return false; ++ } ++ addr = (unsigned long)vm->addr; ++ caller = vm->caller; ++ nr_pages = vm->nr_pages; ++ spin_unlock(&vmap_area_lock); + pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n", +- vm->nr_pages, (unsigned long)vm->addr, vm->caller); ++ nr_pages, addr, caller); + return true; + } + #endif +diff --git a/mm/vmpressure.c b/mm/vmpressure.c +index b52644771cc43..22c6689d93027 100644 +--- a/mm/vmpressure.c ++++ b/mm/vmpressure.c +@@ -244,6 +244,14 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree, + if (mem_cgroup_disabled()) + return; + ++ /* ++ * The in-kernel users only care about the reclaim efficiency ++ * for this @memcg rather than the whole subtree, and there ++ * isn't and won't be any in-kernel user in a legacy cgroup. ++ */ ++ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && !tree) ++ return; ++ + vmpr = memcg_to_vmpressure(memcg); + + /* +diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c +index 3f3eb03cda7d6..3cf660d8a0a7b 100644 +--- a/net/9p/trans_virtio.c ++++ b/net/9p/trans_virtio.c +@@ -385,7 +385,7 @@ static void handle_rerror(struct p9_req_t *req, int in_hdr_len, + void *to = req->rc.sdata + in_hdr_len; + + // Fits entirely into the static data? Nothing to do. +- if (req->rc.size < in_hdr_len) ++ if (req->rc.size < in_hdr_len || !pages) + return; + + // Really long error message? Tough, truncate the reply. 
Might get +@@ -429,7 +429,7 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req, + struct page **in_pages = NULL, **out_pages = NULL; + struct virtio_chan *chan = client->trans; + struct scatterlist *sgs[4]; +- size_t offs; ++ size_t offs = 0; + int need_drop = 0; + int kicked = 0; + +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c +index d034bf2a999e1..146553c0054f6 100644 +--- a/net/bluetooth/hci_core.c ++++ b/net/bluetooth/hci_core.c +@@ -1074,9 +1074,9 @@ void hci_uuids_clear(struct hci_dev *hdev) + + void hci_link_keys_clear(struct hci_dev *hdev) + { +- struct link_key *key; ++ struct link_key *key, *tmp; + +- list_for_each_entry(key, &hdev->link_keys, list) { ++ list_for_each_entry_safe(key, tmp, &hdev->link_keys, list) { + list_del_rcu(&key->list); + kfree_rcu(key, rcu); + } +@@ -1084,9 +1084,9 @@ void hci_link_keys_clear(struct hci_dev *hdev) + + void hci_smp_ltks_clear(struct hci_dev *hdev) + { +- struct smp_ltk *k; ++ struct smp_ltk *k, *tmp; + +- list_for_each_entry(k, &hdev->long_term_keys, list) { ++ list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) { + list_del_rcu(&k->list); + kfree_rcu(k, rcu); + } +@@ -1094,9 +1094,9 @@ void hci_smp_ltks_clear(struct hci_dev *hdev) + + void hci_smp_irks_clear(struct hci_dev *hdev) + { +- struct smp_irk *k; ++ struct smp_irk *k, *tmp; + +- list_for_each_entry(k, &hdev->identity_resolving_keys, list) { ++ list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) { + list_del_rcu(&k->list); + kfree_rcu(k, rcu); + } +@@ -1104,9 +1104,9 @@ void hci_smp_irks_clear(struct hci_dev *hdev) + + void hci_blocked_keys_clear(struct hci_dev *hdev) + { +- struct blocked_key *b; ++ struct blocked_key *b, *tmp; + +- list_for_each_entry(b, &hdev->blocked_keys, list) { ++ list_for_each_entry_safe(b, tmp, &hdev->blocked_keys, list) { + list_del_rcu(&b->list); + kfree_rcu(b, rcu); + } +@@ -1949,15 +1949,15 @@ int hci_add_adv_monitor(struct hci_dev *hdev, struct adv_monitor *monitor) + + switch (hci_get_adv_monitor_offload_ext(hdev)) { + case HCI_ADV_MONITOR_EXT_NONE: +- bt_dev_dbg(hdev, "%s add monitor %d status %d", hdev->name, ++ bt_dev_dbg(hdev, "add monitor %d status %d", + monitor->handle, status); + /* Message was not forwarded to controller - not an error */ + break; + + case HCI_ADV_MONITOR_EXT_MSFT: + status = msft_add_monitor_pattern(hdev, monitor); +- bt_dev_dbg(hdev, "%s add monitor %d msft status %d", hdev->name, +- monitor->handle, status); ++ bt_dev_dbg(hdev, "add monitor %d msft status %d", ++ handle, status); + break; + } + +@@ -1976,15 +1976,15 @@ static int hci_remove_adv_monitor(struct hci_dev *hdev, + + switch (hci_get_adv_monitor_offload_ext(hdev)) { + case HCI_ADV_MONITOR_EXT_NONE: /* also goes here when powered off */ +- bt_dev_dbg(hdev, "%s remove monitor %d status %d", hdev->name, ++ bt_dev_dbg(hdev, "remove monitor %d status %d", + monitor->handle, status); + goto free_monitor; + + case HCI_ADV_MONITOR_EXT_MSFT: + handle = monitor->handle; + status = msft_remove_monitor(hdev, monitor); +- bt_dev_dbg(hdev, "%s remove monitor %d msft status %d", +- hdev->name, handle, status); ++ bt_dev_dbg(hdev, "remove monitor %d msft status %d", ++ handle, status); + break; + } + +diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c +index 699e4f400df29..5cd2e775915be 100644 +--- a/net/bluetooth/iso.c ++++ b/net/bluetooth/iso.c +@@ -1394,7 +1394,7 @@ static int iso_sock_release(struct socket *sock) + + iso_sock_close(sk); + +- if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime && ++ if 
(sock_flag(sk, SOCK_LINGER) && READ_ONCE(sk->sk_lingertime) && + !(current->flags & PF_EXITING)) { + lock_sock(sk); + err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime); +diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c +index 1755f91a66f6a..6d4168cfeb563 100644 +--- a/net/bluetooth/sco.c ++++ b/net/bluetooth/sco.c +@@ -1255,7 +1255,7 @@ static int sco_sock_release(struct socket *sock) + + sco_sock_close(sk); + +- if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime && ++ if (sock_flag(sk, SOCK_LINGER) && READ_ONCE(sk->sk_lingertime) && + !(current->flags & PF_EXITING)) { + lock_sock(sk); + err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime); +diff --git a/net/bridge/br_stp_if.c b/net/bridge/br_stp_if.c +index b65962682771f..75204d36d7f90 100644 +--- a/net/bridge/br_stp_if.c ++++ b/net/bridge/br_stp_if.c +@@ -201,9 +201,6 @@ int br_stp_set_enabled(struct net_bridge *br, unsigned long val, + { + ASSERT_RTNL(); + +- if (!net_eq(dev_net(br->dev), &init_net)) +- NL_SET_ERR_MSG_MOD(extack, "STP does not work in non-root netns"); +- + if (br_mrp_enabled(br)) { + NL_SET_ERR_MSG_MOD(extack, + "STP can't be enabled if MRP is already enabled"); +diff --git a/net/core/filter.c b/net/core/filter.c +index 419ce7c61bd6b..9fd7c88b5db4e 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -7259,6 +7259,8 @@ BPF_CALL_3(bpf_sk_assign, struct sk_buff *, skb, struct sock *, sk, u64, flags) + return -ENETUNREACH; + if (unlikely(sk_fullsock(sk) && sk->sk_reuseport)) + return -ESOCKTNOSUPPORT; ++ if (sk_unhashed(sk)) ++ return -EOPNOTSUPP; + if (sk_is_refcounted(sk) && + unlikely(!refcount_inc_not_zero(&sk->sk_refcnt))) + return -ENOENT; +diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c +index 8b6b5e72b2179..4a0797f0a154b 100644 +--- a/net/core/lwt_bpf.c ++++ b/net/core/lwt_bpf.c +@@ -60,9 +60,8 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt, + ret = BPF_OK; + } else { + skb_reset_mac_header(skb); +- ret = skb_do_redirect(skb); +- if (ret == 0) +- ret = BPF_REDIRECT; ++ skb_do_redirect(skb); ++ ret = BPF_REDIRECT; + } + break; + +@@ -255,7 +254,7 @@ static int bpf_lwt_xmit_reroute(struct sk_buff *skb) + + err = dst_output(dev_net(skb_dst(skb)->dev), skb->sk, skb); + if (unlikely(err)) +- return err; ++ return net_xmit_errno(err); + + /* ip[6]_finish_output2 understand LWTUNNEL_XMIT_DONE */ + return LWTUNNEL_XMIT_DONE; +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index b6c16db86c719..24bf4aa222d27 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -4135,21 +4135,20 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb, + struct sk_buff *segs = NULL; + struct sk_buff *tail = NULL; + struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list; +- skb_frag_t *frag = skb_shinfo(head_skb)->frags; + unsigned int mss = skb_shinfo(head_skb)->gso_size; + unsigned int doffset = head_skb->data - skb_mac_header(head_skb); +- struct sk_buff *frag_skb = head_skb; + unsigned int offset = doffset; + unsigned int tnl_hlen = skb_tnl_header_len(head_skb); + unsigned int partial_segs = 0; + unsigned int headroom; + unsigned int len = head_skb->len; ++ struct sk_buff *frag_skb; ++ skb_frag_t *frag; + __be16 proto; + bool csum, sg; +- int nfrags = skb_shinfo(head_skb)->nr_frags; + int err = -ENOMEM; + int i = 0; +- int pos; ++ int nfrags, pos; + + if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) && + mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) { +@@ -4226,6 +4225,13 @@ normal: + headroom = skb_headroom(head_skb); + pos = skb_headlen(head_skb); + 
++ if (skb_orphan_frags(head_skb, GFP_ATOMIC)) ++ return ERR_PTR(-ENOMEM); ++ ++ nfrags = skb_shinfo(head_skb)->nr_frags; ++ frag = skb_shinfo(head_skb)->frags; ++ frag_skb = head_skb; ++ + do { + struct sk_buff *nskb; + skb_frag_t *nskb_frag; +@@ -4246,6 +4252,10 @@ normal: + (skb_headlen(list_skb) == len || sg)) { + BUG_ON(skb_headlen(list_skb) > len); + ++ nskb = skb_clone(list_skb, GFP_ATOMIC); ++ if (unlikely(!nskb)) ++ goto err; ++ + i = 0; + nfrags = skb_shinfo(list_skb)->nr_frags; + frag = skb_shinfo(list_skb)->frags; +@@ -4264,12 +4274,8 @@ normal: + frag++; + } + +- nskb = skb_clone(list_skb, GFP_ATOMIC); + list_skb = list_skb->next; + +- if (unlikely(!nskb)) +- goto err; +- + if (unlikely(pskb_trim(nskb, len))) { + kfree_skb(nskb); + goto err; +@@ -4345,12 +4351,16 @@ normal: + skb_shinfo(nskb)->flags |= skb_shinfo(head_skb)->flags & + SKBFL_SHARED_FRAG; + +- if (skb_orphan_frags(frag_skb, GFP_ATOMIC) || +- skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC)) ++ if (skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC)) + goto err; + + while (pos < offset + len) { + if (i >= nfrags) { ++ if (skb_orphan_frags(list_skb, GFP_ATOMIC) || ++ skb_zerocopy_clone(nskb, list_skb, ++ GFP_ATOMIC)) ++ goto err; ++ + i = 0; + nfrags = skb_shinfo(list_skb)->nr_frags; + frag = skb_shinfo(list_skb)->frags; +@@ -4364,10 +4374,6 @@ normal: + i--; + frag--; + } +- if (skb_orphan_frags(frag_skb, GFP_ATOMIC) || +- skb_zerocopy_clone(nskb, frag_skb, +- GFP_ATOMIC)) +- goto err; + + list_skb = list_skb->next; + } +diff --git a/net/core/sock.c b/net/core/sock.c +index 509773919d302..fc475845c94d5 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -425,6 +425,7 @@ static int sock_set_timeout(long *timeo_p, sockptr_t optval, int optlen, + { + struct __kernel_sock_timeval tv; + int err = sock_copy_user_timeval(&tv, optval, optlen, old_timeval); ++ long val; + + if (err) + return err; +@@ -435,7 +436,7 @@ static int sock_set_timeout(long *timeo_p, sockptr_t optval, int optlen, + if (tv.tv_sec < 0) { + static int warned __read_mostly; + +- *timeo_p = 0; ++ WRITE_ONCE(*timeo_p, 0); + if (warned < 10 && net_ratelimit()) { + warned++; + pr_info("%s: `%s' (pid %d) tries to set negative timeout\n", +@@ -443,11 +444,12 @@ static int sock_set_timeout(long *timeo_p, sockptr_t optval, int optlen, + } + return 0; + } +- *timeo_p = MAX_SCHEDULE_TIMEOUT; +- if (tv.tv_sec == 0 && tv.tv_usec == 0) +- return 0; +- if (tv.tv_sec < (MAX_SCHEDULE_TIMEOUT / HZ - 1)) +- *timeo_p = tv.tv_sec * HZ + DIV_ROUND_UP((unsigned long)tv.tv_usec, USEC_PER_SEC / HZ); ++ val = MAX_SCHEDULE_TIMEOUT; ++ if ((tv.tv_sec || tv.tv_usec) && ++ (tv.tv_sec < (MAX_SCHEDULE_TIMEOUT / HZ - 1))) ++ val = tv.tv_sec * HZ + DIV_ROUND_UP((unsigned long)tv.tv_usec, ++ USEC_PER_SEC / HZ); ++ WRITE_ONCE(*timeo_p, val); + return 0; + } + +@@ -791,7 +793,7 @@ EXPORT_SYMBOL(sock_set_reuseport); + void sock_no_linger(struct sock *sk) + { + lock_sock(sk); +- sk->sk_lingertime = 0; ++ WRITE_ONCE(sk->sk_lingertime, 0); + sock_set_flag(sk, SOCK_LINGER); + release_sock(sk); + } +@@ -809,9 +811,9 @@ void sock_set_sndtimeo(struct sock *sk, s64 secs) + { + lock_sock(sk); + if (secs && secs < MAX_SCHEDULE_TIMEOUT / HZ - 1) +- sk->sk_sndtimeo = secs * HZ; ++ WRITE_ONCE(sk->sk_sndtimeo, secs * HZ); + else +- sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT; ++ WRITE_ONCE(sk->sk_sndtimeo, MAX_SCHEDULE_TIMEOUT); + release_sock(sk); + } + EXPORT_SYMBOL(sock_set_sndtimeo); +@@ -1217,15 +1219,15 @@ set_sndbuf: + ret = -EFAULT; + break; + } +- if (!ling.l_onoff) ++ if (!ling.l_onoff) { + 
sock_reset_flag(sk, SOCK_LINGER); +- else { +-#if (BITS_PER_LONG == 32) +- if ((unsigned int)ling.l_linger >= MAX_SCHEDULE_TIMEOUT/HZ) +- sk->sk_lingertime = MAX_SCHEDULE_TIMEOUT; ++ } else { ++ unsigned long t_sec = ling.l_linger; ++ ++ if (t_sec >= MAX_SCHEDULE_TIMEOUT / HZ) ++ WRITE_ONCE(sk->sk_lingertime, MAX_SCHEDULE_TIMEOUT); + else +-#endif +- sk->sk_lingertime = (unsigned int)ling.l_linger * HZ; ++ WRITE_ONCE(sk->sk_lingertime, t_sec * HZ); + sock_set_flag(sk, SOCK_LINGER); + } + break; +@@ -1676,7 +1678,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname, + case SO_LINGER: + lv = sizeof(v.ling); + v.ling.l_onoff = sock_flag(sk, SOCK_LINGER); +- v.ling.l_linger = sk->sk_lingertime / HZ; ++ v.ling.l_linger = READ_ONCE(sk->sk_lingertime) / HZ; + break; + + case SO_BSDCOMPAT: +@@ -1708,12 +1710,14 @@ int sk_getsockopt(struct sock *sk, int level, int optname, + + case SO_RCVTIMEO_OLD: + case SO_RCVTIMEO_NEW: +- lv = sock_get_timeout(sk->sk_rcvtimeo, &v, SO_RCVTIMEO_OLD == optname); ++ lv = sock_get_timeout(READ_ONCE(sk->sk_rcvtimeo), &v, ++ SO_RCVTIMEO_OLD == optname); + break; + + case SO_SNDTIMEO_OLD: + case SO_SNDTIMEO_NEW: +- lv = sock_get_timeout(sk->sk_sndtimeo, &v, SO_SNDTIMEO_OLD == optname); ++ lv = sock_get_timeout(READ_ONCE(sk->sk_sndtimeo), &v, ++ SO_SNDTIMEO_OLD == optname); + break; + + case SO_RCVLOWAT: +diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c +index bfececa9e244e..8f5d3c0881118 100644 +--- a/net/dccp/ipv4.c ++++ b/net/dccp/ipv4.c +@@ -255,12 +255,17 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info) + int err; + struct net *net = dev_net(skb->dev); + +- /* Only need dccph_dport & dccph_sport which are the first +- * 4 bytes in dccp header. ++ /* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x, ++ * which is in byte 7 of the dccp header. + * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us. ++ * ++ * Later on, we want to access the sequence number fields, which are ++ * beyond 8 bytes, so we have to pskb_may_pull() ourselves. + */ +- BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8); +- BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8); ++ dh = (struct dccp_hdr *)(skb->data + offset); ++ if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh))) ++ return -EINVAL; ++ iph = (struct iphdr *)skb->data; + dh = (struct dccp_hdr *)(skb->data + offset); + + sk = __inet_lookup_established(net, &dccp_hashinfo, +diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c +index b51ce6f8ceba0..2b09e2644b13f 100644 +--- a/net/dccp/ipv6.c ++++ b/net/dccp/ipv6.c +@@ -74,7 +74,7 @@ static inline __u64 dccp_v6_init_sequence(struct sk_buff *skb) + static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + u8 type, u8 code, int offset, __be32 info) + { +- const struct ipv6hdr *hdr = (const struct ipv6hdr *)skb->data; ++ const struct ipv6hdr *hdr; + const struct dccp_hdr *dh; + struct dccp_sock *dp; + struct ipv6_pinfo *np; +@@ -83,12 +83,17 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + __u64 seq; + struct net *net = dev_net(skb->dev); + +- /* Only need dccph_dport & dccph_sport which are the first +- * 4 bytes in dccp header. ++ /* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x, ++ * which is in byte 7 of the dccp header. + * Our caller (icmpv6_notify()) already pulled 8 bytes for us. ++ * ++ * Later on, we want to access the sequence number fields, which are ++ * beyond 8 bytes, so we have to pskb_may_pull() ourselves. 
+ */ +- BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8); +- BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8); ++ dh = (struct dccp_hdr *)(skb->data + offset); ++ if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh))) ++ return -EINVAL; ++ hdr = (const struct ipv6hdr *)skb->data; + dh = (struct dccp_hdr *)(skb->data + offset); + + sk = __inet6_lookup_established(net, &dccp_hashinfo, +diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c +index 81be3e0f0e704..cbc4816ed7d83 100644 +--- a/net/ipv4/igmp.c ++++ b/net/ipv4/igmp.c +@@ -353,8 +353,9 @@ static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu) + struct flowi4 fl4; + int hlen = LL_RESERVED_SPACE(dev); + int tlen = dev->needed_tailroom; +- unsigned int size = mtu; ++ unsigned int size; + ++ size = min(mtu, IP_MAX_MTU); + while (1) { + skb = alloc_skb(size + hlen + tlen, + GFP_ATOMIC | __GFP_NOWARN); +diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c +index acfe58d2f1dd7..ebd2cea5b7d7a 100644 +--- a/net/ipv4/ip_output.c ++++ b/net/ipv4/ip_output.c +@@ -214,7 +214,7 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s + if (lwtunnel_xmit_redirect(dst->lwtstate)) { + int res = lwtunnel_xmit(skb); + +- if (res < 0 || res == LWTUNNEL_XMIT_DONE) ++ if (res != LWTUNNEL_XMIT_CONTINUE) + return res; + } + +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index e2d3ea2e34561..c697836f2b5b4 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -287,7 +287,7 @@ static void tcp_incr_quickack(struct sock *sk, unsigned int max_quickacks) + icsk->icsk_ack.quick = quickacks; + } + +-void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks) ++static void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks) + { + struct inet_connection_sock *icsk = inet_csk(sk); + +@@ -295,7 +295,6 @@ void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks) + inet_csk_exit_pingpong_mode(sk); + icsk->icsk_ack.ato = TCP_ATO_MIN; + } +-EXPORT_SYMBOL(tcp_enter_quickack_mode); + + /* Send ACKs quickly, if "quick" count is not exhausted + * and the session is not interactive. 
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c +index cf354c29ec123..44b49f7d1a9e6 100644 +--- a/net/ipv4/tcp_timer.c ++++ b/net/ipv4/tcp_timer.c +@@ -441,6 +441,22 @@ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req) + req->timeout << req->num_timeout, TCP_RTO_MAX); + } + ++static bool tcp_rtx_probe0_timed_out(const struct sock *sk, ++ const struct sk_buff *skb) ++{ ++ const struct tcp_sock *tp = tcp_sk(sk); ++ const int timeout = TCP_RTO_MAX * 2; ++ u32 rcv_delta, rtx_delta; ++ ++ rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp; ++ if (rcv_delta <= timeout) ++ return false; ++ ++ rtx_delta = (u32)msecs_to_jiffies(tcp_time_stamp(tp) - ++ (tp->retrans_stamp ?: tcp_skb_timestamp(skb))); ++ ++ return rtx_delta > timeout; ++} + + /** + * tcp_retransmit_timer() - The TCP retransmit timeout handler +@@ -506,7 +522,7 @@ void tcp_retransmit_timer(struct sock *sk) + tp->snd_una, tp->snd_nxt); + } + #endif +- if (tcp_jiffies32 - tp->rcv_tstamp > TCP_RTO_MAX) { ++ if (tcp_rtx_probe0_timed_out(sk, skb)) { + tcp_write_err(sk); + goto out; + } +diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c +index 956d6797c76f3..42c1f7d9a980a 100644 +--- a/net/ipv4/udp.c ++++ b/net/ipv4/udp.c +@@ -445,14 +445,24 @@ static struct sock *udp4_lib_lookup2(struct net *net, + score = compute_score(sk, net, saddr, sport, + daddr, hnum, dif, sdif); + if (score > badness) { +- result = lookup_reuseport(net, sk, skb, +- saddr, sport, daddr, hnum); ++ badness = score; ++ result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); ++ if (!result) { ++ result = sk; ++ continue; ++ } ++ + /* Fall back to scoring if group has connections */ +- if (result && !reuseport_has_conns(sk)) ++ if (!reuseport_has_conns(sk)) + return result; + +- result = result ? : sk; +- badness = score; ++ /* Reuseport logic returned an error, keep original score. */ ++ if (IS_ERR(result)) ++ continue; ++ ++ badness = compute_score(result, net, saddr, sport, ++ daddr, hnum, dif, sdif); ++ + } + } + return result; +diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c +index 95a55c6630add..34192f7a166fb 100644 +--- a/net/ipv6/ip6_output.c ++++ b/net/ipv6/ip6_output.c +@@ -112,7 +112,7 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff * + if (lwtunnel_xmit_redirect(dst->lwtstate)) { + int res = lwtunnel_xmit(skb); + +- if (res < 0 || res == LWTUNNEL_XMIT_DONE) ++ if (res != LWTUNNEL_XMIT_CONTINUE) + return res; + } + +diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c +index 27348172b25b9..64b36c2ba774a 100644 +--- a/net/ipv6/udp.c ++++ b/net/ipv6/udp.c +@@ -193,14 +193,23 @@ static struct sock *udp6_lib_lookup2(struct net *net, + score = compute_score(sk, net, saddr, sport, + daddr, hnum, dif, sdif); + if (score > badness) { +- result = lookup_reuseport(net, sk, skb, +- saddr, sport, daddr, hnum); ++ badness = score; ++ result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum); ++ if (!result) { ++ result = sk; ++ continue; ++ } ++ + /* Fall back to scoring if group has connections */ +- if (result && !reuseport_has_conns(sk)) ++ if (!reuseport_has_conns(sk)) + return result; + +- result = result ? : sk; +- badness = score; ++ /* Reuseport logic returned an error, keep original score. 
*/ ++ if (IS_ERR(result)) ++ continue; ++ ++ badness = compute_score(sk, net, saddr, sport, ++ daddr, hnum, dif, sdif); + } + } + return result; +diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c +index 763cefd0cc268..2f9e1abdf375d 100644 +--- a/net/mac80211/tx.c ++++ b/net/mac80211/tx.c +@@ -4391,7 +4391,7 @@ static void ieee80211_mlo_multicast_tx(struct net_device *dev, + struct sk_buff *skb) + { + struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); +- unsigned long links = sdata->vif.valid_links; ++ unsigned long links = sdata->vif.active_links; + unsigned int link; + u32 ctrl_flags = IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX; + +@@ -5827,7 +5827,7 @@ void __ieee80211_tx_skb_tid_band(struct ieee80211_sub_if_data *sdata, + rcu_read_unlock(); + + if (WARN_ON_ONCE(link == ARRAY_SIZE(sdata->vif.link_conf))) +- link = ffs(sdata->vif.valid_links) - 1; ++ link = ffs(sdata->vif.active_links) - 1; + } + + IEEE80211_SKB_CB(skb)->control.flags |= +@@ -5863,7 +5863,7 @@ void ieee80211_tx_skb_tid(struct ieee80211_sub_if_data *sdata, + band = chanctx_conf->def.chan->band; + } else { + WARN_ON(link_id >= 0 && +- !(sdata->vif.valid_links & BIT(link_id))); ++ !(sdata->vif.active_links & BIT(link_id))); + /* MLD transmissions must not rely on the band */ + band = 0; + } +diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c +index 005a7ce87217e..bf4f91b78e1dc 100644 +--- a/net/netfilter/ipset/ip_set_hash_netportnet.c ++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c +@@ -36,6 +36,7 @@ MODULE_ALIAS("ip_set_hash:net,port,net"); + #define IP_SET_HASH_WITH_PROTO + #define IP_SET_HASH_WITH_NETS + #define IPSET_NET_COUNT 2 ++#define IP_SET_HASH_WITH_NET0 + + /* IPv4 variant */ + +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c +index a67ea9c3ae57d..c307c57a93e57 100644 +--- a/net/netfilter/nft_exthdr.c ++++ b/net/netfilter/nft_exthdr.c +@@ -238,7 +238,12 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr, + if (!tcph) + goto err; + ++ if (skb_ensure_writable(pkt->skb, nft_thoff(pkt) + tcphdr_len)) ++ goto err; ++ ++ tcph = (struct tcphdr *)(pkt->skb->data + nft_thoff(pkt)); + opt = (u8 *)tcph; ++ + for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) { + union { + __be16 v16; +@@ -253,15 +258,6 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr, + if (i + optl > tcphdr_len || priv->len + priv->offset > optl) + goto err; + +- if (skb_ensure_writable(pkt->skb, +- nft_thoff(pkt) + i + priv->len)) +- goto err; +- +- tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, +- &tcphdr_len); +- if (!tcph) +- goto err; +- + offset = i + priv->offset; + + switch (priv->len) { +@@ -325,9 +321,9 @@ static void nft_exthdr_tcp_strip_eval(const struct nft_expr *expr, + if (skb_ensure_writable(pkt->skb, nft_thoff(pkt) + tcphdr_len)) + goto drop; + +- opt = (u8 *)nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len); +- if (!opt) +- goto err; ++ tcph = (struct tcphdr *)(pkt->skb->data + nft_thoff(pkt)); ++ opt = (u8 *)tcph; ++ + for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) { + unsigned int j; + +diff --git a/net/netfilter/xt_sctp.c b/net/netfilter/xt_sctp.c +index 680015ba7cb6e..d4bf089c9e3f9 100644 +--- a/net/netfilter/xt_sctp.c ++++ b/net/netfilter/xt_sctp.c +@@ -150,6 +150,8 @@ static int sctp_mt_check(const struct xt_mtchk_param *par) + { + const struct xt_sctp_info *info = par->matchinfo; + ++ if (info->flag_count > ARRAY_SIZE(info->flag_info)) ++ return -EINVAL; + if (info->flags & 
~XT_SCTP_VALID_FLAGS) + return -EINVAL; + if (info->invflags & ~XT_SCTP_VALID_FLAGS) +diff --git a/net/netfilter/xt_u32.c b/net/netfilter/xt_u32.c +index 177b40d08098b..117d4615d6684 100644 +--- a/net/netfilter/xt_u32.c ++++ b/net/netfilter/xt_u32.c +@@ -96,11 +96,32 @@ static bool u32_mt(const struct sk_buff *skb, struct xt_action_param *par) + return ret ^ data->invert; + } + ++static int u32_mt_checkentry(const struct xt_mtchk_param *par) ++{ ++ const struct xt_u32 *data = par->matchinfo; ++ const struct xt_u32_test *ct; ++ unsigned int i; ++ ++ if (data->ntests > ARRAY_SIZE(data->tests)) ++ return -EINVAL; ++ ++ for (i = 0; i < data->ntests; ++i) { ++ ct = &data->tests[i]; ++ ++ if (ct->nnums > ARRAY_SIZE(ct->location) || ++ ct->nvalues > ARRAY_SIZE(ct->value)) ++ return -EINVAL; ++ } ++ ++ return 0; ++} ++ + static struct xt_match xt_u32_mt_reg __read_mostly = { + .name = "u32", + .revision = 0, + .family = NFPROTO_UNSPEC, + .match = u32_mt, ++ .checkentry = u32_mt_checkentry, + .matchsize = sizeof(struct xt_u32), + .me = THIS_MODULE, + }; +diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c +index 54c0830039470..27511c90a26f4 100644 +--- a/net/netlabel/netlabel_kapi.c ++++ b/net/netlabel/netlabel_kapi.c +@@ -857,7 +857,8 @@ int netlbl_catmap_setlong(struct netlbl_lsm_catmap **catmap, + + offset -= iter->startbit; + idx = offset / NETLBL_CATMAP_MAPSIZE; +- iter->bitmap[idx] |= bitmap << (offset % NETLBL_CATMAP_MAPSIZE); ++ iter->bitmap[idx] |= (NETLBL_CATMAP_MAPTYPE)bitmap ++ << (offset % NETLBL_CATMAP_MAPSIZE); + + return 0; + } +diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c +index 5a4cb796150f5..ec5747969f964 100644 +--- a/net/netrom/af_netrom.c ++++ b/net/netrom/af_netrom.c +@@ -660,6 +660,11 @@ static int nr_connect(struct socket *sock, struct sockaddr *uaddr, + goto out_release; + } + ++ if (sock->state == SS_CONNECTING) { ++ err = -EALREADY; ++ goto out_release; ++ } ++ + sk->sk_state = TCP_CLOSE; + sock->state = SS_UNCONNECTED; + +diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c +index 49bae3d5006b0..6f2f135aab676 100644 +--- a/net/sched/em_meta.c ++++ b/net/sched/em_meta.c +@@ -502,7 +502,7 @@ META_COLLECTOR(int_sk_lingertime) + *err = -1; + return; + } +- dst->value = sk->sk_lingertime / HZ; ++ dst->value = READ_ONCE(sk->sk_lingertime) / HZ; + } + + META_COLLECTOR(int_sk_err_qlen) +@@ -568,7 +568,7 @@ META_COLLECTOR(int_sk_rcvtimeo) + *err = -1; + return; + } +- dst->value = sk->sk_rcvtimeo / HZ; ++ dst->value = READ_ONCE(sk->sk_rcvtimeo) / HZ; + } + + META_COLLECTOR(int_sk_sndtimeo) +@@ -579,7 +579,7 @@ META_COLLECTOR(int_sk_sndtimeo) + *err = -1; + return; + } +- dst->value = sk->sk_sndtimeo / HZ; ++ dst->value = READ_ONCE(sk->sk_sndtimeo) / HZ; + } + + META_COLLECTOR(int_sk_sendmsg_off) +diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c +index 70b0c5873d326..61d52594ff6d8 100644 +--- a/net/sched/sch_hfsc.c ++++ b/net/sched/sch_hfsc.c +@@ -1012,6 +1012,10 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid, + if (parent == NULL) + return -ENOENT; + } ++ if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) { ++ NL_SET_ERR_MSG(extack, "Invalid parent - parent class must have FSC"); ++ return -EINVAL; ++ } + + if (classid == 0 || TC_H_MAJ(classid ^ sch->handle) != 0) + return -EINVAL; +diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c +index 463c4a58d2c36..970c6a486a9b0 100644 +--- a/net/sctp/sm_sideeffect.c ++++ b/net/sctp/sm_sideeffect.c +@@ -1251,7 +1251,10 @@ static int 
sctp_side_effects(enum sctp_event_type event_type, + default: + pr_err("impossible disposition %d in state %d, event_type %d, event_id %d\n", + status, state, event_type, subtype.chunk); +- BUG(); ++ error = status; ++ if (error >= 0) ++ error = -EINVAL; ++ WARN_ON_ONCE(1); + break; + } + +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index 84219c5121bc2..f774d840759d6 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -1807,7 +1807,7 @@ void smc_close_non_accepted(struct sock *sk) + lock_sock(sk); + if (!sk->sk_lingertime) + /* wait for peer closing */ +- sk->sk_lingertime = SMC_MAX_STREAM_WAIT_TIMEOUT; ++ WRITE_ONCE(sk->sk_lingertime, SMC_MAX_STREAM_WAIT_TIMEOUT); + __smc_release(smc); + release_sock(sk); + sock_put(sk); /* sock_hold above */ +diff --git a/net/socket.c b/net/socket.c +index c2e0a22f16d9b..d281a7ef4b1d3 100644 +--- a/net/socket.c ++++ b/net/socket.c +@@ -3507,7 +3507,11 @@ EXPORT_SYMBOL(kernel_accept); + int kernel_connect(struct socket *sock, struct sockaddr *addr, int addrlen, + int flags) + { +- return sock->ops->connect(sock, addr, addrlen, flags); ++ struct sockaddr_storage address; ++ ++ memcpy(&address, addr, addrlen); ++ ++ return sock->ops->connect(sock, (struct sockaddr *)&address, addrlen, flags); + } + EXPORT_SYMBOL(kernel_connect); + +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index c2363d44a1ffc..12c7c89d5be1d 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -323,6 +323,7 @@ nl80211_pmsr_ftm_req_attr_policy[NL80211_PMSR_FTM_REQ_ATTR_MAX + 1] = { + [NL80211_PMSR_FTM_REQ_ATTR_TRIGGER_BASED] = { .type = NLA_FLAG }, + [NL80211_PMSR_FTM_REQ_ATTR_NON_TRIGGER_BASED] = { .type = NLA_FLAG }, + [NL80211_PMSR_FTM_REQ_ATTR_LMR_FEEDBACK] = { .type = NLA_FLAG }, ++ [NL80211_PMSR_FTM_REQ_ATTR_BSS_COLOR] = { .type = NLA_U8 }, + }; + + static const struct nla_policy +diff --git a/net/wireless/util.c b/net/wireless/util.c +index 39680e7bad45a..f433f3fdd9e94 100644 +--- a/net/wireless/util.c ++++ b/net/wireless/util.c +@@ -5,7 +5,7 @@ + * Copyright 2007-2009 Johannes Berg + * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright 2017 Intel Deutschland GmbH +- * Copyright (C) 2018-2022 Intel Corporation ++ * Copyright (C) 2018-2023 Intel Corporation + */ + #include + #include +@@ -2479,6 +2479,13 @@ void cfg80211_remove_links(struct wireless_dev *wdev) + { + unsigned int link_id; + ++ /* ++ * links are controlled by upper layers (userspace/cfg) ++ * only for AP mode, so only remove them here for AP ++ */ ++ if (wdev->iftype != NL80211_IFTYPE_AP) ++ return; ++ + wdev_lock(wdev); + if (wdev->valid_links) { + for_each_valid_link(wdev, link_id) +diff --git a/samples/bpf/tracex3_kern.c b/samples/bpf/tracex3_kern.c +index bde6591cb20c5..af235bd6615b1 100644 +--- a/samples/bpf/tracex3_kern.c ++++ b/samples/bpf/tracex3_kern.c +@@ -11,6 +11,12 @@ + #include + #include + ++struct start_key { ++ dev_t dev; ++ u32 _pad; ++ sector_t sector; ++}; ++ + struct { + __uint(type, BPF_MAP_TYPE_HASH); + __type(key, long); +@@ -18,16 +24,17 @@ struct { + __uint(max_entries, 4096); + } my_map SEC(".maps"); + +-/* kprobe is NOT a stable ABI. 
If kernel internals change this bpf+kprobe +- * example will no longer be meaningful +- */ +-SEC("kprobe/blk_mq_start_request") +-int bpf_prog1(struct pt_regs *ctx) ++/* from /sys/kernel/tracing/events/block/block_io_start/format */ ++SEC("tracepoint/block/block_io_start") ++int bpf_prog1(struct trace_event_raw_block_rq *ctx) + { +- long rq = PT_REGS_PARM1(ctx); + u64 val = bpf_ktime_get_ns(); ++ struct start_key key = { ++ .dev = ctx->dev, ++ .sector = ctx->sector ++ }; + +- bpf_map_update_elem(&my_map, &rq, &val, BPF_ANY); ++ bpf_map_update_elem(&my_map, &key, &val, BPF_ANY); + return 0; + } + +@@ -49,21 +56,26 @@ struct { + __uint(max_entries, SLOTS); + } lat_map SEC(".maps"); + +-SEC("kprobe/__blk_account_io_done") +-int bpf_prog2(struct pt_regs *ctx) ++/* from /sys/kernel/tracing/events/block/block_io_done/format */ ++SEC("tracepoint/block/block_io_done") ++int bpf_prog2(struct trace_event_raw_block_rq *ctx) + { +- long rq = PT_REGS_PARM1(ctx); ++ struct start_key key = { ++ .dev = ctx->dev, ++ .sector = ctx->sector ++ }; ++ + u64 *value, l, base; + u32 index; + +- value = bpf_map_lookup_elem(&my_map, &rq); ++ value = bpf_map_lookup_elem(&my_map, &key); + if (!value) + return 0; + + u64 cur_time = bpf_ktime_get_ns(); + u64 delta = cur_time - *value; + +- bpf_map_delete_elem(&my_map, &rq); ++ bpf_map_delete_elem(&my_map, &key); + + /* the lines below are computing index = log10(delta)*10 + * using integer arithmetic +diff --git a/samples/bpf/tracex6_kern.c b/samples/bpf/tracex6_kern.c +index acad5712d8b4f..fd602c2774b8b 100644 +--- a/samples/bpf/tracex6_kern.c ++++ b/samples/bpf/tracex6_kern.c +@@ -2,6 +2,8 @@ + #include + #include + #include ++#include ++#include + + struct { + __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); +@@ -45,13 +47,24 @@ int bpf_prog1(struct pt_regs *ctx) + return 0; + } + +-SEC("kprobe/htab_map_lookup_elem") +-int bpf_prog2(struct pt_regs *ctx) ++/* ++ * Since *_map_lookup_elem can't be expected to trigger bpf programs ++ * due to potential deadlocks (bpf_disable_instrumentation), this bpf ++ * program will be attached to bpf_map_copy_value (which is called ++ * from map_lookup_elem) and will only filter the hashtable type. ++ */ ++SEC("kprobe/bpf_map_copy_value") ++int BPF_KPROBE(bpf_prog2, struct bpf_map *map) + { + u32 key = bpf_get_smp_processor_id(); + struct bpf_perf_event_value *val, buf; ++ enum bpf_map_type type; + int error; + ++ type = BPF_CORE_READ(map, map_type); ++ if (type != BPF_MAP_TYPE_HASH) ++ return 0; ++ + error = bpf_perf_event_read_value(&counters, key, &buf, sizeof(buf)); + if (error) + return 0; +diff --git a/scripts/rust_is_available.sh b/scripts/rust_is_available.sh +index aebbf19139709..7a925d2b20fc7 100755 +--- a/scripts/rust_is_available.sh ++++ b/scripts/rust_is_available.sh +@@ -2,8 +2,6 @@ + # SPDX-License-Identifier: GPL-2.0 + # + # Tests whether a suitable Rust toolchain is available. +-# +-# Pass `-v` for human output and more checks (as warnings). + + set -e + +@@ -23,21 +21,17 @@ get_canonical_version() + + # Check that the Rust compiler exists. + if ! command -v "$RUSTC" >/dev/null; then +- if [ "$1" = -v ]; then +- echo >&2 "***" +- echo >&2 "*** Rust compiler '$RUSTC' could not be found." +- echo >&2 "***" +- fi ++ echo >&2 "***" ++ echo >&2 "*** Rust compiler '$RUSTC' could not be found." ++ echo >&2 "***" + exit 1 + fi + + # Check that the Rust bindings generator exists. + if ! 
command -v "$BINDGEN" >/dev/null; then +- if [ "$1" = -v ]; then +- echo >&2 "***" +- echo >&2 "*** Rust bindings generator '$BINDGEN' could not be found." +- echo >&2 "***" +- fi ++ echo >&2 "***" ++ echo >&2 "*** Rust bindings generator '$BINDGEN' could not be found." ++ echo >&2 "***" + exit 1 + fi + +@@ -53,16 +47,14 @@ rust_compiler_min_version=$($min_tool_version rustc) + rust_compiler_cversion=$(get_canonical_version $rust_compiler_version) + rust_compiler_min_cversion=$(get_canonical_version $rust_compiler_min_version) + if [ "$rust_compiler_cversion" -lt "$rust_compiler_min_cversion" ]; then +- if [ "$1" = -v ]; then +- echo >&2 "***" +- echo >&2 "*** Rust compiler '$RUSTC' is too old." +- echo >&2 "*** Your version: $rust_compiler_version" +- echo >&2 "*** Minimum version: $rust_compiler_min_version" +- echo >&2 "***" +- fi ++ echo >&2 "***" ++ echo >&2 "*** Rust compiler '$RUSTC' is too old." ++ echo >&2 "*** Your version: $rust_compiler_version" ++ echo >&2 "*** Minimum version: $rust_compiler_min_version" ++ echo >&2 "***" + exit 1 + fi +-if [ "$1" = -v ] && [ "$rust_compiler_cversion" -gt "$rust_compiler_min_cversion" ]; then ++if [ "$rust_compiler_cversion" -gt "$rust_compiler_min_cversion" ]; then + echo >&2 "***" + echo >&2 "*** Rust compiler '$RUSTC' is too new. This may or may not work." + echo >&2 "*** Your version: $rust_compiler_version" +@@ -82,16 +74,14 @@ rust_bindings_generator_min_version=$($min_tool_version bindgen) + rust_bindings_generator_cversion=$(get_canonical_version $rust_bindings_generator_version) + rust_bindings_generator_min_cversion=$(get_canonical_version $rust_bindings_generator_min_version) + if [ "$rust_bindings_generator_cversion" -lt "$rust_bindings_generator_min_cversion" ]; then +- if [ "$1" = -v ]; then +- echo >&2 "***" +- echo >&2 "*** Rust bindings generator '$BINDGEN' is too old." +- echo >&2 "*** Your version: $rust_bindings_generator_version" +- echo >&2 "*** Minimum version: $rust_bindings_generator_min_version" +- echo >&2 "***" +- fi ++ echo >&2 "***" ++ echo >&2 "*** Rust bindings generator '$BINDGEN' is too old." ++ echo >&2 "*** Your version: $rust_bindings_generator_version" ++ echo >&2 "*** Minimum version: $rust_bindings_generator_min_version" ++ echo >&2 "***" + exit 1 + fi +-if [ "$1" = -v ] && [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_generator_min_cversion" ]; then ++if [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_generator_min_cversion" ]; then + echo >&2 "***" + echo >&2 "*** Rust bindings generator '$BINDGEN' is too new. This may or may not work." + echo >&2 "*** Your version: $rust_bindings_generator_version" +@@ -100,23 +90,39 @@ if [ "$1" = -v ] && [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_ge + fi + + # Check that the `libclang` used by the Rust bindings generator is suitable. ++# ++# In order to do that, first invoke `bindgen` to get the `libclang` version ++# found by `bindgen`. This step may already fail if, for instance, `libclang` ++# is not found, thus inform the user in such a case. ++bindgen_libclang_output=$( \ ++ LC_ALL=C "$BINDGEN" $(dirname $0)/rust_is_available_bindgen_libclang.h 2>&1 >/dev/null ++) || bindgen_libclang_code=$? ++if [ -n "$bindgen_libclang_code" ]; then ++ echo >&2 "***" ++ echo >&2 "*** Running '$BINDGEN' to check the libclang version (used by the Rust" ++ echo >&2 "*** bindings generator) failed with code $bindgen_libclang_code. This may be caused by" ++ echo >&2 "*** a failure to locate libclang. 
See output and docs below for details:" ++ echo >&2 "***" ++ echo >&2 "$bindgen_libclang_output" ++ echo >&2 "***" ++ exit 1 ++fi ++ ++# `bindgen` returned successfully, thus use the output to check that the version ++# of the `libclang` found by the Rust bindings generator is suitable. + bindgen_libclang_version=$( \ +- LC_ALL=C "$BINDGEN" $(dirname $0)/rust_is_available_bindgen_libclang.h 2>&1 >/dev/null \ +- | grep -F 'clang version ' \ +- | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \ +- | head -n 1 \ ++ echo "$bindgen_libclang_output" \ ++ | sed -nE 's:.*clang version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p' + ) + bindgen_libclang_min_version=$($min_tool_version llvm) + bindgen_libclang_cversion=$(get_canonical_version $bindgen_libclang_version) + bindgen_libclang_min_cversion=$(get_canonical_version $bindgen_libclang_min_version) + if [ "$bindgen_libclang_cversion" -lt "$bindgen_libclang_min_cversion" ]; then +- if [ "$1" = -v ]; then +- echo >&2 "***" +- echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN') is too old." +- echo >&2 "*** Your version: $bindgen_libclang_version" +- echo >&2 "*** Minimum version: $bindgen_libclang_min_version" +- echo >&2 "***" +- fi ++ echo >&2 "***" ++ echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN') is too old." ++ echo >&2 "*** Your version: $bindgen_libclang_version" ++ echo >&2 "*** Minimum version: $bindgen_libclang_min_version" ++ echo >&2 "***" + exit 1 + fi + +@@ -125,21 +131,19 @@ fi + # + # In the future, we might be able to perform a full version check, see + # https://github.com/rust-lang/rust-bindgen/issues/2138. +-if [ "$1" = -v ]; then +- cc_name=$($(dirname $0)/cc-version.sh "$CC" | cut -f1 -d' ') +- if [ "$cc_name" = Clang ]; then +- clang_version=$( \ +- LC_ALL=C "$CC" --version 2>/dev/null \ +- | sed -nE '1s:.*version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p' +- ) +- if [ "$clang_version" != "$bindgen_libclang_version" ]; then +- echo >&2 "***" +- echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN')" +- echo >&2 "*** version does not match Clang's. This may be a problem." +- echo >&2 "*** libclang version: $bindgen_libclang_version" +- echo >&2 "*** Clang version: $clang_version" +- echo >&2 "***" +- fi ++cc_name=$($(dirname $0)/cc-version.sh $CC | cut -f1 -d' ') ++if [ "$cc_name" = Clang ]; then ++ clang_version=$( \ ++ LC_ALL=C $CC --version 2>/dev/null \ ++ | sed -nE '1s:.*version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p' ++ ) ++ if [ "$clang_version" != "$bindgen_libclang_version" ]; then ++ echo >&2 "***" ++ echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN')" ++ echo >&2 "*** version does not match Clang's. This may be a problem." ++ echo >&2 "*** libclang version: $bindgen_libclang_version" ++ echo >&2 "*** Clang version: $clang_version" ++ echo >&2 "***" + fi + fi + +@@ -150,11 +154,9 @@ rustc_sysroot=$("$RUSTC" $KRUSTFLAGS --print sysroot) + rustc_src=${RUST_LIB_SRC:-"$rustc_sysroot/lib/rustlib/src/rust/library"} + rustc_src_core="$rustc_src/core/src/lib.rs" + if [ ! -e "$rustc_src_core" ]; then +- if [ "$1" = -v ]; then +- echo >&2 "***" +- echo >&2 "*** Source code for the 'core' standard library could not be found" +- echo >&2 "*** at '$rustc_src_core'." +- echo >&2 "***" +- fi ++ echo >&2 "***" ++ echo >&2 "*** Source code for the 'core' standard library could not be found" ++ echo >&2 "*** at '$rustc_src_core'." 
++ echo >&2 "***" + exit 1 + fi +diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig +index 60a511c6b583e..c17660bf5f347 100644 +--- a/security/integrity/ima/Kconfig ++++ b/security/integrity/ima/Kconfig +@@ -248,18 +248,6 @@ config IMA_APPRAISE_MODSIG + The modsig keyword can be used in the IMA policy to allow a hook + to accept such signatures. + +-config IMA_TRUSTED_KEYRING +- bool "Require all keys on the .ima keyring be signed (deprecated)" +- depends on IMA_APPRAISE && SYSTEM_TRUSTED_KEYRING +- depends on INTEGRITY_ASYMMETRIC_KEYS +- select INTEGRITY_TRUSTED_KEYRING +- default y +- help +- This option requires that all keys added to the .ima +- keyring be signed by a key on the system trusted keyring. +- +- This option is deprecated in favor of INTEGRITY_TRUSTED_KEYRING +- + config IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY + bool "Permit keys validly signed by a built-in or secondary CA cert (EXPERIMENTAL)" + depends on SYSTEM_TRUSTED_KEYRING +diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c +index d54f73c558f72..19be69fa4d052 100644 +--- a/security/keys/keyctl.c ++++ b/security/keys/keyctl.c +@@ -980,14 +980,19 @@ long keyctl_chown_key(key_serial_t id, uid_t user, gid_t group) + ret = -EACCES; + down_write(&key->sem); + +- if (!capable(CAP_SYS_ADMIN)) { ++ { ++ bool is_privileged_op = false; ++ + /* only the sysadmin can chown a key to some other UID */ + if (user != (uid_t) -1 && !uid_eq(key->uid, uid)) +- goto error_put; ++ is_privileged_op = true; + + /* only the sysadmin can set the key's GID to a group other + * than one of those that the current process subscribes to */ + if (group != (gid_t) -1 && !gid_eq(gid, key->gid) && !in_group_p(gid)) ++ is_privileged_op = true; ++ ++ if (is_privileged_op && !capable(CAP_SYS_ADMIN)) + goto error_put; + } + +@@ -1088,7 +1093,7 @@ long keyctl_setperm_key(key_serial_t id, key_perm_t perm) + down_write(&key->sem); + + /* if we're not the sysadmin, we can only change a key that we own */ +- if (capable(CAP_SYS_ADMIN) || uid_eq(key->uid, current_fsuid())) { ++ if (uid_eq(key->uid, current_fsuid()) || capable(CAP_SYS_ADMIN)) { + key->perm = perm; + notify_key(key, NOTIFY_KEY_SETATTR, 0); + ret = 0; +diff --git a/security/security.c b/security/security.c +index 75dc0947ee0cf..5fa286ae9908d 100644 +--- a/security/security.c ++++ b/security/security.c +@@ -882,6 +882,20 @@ void security_bprm_committed_creds(struct linux_binprm *bprm) + call_void_hook(bprm_committed_creds, bprm); + } + ++/** ++ * security_fs_context_submount() - Initialise fc->security ++ * @fc: new filesystem context ++ * @reference: dentry reference for submount/remount ++ * ++ * Fill out the ->security field for a new fs_context. ++ * ++ * Return: Returns 0 on success or negative error code on failure. 
++ */ ++int security_fs_context_submount(struct fs_context *fc, struct super_block *reference) ++{ ++ return call_int_hook(fs_context_submount, 0, fc, reference); ++} ++ + int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc) + { + return call_int_hook(fs_context_dup, 0, fc, src_fc); +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c +index f553c370397ee..26c9e4da4efcf 100644 +--- a/security/selinux/hooks.c ++++ b/security/selinux/hooks.c +@@ -2766,6 +2766,27 @@ static int selinux_umount(struct vfsmount *mnt, int flags) + FILESYSTEM__UNMOUNT, NULL); + } + ++static int selinux_fs_context_submount(struct fs_context *fc, ++ struct super_block *reference) ++{ ++ const struct superblock_security_struct *sbsec; ++ struct selinux_mnt_opts *opts; ++ ++ opts = kzalloc(sizeof(*opts), GFP_KERNEL); ++ if (!opts) ++ return -ENOMEM; ++ ++ sbsec = selinux_superblock(reference); ++ if (sbsec->flags & FSCONTEXT_MNT) ++ opts->fscontext_sid = sbsec->sid; ++ if (sbsec->flags & CONTEXT_MNT) ++ opts->context_sid = sbsec->mntpoint_sid; ++ if (sbsec->flags & DEFCONTEXT_MNT) ++ opts->defcontext_sid = sbsec->def_sid; ++ fc->security = opts; ++ return 0; ++} ++ + static int selinux_fs_context_dup(struct fs_context *fc, + struct fs_context *src_fc) + { +@@ -7263,6 +7284,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = { + /* + * PUT "CLONING" (ACCESSING + ALLOCATING) HOOKS HERE + */ ++ LSM_HOOK_INIT(fs_context_submount, selinux_fs_context_submount), + LSM_HOOK_INIT(fs_context_dup, selinux_fs_context_dup), + LSM_HOOK_INIT(fs_context_parse_param, selinux_fs_context_parse_param), + LSM_HOOK_INIT(sb_eat_lsm_opts, selinux_sb_eat_lsm_opts), +diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c +index b6306d71c9088..67dcd31cd3f3d 100644 +--- a/security/smack/smack_lsm.c ++++ b/security/smack/smack_lsm.c +@@ -611,6 +611,56 @@ out_opt_err: + return -EINVAL; + } + ++/** ++ * smack_fs_context_submount - Initialise security data for a filesystem context ++ * @fc: The filesystem context. ++ * @reference: reference superblock ++ * ++ * Returns 0 on success or -ENOMEM on error. ++ */ ++static int smack_fs_context_submount(struct fs_context *fc, ++ struct super_block *reference) ++{ ++ struct superblock_smack *sbsp; ++ struct smack_mnt_opts *ctx; ++ struct inode_smack *isp; ++ ++ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); ++ if (!ctx) ++ return -ENOMEM; ++ fc->security = ctx; ++ ++ sbsp = smack_superblock(reference); ++ isp = smack_inode(reference->s_root->d_inode); ++ ++ if (sbsp->smk_default) { ++ ctx->fsdefault = kstrdup(sbsp->smk_default->smk_known, GFP_KERNEL); ++ if (!ctx->fsdefault) ++ return -ENOMEM; ++ } ++ ++ if (sbsp->smk_floor) { ++ ctx->fsfloor = kstrdup(sbsp->smk_floor->smk_known, GFP_KERNEL); ++ if (!ctx->fsfloor) ++ return -ENOMEM; ++ } ++ ++ if (sbsp->smk_hat) { ++ ctx->fshat = kstrdup(sbsp->smk_hat->smk_known, GFP_KERNEL); ++ if (!ctx->fshat) ++ return -ENOMEM; ++ } ++ ++ if (isp->smk_flags & SMK_INODE_TRANSMUTE) { ++ if (sbsp->smk_root) { ++ ctx->fstransmute = kstrdup(sbsp->smk_root->smk_known, GFP_KERNEL); ++ if (!ctx->fstransmute) ++ return -ENOMEM; ++ } ++ } ++ return 0; ++} ++ + /** + * smack_fs_context_dup - Duplicate the security data on fs_context duplication + * @fc: The new filesystem context. 
+@@ -4792,6 +4842,7 @@ static struct security_hook_list smack_hooks[] __lsm_ro_after_init = { + LSM_HOOK_INIT(ptrace_traceme, smack_ptrace_traceme), + LSM_HOOK_INIT(syslog, smack_syslog), + ++ LSM_HOOK_INIT(fs_context_submount, smack_fs_context_submount), + LSM_HOOK_INIT(fs_context_dup, smack_fs_context_dup), + LSM_HOOK_INIT(fs_context_parse_param, smack_fs_context_parse_param), + +diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c +index 4b58526450d49..da7db9e22ce7c 100644 +--- a/security/smack/smackfs.c ++++ b/security/smack/smackfs.c +@@ -896,7 +896,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf, + } + + ret = sscanf(rule, "%d", &catlen); +- if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM) ++ if (ret != 1 || catlen < 0 || catlen > SMACK_CIPSO_MAXCATNUM) + goto out; + + if (format == SMK_FIXED24_FMT && +diff --git a/sound/Kconfig b/sound/Kconfig +index e56d96d2b11ca..1903c35d799e1 100644 +--- a/sound/Kconfig ++++ b/sound/Kconfig +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0-only + menuconfig SOUND + tristate "Sound card support" +- depends on HAS_IOMEM ++ depends on HAS_IOMEM || UML + help + If you have a sound card in your computer, i.e. if it can say more + than an occasional beep, say Y. +diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c +index 42c2ada8e8887..c96483091f30a 100644 +--- a/sound/core/pcm_compat.c ++++ b/sound/core/pcm_compat.c +@@ -253,10 +253,14 @@ static int snd_pcm_ioctl_hw_params_compat(struct snd_pcm_substream *substream, + goto error; + } + +- if (refine) ++ if (refine) { + err = snd_pcm_hw_refine(substream, data); +- else ++ if (err < 0) ++ goto error; ++ err = fixup_unreferenced_params(substream, data); ++ } else { + err = snd_pcm_hw_params(substream, data); ++ } + if (err < 0) + goto error; + if (copy_to_user(data32, data, sizeof(*data32)) || +diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c +index 07efb38f58ac1..f2940b29595f0 100644 +--- a/sound/core/seq/oss/seq_oss_midi.c ++++ b/sound/core/seq/oss/seq_oss_midi.c +@@ -37,6 +37,7 @@ struct seq_oss_midi { + struct snd_midi_event *coder; /* MIDI event coder */ + struct seq_oss_devinfo *devinfo; /* assigned OSSseq device */ + snd_use_lock_t use_lock; ++ struct mutex open_mutex; + }; + + +@@ -172,6 +173,7 @@ snd_seq_oss_midi_check_new_port(struct snd_seq_port_info *pinfo) + mdev->flags = pinfo->capability; + mdev->opened = 0; + snd_use_lock_init(&mdev->use_lock); ++ mutex_init(&mdev->open_mutex); + + /* copy and truncate the name of synth device */ + strscpy(mdev->name, pinfo->name, sizeof(mdev->name)); +@@ -322,15 +324,17 @@ snd_seq_oss_midi_open(struct seq_oss_devinfo *dp, int dev, int fmode) + int perm; + struct seq_oss_midi *mdev; + struct snd_seq_port_subscribe subs; ++ int err; + + mdev = get_mididev(dp, dev); + if (!mdev) + return -ENODEV; + ++ mutex_lock(&mdev->open_mutex); + /* already used? */ + if (mdev->opened && mdev->devinfo != dp) { +- snd_use_lock_free(&mdev->use_lock); +- return -EBUSY; ++ err = -EBUSY; ++ goto unlock; + } + + perm = 0; +@@ -340,14 +344,14 @@ snd_seq_oss_midi_open(struct seq_oss_devinfo *dp, int dev, int fmode) + perm |= PERM_READ; + perm &= mdev->flags; + if (perm == 0) { +- snd_use_lock_free(&mdev->use_lock); +- return -ENXIO; ++ err = -ENXIO; ++ goto unlock; + } + + /* already opened? 
*/ + if ((mdev->opened & perm) == perm) { +- snd_use_lock_free(&mdev->use_lock); +- return 0; ++ err = 0; ++ goto unlock; + } + + perm &= ~mdev->opened; +@@ -372,13 +376,17 @@ snd_seq_oss_midi_open(struct seq_oss_devinfo *dp, int dev, int fmode) + } + + if (! mdev->opened) { +- snd_use_lock_free(&mdev->use_lock); +- return -ENXIO; ++ err = -ENXIO; ++ goto unlock; + } + + mdev->devinfo = dp; ++ err = 0; ++ ++ unlock: ++ mutex_unlock(&mdev->open_mutex); + snd_use_lock_free(&mdev->use_lock); +- return 0; ++ return err; + } + + /* +@@ -393,10 +401,9 @@ snd_seq_oss_midi_close(struct seq_oss_devinfo *dp, int dev) + mdev = get_mididev(dp, dev); + if (!mdev) + return -ENODEV; +- if (! mdev->opened || mdev->devinfo != dp) { +- snd_use_lock_free(&mdev->use_lock); +- return 0; +- } ++ mutex_lock(&mdev->open_mutex); ++ if (!mdev->opened || mdev->devinfo != dp) ++ goto unlock; + + memset(&subs, 0, sizeof(subs)); + if (mdev->opened & PERM_WRITE) { +@@ -415,6 +422,8 @@ snd_seq_oss_midi_close(struct seq_oss_devinfo *dp, int dev) + mdev->opened = 0; + mdev->devinfo = NULL; + ++ unlock: ++ mutex_unlock(&mdev->open_mutex); + snd_use_lock_free(&mdev->use_lock); + return 0; + } +diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c +index 534ea7a256ec3..606b318f34e56 100644 +--- a/sound/pci/ac97/ac97_codec.c ++++ b/sound/pci/ac97/ac97_codec.c +@@ -2070,10 +2070,9 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template, + .dev_disconnect = snd_ac97_dev_disconnect, + }; + +- if (!rac97) +- return -EINVAL; +- if (snd_BUG_ON(!bus || !template)) ++ if (snd_BUG_ON(!bus || !template || !rac97)) + return -EINVAL; ++ *rac97 = NULL; + if (snd_BUG_ON(template->num >= 4)) + return -EINVAL; + if (bus->codec[template->num]) +diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c +index 0ba1fbcbb21e4..627899959ffe8 100644 +--- a/sound/pci/hda/patch_cs8409.c ++++ b/sound/pci/hda/patch_cs8409.c +@@ -888,7 +888,7 @@ static void cs42l42_resume(struct sub_codec *cs42l42) + + /* Initialize CS42L42 companion codec */ + cs8409_i2c_bulk_write(cs42l42, cs42l42->init_seq, cs42l42->init_seq_num); +- usleep_range(30000, 35000); ++ msleep(CS42L42_INIT_TIMEOUT_MS); + + /* Clear interrupts, by reading interrupt status registers */ + cs8409_i2c_bulk_read(cs42l42, irq_regs, ARRAY_SIZE(irq_regs)); +diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h +index 2a8dfb4ff046b..937e9387abdc7 100644 +--- a/sound/pci/hda/patch_cs8409.h ++++ b/sound/pci/hda/patch_cs8409.h +@@ -229,6 +229,7 @@ enum cs8409_coefficient_index_registers { + #define CS42L42_I2C_SLEEP_US (2000) + #define CS42L42_PDN_TIMEOUT_US (250000) + #define CS42L42_PDN_SLEEP_US (2000) ++#define CS42L42_INIT_TIMEOUT_MS (45) + #define CS42L42_FULL_SCALE_VOL_MASK (2) + #define CS42L42_FULL_SCALE_VOL_0DB (1) + #define CS42L42_FULL_SCALE_VOL_MINUS6DB (0) +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index aa475154c582f..f70e0ad81607e 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -9591,7 +9591,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x8b8a, "HP", ALC236_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x8b8b, "HP", ALC236_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x8b8d, "HP", ALC236_FIXUP_HP_GPIO_LED), +- SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ++ SND_PCI_QUIRK(0x103c, 0x8b8f, "HP", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x8b92, "HP", 
ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x8b96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), + SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), +diff --git a/sound/soc/atmel/atmel-i2s.c b/sound/soc/atmel/atmel-i2s.c +index 425d66edbf867..5e43ff0b537a3 100644 +--- a/sound/soc/atmel/atmel-i2s.c ++++ b/sound/soc/atmel/atmel-i2s.c +@@ -163,11 +163,14 @@ struct atmel_i2s_gck_param { + + #define I2S_MCK_12M288 12288000UL + #define I2S_MCK_11M2896 11289600UL ++#define I2S_MCK_6M144 6144000UL + + /* mck = (32 * (imckfs+1) / (imckdiv+1)) * fs */ + static const struct atmel_i2s_gck_param gck_params[] = { ++ /* mck = 6.144Mhz */ ++ { 8000, I2S_MCK_6M144, 1, 47}, /* mck = 768 fs */ ++ + /* mck = 12.288MHz */ +- { 8000, I2S_MCK_12M288, 0, 47}, /* mck = 1536 fs */ + { 16000, I2S_MCK_12M288, 1, 47}, /* mck = 768 fs */ + { 24000, I2S_MCK_12M288, 3, 63}, /* mck = 512 fs */ + { 32000, I2S_MCK_12M288, 3, 47}, /* mck = 384 fs */ +diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig +index 965ae55fa1607..0904827e2f3db 100644 +--- a/sound/soc/codecs/Kconfig ++++ b/sound/soc/codecs/Kconfig +@@ -1552,6 +1552,7 @@ config SND_SOC_STA529 + config SND_SOC_STAC9766 + tristate + depends on SND_SOC_AC97_BUS ++ select REGMAP_AC97 + + config SND_SOC_STI_SAS + tristate "codec Audio support for STI SAS codec" +diff --git a/sound/soc/codecs/cs43130.h b/sound/soc/codecs/cs43130.h +index 1dd8936743132..90e8895275e77 100644 +--- a/sound/soc/codecs/cs43130.h ++++ b/sound/soc/codecs/cs43130.h +@@ -381,88 +381,88 @@ struct cs43130_clk_gen { + + /* frm_size = 16 */ + static const struct cs43130_clk_gen cs43130_16_clk_gen[] = { +- { 22579200, 32000, .v = { 441, 10, }, }, +- { 22579200, 44100, .v = { 32, 1, }, }, +- { 22579200, 48000, .v = { 147, 5, }, }, +- { 22579200, 88200, .v = { 16, 1, }, }, +- { 22579200, 96000, .v = { 147, 10, }, }, +- { 22579200, 176400, .v = { 8, 1, }, }, +- { 22579200, 192000, .v = { 147, 20, }, }, +- { 22579200, 352800, .v = { 4, 1, }, }, +- { 22579200, 384000, .v = { 147, 40, }, }, +- { 24576000, 32000, .v = { 48, 1, }, }, +- { 24576000, 44100, .v = { 5120, 147, }, }, +- { 24576000, 48000, .v = { 32, 1, }, }, +- { 24576000, 88200, .v = { 2560, 147, }, }, +- { 24576000, 96000, .v = { 16, 1, }, }, +- { 24576000, 176400, .v = { 1280, 147, }, }, +- { 24576000, 192000, .v = { 8, 1, }, }, +- { 24576000, 352800, .v = { 640, 147, }, }, +- { 24576000, 384000, .v = { 4, 1, }, }, ++ { 22579200, 32000, .v = { 10, 441, }, }, ++ { 22579200, 44100, .v = { 1, 32, }, }, ++ { 22579200, 48000, .v = { 5, 147, }, }, ++ { 22579200, 88200, .v = { 1, 16, }, }, ++ { 22579200, 96000, .v = { 10, 147, }, }, ++ { 22579200, 176400, .v = { 1, 8, }, }, ++ { 22579200, 192000, .v = { 20, 147, }, }, ++ { 22579200, 352800, .v = { 1, 4, }, }, ++ { 22579200, 384000, .v = { 40, 147, }, }, ++ { 24576000, 32000, .v = { 1, 48, }, }, ++ { 24576000, 44100, .v = { 147, 5120, }, }, ++ { 24576000, 48000, .v = { 1, 32, }, }, ++ { 24576000, 88200, .v = { 147, 2560, }, }, ++ { 24576000, 96000, .v = { 1, 16, }, }, ++ { 24576000, 176400, .v = { 147, 1280, }, }, ++ { 24576000, 192000, .v = { 1, 8, }, }, ++ { 24576000, 352800, .v = { 147, 640, }, }, ++ { 24576000, 384000, .v = { 1, 4, }, }, + }; + + /* frm_size = 32 */ + static const struct cs43130_clk_gen cs43130_32_clk_gen[] = { +- { 22579200, 32000, .v = { 441, 20, }, }, +- { 22579200, 44100, .v = { 16, 1, }, }, +- { 22579200, 48000, .v = { 147, 10, }, }, +- { 22579200, 88200, .v = { 8, 1, }, }, +- { 22579200, 96000, .v = { 147, 20, }, }, +- 
{ 22579200, 176400, .v = { 4, 1, }, }, +- { 22579200, 192000, .v = { 147, 40, }, }, +- { 22579200, 352800, .v = { 2, 1, }, }, +- { 22579200, 384000, .v = { 147, 80, }, }, +- { 24576000, 32000, .v = { 24, 1, }, }, +- { 24576000, 44100, .v = { 2560, 147, }, }, +- { 24576000, 48000, .v = { 16, 1, }, }, +- { 24576000, 88200, .v = { 1280, 147, }, }, +- { 24576000, 96000, .v = { 8, 1, }, }, +- { 24576000, 176400, .v = { 640, 147, }, }, +- { 24576000, 192000, .v = { 4, 1, }, }, +- { 24576000, 352800, .v = { 320, 147, }, }, +- { 24576000, 384000, .v = { 2, 1, }, }, ++ { 22579200, 32000, .v = { 20, 441, }, }, ++ { 22579200, 44100, .v = { 1, 16, }, }, ++ { 22579200, 48000, .v = { 10, 147, }, }, ++ { 22579200, 88200, .v = { 1, 8, }, }, ++ { 22579200, 96000, .v = { 20, 147, }, }, ++ { 22579200, 176400, .v = { 1, 4, }, }, ++ { 22579200, 192000, .v = { 40, 147, }, }, ++ { 22579200, 352800, .v = { 1, 2, }, }, ++ { 22579200, 384000, .v = { 80, 147, }, }, ++ { 24576000, 32000, .v = { 1, 24, }, }, ++ { 24576000, 44100, .v = { 147, 2560, }, }, ++ { 24576000, 48000, .v = { 1, 16, }, }, ++ { 24576000, 88200, .v = { 147, 1280, }, }, ++ { 24576000, 96000, .v = { 1, 8, }, }, ++ { 24576000, 176400, .v = { 147, 640, }, }, ++ { 24576000, 192000, .v = { 1, 4, }, }, ++ { 24576000, 352800, .v = { 147, 320, }, }, ++ { 24576000, 384000, .v = { 1, 2, }, }, + }; + + /* frm_size = 48 */ + static const struct cs43130_clk_gen cs43130_48_clk_gen[] = { +- { 22579200, 32000, .v = { 147, 100, }, }, +- { 22579200, 44100, .v = { 32, 3, }, }, +- { 22579200, 48000, .v = { 49, 5, }, }, +- { 22579200, 88200, .v = { 16, 3, }, }, +- { 22579200, 96000, .v = { 49, 10, }, }, +- { 22579200, 176400, .v = { 8, 3, }, }, +- { 22579200, 192000, .v = { 49, 20, }, }, +- { 22579200, 352800, .v = { 4, 3, }, }, +- { 22579200, 384000, .v = { 49, 40, }, }, +- { 24576000, 32000, .v = { 16, 1, }, }, +- { 24576000, 44100, .v = { 5120, 441, }, }, +- { 24576000, 48000, .v = { 32, 3, }, }, +- { 24576000, 88200, .v = { 2560, 441, }, }, +- { 24576000, 96000, .v = { 16, 3, }, }, +- { 24576000, 176400, .v = { 1280, 441, }, }, +- { 24576000, 192000, .v = { 8, 3, }, }, +- { 24576000, 352800, .v = { 640, 441, }, }, +- { 24576000, 384000, .v = { 4, 3, }, }, ++ { 22579200, 32000, .v = { 100, 147, }, }, ++ { 22579200, 44100, .v = { 3, 32, }, }, ++ { 22579200, 48000, .v = { 5, 49, }, }, ++ { 22579200, 88200, .v = { 3, 16, }, }, ++ { 22579200, 96000, .v = { 10, 49, }, }, ++ { 22579200, 176400, .v = { 3, 8, }, }, ++ { 22579200, 192000, .v = { 20, 49, }, }, ++ { 22579200, 352800, .v = { 3, 4, }, }, ++ { 22579200, 384000, .v = { 40, 49, }, }, ++ { 24576000, 32000, .v = { 1, 16, }, }, ++ { 24576000, 44100, .v = { 441, 5120, }, }, ++ { 24576000, 48000, .v = { 3, 32, }, }, ++ { 24576000, 88200, .v = { 441, 2560, }, }, ++ { 24576000, 96000, .v = { 3, 16, }, }, ++ { 24576000, 176400, .v = { 441, 1280, }, }, ++ { 24576000, 192000, .v = { 3, 8, }, }, ++ { 24576000, 352800, .v = { 441, 640, }, }, ++ { 24576000, 384000, .v = { 3, 4, }, }, + }; + + /* frm_size = 64 */ + static const struct cs43130_clk_gen cs43130_64_clk_gen[] = { +- { 22579200, 32000, .v = { 441, 40, }, }, +- { 22579200, 44100, .v = { 8, 1, }, }, +- { 22579200, 48000, .v = { 147, 20, }, }, +- { 22579200, 88200, .v = { 4, 1, }, }, +- { 22579200, 96000, .v = { 147, 40, }, }, +- { 22579200, 176400, .v = { 2, 1, }, }, +- { 22579200, 192000, .v = { 147, 80, }, }, ++ { 22579200, 32000, .v = { 40, 441, }, }, ++ { 22579200, 44100, .v = { 1, 8, }, }, ++ { 22579200, 48000, .v = { 20, 147, }, }, ++ { 22579200, 88200, .v = { 1, 
4, }, }, ++ { 22579200, 96000, .v = { 40, 147, }, }, ++ { 22579200, 176400, .v = { 1, 2, }, }, ++ { 22579200, 192000, .v = { 80, 147, }, }, + { 22579200, 352800, .v = { 1, 1, }, }, +- { 24576000, 32000, .v = { 12, 1, }, }, +- { 24576000, 44100, .v = { 1280, 147, }, }, +- { 24576000, 48000, .v = { 8, 1, }, }, +- { 24576000, 88200, .v = { 640, 147, }, }, +- { 24576000, 96000, .v = { 4, 1, }, }, +- { 24576000, 176400, .v = { 320, 147, }, }, +- { 24576000, 192000, .v = { 2, 1, }, }, +- { 24576000, 352800, .v = { 160, 147, }, }, ++ { 24576000, 32000, .v = { 1, 12, }, }, ++ { 24576000, 44100, .v = { 147, 1280, }, }, ++ { 24576000, 48000, .v = { 1, 8, }, }, ++ { 24576000, 88200, .v = { 147, 640, }, }, ++ { 24576000, 96000, .v = { 1, 4, }, }, ++ { 24576000, 176400, .v = { 147, 320, }, }, ++ { 24576000, 192000, .v = { 1, 2, }, }, ++ { 24576000, 352800, .v = { 147, 160, }, }, + { 24576000, 384000, .v = { 1, 1, }, }, + }; + +diff --git a/sound/soc/codecs/da7219-aad.c b/sound/soc/codecs/da7219-aad.c +index bba73c44c219f..9251490548e8c 100644 +--- a/sound/soc/codecs/da7219-aad.c ++++ b/sound/soc/codecs/da7219-aad.c +@@ -353,11 +353,15 @@ static irqreturn_t da7219_aad_irq_thread(int irq, void *data) + struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component); + u8 events[DA7219_AAD_IRQ_REG_MAX]; + u8 statusa; +- int i, report = 0, mask = 0; ++ int i, ret, report = 0, mask = 0; + + /* Read current IRQ events */ +- regmap_bulk_read(da7219->regmap, DA7219_ACCDET_IRQ_EVENT_A, +- events, DA7219_AAD_IRQ_REG_MAX); ++ ret = regmap_bulk_read(da7219->regmap, DA7219_ACCDET_IRQ_EVENT_A, ++ events, DA7219_AAD_IRQ_REG_MAX); ++ if (ret) { ++ dev_warn_ratelimited(component->dev, "Failed to read IRQ events: %d\n", ret); ++ return IRQ_NONE; ++ } + + if (!events[DA7219_AAD_IRQ_REG_A] && !events[DA7219_AAD_IRQ_REG_B]) + return IRQ_NONE; +@@ -863,6 +867,8 @@ void da7219_aad_suspend(struct snd_soc_component *component) + } + } + } ++ ++ synchronize_irq(da7219_aad->irq); + } + + void da7219_aad_resume(struct snd_soc_component *component) +diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c +index 87775378362e7..c4e4ab93fdb6d 100644 +--- a/sound/soc/codecs/es8316.c ++++ b/sound/soc/codecs/es8316.c +@@ -153,7 +153,7 @@ static const char * const es8316_dmic_txt[] = { + "dmic data at high level", + "dmic data at low level", + }; +-static const unsigned int es8316_dmic_values[] = { 0, 1, 2 }; ++static const unsigned int es8316_dmic_values[] = { 0, 2, 3 }; + static const struct soc_enum es8316_dmic_src_enum = + SOC_VALUE_ENUM_SINGLE(ES8316_ADC_DMIC, 0, 3, + ARRAY_SIZE(es8316_dmic_txt), +diff --git a/sound/soc/codecs/nau8821.c b/sound/soc/codecs/nau8821.c +index 4a72b94e84104..efd92656a060d 100644 +--- a/sound/soc/codecs/nau8821.c ++++ b/sound/soc/codecs/nau8821.c +@@ -10,6 +10,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -25,6 +26,13 @@ + #include + #include "nau8821.h" + ++#define NAU8821_JD_ACTIVE_HIGH BIT(0) ++ ++static int nau8821_quirk; ++static int quirk_override = -1; ++module_param_named(quirk, quirk_override, uint, 0444); ++MODULE_PARM_DESC(quirk, "Board-specific quirk override"); ++ + #define NAU_FREF_MAX 13500000 + #define NAU_FVCO_MAX 100000000 + #define NAU_FVCO_MIN 90000000 +@@ -1696,6 +1704,33 @@ static int nau8821_setup_irq(struct nau8821 *nau8821) + return 0; + } + ++/* Please keep this list alphabetically sorted */ ++static const struct dmi_system_id nau8821_quirk_table[] = { ++ { ++ /* Positivo CW14Q01P-V2 */ ++ .matches = { ++ 
DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), ++ DMI_MATCH(DMI_BOARD_NAME, "CW14Q01P-V2"), ++ }, ++ .driver_data = (void *)(NAU8821_JD_ACTIVE_HIGH), ++ }, ++ {} ++}; ++ ++static void nau8821_check_quirks(void) ++{ ++ const struct dmi_system_id *dmi_id; ++ ++ if (quirk_override != -1) { ++ nau8821_quirk = quirk_override; ++ return; ++ } ++ ++ dmi_id = dmi_first_match(nau8821_quirk_table); ++ if (dmi_id) ++ nau8821_quirk = (unsigned long)dmi_id->driver_data; ++} ++ + static int nau8821_i2c_probe(struct i2c_client *i2c) + { + struct device *dev = &i2c->dev; +@@ -1716,6 +1751,12 @@ static int nau8821_i2c_probe(struct i2c_client *i2c) + + nau8821->dev = dev; + nau8821->irq = i2c->irq; ++ ++ nau8821_check_quirks(); ++ ++ if (nau8821_quirk & NAU8821_JD_ACTIVE_HIGH) ++ nau8821->jkdet_polarity = 0; ++ + nau8821_print_device_properties(nau8821); + + nau8821_reset_chip(nau8821->regmap); +diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c +index c1a94229dc7e3..868a61c8b0608 100644 +--- a/sound/soc/codecs/rt5682-sdw.c ++++ b/sound/soc/codecs/rt5682-sdw.c +@@ -786,8 +786,15 @@ static int __maybe_unused rt5682_dev_resume(struct device *dev) + if (!rt5682->first_hw_init) + return 0; + +- if (!slave->unattach_request) ++ if (!slave->unattach_request) { ++ if (rt5682->disable_irq == true) { ++ mutex_lock(&rt5682->disable_irq_lock); ++ sdw_write_no_pm(slave, SDW_SCP_INTMASK1, SDW_SCP_INT1_IMPL_DEF); ++ rt5682->disable_irq = false; ++ mutex_unlock(&rt5682->disable_irq_lock); ++ } + goto regmap_sync; ++ } + + time = wait_for_completion_timeout(&slave->initialization_complete, + msecs_to_jiffies(RT5682_PROBE_TIMEOUT)); +diff --git a/sound/soc/codecs/rt711-sdca-sdw.c b/sound/soc/codecs/rt711-sdca-sdw.c +index e23cec4c457de..487d3010ddc19 100644 +--- a/sound/soc/codecs/rt711-sdca-sdw.c ++++ b/sound/soc/codecs/rt711-sdca-sdw.c +@@ -442,8 +442,16 @@ static int __maybe_unused rt711_sdca_dev_resume(struct device *dev) + if (!rt711->first_hw_init) + return 0; + +- if (!slave->unattach_request) ++ if (!slave->unattach_request) { ++ if (rt711->disable_irq == true) { ++ mutex_lock(&rt711->disable_irq_lock); ++ sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK1, SDW_SCP_SDCA_INTMASK_SDCA_0); ++ sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK2, SDW_SCP_SDCA_INTMASK_SDCA_8); ++ rt711->disable_irq = false; ++ mutex_unlock(&rt711->disable_irq_lock); ++ } + goto regmap_sync; ++ } + + time = wait_for_completion_timeout(&slave->initialization_complete, + msecs_to_jiffies(RT711_PROBE_TIMEOUT)); +diff --git a/sound/soc/codecs/rt711-sdw.c b/sound/soc/codecs/rt711-sdw.c +index 4fe68bcf2a7c2..9545b8a7eb192 100644 +--- a/sound/soc/codecs/rt711-sdw.c ++++ b/sound/soc/codecs/rt711-sdw.c +@@ -541,8 +541,15 @@ static int __maybe_unused rt711_dev_resume(struct device *dev) + if (!rt711->first_hw_init) + return 0; + +- if (!slave->unattach_request) ++ if (!slave->unattach_request) { ++ if (rt711->disable_irq == true) { ++ mutex_lock(&rt711->disable_irq_lock); ++ sdw_write_no_pm(slave, SDW_SCP_INTMASK1, SDW_SCP_INT1_IMPL_DEF); ++ rt711->disable_irq = false; ++ mutex_unlock(&rt711->disable_irq_lock); ++ } + goto regmap_sync; ++ } + + time = wait_for_completion_timeout(&slave->initialization_complete, + msecs_to_jiffies(RT711_PROBE_TIMEOUT)); +diff --git a/sound/soc/sof/amd/acp.c b/sound/soc/sof/amd/acp.c +index 8afd67ba1e5a3..f8d2372a758f4 100644 +--- a/sound/soc/sof/amd/acp.c ++++ b/sound/soc/sof/amd/acp.c +@@ -349,9 +349,9 @@ static irqreturn_t acp_irq_handler(int irq, void *dev_id) + unsigned int val; + + val = 
snd_sof_dsp_read(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET); +- if (val) { +- val |= ACP_DSP_TO_HOST_IRQ; +- snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET, val); ++ if (val & ACP_DSP_TO_HOST_IRQ) { ++ snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET, ++ ACP_DSP_TO_HOST_IRQ); + return IRQ_WAKE_THREAD; + } + +diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c +index f4bd1e8ae4b6c..23260aa1919d3 100644 +--- a/sound/usb/mixer_maps.c ++++ b/sound/usb/mixer_maps.c +@@ -374,6 +374,15 @@ static const struct usbmix_name_map corsair_virtuoso_map[] = { + { 0 } + }; + ++/* Microsoft USB Link headset */ ++/* a guess work: raw playback volume values are from 2 to 129 */ ++static const struct usbmix_dB_map ms_usb_link_dB = { -3225, 0, true }; ++static const struct usbmix_name_map ms_usb_link_map[] = { ++ { 9, NULL, .dB = &ms_usb_link_dB }, ++ { 10, NULL }, /* Headset Capture volume; seems non-working, disabled */ ++ { 0 } /* terminator */ ++}; ++ + /* ASUS ROG Zenith II with Realtek ALC1220-VB */ + static const struct usbmix_name_map asus_zenith_ii_map[] = { + { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */ +@@ -668,6 +677,11 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = { + .id = USB_ID(0x1395, 0x0025), + .map = sennheiser_pc8_map, + }, ++ { ++ /* Microsoft USB Link headset */ ++ .id = USB_ID(0x045e, 0x083c), ++ .map = ms_usb_link_map, ++ }, + { 0 } /* terminator */ + }; + +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c +index 6cf55b7f7a041..4667d543f7481 100644 +--- a/sound/usb/quirks.c ++++ b/sound/usb/quirks.c +@@ -1874,8 +1874,10 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip, + + /* XMOS based USB DACs */ + switch (chip->usb_id) { +- case USB_ID(0x1511, 0x0037): /* AURALiC VEGA */ +- case USB_ID(0x21ed, 0xd75a): /* Accuphase DAC-60 option card */ ++ case USB_ID(0x139f, 0x5504): /* Nagra DAC */ ++ case USB_ID(0x20b1, 0x3089): /* Mola-Mola DAC */ ++ case USB_ID(0x2522, 0x0007): /* LH Labs Geek Out 1V5 */ ++ case USB_ID(0x2522, 0x0009): /* LH Labs Geek Pulse X Inifinity 2V0 */ + case USB_ID(0x2522, 0x0012): /* LH Labs VI DAC Infinity */ + case USB_ID(0x2772, 0x0230): /* Pro-Ject Pre Box S2 Digital */ + if (fp->altsetting == 2) +@@ -1885,14 +1887,18 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip, + case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */ + case USB_ID(0x10cb, 0x0103): /* The Bit Opus #3; with fp->dsd_raw */ + case USB_ID(0x16d0, 0x06b2): /* NuPrime DAC-10 */ +- case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */ ++ case USB_ID(0x16d0, 0x06b4): /* NuPrime Audio HD-AVP/AVA */ + case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */ ++ case USB_ID(0x16d0, 0x09d8): /* NuPrime IDA-8 */ + case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */ ++ case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */ + case USB_ID(0x1db5, 0x0003): /* Bryston BDA3 */ ++ case USB_ID(0x20a0, 0x4143): /* WaveIO USB Audio 2.0 */ + case USB_ID(0x22e1, 0xca01): /* HDTA Serenade DSD */ + case USB_ID(0x249c, 0x9326): /* M2Tech Young MkIII */ + case USB_ID(0x2616, 0x0106): /* PS Audio NuWave DAC */ + case USB_ID(0x2622, 0x0041): /* Audiolab M-DAC+ */ ++ case USB_ID(0x278b, 0x5100): /* Rotel RC-1590 */ + case USB_ID(0x27f7, 0x3002): /* W4S DAC-2v2SE */ + case USB_ID(0x29a2, 0x0086): /* Mutec MC3+ USB */ + case USB_ID(0x6b42, 0x0042): /* MSB Technology */ +@@ -1902,9 +1908,6 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip, + + /* Amanero Combo384 USB based DACs with native 
DSD support */ + case USB_ID(0x16d0, 0x071a): /* Amanero - Combo384 */ +- case USB_ID(0x2ab6, 0x0004): /* T+A DAC8DSD-V2.0, MP1000E-V2.0, MP2000R-V2.0, MP2500R-V2.0, MP3100HV-V2.0 */ +- case USB_ID(0x2ab6, 0x0005): /* T+A USB HD Audio 1 */ +- case USB_ID(0x2ab6, 0x0006): /* T+A USB HD Audio 2 */ + if (fp->altsetting == 2) { + switch (le16_to_cpu(chip->dev->descriptor.bcdDevice)) { + case 0x199: +@@ -2011,6 +2014,9 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = { + QUIRK_FLAG_IGNORE_CTL_ERROR), + DEVICE_FLG(0x041e, 0x4080, /* Creative Live Cam VF0610 */ + QUIRK_FLAG_GET_SAMPLE_RATE), ++ DEVICE_FLG(0x045e, 0x083c, /* MS USB Link headset */ ++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY | ++ QUIRK_FLAG_DISABLE_AUTOSUSPEND), + DEVICE_FLG(0x046d, 0x084c, /* Logitech ConferenceCam Connect */ + QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY_1M), + DEVICE_FLG(0x046d, 0x0991, /* Logitech QuickCam Pro */ +@@ -2046,6 +2052,9 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = { + QUIRK_FLAG_IFACE_DELAY), + DEVICE_FLG(0x0644, 0x805f, /* TEAC Model 12 */ + QUIRK_FLAG_FORCE_IFACE_RESET), ++ DEVICE_FLG(0x0644, 0x806b, /* TEAC UD-701 */ ++ QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY | ++ QUIRK_FLAG_IFACE_DELAY), + DEVICE_FLG(0x06f8, 0xb000, /* Hercules DJ Console (Windows Edition) */ + QUIRK_FLAG_IGNORE_CTL_ERROR), + DEVICE_FLG(0x06f8, 0xd002, /* Hercules DJ Console (Macintosh Edition) */ +@@ -2084,6 +2093,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = { + QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY), + DEVICE_FLG(0x154e, 0x3006, /* Marantz SA-14S1 */ + QUIRK_FLAG_ITF_USB_DSD_DAC | QUIRK_FLAG_CTL_MSG_DELAY), ++ DEVICE_FLG(0x154e, 0x300b, /* Marantz SA-KI RUBY / SA-12 */ ++ QUIRK_FLAG_DSD_RAW), + DEVICE_FLG(0x154e, 0x500e, /* Denon DN-X1600 */ + QUIRK_FLAG_IGNORE_CLOCK_SOURCE), + DEVICE_FLG(0x1686, 0x00dd, /* Zoom R16/24 */ +@@ -2128,6 +2139,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = { + QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), + DEVICE_FLG(0x21b4, 0x0081, /* AudioQuest DragonFly */ + QUIRK_FLAG_GET_SAMPLE_RATE), ++ DEVICE_FLG(0x21b4, 0x0230, /* Ayre QB-9 Twenty */ ++ QUIRK_FLAG_DSD_RAW), ++ DEVICE_FLG(0x21b4, 0x0232, /* Ayre QX-5 Twenty */ ++ QUIRK_FLAG_DSD_RAW), + DEVICE_FLG(0x2522, 0x0007, /* LH Labs Geek Out HD Audio 1V5 */ + QUIRK_FLAG_SET_IFACE_FIRST), + DEVICE_FLG(0x2708, 0x0002, /* Audient iD14 */ +@@ -2170,12 +2185,18 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = { + QUIRK_FLAG_VALIDATE_RATES), + VENDOR_FLG(0x1235, /* Focusrite Novation */ + QUIRK_FLAG_VALIDATE_RATES), ++ VENDOR_FLG(0x1511, /* AURALiC */ ++ QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x152a, /* Thesycon devices */ + QUIRK_FLAG_DSD_RAW), ++ VENDOR_FLG(0x18d1, /* iBasso devices */ ++ QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x1de7, /* Phoenix Audio */ + QUIRK_FLAG_GET_SAMPLE_RATE), + VENDOR_FLG(0x20b1, /* XMOS based devices */ + QUIRK_FLAG_DSD_RAW), ++ VENDOR_FLG(0x21ed, /* Accuphase Laboratory */ ++ QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x22d9, /* Oppo */ + QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x23ba, /* Playback Design */ +@@ -2191,10 +2212,14 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = { + QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x2ab6, /* T+A devices */ + QUIRK_FLAG_DSD_RAW), ++ VENDOR_FLG(0x2d87, /* Cayin device */ ++ QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x3336, /* HEM devices */ + QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x3353, /* Khadas devices */ 
+ QUIRK_FLAG_DSD_RAW), ++ VENDOR_FLG(0x35f4, /* MSB Technology */ ++ QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0x3842, /* EVGA */ + QUIRK_FLAG_DSD_RAW), + VENDOR_FLG(0xc502, /* HiBy devices */ +diff --git a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c +index eb05ea53afb12..26004f0c5a6ae 100644 +--- a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c ++++ b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c +@@ -15,6 +15,19 @@ enum bpf_obj_type { + BPF_OBJ_BTF, + }; + ++struct bpf_perf_link___local { ++ struct bpf_link link; ++ struct file *perf_file; ++} __attribute__((preserve_access_index)); ++ ++struct perf_event___local { ++ u64 bpf_cookie; ++} __attribute__((preserve_access_index)); ++ ++enum bpf_link_type___local { ++ BPF_LINK_TYPE_PERF_EVENT___local = 7, ++}; ++ + extern const void bpf_link_fops __ksym; + extern const void bpf_map_fops __ksym; + extern const void bpf_prog_fops __ksym; +@@ -41,10 +54,10 @@ static __always_inline __u32 get_obj_id(void *ent, enum bpf_obj_type type) + /* could be used only with BPF_LINK_TYPE_PERF_EVENT links */ + static __u64 get_bpf_cookie(struct bpf_link *link) + { +- struct bpf_perf_link *perf_link; +- struct perf_event *event; ++ struct bpf_perf_link___local *perf_link; ++ struct perf_event___local *event; + +- perf_link = container_of(link, struct bpf_perf_link, link); ++ perf_link = container_of(link, struct bpf_perf_link___local, link); + event = BPF_CORE_READ(perf_link, perf_file, private_data); + return BPF_CORE_READ(event, bpf_cookie); + } +@@ -84,10 +97,13 @@ int iter(struct bpf_iter__task_file *ctx) + e.pid = task->tgid; + e.id = get_obj_id(file->private_data, obj_type); + +- if (obj_type == BPF_OBJ_LINK) { ++ if (obj_type == BPF_OBJ_LINK && ++ bpf_core_enum_value_exists(enum bpf_link_type___local, ++ BPF_LINK_TYPE_PERF_EVENT___local)) { + struct bpf_link *link = (struct bpf_link *) file->private_data; + +- if (BPF_CORE_READ(link, type) == BPF_LINK_TYPE_PERF_EVENT) { ++ if (link->type == bpf_core_enum_value(enum bpf_link_type___local, ++ BPF_LINK_TYPE_PERF_EVENT___local)) { + e.has_bpf_cookie = true; + e.bpf_cookie = get_bpf_cookie(link); + } +diff --git a/tools/bpf/bpftool/skeleton/profiler.bpf.c b/tools/bpf/bpftool/skeleton/profiler.bpf.c +index ce5b65e07ab10..2f80edc682f11 100644 +--- a/tools/bpf/bpftool/skeleton/profiler.bpf.c ++++ b/tools/bpf/bpftool/skeleton/profiler.bpf.c +@@ -4,6 +4,12 @@ + #include + #include + ++struct bpf_perf_event_value___local { ++ __u64 counter; ++ __u64 enabled; ++ __u64 running; ++} __attribute__((preserve_access_index)); ++ + /* map of perf event fds, num_cpu * num_metric entries */ + struct { + __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); +@@ -15,14 +21,14 @@ struct { + struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __uint(key_size, sizeof(u32)); +- __uint(value_size, sizeof(struct bpf_perf_event_value)); ++ __uint(value_size, sizeof(struct bpf_perf_event_value___local)); + } fentry_readings SEC(".maps"); + + /* accumulated readings */ + struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __uint(key_size, sizeof(u32)); +- __uint(value_size, sizeof(struct bpf_perf_event_value)); ++ __uint(value_size, sizeof(struct bpf_perf_event_value___local)); + } accum_readings SEC(".maps"); + + /* sample counts, one per cpu */ +@@ -39,7 +45,7 @@ const volatile __u32 num_metric = 1; + SEC("fentry/XXX") + int BPF_PROG(fentry_XXX) + { +- struct bpf_perf_event_value *ptrs[MAX_NUM_MATRICS]; ++ struct bpf_perf_event_value___local *ptrs[MAX_NUM_MATRICS]; + u32 key = bpf_get_smp_processor_id(); + u32 i; 
+ +@@ -53,10 +59,10 @@ int BPF_PROG(fentry_XXX) + } + + for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) { +- struct bpf_perf_event_value reading; ++ struct bpf_perf_event_value___local reading; + int err; + +- err = bpf_perf_event_read_value(&events, key, &reading, ++ err = bpf_perf_event_read_value(&events, key, (void *)&reading, + sizeof(reading)); + if (err) + return 0; +@@ -68,14 +74,14 @@ int BPF_PROG(fentry_XXX) + } + + static inline void +-fexit_update_maps(u32 id, struct bpf_perf_event_value *after) ++fexit_update_maps(u32 id, struct bpf_perf_event_value___local *after) + { +- struct bpf_perf_event_value *before, diff; ++ struct bpf_perf_event_value___local *before, diff; + + before = bpf_map_lookup_elem(&fentry_readings, &id); + /* only account samples with a valid fentry_reading */ + if (before && before->counter) { +- struct bpf_perf_event_value *accum; ++ struct bpf_perf_event_value___local *accum; + + diff.counter = after->counter - before->counter; + diff.enabled = after->enabled - before->enabled; +@@ -93,7 +99,7 @@ fexit_update_maps(u32 id, struct bpf_perf_event_value *after) + SEC("fexit/XXX") + int BPF_PROG(fexit_XXX) + { +- struct bpf_perf_event_value readings[MAX_NUM_MATRICS]; ++ struct bpf_perf_event_value___local readings[MAX_NUM_MATRICS]; + u32 cpu = bpf_get_smp_processor_id(); + u32 i, zero = 0; + int err; +@@ -102,7 +108,8 @@ int BPF_PROG(fexit_XXX) + /* read all events before updating the maps, to reduce error */ + for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) { + err = bpf_perf_event_read_value(&events, cpu + i * num_cpu, +- readings + i, sizeof(*readings)); ++ (void *)(readings + i), ++ sizeof(*readings)); + if (err) + return 0; + } +diff --git a/tools/bpf/resolve_btfids/Build b/tools/bpf/resolve_btfids/Build +index ae82da03f9bf9..077de3829c722 100644 +--- a/tools/bpf/resolve_btfids/Build ++++ b/tools/bpf/resolve_btfids/Build +@@ -1,3 +1,5 @@ ++hostprogs := resolve_btfids ++ + resolve_btfids-y += main.o + resolve_btfids-y += rbtree.o + resolve_btfids-y += zalloc.o +@@ -7,4 +9,4 @@ resolve_btfids-y += str_error_r.o + + $(OUTPUT)%.o: ../../lib/%.c FORCE + $(call rule_mkdir) +- $(call if_changed_dep,cc_o_c) ++ $(call if_changed_dep,host_cc_o_c) +diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile +index 19a3112e271ac..4b8079f294f65 100644 +--- a/tools/bpf/resolve_btfids/Makefile ++++ b/tools/bpf/resolve_btfids/Makefile +@@ -17,15 +17,15 @@ else + MAKEFLAGS=--no-print-directory + endif + +-# always use the host compiler +-AR = $(HOSTAR) +-CC = $(HOSTCC) +-LD = $(HOSTLD) +-ARCH = $(HOSTARCH) ++# Overrides for the prepare step libraries. 
++HOST_OVERRIDES := AR="$(HOSTAR)" CC="$(HOSTCC)" LD="$(HOSTLD)" ARCH="$(HOSTARCH)" \ ++ CROSS_COMPILE="" EXTRA_CFLAGS="$(HOSTCFLAGS)" ++ + RM ?= rm ++HOSTCC ?= gcc ++HOSTLD ?= ld ++HOSTAR ?= ar + CROSS_COMPILE = +-CFLAGS := $(KBUILD_HOSTCFLAGS) +-LDFLAGS := $(KBUILD_HOSTLDFLAGS) + + OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/ + +@@ -35,51 +35,64 @@ SUBCMD_SRC := $(srctree)/tools/lib/subcmd/ + BPFOBJ := $(OUTPUT)/libbpf/libbpf.a + LIBBPF_OUT := $(abspath $(dir $(BPFOBJ)))/ + SUBCMDOBJ := $(OUTPUT)/libsubcmd/libsubcmd.a ++SUBCMD_OUT := $(abspath $(dir $(SUBCMDOBJ)))/ + + LIBBPF_DESTDIR := $(LIBBPF_OUT) + LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)include + ++SUBCMD_DESTDIR := $(SUBCMD_OUT) ++SUBCMD_INCLUDE := $(SUBCMD_DESTDIR)include ++ + BINARY := $(OUTPUT)/resolve_btfids + BINARY_IN := $(BINARY)-in.o + + all: $(BINARY) + ++prepare: $(BPFOBJ) $(SUBCMDOBJ) ++ + $(OUTPUT) $(OUTPUT)/libsubcmd $(LIBBPF_OUT): + $(call msg,MKDIR,,$@) + $(Q)mkdir -p $(@) + + $(SUBCMDOBJ): fixdep FORCE | $(OUTPUT)/libsubcmd +- $(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(abspath $(dir $@))/ $(abspath $@) ++ $(Q)$(MAKE) -C $(SUBCMD_SRC) OUTPUT=$(SUBCMD_OUT) \ ++ DESTDIR=$(SUBCMD_DESTDIR) $(HOST_OVERRIDES) prefix= subdir= \ ++ $(abspath $@) install_headers + + $(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OUT) + $(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC) OUTPUT=$(LIBBPF_OUT) \ +- DESTDIR=$(LIBBPF_DESTDIR) prefix= EXTRA_CFLAGS="$(CFLAGS)" \ ++ DESTDIR=$(LIBBPF_DESTDIR) $(HOST_OVERRIDES) prefix= subdir= \ + $(abspath $@) install_headers + +-CFLAGS += -g \ ++LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null) ++LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf) ++ ++HOSTCFLAGS_resolve_btfids += -g \ + -I$(srctree)/tools/include \ + -I$(srctree)/tools/include/uapi \ + -I$(LIBBPF_INCLUDE) \ +- -I$(SUBCMD_SRC) ++ -I$(SUBCMD_INCLUDE) \ ++ $(LIBELF_FLAGS) + +-LIBS = -lelf -lz ++LIBS = $(LIBELF_LIBS) -lz + +-export srctree OUTPUT CFLAGS Q ++export srctree OUTPUT HOSTCFLAGS_resolve_btfids Q HOSTCC HOSTLD HOSTAR + include $(srctree)/tools/build/Makefile.include + +-$(BINARY_IN): $(BPFOBJ) fixdep FORCE | $(OUTPUT) ++$(BINARY_IN): fixdep FORCE prepare | $(OUTPUT) + $(Q)$(MAKE) $(build)=resolve_btfids + + $(BINARY): $(BPFOBJ) $(SUBCMDOBJ) $(BINARY_IN) + $(call msg,LINK,$@) +- $(Q)$(CC) $(BINARY_IN) $(LDFLAGS) -o $@ $(BPFOBJ) $(SUBCMDOBJ) $(LIBS) ++ $(Q)$(HOSTCC) $(BINARY_IN) $(KBUILD_HOSTLDFLAGS) -o $@ $(BPFOBJ) $(SUBCMDOBJ) $(LIBS) + + clean_objects := $(wildcard $(OUTPUT)/*.o \ + $(OUTPUT)/.*.o.cmd \ + $(OUTPUT)/.*.o.d \ + $(LIBBPF_OUT) \ + $(LIBBPF_DESTDIR) \ +- $(OUTPUT)/libsubcmd \ ++ $(SUBCMD_OUT) \ ++ $(SUBCMD_DESTDIR) \ + $(OUTPUT)/resolve_btfids) + + ifneq ($(clean_objects),) +@@ -96,4 +109,4 @@ tags: + + FORCE: + +-.PHONY: all FORCE clean tags ++.PHONY: all FORCE clean tags prepare +diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c +index 80cd7843c6778..77058174082d7 100644 +--- a/tools/bpf/resolve_btfids/main.c ++++ b/tools/bpf/resolve_btfids/main.c +@@ -75,7 +75,7 @@ + #include + #include + #include +-#include ++#include + + #define BTF_IDS_SECTION ".BTF_ids" + #define BTF_ID "__BTF_ID__" +diff --git a/tools/hv/vmbus_testing b/tools/hv/vmbus_testing +index e7212903dd1d9..4467979d8f699 100755 +--- a/tools/hv/vmbus_testing ++++ b/tools/hv/vmbus_testing +@@ -164,7 +164,7 @@ def recursive_file_lookup(path, file_map): + def get_all_devices_test_status(file_map): + + for device in file_map: +- if 
(get_test_state(locate_state(device, file_map)) is 1): ++ if (get_test_state(locate_state(device, file_map)) == 1): + print("Testing = ON for: {}" + .format(device.split("/")[5])) + else: +@@ -203,7 +203,7 @@ def write_test_files(path, value): + def set_test_state(state_path, state_value, quiet): + + write_test_files(state_path, state_value) +- if (get_test_state(state_path) is 1): ++ if (get_test_state(state_path) == 1): + if (not quiet): + print("Testing = ON for device: {}" + .format(state_path.split("/")[5])) +diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c +index b9a29d1053765..eeb2693128d8a 100644 +--- a/tools/lib/bpf/libbpf.c ++++ b/tools/lib/bpf/libbpf.c +@@ -6063,7 +6063,11 @@ static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_progra + if (main_prog == subprog) + return 0; + relos = libbpf_reallocarray(main_prog->reloc_desc, new_cnt, sizeof(*relos)); +- if (!relos) ++ /* if new count is zero, reallocarray can return a valid NULL result; ++ * in this case the previous pointer will be freed, so we *have to* ++ * reassign old pointer to the new value (even if it's NULL) ++ */ ++ if (!relos && new_cnt) + return -ENOMEM; + if (subprog->nr_reloc) + memcpy(relos + main_prog->nr_reloc, subprog->reloc_desc, +@@ -8345,7 +8349,8 @@ int bpf_program__set_insns(struct bpf_program *prog, + return -EBUSY; + + insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns)); +- if (!insns) { ++ /* NULL is a valid return from reallocarray if the new count is zero */ ++ if (!insns && new_insn_cnt) { + pr_warn("prog '%s': failed to realloc prog code\n", prog->name); + return -ENOMEM; + } +@@ -8640,7 +8645,11 @@ int libbpf_unregister_prog_handler(int handler_id) + + /* try to shrink the array, but it's ok if we couldn't */ + sec_defs = libbpf_reallocarray(custom_sec_defs, custom_sec_def_cnt, sizeof(*sec_defs)); +- if (sec_defs) ++ /* if new count is zero, reallocarray can return a valid NULL result; ++ * in this case the previous pointer will be freed, so we *have to* ++ * reassign old pointer to the new value (even if it's NULL) ++ */ ++ if (sec_defs || custom_sec_def_cnt == 0) + custom_sec_defs = sec_defs; + + return 0; +diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c +index 49f3c3b7f6095..af1cb30556b46 100644 +--- a/tools/lib/bpf/usdt.c ++++ b/tools/lib/bpf/usdt.c +@@ -852,8 +852,11 @@ static int bpf_link_usdt_detach(struct bpf_link *link) + * system is so exhausted on memory, it's the least of user's + * concerns, probably. + * So just do our best here to return those IDs to usdt_manager. ++ * Another edge case when we can legitimately get NULL is when ++ * new_cnt is zero, which can happen in some edge cases, so we ++ * need to be careful about that. + */ +- if (new_free_ids) { ++ if (new_free_ids || new_cnt == 0) { + memcpy(new_free_ids + man->free_spec_cnt, usdt_link->spec_ids, + usdt_link->spec_cnt * sizeof(*usdt_link->spec_ids)); + man->free_spec_ids = new_free_ids; +diff --git a/tools/lib/subcmd/Makefile b/tools/lib/subcmd/Makefile +index 8f1a09cdfd17e..b87213263a5e0 100644 +--- a/tools/lib/subcmd/Makefile ++++ b/tools/lib/subcmd/Makefile +@@ -17,6 +17,15 @@ RM = rm -f + + MAKEFLAGS += --no-print-directory + ++INSTALL = install ++ ++# Use DESTDIR for installing into a different root directory. ++# This is useful for building a package. The program will be ++# installed in this directory as if it was the root directory. ++# Then the build tool can move it later. 
++DESTDIR ?= ++DESTDIR_SQ = '$(subst ','\'',$(DESTDIR))' ++ + LIBFILE = $(OUTPUT)libsubcmd.a + + CFLAGS := -ggdb3 -Wall -Wextra -std=gnu99 -fPIC +@@ -48,6 +57,18 @@ CFLAGS += $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) + + SUBCMD_IN := $(OUTPUT)libsubcmd-in.o + ++ifeq ($(LP64), 1) ++ libdir_relative = lib64 ++else ++ libdir_relative = lib ++endif ++ ++prefix ?= ++libdir = $(prefix)/$(libdir_relative) ++ ++# Shell quotes ++libdir_SQ = $(subst ','\'',$(libdir)) ++ + all: + + export srctree OUTPUT CC LD CFLAGS V +@@ -61,6 +82,37 @@ $(SUBCMD_IN): FORCE + $(LIBFILE): $(SUBCMD_IN) + $(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $(SUBCMD_IN) + ++define do_install_mkdir ++ if [ ! -d '$(DESTDIR_SQ)$1' ]; then \ ++ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$1'; \ ++ fi ++endef ++ ++define do_install ++ if [ ! -d '$2' ]; then \ ++ $(INSTALL) -d -m 755 '$2'; \ ++ fi; \ ++ $(INSTALL) $1 $(if $3,-m $3,) '$2' ++endef ++ ++install_lib: $(LIBFILE) ++ $(call QUIET_INSTALL, $(LIBFILE)) \ ++ $(call do_install_mkdir,$(libdir_SQ)); \ ++ cp -fpR $(LIBFILE) $(DESTDIR)$(libdir_SQ) ++ ++HDRS := exec-cmd.h help.h pager.h parse-options.h run-command.h ++INSTALL_HDRS_PFX := $(DESTDIR)$(prefix)/include/subcmd ++INSTALL_HDRS := $(addprefix $(INSTALL_HDRS_PFX)/, $(HDRS)) ++ ++$(INSTALL_HDRS): $(INSTALL_HDRS_PFX)/%.h: %.h ++ $(call QUIET_INSTALL, $@) \ ++ $(call do_install,$<,$(INSTALL_HDRS_PFX)/,644) ++ ++install_headers: $(INSTALL_HDRS) ++ $(call QUIET_INSTALL, libsubcmd_headers) ++ ++install: install_lib install_headers ++ + clean: + $(call QUIET_CLEAN, libsubcmd) $(RM) $(LIBFILE); \ + find $(or $(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM) +diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c +index e00520cc63498..cffaf2245d4f1 100644 +--- a/tools/testing/radix-tree/multiorder.c ++++ b/tools/testing/radix-tree/multiorder.c +@@ -159,7 +159,7 @@ void multiorder_tagged_iteration(struct xarray *xa) + item_kill_tree(xa); + } + +-bool stop_iteration = false; ++bool stop_iteration; + + static void *creator_func(void *ptr) + { +@@ -201,6 +201,7 @@ static void multiorder_iteration_race(struct xarray *xa) + pthread_t worker_thread[num_threads]; + int i; + ++ stop_iteration = false; + pthread_create(&worker_thread[0], NULL, &creator_func, xa); + for (i = 1; i < num_threads; i++) + pthread_create(&worker_thread[i], NULL, &iterator_func, xa); +@@ -211,6 +212,61 @@ static void multiorder_iteration_race(struct xarray *xa) + item_kill_tree(xa); + } + ++static void *load_creator(void *ptr) ++{ ++ /* 'order' is set up to ensure we have sibling entries */ ++ unsigned int order; ++ struct radix_tree_root *tree = ptr; ++ int i; ++ ++ rcu_register_thread(); ++ item_insert_order(tree, 3 << RADIX_TREE_MAP_SHIFT, 0); ++ item_insert_order(tree, 2 << RADIX_TREE_MAP_SHIFT, 0); ++ for (i = 0; i < 10000; i++) { ++ for (order = 1; order < RADIX_TREE_MAP_SHIFT; order++) { ++ unsigned long index = (3 << RADIX_TREE_MAP_SHIFT) - ++ (1 << order); ++ item_insert_order(tree, index, order); ++ item_delete_rcu(tree, index); ++ } ++ } ++ rcu_unregister_thread(); ++ ++ stop_iteration = true; ++ return NULL; ++} ++ ++static void *load_worker(void *ptr) ++{ ++ unsigned long index = (3 << RADIX_TREE_MAP_SHIFT) - 1; ++ ++ rcu_register_thread(); ++ while (!stop_iteration) { ++ struct item *item = xa_load(ptr, index); ++ assert(!xa_is_internal(item)); ++ } ++ rcu_unregister_thread(); ++ ++ return NULL; ++} ++ ++static void load_race(struct xarray *xa) ++{ ++ const int num_threads = sysconf(_SC_NPROCESSORS_ONLN) * 4; ++ 
pthread_t worker_thread[num_threads]; ++ int i; ++ ++ stop_iteration = false; ++ pthread_create(&worker_thread[0], NULL, &load_creator, xa); ++ for (i = 1; i < num_threads; i++) ++ pthread_create(&worker_thread[i], NULL, &load_worker, xa); ++ ++ for (i = 0; i < num_threads; i++) ++ pthread_join(worker_thread[i], NULL); ++ ++ item_kill_tree(xa); ++} ++ + static DEFINE_XARRAY(array); + + void multiorder_checks(void) +@@ -218,12 +274,20 @@ void multiorder_checks(void) + multiorder_iteration(&array); + multiorder_tagged_iteration(&array); + multiorder_iteration_race(&array); ++ load_race(&array); + + radix_tree_cpu_dead(0); + } + +-int __weak main(void) ++int __weak main(int argc, char **argv) + { ++ int opt; ++ ++ while ((opt = getopt(argc, argv, "ls:v")) != -1) { ++ if (opt == 'v') ++ test_verbose++; ++ } ++ + rcu_register_thread(); + radix_tree_init(); + multiorder_checks(); +diff --git a/tools/testing/selftests/bpf/benchs/run_bench_rename.sh b/tools/testing/selftests/bpf/benchs/run_bench_rename.sh +index 16f774b1cdbed..7b281dbe41656 100755 +--- a/tools/testing/selftests/bpf/benchs/run_bench_rename.sh ++++ b/tools/testing/selftests/bpf/benchs/run_bench_rename.sh +@@ -2,7 +2,7 @@ + + set -eufo pipefail + +-for i in base kprobe kretprobe rawtp fentry fexit fmodret ++for i in base kprobe kretprobe rawtp fentry fexit + do + summary=$(sudo ./bench -w2 -d5 -a rename-$i | tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-) + printf "%-10s: %s\n" $i "$summary" +diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c +index 8a838ea8bdf3b..b2998896f9f7b 100644 +--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c ++++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c +@@ -123,12 +123,13 @@ static void test_bpf_nf_ct(int mode) + ASSERT_EQ(skel->data->test_snat_addr, 0, "Test for source natting"); + ASSERT_EQ(skel->data->test_dnat_addr, 0, "Test for destination natting"); + end: +- if (srv_client_fd != -1) +- close(srv_client_fd); + if (client_fd != -1) + close(client_fd); ++ if (srv_client_fd != -1) ++ close(srv_client_fd); + if (srv_fd != -1) + close(srv_fd); ++ + snprintf(cmd, sizeof(cmd), iptables, "-D"); + system(cmd); + test_bpf_nf__destroy(skel); +diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +index 5af1ee8f0e6ee..36071f3f15ba1 100644 +--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c ++++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +@@ -171,8 +171,8 @@ static void verify_fail(struct kfunc_test_params *param) + case tc_test: + topts.data_in = &pkt_v4; + topts.data_size_in = sizeof(pkt_v4); +- break; + topts.repeat = 1; ++ break; + } + + skel = kfunc_call_fail__open_opts(&opts); +diff --git a/tools/testing/selftests/bpf/progs/test_cls_redirect.h b/tools/testing/selftests/bpf/progs/test_cls_redirect.h +index 76eab0aacba0c..233b089d1fbac 100644 +--- a/tools/testing/selftests/bpf/progs/test_cls_redirect.h ++++ b/tools/testing/selftests/bpf/progs/test_cls_redirect.h +@@ -12,6 +12,15 @@ + #include + #include + ++/* offsetof() is used in static asserts, and the libbpf-redefined CO-RE ++ * friendly version breaks compilation for older clang versions <= 15 ++ * when invoked in a static assert. Restore original here. 
++ */ ++#ifdef offsetof ++#undef offsetof ++#define offsetof(type, member) __builtin_offsetof(type, member) ++#endif ++ + struct gre_base_hdr { + uint16_t flags; + uint16_t protocol; +diff --git a/tools/testing/selftests/futex/functional/futex_wait_timeout.c b/tools/testing/selftests/futex/functional/futex_wait_timeout.c +index 3651ce17beeb9..d183f878360bc 100644 +--- a/tools/testing/selftests/futex/functional/futex_wait_timeout.c ++++ b/tools/testing/selftests/futex/functional/futex_wait_timeout.c +@@ -24,6 +24,7 @@ + + static long timeout_ns = 100000; /* 100us default timeout */ + static futex_t futex_pi; ++static pthread_barrier_t barrier; + + void usage(char *prog) + { +@@ -48,6 +49,8 @@ void *get_pi_lock(void *arg) + if (ret != 0) + error("futex_lock_pi failed\n", ret); + ++ pthread_barrier_wait(&barrier); ++ + /* Blocks forever */ + ret = futex_wait(&lock, 0, NULL, 0); + error("futex_wait failed\n", ret); +@@ -130,6 +133,7 @@ int main(int argc, char *argv[]) + basename(argv[0])); + ksft_print_msg("\tArguments: timeout=%ldns\n", timeout_ns); + ++ pthread_barrier_init(&barrier, NULL, 2); + pthread_create(&thread, NULL, get_pi_lock, NULL); + + /* initialize relative timeout */ +@@ -163,6 +167,9 @@ int main(int argc, char *argv[]) + res = futex_wait_requeue_pi(&f1, f1, &futex_pi, &to, 0); + test_timeout(res, &ret, "futex_wait_requeue_pi monotonic", ETIMEDOUT); + ++ /* Wait until the other thread calls futex_lock_pi() */ ++ pthread_barrier_wait(&barrier); ++ pthread_barrier_destroy(&barrier); + /* + * FUTEX_LOCK_PI with CLOCK_REALTIME + * Due to historical reasons, FUTEX_LOCK_PI supports only realtime +diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h +index 25f4d54067c0e..584687c3286dd 100644 +--- a/tools/testing/selftests/kselftest_harness.h ++++ b/tools/testing/selftests/kselftest_harness.h +@@ -937,7 +937,11 @@ void __wait_for_test(struct __test_metadata *t) + fprintf(TH_LOG_STREAM, + "# %s: Test terminated by timeout\n", t->name); + } else if (WIFEXITED(status)) { +- if (t->termsig != -1) { ++ if (WEXITSTATUS(status) == 255) { ++ /* SKIP */ ++ t->passed = 1; ++ t->skip = 1; ++ } else if (t->termsig != -1) { + t->passed = 0; + fprintf(TH_LOG_STREAM, + "# %s: Test exited normally instead of by signal (code: %d)\n", +@@ -949,11 +953,6 @@ void __wait_for_test(struct __test_metadata *t) + case 0: + t->passed = 1; + break; +- /* SKIP */ +- case 255: +- t->passed = 1; +- t->skip = 1; +- break; + /* Other failure, assume step report. 
*/ + default: + t->passed = 0; +diff --git a/tools/testing/selftests/resctrl/Makefile b/tools/testing/selftests/resctrl/Makefile +index 73d53257df42f..5073dbc961258 100644 +--- a/tools/testing/selftests/resctrl/Makefile ++++ b/tools/testing/selftests/resctrl/Makefile +@@ -7,4 +7,4 @@ TEST_GEN_PROGS := resctrl_tests + + include ../lib.mk + +-$(OUTPUT)/resctrl_tests: $(wildcard *.c) ++$(OUTPUT)/resctrl_tests: $(wildcard *.[ch]) +diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c +index 0485863a169f2..338f714453935 100644 +--- a/tools/testing/selftests/resctrl/cache.c ++++ b/tools/testing/selftests/resctrl/cache.c +@@ -89,21 +89,19 @@ static int reset_enable_llc_perf(pid_t pid, int cpu_no) + static int get_llc_perf(unsigned long *llc_perf_miss) + { + __u64 total_misses; ++ int ret; + + /* Stop counters after one span to get miss rate */ + + ioctl(fd_lm, PERF_EVENT_IOC_DISABLE, 0); + +- if (read(fd_lm, &rf_cqm, sizeof(struct read_format)) == -1) { ++ ret = read(fd_lm, &rf_cqm, sizeof(struct read_format)); ++ if (ret == -1) { + perror("Could not get llc misses through perf"); +- + return -1; + } + + total_misses = rf_cqm.values[0].value; +- +- close(fd_lm); +- + *llc_perf_miss = total_misses; + + return 0; +@@ -258,19 +256,25 @@ int cat_val(struct resctrl_val_param *param) + memflush, operation, resctrl_val)) { + fprintf(stderr, "Error-running fill buffer\n"); + ret = -1; +- break; ++ goto pe_close; + } + + sleep(1); + ret = measure_cache_vals(param, bm_pid); + if (ret) +- break; ++ goto pe_close; ++ ++ close(fd_lm); + } else { + break; + } + } + + return ret; ++ ++pe_close: ++ close(fd_lm); ++ return ret; + } + + /* +diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c +index c20d0a7ecbe63..ab1d91328d67b 100644 +--- a/tools/testing/selftests/resctrl/fill_buf.c ++++ b/tools/testing/selftests/resctrl/fill_buf.c +@@ -184,12 +184,13 @@ fill_cache(unsigned long long buf_size, int malloc_and_init, int memflush, + else + ret = fill_cache_write(start_ptr, end_ptr, resctrl_val); + ++ free(startptr); ++ + if (ret) { + printf("\n Error in fill cache read/write...\n"); + return -1; + } + +- free(startptr); + + return 0; + } +diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h +index f44fa2de4d986..dbe5cfb545585 100644 +--- a/tools/testing/selftests/resctrl/resctrl.h ++++ b/tools/testing/selftests/resctrl/resctrl.h +@@ -43,6 +43,7 @@ + do { \ + perror(err_msg); \ + kill(ppid, SIGKILL); \ ++ umount_resctrlfs(); \ + exit(EXIT_FAILURE); \ + } while (0) + +diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c +index 9584eb57e0eda..365d30779768a 100644 +--- a/virt/kvm/vfio.c ++++ b/virt/kvm/vfio.c +@@ -21,7 +21,7 @@ + #include + #endif + +-struct kvm_vfio_group { ++struct kvm_vfio_file { + struct list_head node; + struct file *file; + #ifdef CONFIG_SPAPR_TCE_IOMMU +@@ -30,7 +30,7 @@ struct kvm_vfio_group { + }; + + struct kvm_vfio { +- struct list_head group_list; ++ struct list_head file_list; + struct mutex lock; + bool noncoherent; + }; +@@ -98,34 +98,35 @@ static struct iommu_group *kvm_vfio_file_iommu_group(struct file *file) + } + + static void kvm_spapr_tce_release_vfio_group(struct kvm *kvm, +- struct kvm_vfio_group *kvg) ++ struct kvm_vfio_file *kvf) + { +- if (WARN_ON_ONCE(!kvg->iommu_group)) ++ if (WARN_ON_ONCE(!kvf->iommu_group)) + return; + +- kvm_spapr_tce_release_iommu_group(kvm, kvg->iommu_group); +- iommu_group_put(kvg->iommu_group); +- kvg->iommu_group = NULL; ++ 
kvm_spapr_tce_release_iommu_group(kvm, kvf->iommu_group); ++ iommu_group_put(kvf->iommu_group); ++ kvf->iommu_group = NULL; + } + #endif + + /* +- * Groups can use the same or different IOMMU domains. If the same then +- * adding a new group may change the coherency of groups we've previously +- * been told about. We don't want to care about any of that so we retest +- * each group and bail as soon as we find one that's noncoherent. This +- * means we only ever [un]register_noncoherent_dma once for the whole device. ++ * Groups/devices can use the same or different IOMMU domains. If the same ++ * then adding a new group/device may change the coherency of groups/devices ++ * we've previously been told about. We don't want to care about any of ++ * that so we retest each group/device and bail as soon as we find one that's ++ * noncoherent. This means we only ever [un]register_noncoherent_dma once ++ * for the whole device. + */ + static void kvm_vfio_update_coherency(struct kvm_device *dev) + { + struct kvm_vfio *kv = dev->private; + bool noncoherent = false; +- struct kvm_vfio_group *kvg; ++ struct kvm_vfio_file *kvf; + + mutex_lock(&kv->lock); + +- list_for_each_entry(kvg, &kv->group_list, node) { +- if (!kvm_vfio_file_enforced_coherent(kvg->file)) { ++ list_for_each_entry(kvf, &kv->file_list, node) { ++ if (!kvm_vfio_file_enforced_coherent(kvf->file)) { + noncoherent = true; + break; + } +@@ -143,10 +144,10 @@ static void kvm_vfio_update_coherency(struct kvm_device *dev) + mutex_unlock(&kv->lock); + } + +-static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd) ++static int kvm_vfio_file_add(struct kvm_device *dev, unsigned int fd) + { + struct kvm_vfio *kv = dev->private; +- struct kvm_vfio_group *kvg; ++ struct kvm_vfio_file *kvf; + struct file *filp; + int ret; + +@@ -162,27 +163,27 @@ static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd) + + mutex_lock(&kv->lock); + +- list_for_each_entry(kvg, &kv->group_list, node) { +- if (kvg->file == filp) { ++ list_for_each_entry(kvf, &kv->file_list, node) { ++ if (kvf->file == filp) { + ret = -EEXIST; + goto err_unlock; + } + } + +- kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT); +- if (!kvg) { ++ kvf = kzalloc(sizeof(*kvf), GFP_KERNEL_ACCOUNT); ++ if (!kvf) { + ret = -ENOMEM; + goto err_unlock; + } + +- kvg->file = filp; +- list_add_tail(&kvg->node, &kv->group_list); ++ kvf->file = filp; ++ list_add_tail(&kvf->node, &kv->file_list); + + kvm_arch_start_assignment(dev->kvm); ++ kvm_vfio_file_set_kvm(kvf->file, dev->kvm); + + mutex_unlock(&kv->lock); + +- kvm_vfio_file_set_kvm(kvg->file, dev->kvm); + kvm_vfio_update_coherency(dev); + + return 0; +@@ -193,10 +194,10 @@ err_fput: + return ret; + } + +-static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd) ++static int kvm_vfio_file_del(struct kvm_device *dev, unsigned int fd) + { + struct kvm_vfio *kv = dev->private; +- struct kvm_vfio_group *kvg; ++ struct kvm_vfio_file *kvf; + struct fd f; + int ret; + +@@ -208,18 +209,18 @@ static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd) + + mutex_lock(&kv->lock); + +- list_for_each_entry(kvg, &kv->group_list, node) { +- if (kvg->file != f.file) ++ list_for_each_entry(kvf, &kv->file_list, node) { ++ if (kvf->file != f.file) + continue; + +- list_del(&kvg->node); ++ list_del(&kvf->node); + kvm_arch_end_assignment(dev->kvm); + #ifdef CONFIG_SPAPR_TCE_IOMMU +- kvm_spapr_tce_release_vfio_group(dev->kvm, kvg); ++ kvm_spapr_tce_release_vfio_group(dev->kvm, kvf); + #endif +- 
kvm_vfio_file_set_kvm(kvg->file, NULL); +- fput(kvg->file); +- kfree(kvg); ++ kvm_vfio_file_set_kvm(kvf->file, NULL); ++ fput(kvf->file); ++ kfree(kvf); + ret = 0; + break; + } +@@ -234,12 +235,12 @@ static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd) + } + + #ifdef CONFIG_SPAPR_TCE_IOMMU +-static int kvm_vfio_group_set_spapr_tce(struct kvm_device *dev, +- void __user *arg) ++static int kvm_vfio_file_set_spapr_tce(struct kvm_device *dev, ++ void __user *arg) + { + struct kvm_vfio_spapr_tce param; + struct kvm_vfio *kv = dev->private; +- struct kvm_vfio_group *kvg; ++ struct kvm_vfio_file *kvf; + struct fd f; + int ret; + +@@ -254,20 +255,20 @@ static int kvm_vfio_group_set_spapr_tce(struct kvm_device *dev, + + mutex_lock(&kv->lock); + +- list_for_each_entry(kvg, &kv->group_list, node) { +- if (kvg->file != f.file) ++ list_for_each_entry(kvf, &kv->file_list, node) { ++ if (kvf->file != f.file) + continue; + +- if (!kvg->iommu_group) { +- kvg->iommu_group = kvm_vfio_file_iommu_group(kvg->file); +- if (WARN_ON_ONCE(!kvg->iommu_group)) { ++ if (!kvf->iommu_group) { ++ kvf->iommu_group = kvm_vfio_file_iommu_group(kvf->file); ++ if (WARN_ON_ONCE(!kvf->iommu_group)) { + ret = -EIO; + goto err_fdput; + } + } + + ret = kvm_spapr_tce_attach_iommu_group(dev->kvm, param.tablefd, +- kvg->iommu_group); ++ kvf->iommu_group); + break; + } + +@@ -278,8 +279,8 @@ err_fdput: + } + #endif + +-static int kvm_vfio_set_group(struct kvm_device *dev, long attr, +- void __user *arg) ++static int kvm_vfio_set_file(struct kvm_device *dev, long attr, ++ void __user *arg) + { + int32_t __user *argp = arg; + int32_t fd; +@@ -288,16 +289,16 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, + case KVM_DEV_VFIO_GROUP_ADD: + if (get_user(fd, argp)) + return -EFAULT; +- return kvm_vfio_group_add(dev, fd); ++ return kvm_vfio_file_add(dev, fd); + + case KVM_DEV_VFIO_GROUP_DEL: + if (get_user(fd, argp)) + return -EFAULT; +- return kvm_vfio_group_del(dev, fd); ++ return kvm_vfio_file_del(dev, fd); + + #ifdef CONFIG_SPAPR_TCE_IOMMU + case KVM_DEV_VFIO_GROUP_SET_SPAPR_TCE: +- return kvm_vfio_group_set_spapr_tce(dev, arg); ++ return kvm_vfio_file_set_spapr_tce(dev, arg); + #endif + } + +@@ -309,8 +310,8 @@ static int kvm_vfio_set_attr(struct kvm_device *dev, + { + switch (attr->group) { + case KVM_DEV_VFIO_GROUP: +- return kvm_vfio_set_group(dev, attr->attr, +- u64_to_user_ptr(attr->addr)); ++ return kvm_vfio_set_file(dev, attr->attr, ++ u64_to_user_ptr(attr->addr)); + } + + return -ENXIO; +@@ -339,16 +340,16 @@ static int kvm_vfio_has_attr(struct kvm_device *dev, + static void kvm_vfio_release(struct kvm_device *dev) + { + struct kvm_vfio *kv = dev->private; +- struct kvm_vfio_group *kvg, *tmp; ++ struct kvm_vfio_file *kvf, *tmp; + +- list_for_each_entry_safe(kvg, tmp, &kv->group_list, node) { ++ list_for_each_entry_safe(kvf, tmp, &kv->file_list, node) { + #ifdef CONFIG_SPAPR_TCE_IOMMU +- kvm_spapr_tce_release_vfio_group(dev->kvm, kvg); ++ kvm_spapr_tce_release_vfio_group(dev->kvm, kvf); + #endif +- kvm_vfio_file_set_kvm(kvg->file, NULL); +- fput(kvg->file); +- list_del(&kvg->node); +- kfree(kvg); ++ kvm_vfio_file_set_kvm(kvf->file, NULL); ++ fput(kvf->file); ++ list_del(&kvf->node); ++ kfree(kvf); + kvm_arch_end_assignment(dev->kvm); + } + +@@ -382,7 +383,7 @@ static int kvm_vfio_create(struct kvm_device *dev, u32 type) + if (!kv) + return -ENOMEM; + +- INIT_LIST_HEAD(&kv->group_list); ++ INIT_LIST_HEAD(&kv->file_list); + mutex_init(&kv->lock); + + dev->private = kv;