public inbox for gentoo-commits@lists.gentoo.org
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.17 commit in: /
Date: Fri,  3 Aug 2018 12:19:03 +0000 (UTC)
Message-ID: <1533298727.a435a0a68c5f50f33231d974dc564c153c825d1f.mpagano@gentoo>

commit:     a435a0a68c5f50f33231d974dc564c153c825d1f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug  3 12:18:47 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug  3 12:18:47 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a435a0a6

Linux patch 4.17.12

 0000_README              |     4 +
 1011_linux-4.17.12.patch | 11595 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11599 insertions(+)

diff --git a/0000_README b/0000_README
index a0836f2..6e0bb48 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-4.17.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.17.11
 
+Patch:  1011_linux-4.17.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.17.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-4.17.12.patch b/1011_linux-4.17.12.patch
new file mode 100644
index 0000000..9dd5854
--- /dev/null
+++ b/1011_linux-4.17.12.patch
@@ -0,0 +1,11595 @@
+diff --git a/Documentation/devicetree/bindings/net/dsa/qca8k.txt b/Documentation/devicetree/bindings/net/dsa/qca8k.txt
+index 9c67ee4890d7..bbcb255c3150 100644
+--- a/Documentation/devicetree/bindings/net/dsa/qca8k.txt
++++ b/Documentation/devicetree/bindings/net/dsa/qca8k.txt
+@@ -2,7 +2,10 @@
+ 
+ Required properties:
+ 
+-- compatible: should be "qca,qca8337"
++- compatible: should be one of:
++    "qca,qca8334"
++    "qca,qca8337"
++
+ - #size-cells: must be 0
+ - #address-cells: must be 1
+ 
+@@ -14,6 +17,20 @@ port and PHY id, each subnode describing a port needs to have a valid phandle
+ referencing the internal PHY connected to it. The CPU port of this switch is
+ always port 0.
+ 
++A CPU port node has the following optional node:
++
++- fixed-link            : Fixed-link subnode describing a link to a non-MDIO
++                          managed entity. See
++                          Documentation/devicetree/bindings/net/fixed-link.txt
++                          for details.
++
++For QCA8K the 'fixed-link' sub-node supports only the following properties:
++
++- 'speed' (integer, mandatory), to indicate the link speed. Accepted
++  values are 10, 100 and 1000
++- 'full-duplex' (boolean, optional), to indicate that full duplex is
++  used. When absent, half duplex is assumed.
++
+ Example:
+ 
+ 
+@@ -53,6 +70,10 @@ Example:
+ 					label = "cpu";
+ 					ethernet = <&gmac1>;
+ 					phy-mode = "rgmii";
++					fixed-link {
++						speed = 1000;
++						full-duplex;
++					};
+ 				};
+ 
+ 				port@1 {
+diff --git a/Documentation/devicetree/bindings/net/meson-dwmac.txt b/Documentation/devicetree/bindings/net/meson-dwmac.txt
+index 61cada22ae6c..1321bb194ed9 100644
+--- a/Documentation/devicetree/bindings/net/meson-dwmac.txt
++++ b/Documentation/devicetree/bindings/net/meson-dwmac.txt
+@@ -11,6 +11,7 @@ Required properties on all platforms:
+ 			- "amlogic,meson8b-dwmac"
+ 			- "amlogic,meson8m2-dwmac"
+ 			- "amlogic,meson-gxbb-dwmac"
++			- "amlogic,meson-axg-dwmac"
+ 		Additionally "snps,dwmac" and any applicable more
+ 		detailed version number described in net/stmmac.txt
+ 		should be used.
+diff --git a/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt
+index 2c12f9789116..54ecb8ab7788 100644
+--- a/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/meson,pinctrl.txt
+@@ -3,8 +3,10 @@
+ Required properties for the root node:
+  - compatible: one of "amlogic,meson8-cbus-pinctrl"
+ 		      "amlogic,meson8b-cbus-pinctrl"
++		      "amlogic,meson8m2-cbus-pinctrl"
+ 		      "amlogic,meson8-aobus-pinctrl"
+ 		      "amlogic,meson8b-aobus-pinctrl"
++		      "amlogic,meson8m2-aobus-pinctrl"
+ 		      "amlogic,meson-gxbb-periphs-pinctrl"
+ 		      "amlogic,meson-gxbb-aobus-pinctrl"
+ 		      "amlogic,meson-gxl-periphs-pinctrl"
+diff --git a/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt b/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
+index 74b2f03c1515..fa56697a1ba6 100644
+--- a/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
++++ b/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
+@@ -7,6 +7,7 @@ Required properties:
+ 	         - "renesas,r7s72100-wdt" (RZ/A1)
+ 	         - "renesas,r8a7795-wdt" (R-Car H3)
+ 	         - "renesas,r8a7796-wdt" (R-Car M3-W)
++		 - "renesas,r8a77965-wdt" (R-Car M3-N)
+ 	         - "renesas,r8a77970-wdt" (R-Car V3M)
+ 	         - "renesas,r8a77995-wdt" (R-Car D3)
+ 
+diff --git a/Documentation/vfio-mediated-device.txt b/Documentation/vfio-mediated-device.txt
+index 1b3950346532..c3f69bcaf96e 100644
+--- a/Documentation/vfio-mediated-device.txt
++++ b/Documentation/vfio-mediated-device.txt
+@@ -145,6 +145,11 @@ The functions in the mdev_parent_ops structure are as follows:
+ * create: allocate basic resources in a driver for a mediated device
+ * remove: free resources in a driver when a mediated device is destroyed
+ 
++(Note that mdev-core provides no implicit serialization of create/remove
++callbacks per mdev parent device, per mdev type, or any other categorization.
++Vendor drivers are expected to be fully asynchronous in this respect or
++provide their own internal resource protection.)
++
+ The callbacks in the mdev_parent_ops structure are as follows:
+ 
+ * open: open callback of mediated device
+diff --git a/Makefile b/Makefile
+index e2664c641109..790e8faf0ddc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 17
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/emev2.dtsi b/arch/arm/boot/dts/emev2.dtsi
+index 42ea246e71cb..fec1241b858f 100644
+--- a/arch/arm/boot/dts/emev2.dtsi
++++ b/arch/arm/boot/dts/emev2.dtsi
+@@ -31,13 +31,13 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		cpu@0 {
++		cpu0: cpu@0 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0>;
+ 			clock-frequency = <533000000>;
+ 		};
+-		cpu@1 {
++		cpu1: cpu@1 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <1>;
+@@ -57,6 +57,7 @@
+ 		compatible = "arm,cortex-a9-pmu";
+ 		interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-affinity = <&cpu0>, <&cpu1>;
+ 	};
+ 
+ 	clocks@e0110000 {
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index d5628af2e301..563451167e7f 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -559,8 +559,6 @@
+ 		status = "okay";
+ 
+ 		port@2 {
+-			reg = <2>;
+-
+ 			lvds0_out: endpoint {
+ 				remote-endpoint = <&panel_in_lvds0>;
+ 			};
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index 3d65c0192f69..ab4fc5b99ad3 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -488,6 +488,10 @@
+ 							remote-endpoint = <&ipu_di0_lvds0>;
+ 						};
+ 					};
++
++					port@2 {
++						reg = <2>;
++					};
+ 				};
+ 
+ 				lvds-channel@1 {
+@@ -503,6 +507,10 @@
+ 							remote-endpoint = <&ipu_di1_lvds1>;
+ 						};
+ 					};
++
++					port@2 {
++						reg = <2>;
++					};
+ 				};
+ 			};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi
+index a32089132263..855dc6f9df75 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard-revb1.dtsi
+@@ -17,7 +17,6 @@
+ 	imx6qdl-wandboard {
+ 		pinctrl_hog: hoggrp {
+ 			fsl,pins = <
+-				MX6QDL_PAD_GPIO_0__CCM_CLKO1		0x130b0		/* GPIO_0_CLKO */
+ 				MX6QDL_PAD_GPIO_2__GPIO1_IO02		0x80000000	/* uSDHC1 CD */
+ 				MX6QDL_PAD_EIM_DA9__GPIO3_IO09		0x80000000	/* uSDHC3 CD */
+ 				MX6QDL_PAD_EIM_EB1__GPIO2_IO29		0x0f0b0		/* WL_REF_ON */
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi
+index 8d893a78cdf0..49a0a557e62e 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard-revc1.dtsi
+@@ -17,7 +17,6 @@
+ 	imx6qdl-wandboard {
+ 		pinctrl_hog: hoggrp {
+ 			fsl,pins = <
+-				MX6QDL_PAD_GPIO_0__CCM_CLKO1		0x130b0		/* GPIO_0_CLKO */
+ 				MX6QDL_PAD_GPIO_2__GPIO1_IO02		0x80000000	/* uSDHC1 CD */
+ 				MX6QDL_PAD_EIM_DA9__GPIO3_IO09		0x80000000	/* uSDHC3 CD */
+ 				MX6QDL_PAD_CSI0_DAT14__GPIO6_IO00	0x0f0b0		/* WIFI_ON (reset, active low) */
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi
+index 3a8a4952d45e..69d9c8661439 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard-revd1.dtsi
+@@ -147,7 +147,6 @@
+ 	imx6qdl-wandboard {
+ 		pinctrl_hog: hoggrp {
+ 			fsl,pins = <
+-				MX6QDL_PAD_GPIO_0__CCM_CLKO1     	0x130b0
+ 				MX6QDL_PAD_EIM_D22__USB_OTG_PWR		0x80000000	/* USB Power Enable */
+ 				MX6QDL_PAD_GPIO_2__GPIO1_IO02		0x80000000	/* USDHC1 CD */
+ 				MX6QDL_PAD_EIM_DA9__GPIO3_IO09		0x80000000	/* uSDHC3 CD */
+diff --git a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+index ed96d7b5feab..6b0a86fa72d3 100644
+--- a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+@@ -83,6 +83,8 @@
+ 	status = "okay";
+ 
+ 	codec: sgtl5000@a {
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_mclk>;
+ 		compatible = "fsl,sgtl5000";
+ 		reg = <0x0a>;
+ 		clocks = <&clks IMX6QDL_CLK_CKO>;
+@@ -142,6 +144,12 @@
+ 			>;
+ 		};
+ 
++		pinctrl_mclk: mclkgrp {
++			fsl,pins = <
++				MX6QDL_PAD_GPIO_0__CCM_CLKO1		0x130b0
++			>;
++		};
++
+ 		pinctrl_spdif: spdifgrp {
+ 			fsl,pins = <
+ 				MX6QDL_PAD_ENET_RXD0__SPDIF_OUT		0x1b0b0
+diff --git a/arch/arm/boot/dts/sh73a0.dtsi b/arch/arm/boot/dts/sh73a0.dtsi
+index 914a7c2a584f..b0c20544df20 100644
+--- a/arch/arm/boot/dts/sh73a0.dtsi
++++ b/arch/arm/boot/dts/sh73a0.dtsi
+@@ -22,7 +22,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		cpu@0 {
++		cpu0: cpu@0 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0>;
+@@ -31,7 +31,7 @@
+ 			power-domains = <&pd_a2sl>;
+ 			next-level-cache = <&L2>;
+ 		};
+-		cpu@1 {
++		cpu1: cpu@1 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <1>;
+@@ -91,6 +91,7 @@
+ 		compatible = "arm,cortex-a9-pmu";
+ 		interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-affinity = <&cpu0>, <&cpu1>;
+ 	};
+ 
+ 	cmt1: timer@e6138000 {
+diff --git a/arch/arm/boot/dts/stih407-pinctrl.dtsi b/arch/arm/boot/dts/stih407-pinctrl.dtsi
+index 53c6888d1fc0..e393519fb84c 100644
+--- a/arch/arm/boot/dts/stih407-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stih407-pinctrl.dtsi
+@@ -52,7 +52,7 @@
+ 			st,syscfg = <&syscfg_sbc>;
+ 			reg = <0x0961f080 0x4>;
+ 			reg-names = "irqmux";
+-			interrupts = <GIC_SPI 188 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "irqmux";
+ 			ranges = <0 0x09610000 0x6000>;
+ 
+@@ -376,7 +376,7 @@
+ 			st,syscfg = <&syscfg_front>;
+ 			reg = <0x0920f080 0x4>;
+ 			reg-names = "irqmux";
+-			interrupts = <GIC_SPI 189 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "irqmux";
+ 			ranges = <0 0x09200000 0x10000>;
+ 
+@@ -936,7 +936,7 @@
+ 			st,syscfg = <&syscfg_front>;
+ 			reg = <0x0921f080 0x4>;
+ 			reg-names = "irqmux";
+-			interrupts = <GIC_SPI 190 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "irqmux";
+ 			ranges = <0 0x09210000 0x10000>;
+ 
+@@ -969,7 +969,7 @@
+ 			st,syscfg = <&syscfg_rear>;
+ 			reg = <0x0922f080 0x4>;
+ 			reg-names = "irqmux";
+-			interrupts = <GIC_SPI 191 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "irqmux";
+ 			ranges = <0 0x09220000 0x6000>;
+ 
+@@ -1164,7 +1164,7 @@
+ 			st,syscfg = <&syscfg_flash>;
+ 			reg = <0x0923f080 0x4>;
+ 			reg-names = "irqmux";
+-			interrupts = <GIC_SPI 192 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "irqmux";
+ 			ranges = <0 0x09230000 0x3000>;
+ 
+diff --git a/arch/arm/boot/dts/stih410.dtsi b/arch/arm/boot/dts/stih410.dtsi
+index 3313005ee15c..888548ea9b5c 100644
+--- a/arch/arm/boot/dts/stih410.dtsi
++++ b/arch/arm/boot/dts/stih410.dtsi
+@@ -43,7 +43,7 @@
+ 		ohci0: usb@9a03c00 {
+ 			compatible = "st,st-ohci-300x";
+ 			reg = <0x9a03c00 0x100>;
+-			interrupts = <GIC_SPI 180 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+ 				 <&clk_s_c0_flexgen CLK_RX_ICN_DISP_0>;
+ 			resets = <&powerdown STIH407_USB2_PORT0_POWERDOWN>,
+@@ -58,7 +58,7 @@
+ 		ehci0: usb@9a03e00 {
+ 			compatible = "st,st-ehci-300x";
+ 			reg = <0x9a03e00 0x100>;
+-			interrupts = <GIC_SPI 151 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_usb0>;
+ 			clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+@@ -75,7 +75,7 @@
+ 		ohci1: usb@9a83c00 {
+ 			compatible = "st,st-ohci-300x";
+ 			reg = <0x9a83c00 0x100>;
+-			interrupts = <GIC_SPI 181 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+ 				 <&clk_s_c0_flexgen CLK_RX_ICN_DISP_0>;
+ 			resets = <&powerdown STIH407_USB2_PORT1_POWERDOWN>,
+@@ -90,7 +90,7 @@
+ 		ehci1: usb@9a83e00 {
+ 			compatible = "st,st-ehci-300x";
+ 			reg = <0x9a83e00 0x100>;
+-			interrupts = <GIC_SPI 153 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 153 IRQ_TYPE_LEVEL_HIGH>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_usb1>;
+ 			clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>,
+@@ -202,7 +202,7 @@
+ 				reg = <0x8d04000 0x1000>;
+ 				reg-names = "hdmi-reg";
+ 				#sound-dai-cells = <0>;
+-				interrupts = <GIC_SPI 106 IRQ_TYPE_NONE>;
++				interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
+ 				interrupt-names	= "irq";
+ 				clock-names = "pix",
+ 					      "tmds",
+@@ -254,7 +254,7 @@
+ 		bdisp0:bdisp@9f10000 {
+ 			compatible = "st,stih407-bdisp";
+ 			reg = <0x9f10000 0x1000>;
+-			interrupts = <GIC_SPI 38 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 			clock-names = "bdisp";
+ 			clocks = <&clk_s_c0_flexgen CLK_IC_BDISP_0>;
+ 		};
+@@ -263,8 +263,8 @@
+ 			compatible = "st,st-hva";
+ 			reg = <0x8c85000 0x400>, <0x6000000 0x40000>;
+ 			reg-names = "hva_registers", "hva_esram";
+-			interrupts = <GIC_SPI 58 IRQ_TYPE_NONE>,
+-				     <GIC_SPI 59 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>;
+ 			clock-names = "clk_hva";
+ 			clocks = <&clk_s_c0_flexgen CLK_HVA>;
+ 		};
+@@ -292,7 +292,7 @@
+ 			reg = <0x94a087c 0x64>;
+ 			clocks = <&clk_sysin>;
+ 			clock-names = "cec-clk";
+-			interrupts = <GIC_SPI 140 IRQ_TYPE_NONE>;
++			interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "cec-irq";
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_cec0_default>;
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index 5539fba892ce..ba2c10d1db4a 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -708,7 +708,7 @@ static inline void emit_a32_arsh_r64(const u8 dst[], const u8 src[], bool dstk,
+ }
+ 
+ /* dst = dst >> src */
+-static inline void emit_a32_lsr_r64(const u8 dst[], const u8 src[], bool dstk,
++static inline void emit_a32_rsh_r64(const u8 dst[], const u8 src[], bool dstk,
+ 				     bool sstk, struct jit_ctx *ctx) {
+ 	const u8 *tmp = bpf2a32[TMP_REG_1];
+ 	const u8 *tmp2 = bpf2a32[TMP_REG_2];
+@@ -724,7 +724,7 @@ static inline void emit_a32_lsr_r64(const u8 dst[], const u8 src[], bool dstk,
+ 		emit(ARM_LDR_I(rm, ARM_SP, STACK_VAR(dst_hi)), ctx);
+ 	}
+ 
+-	/* Do LSH operation */
++	/* Do RSH operation */
+ 	emit(ARM_RSB_I(ARM_IP, rt, 32), ctx);
+ 	emit(ARM_SUBS_I(tmp2[0], rt, 32), ctx);
+ 	emit(ARM_MOV_SR(ARM_LR, rd, SRTYPE_LSR, rt), ctx);
+@@ -774,7 +774,7 @@ static inline void emit_a32_lsh_i64(const u8 dst[], bool dstk,
+ }
+ 
+ /* dst = dst >> val */
+-static inline void emit_a32_lsr_i64(const u8 dst[], bool dstk,
++static inline void emit_a32_rsh_i64(const u8 dst[], bool dstk,
+ 				    const u32 val, struct jit_ctx *ctx) {
+ 	const u8 *tmp = bpf2a32[TMP_REG_1];
+ 	const u8 *tmp2 = bpf2a32[TMP_REG_2];
+@@ -1330,7 +1330,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ 	case BPF_ALU64 | BPF_RSH | BPF_K:
+ 		if (unlikely(imm > 63))
+ 			return -EINVAL;
+-		emit_a32_lsr_i64(dst, dstk, imm, ctx);
++		emit_a32_rsh_i64(dst, dstk, imm, ctx);
+ 		break;
+ 	/* dst = dst << src */
+ 	case BPF_ALU64 | BPF_LSH | BPF_X:
+@@ -1338,7 +1338,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ 		break;
+ 	/* dst = dst >> src */
+ 	case BPF_ALU64 | BPF_RSH | BPF_X:
+-		emit_a32_lsr_r64(dst, src, dstk, sstk, ctx);
++		emit_a32_rsh_r64(dst, src, dstk, sstk, ctx);
+ 		break;
+ 	/* dst = dst >> src (signed) */
+ 	case BPF_ALU64 | BPF_ARSH | BPF_X:
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 2a7f36abd2dd..326ee6b59aaa 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -93,20 +93,12 @@
+ 		regulator-always-on;
+ 	};
+ 
+-	rsnd_ak4613: sound {
+-		compatible = "simple-audio-card";
++	sound_card: sound {
++		compatible = "audio-graph-card";
+ 
+-		simple-audio-card,format = "left_j";
+-		simple-audio-card,bitclock-master = <&sndcpu>;
+-		simple-audio-card,frame-master = <&sndcpu>;
++		label = "rcar-sound";
+ 
+-		sndcpu: simple-audio-card,cpu {
+-			sound-dai = <&rcar_sound>;
+-		};
+-
+-		sndcodec: simple-audio-card,codec {
+-			sound-dai = <&ak4613>;
+-		};
++		dais = <&rsnd_port0>;
+ 	};
+ 
+ 	vbus0_usb2: regulator-vbus0-usb2 {
+@@ -322,6 +314,12 @@
+ 		asahi-kasei,out4-single-end;
+ 		asahi-kasei,out5-single-end;
+ 		asahi-kasei,out6-single-end;
++
++		port {
++			ak4613_endpoint: endpoint {
++				remote-endpoint = <&rsnd_endpoint0>;
++			};
++		};
+ 	};
+ 
+ 	cs2000: clk_multiplier@4f {
+@@ -581,10 +579,18 @@
+ 		 <&audio_clk_c>,
+ 		 <&cpg CPG_CORE CPG_AUDIO_CLK_I>;
+ 
+-	rcar_sound,dai {
+-		dai0 {
+-			playback = <&ssi0 &src0 &dvc0>;
+-			capture  = <&ssi1 &src1 &dvc1>;
++	ports {
++		rsnd_port0: port@0 {
++			rsnd_endpoint0: endpoint {
++				remote-endpoint = <&ak4613_endpoint>;
++
++				dai-format = "left_j";
++				bitclock-master = <&rsnd_endpoint0>;
++				frame-master = <&rsnd_endpoint0>;
++
++				playback = <&ssi0 &src0 &dvc0>;
++				capture  = <&ssi1 &src1 &dvc1>;
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index fe005df02ed3..13ec8815a91b 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -333,6 +333,8 @@ CONFIG_GPIO_XGENE_SB=y
+ CONFIG_GPIO_PCA953X=y
+ CONFIG_GPIO_PCA953X_IRQ=y
+ CONFIG_GPIO_MAX77620=y
++CONFIG_POWER_AVS=y
++CONFIG_ROCKCHIP_IODOMAIN=y
+ CONFIG_POWER_RESET_MSM=y
+ CONFIG_POWER_RESET_XGENE=y
+ CONFIG_POWER_RESET_SYSCON=y
+diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
+index 4f5fd2a36e6e..3b0938281541 100644
+--- a/arch/arm64/include/asm/cmpxchg.h
++++ b/arch/arm64/include/asm/cmpxchg.h
+@@ -204,7 +204,9 @@ static inline void __cmpwait_case_##name(volatile void *ptr,		\
+ 	unsigned long tmp;						\
+ 									\
+ 	asm volatile(							\
+-	"	ldxr" #sz "\t%" #w "[tmp], %[v]\n"		\
++	"	sevl\n"							\
++	"	wfe\n"							\
++	"	ldxr" #sz "\t%" #w "[tmp], %[v]\n"			\
+ 	"	eor	%" #w "[tmp], %" #w "[tmp], %" #w "[val]\n"	\
+ 	"	cbnz	%" #w "[tmp], 1f\n"				\
+ 	"	wfe\n"							\
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 1b18b4722420..86d9f9d303b0 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -611,11 +611,13 @@ void __init mem_init(void)
+ 	BUILD_BUG_ON(TASK_SIZE_32			> TASK_SIZE_64);
+ #endif
+ 
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
+ 	/*
+ 	 * Make sure we chose the upper bound of sizeof(struct page)
+-	 * correctly.
++	 * correctly when sizing the VMEMMAP array.
+ 	 */
+ 	BUILD_BUG_ON(sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT));
++#endif
+ 
+ 	if (PAGE_SIZE >= 16384 && get_num_physpages() <= 128) {
+ 		extern int sysctl_overcommit_memory;
+diff --git a/arch/microblaze/boot/Makefile b/arch/microblaze/boot/Makefile
+index fd46385a4c97..600e5a198bd2 100644
+--- a/arch/microblaze/boot/Makefile
++++ b/arch/microblaze/boot/Makefile
+@@ -22,17 +22,19 @@ $(obj)/linux.bin.gz: $(obj)/linux.bin FORCE
+ quiet_cmd_cp = CP      $< $@$2
+ 	cmd_cp = cat $< >$@$2 || (rm -f $@ && echo false)
+ 
+-quiet_cmd_strip = STRIP   $@
++quiet_cmd_strip = STRIP   $< $@$2
+ 	cmd_strip = $(STRIP) -K microblaze_start -K _end -K __log_buf \
+-				-K _fdt_start vmlinux -o $@
++				-K _fdt_start $< -o $@$2
+ 
+ UIMAGE_LOADADDR = $(CONFIG_KERNEL_BASE_ADDR)
++UIMAGE_IN = $@
++UIMAGE_OUT = $@.ub
+ 
+ $(obj)/simpleImage.%: vmlinux FORCE
+ 	$(call if_changed,cp,.unstrip)
+ 	$(call if_changed,objcopy)
+ 	$(call if_changed,uimage)
+-	$(call if_changed,strip)
+-	@echo 'Kernel: $@ is ready' ' (#'`cat .version`')'
++	$(call if_changed,strip,.strip)
++	@echo 'Kernel: $(UIMAGE_OUT) is ready' ' (#'`cat .version`')'
+ 
+ clean-files += simpleImage.*.unstrip linux.bin.ub
+diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
+index c7c63959ba91..e582d2c88092 100644
+--- a/arch/powerpc/include/asm/barrier.h
++++ b/arch/powerpc/include/asm/barrier.h
+@@ -76,6 +76,21 @@ do {									\
+ 	___p1;								\
+ })
+ 
++#ifdef CONFIG_PPC_BOOK3S_64
++/*
++ * Prevent execution of subsequent instructions until preceding branches have
++ * been fully resolved and are no longer executing speculatively.
++ */
++#define barrier_nospec_asm ori 31,31,0
++
++// This also acts as a compiler barrier due to the memory clobber.
++#define barrier_nospec() asm (stringify_in_c(barrier_nospec_asm) ::: "memory")
++
++#else /* !CONFIG_PPC_BOOK3S_64 */
++#define barrier_nospec_asm
++#define barrier_nospec()
++#endif
++
+ #include <asm-generic/barrier.h>
+ 
+ #endif /* _ASM_POWERPC_BARRIER_H */
+diff --git a/arch/powerpc/include/asm/cache.h b/arch/powerpc/include/asm/cache.h
+index c1d257aa4c2d..66298461b640 100644
+--- a/arch/powerpc/include/asm/cache.h
++++ b/arch/powerpc/include/asm/cache.h
+@@ -9,11 +9,14 @@
+ #if defined(CONFIG_PPC_8xx) || defined(CONFIG_403GCX)
+ #define L1_CACHE_SHIFT		4
+ #define MAX_COPY_PREFETCH	1
++#define IFETCH_ALIGN_SHIFT	2
+ #elif defined(CONFIG_PPC_E500MC)
+ #define L1_CACHE_SHIFT		6
+ #define MAX_COPY_PREFETCH	4
++#define IFETCH_ALIGN_SHIFT	3
+ #elif defined(CONFIG_PPC32)
+ #define MAX_COPY_PREFETCH	4
++#define IFETCH_ALIGN_SHIFT	3	/* 603 fetches 2 insn at a time */
+ #if defined(CONFIG_PPC_47x)
+ #define L1_CACHE_SHIFT		7
+ #else
+diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
+index 0409c80c32c0..18ef59a9886d 100644
+--- a/arch/powerpc/include/asm/pkeys.h
++++ b/arch/powerpc/include/asm/pkeys.h
+@@ -26,6 +26,8 @@ extern u32 initial_allocation_mask; /* bits set for reserved keys */
+ # define VM_PKEY_BIT2	VM_HIGH_ARCH_2
+ # define VM_PKEY_BIT3	VM_HIGH_ARCH_3
+ # define VM_PKEY_BIT4	VM_HIGH_ARCH_4
++#elif !defined(VM_PKEY_BIT4)
++# define VM_PKEY_BIT4	VM_HIGH_ARCH_4
+ #endif
+ 
+ #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
+diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
+index b8a329f04814..e03c437a4d06 100644
+--- a/arch/powerpc/kernel/eeh_driver.c
++++ b/arch/powerpc/kernel/eeh_driver.c
+@@ -458,9 +458,11 @@ static void *eeh_add_virt_device(void *data, void *userdata)
+ 
+ 	driver = eeh_pcid_get(dev);
+ 	if (driver) {
+-		eeh_pcid_put(dev);
+-		if (driver->err_handler)
++		if (driver->err_handler) {
++			eeh_pcid_put(dev);
+ 			return NULL;
++		}
++		eeh_pcid_put(dev);
+ 	}
+ 
+ #ifdef CONFIG_PCI_IOV
+@@ -497,17 +499,19 @@ static void *eeh_rmv_device(void *data, void *userdata)
+ 	if (eeh_dev_removed(edev))
+ 		return NULL;
+ 
+-	driver = eeh_pcid_get(dev);
+-	if (driver) {
+-		eeh_pcid_put(dev);
+-		if (removed &&
+-		    eeh_pe_passed(edev->pe))
+-			return NULL;
+-		if (removed &&
+-		    driver->err_handler &&
+-		    driver->err_handler->error_detected &&
+-		    driver->err_handler->slot_reset)
++	if (removed) {
++		if (eeh_pe_passed(edev->pe))
+ 			return NULL;
++		driver = eeh_pcid_get(dev);
++		if (driver) {
++			if (driver->err_handler &&
++			    driver->err_handler->error_detected &&
++			    driver->err_handler->slot_reset) {
++				eeh_pcid_put(dev);
++				return NULL;
++			}
++			eeh_pcid_put(dev);
++		}
+ 	}
+ 
+ 	/* Remove it from PCI subsystem */
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index d8670a37d70c..6cab07e76732 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -913,7 +913,7 @@ start_here:
+ 	tovirt(r6,r6)
+ 	lis	r5, abatron_pteptrs@h
+ 	ori	r5, r5, abatron_pteptrs@l
+-	stw	r5, 0xf0(r0)	/* Must match your Abatron config file */
++	stw	r5, 0xf0(0)	/* Must match your Abatron config file */
+ 	tophys(r5,r5)
+ 	stw	r6, 0(r5)
+ 
+diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c
+index 85ad2f78b889..51cba7d7a4fd 100644
+--- a/arch/powerpc/kernel/pci_32.c
++++ b/arch/powerpc/kernel/pci_32.c
+@@ -11,6 +11,7 @@
+ #include <linux/sched.h>
+ #include <linux/errno.h>
+ #include <linux/bootmem.h>
++#include <linux/syscalls.h>
+ #include <linux/irq.h>
+ #include <linux/list.h>
+ #include <linux/of.h>
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index f9d6befb55a6..67f9c157bcc0 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -334,6 +334,7 @@ static void __init prom_print_dec(unsigned long val)
+ 	call_prom("write", 3, 1, prom.stdout, buf+i, size);
+ }
+ 
++__printf(1, 2)
+ static void __init prom_printf(const char *format, ...)
+ {
+ 	const char *p, *q, *s;
+@@ -1160,7 +1161,7 @@ static void __init prom_send_capabilities(void)
+ 		 */
+ 
+ 		cores = DIV_ROUND_UP(NR_CPUS, prom_count_smt_threads());
+-		prom_printf("Max number of cores passed to firmware: %lu (NR_CPUS = %lu)\n",
++		prom_printf("Max number of cores passed to firmware: %u (NR_CPUS = %d)\n",
+ 			    cores, NR_CPUS);
+ 
+ 		ibm_architecture_vec.vec5.max_cpus = cpu_to_be32(cores);
+@@ -1242,7 +1243,7 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
+ 
+ 	if (align)
+ 		base = _ALIGN_UP(base, align);
+-	prom_debug("alloc_up(%x, %x)\n", size, align);
++	prom_debug("%s(%lx, %lx)\n", __func__, size, align);
+ 	if (ram_top == 0)
+ 		prom_panic("alloc_up() called with mem not initialized\n");
+ 
+@@ -1253,7 +1254,7 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
+ 
+ 	for(; (base + size) <= alloc_top; 
+ 	    base = _ALIGN_UP(base + 0x100000, align)) {
+-		prom_debug("    trying: 0x%x\n\r", base);
++		prom_debug("    trying: 0x%lx\n\r", base);
+ 		addr = (unsigned long)prom_claim(base, size, 0);
+ 		if (addr != PROM_ERROR && addr != 0)
+ 			break;
+@@ -1265,12 +1266,12 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
+ 		return 0;
+ 	alloc_bottom = addr + size;
+ 
+-	prom_debug(" -> %x\n", addr);
+-	prom_debug("  alloc_bottom : %x\n", alloc_bottom);
+-	prom_debug("  alloc_top    : %x\n", alloc_top);
+-	prom_debug("  alloc_top_hi : %x\n", alloc_top_high);
+-	prom_debug("  rmo_top      : %x\n", rmo_top);
+-	prom_debug("  ram_top      : %x\n", ram_top);
++	prom_debug(" -> %lx\n", addr);
++	prom_debug("  alloc_bottom : %lx\n", alloc_bottom);
++	prom_debug("  alloc_top    : %lx\n", alloc_top);
++	prom_debug("  alloc_top_hi : %lx\n", alloc_top_high);
++	prom_debug("  rmo_top      : %lx\n", rmo_top);
++	prom_debug("  ram_top      : %lx\n", ram_top);
+ 
+ 	return addr;
+ }
+@@ -1285,7 +1286,7 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
+ {
+ 	unsigned long base, addr = 0;
+ 
+-	prom_debug("alloc_down(%x, %x, %s)\n", size, align,
++	prom_debug("%s(%lx, %lx, %s)\n", __func__, size, align,
+ 		   highmem ? "(high)" : "(low)");
+ 	if (ram_top == 0)
+ 		prom_panic("alloc_down() called with mem not initialized\n");
+@@ -1313,7 +1314,7 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
+ 	base = _ALIGN_DOWN(alloc_top - size, align);
+ 	for (; base > alloc_bottom;
+ 	     base = _ALIGN_DOWN(base - 0x100000, align))  {
+-		prom_debug("    trying: 0x%x\n\r", base);
++		prom_debug("    trying: 0x%lx\n\r", base);
+ 		addr = (unsigned long)prom_claim(base, size, 0);
+ 		if (addr != PROM_ERROR && addr != 0)
+ 			break;
+@@ -1324,12 +1325,12 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
+ 	alloc_top = addr;
+ 
+  bail:
+-	prom_debug(" -> %x\n", addr);
+-	prom_debug("  alloc_bottom : %x\n", alloc_bottom);
+-	prom_debug("  alloc_top    : %x\n", alloc_top);
+-	prom_debug("  alloc_top_hi : %x\n", alloc_top_high);
+-	prom_debug("  rmo_top      : %x\n", rmo_top);
+-	prom_debug("  ram_top      : %x\n", ram_top);
++	prom_debug(" -> %lx\n", addr);
++	prom_debug("  alloc_bottom : %lx\n", alloc_bottom);
++	prom_debug("  alloc_top    : %lx\n", alloc_top);
++	prom_debug("  alloc_top_hi : %lx\n", alloc_top_high);
++	prom_debug("  rmo_top      : %lx\n", rmo_top);
++	prom_debug("  ram_top      : %lx\n", ram_top);
+ 
+ 	return addr;
+ }
+@@ -1455,7 +1456,7 @@ static void __init prom_init_mem(void)
+ 
+ 			if (size == 0)
+ 				continue;
+-			prom_debug("    %x %x\n", base, size);
++			prom_debug("    %lx %lx\n", base, size);
+ 			if (base == 0 && (of_platform & PLATFORM_LPAR))
+ 				rmo_top = size;
+ 			if ((base + size) > ram_top)
+@@ -1475,12 +1476,12 @@ static void __init prom_init_mem(void)
+ 
+ 	if (prom_memory_limit) {
+ 		if (prom_memory_limit <= alloc_bottom) {
+-			prom_printf("Ignoring mem=%x <= alloc_bottom.\n",
+-				prom_memory_limit);
++			prom_printf("Ignoring mem=%lx <= alloc_bottom.\n",
++				    prom_memory_limit);
+ 			prom_memory_limit = 0;
+ 		} else if (prom_memory_limit >= ram_top) {
+-			prom_printf("Ignoring mem=%x >= ram_top.\n",
+-				prom_memory_limit);
++			prom_printf("Ignoring mem=%lx >= ram_top.\n",
++				    prom_memory_limit);
+ 			prom_memory_limit = 0;
+ 		} else {
+ 			ram_top = prom_memory_limit;
+@@ -1512,12 +1513,13 @@ static void __init prom_init_mem(void)
+ 		alloc_bottom = PAGE_ALIGN(prom_initrd_end);
+ 
+ 	prom_printf("memory layout at init:\n");
+-	prom_printf("  memory_limit : %x (16 MB aligned)\n", prom_memory_limit);
+-	prom_printf("  alloc_bottom : %x\n", alloc_bottom);
+-	prom_printf("  alloc_top    : %x\n", alloc_top);
+-	prom_printf("  alloc_top_hi : %x\n", alloc_top_high);
+-	prom_printf("  rmo_top      : %x\n", rmo_top);
+-	prom_printf("  ram_top      : %x\n", ram_top);
++	prom_printf("  memory_limit : %lx (16 MB aligned)\n",
++		    prom_memory_limit);
++	prom_printf("  alloc_bottom : %lx\n", alloc_bottom);
++	prom_printf("  alloc_top    : %lx\n", alloc_top);
++	prom_printf("  alloc_top_hi : %lx\n", alloc_top_high);
++	prom_printf("  rmo_top      : %lx\n", rmo_top);
++	prom_printf("  ram_top      : %lx\n", ram_top);
+ }
+ 
+ static void __init prom_close_stdin(void)
+@@ -1578,7 +1580,7 @@ static void __init prom_instantiate_opal(void)
+ 		return;
+ 	}
+ 
+-	prom_printf("instantiating opal at 0x%x...", base);
++	prom_printf("instantiating opal at 0x%llx...", base);
+ 
+ 	if (call_prom_ret("call-method", 4, 3, rets,
+ 			  ADDR("load-opal-runtime"),
+@@ -1594,10 +1596,10 @@ static void __init prom_instantiate_opal(void)
+ 
+ 	reserve_mem(base, size);
+ 
+-	prom_debug("opal base     = 0x%x\n", base);
+-	prom_debug("opal align    = 0x%x\n", align);
+-	prom_debug("opal entry    = 0x%x\n", entry);
+-	prom_debug("opal size     = 0x%x\n", (long)size);
++	prom_debug("opal base     = 0x%llx\n", base);
++	prom_debug("opal align    = 0x%llx\n", align);
++	prom_debug("opal entry    = 0x%llx\n", entry);
++	prom_debug("opal size     = 0x%llx\n", size);
+ 
+ 	prom_setprop(opal_node, "/ibm,opal", "opal-base-address",
+ 		     &base, sizeof(base));
+@@ -1674,7 +1676,7 @@ static void __init prom_instantiate_rtas(void)
+ 
+ 	prom_debug("rtas base     = 0x%x\n", base);
+ 	prom_debug("rtas entry    = 0x%x\n", entry);
+-	prom_debug("rtas size     = 0x%x\n", (long)size);
++	prom_debug("rtas size     = 0x%x\n", size);
+ 
+ 	prom_debug("prom_instantiate_rtas: end...\n");
+ }
+@@ -1732,7 +1734,7 @@ static void __init prom_instantiate_sml(void)
+ 	if (base == 0)
+ 		prom_panic("Could not allocate memory for sml\n");
+ 
+-	prom_printf("instantiating sml at 0x%x...", base);
++	prom_printf("instantiating sml at 0x%llx...", base);
+ 
+ 	memset((void *)base, 0, size);
+ 
+@@ -1751,8 +1753,8 @@ static void __init prom_instantiate_sml(void)
+ 	prom_setprop(ibmvtpm_node, "/vdevice/vtpm", "linux,sml-size",
+ 		     &size, sizeof(size));
+ 
+-	prom_debug("sml base     = 0x%x\n", base);
+-	prom_debug("sml size     = 0x%x\n", (long)size);
++	prom_debug("sml base     = 0x%llx\n", base);
++	prom_debug("sml size     = 0x%x\n", size);
+ 
+ 	prom_debug("prom_instantiate_sml: end...\n");
+ }
+@@ -1845,7 +1847,7 @@ static void __init prom_initialize_tce_table(void)
+ 
+ 		prom_debug("TCE table: %s\n", path);
+ 		prom_debug("\tnode = 0x%x\n", node);
+-		prom_debug("\tbase = 0x%x\n", base);
++		prom_debug("\tbase = 0x%llx\n", base);
+ 		prom_debug("\tsize = 0x%x\n", minsize);
+ 
+ 		/* Initialize the table to have a one-to-one mapping
+@@ -1932,12 +1934,12 @@ static void __init prom_hold_cpus(void)
+ 	}
+ 
+ 	prom_debug("prom_hold_cpus: start...\n");
+-	prom_debug("    1) spinloop       = 0x%x\n", (unsigned long)spinloop);
+-	prom_debug("    1) *spinloop      = 0x%x\n", *spinloop);
+-	prom_debug("    1) acknowledge    = 0x%x\n",
++	prom_debug("    1) spinloop       = 0x%lx\n", (unsigned long)spinloop);
++	prom_debug("    1) *spinloop      = 0x%lx\n", *spinloop);
++	prom_debug("    1) acknowledge    = 0x%lx\n",
+ 		   (unsigned long)acknowledge);
+-	prom_debug("    1) *acknowledge   = 0x%x\n", *acknowledge);
+-	prom_debug("    1) secondary_hold = 0x%x\n", secondary_hold);
++	prom_debug("    1) *acknowledge   = 0x%lx\n", *acknowledge);
++	prom_debug("    1) secondary_hold = 0x%lx\n", secondary_hold);
+ 
+ 	/* Set the common spinloop variable, so all of the secondary cpus
+ 	 * will block when they are awakened from their OF spinloop.
+@@ -1965,7 +1967,7 @@ static void __init prom_hold_cpus(void)
+ 		prom_getprop(node, "reg", &reg, sizeof(reg));
+ 		cpu_no = be32_to_cpu(reg);
+ 
+-		prom_debug("cpu hw idx   = %lu\n", cpu_no);
++		prom_debug("cpu hw idx   = %u\n", cpu_no);
+ 
+ 		/* Init the acknowledge var which will be reset by
+ 		 * the secondary cpu when it awakens from its OF
+@@ -1975,7 +1977,7 @@ static void __init prom_hold_cpus(void)
+ 
+ 		if (cpu_no != prom.cpu) {
+ 			/* Primary Thread of non-boot cpu or any thread */
+-			prom_printf("starting cpu hw idx %lu... ", cpu_no);
++			prom_printf("starting cpu hw idx %u... ", cpu_no);
+ 			call_prom("start-cpu", 3, 0, node,
+ 				  secondary_hold, cpu_no);
+ 
+@@ -1986,11 +1988,11 @@ static void __init prom_hold_cpus(void)
+ 			if (*acknowledge == cpu_no)
+ 				prom_printf("done\n");
+ 			else
+-				prom_printf("failed: %x\n", *acknowledge);
++				prom_printf("failed: %lx\n", *acknowledge);
+ 		}
+ #ifdef CONFIG_SMP
+ 		else
+-			prom_printf("boot cpu hw idx %lu\n", cpu_no);
++			prom_printf("boot cpu hw idx %u\n", cpu_no);
+ #endif /* CONFIG_SMP */
+ 	}
+ 
+@@ -2268,7 +2270,7 @@ static void __init *make_room(unsigned long *mem_start, unsigned long *mem_end,
+ 	while ((*mem_start + needed) > *mem_end) {
+ 		unsigned long room, chunk;
+ 
+-		prom_debug("Chunk exhausted, claiming more at %x...\n",
++		prom_debug("Chunk exhausted, claiming more at %lx...\n",
+ 			   alloc_bottom);
+ 		room = alloc_top - alloc_bottom;
+ 		if (room > DEVTREE_CHUNK_SIZE)
+@@ -2494,7 +2496,7 @@ static void __init flatten_device_tree(void)
+ 	room = alloc_top - alloc_bottom - 0x4000;
+ 	if (room > DEVTREE_CHUNK_SIZE)
+ 		room = DEVTREE_CHUNK_SIZE;
+-	prom_debug("starting device tree allocs at %x\n", alloc_bottom);
++	prom_debug("starting device tree allocs at %lx\n", alloc_bottom);
+ 
+ 	/* Now try to claim that */
+ 	mem_start = (unsigned long)alloc_up(room, PAGE_SIZE);
+@@ -2557,7 +2559,7 @@ static void __init flatten_device_tree(void)
+ 		int i;
+ 		prom_printf("reserved memory map:\n");
+ 		for (i = 0; i < mem_reserve_cnt; i++)
+-			prom_printf("  %x - %x\n",
++			prom_printf("  %llx - %llx\n",
+ 				    be64_to_cpu(mem_reserve_map[i].base),
+ 				    be64_to_cpu(mem_reserve_map[i].size));
+ 	}
+@@ -2567,9 +2569,9 @@ static void __init flatten_device_tree(void)
+ 	 */
+ 	mem_reserve_cnt = MEM_RESERVE_MAP_SIZE;
+ 
+-	prom_printf("Device tree strings 0x%x -> 0x%x\n",
++	prom_printf("Device tree strings 0x%lx -> 0x%lx\n",
+ 		    dt_string_start, dt_string_end);
+-	prom_printf("Device tree struct  0x%x -> 0x%x\n",
++	prom_printf("Device tree struct  0x%lx -> 0x%lx\n",
+ 		    dt_struct_start, dt_struct_end);
+ }
+ 
+@@ -3001,7 +3003,7 @@ static void __init prom_find_boot_cpu(void)
+ 	prom_getprop(cpu_pkg, "reg", &rval, sizeof(rval));
+ 	prom.cpu = be32_to_cpu(rval);
+ 
+-	prom_debug("Booting CPU hw index = %lu\n", prom.cpu);
++	prom_debug("Booting CPU hw index = %d\n", prom.cpu);
+ }
+ 
+ static void __init prom_check_initrd(unsigned long r3, unsigned long r4)
+@@ -3023,8 +3025,8 @@ static void __init prom_check_initrd(unsigned long r3, unsigned long r4)
+ 		reserve_mem(prom_initrd_start,
+ 			    prom_initrd_end - prom_initrd_start);
+ 
+-		prom_debug("initrd_start=0x%x\n", prom_initrd_start);
+-		prom_debug("initrd_end=0x%x\n", prom_initrd_end);
++		prom_debug("initrd_start=0x%lx\n", prom_initrd_start);
++		prom_debug("initrd_end=0x%lx\n", prom_initrd_end);
+ 	}
+ #endif /* CONFIG_BLK_DEV_INITRD */
+ }
+@@ -3277,7 +3279,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
+ 	/* Don't print anything after quiesce under OPAL, it crashes OFW */
+ 	if (of_platform != PLATFORM_OPAL) {
+ 		prom_printf("Booting Linux via __start() @ 0x%lx ...\n", kbase);
+-		prom_debug("->dt_header_start=0x%x\n", hdr);
++		prom_debug("->dt_header_start=0x%lx\n", hdr);
+ 	}
+ 
+ #ifdef CONFIG_PPC32
+diff --git a/arch/powerpc/lib/string.S b/arch/powerpc/lib/string.S
+index a787776822d8..0378def28d41 100644
+--- a/arch/powerpc/lib/string.S
++++ b/arch/powerpc/lib/string.S
+@@ -12,6 +12,7 @@
+ #include <asm/errno.h>
+ #include <asm/ppc_asm.h>
+ #include <asm/export.h>
++#include <asm/cache.h>
+ 
+ 	.text
+ 	
+@@ -23,7 +24,7 @@ _GLOBAL(strncpy)
+ 	mtctr	r5
+ 	addi	r6,r3,-1
+ 	addi	r4,r4,-1
+-	.balign 16
++	.balign IFETCH_ALIGN_BYTES
+ 1:	lbzu	r0,1(r4)
+ 	cmpwi	0,r0,0
+ 	stbu	r0,1(r6)
+@@ -43,7 +44,7 @@ _GLOBAL(strncmp)
+ 	mtctr	r5
+ 	addi	r5,r3,-1
+ 	addi	r4,r4,-1
+-	.balign 16
++	.balign IFETCH_ALIGN_BYTES
+ 1:	lbzu	r3,1(r5)
+ 	cmpwi	1,r3,0
+ 	lbzu	r0,1(r4)
+@@ -77,7 +78,7 @@ _GLOBAL(memchr)
+ 	beq-	2f
+ 	mtctr	r5
+ 	addi	r3,r3,-1
+-	.balign 16
++	.balign IFETCH_ALIGN_BYTES
+ 1:	lbzu	r0,1(r3)
+ 	cmpw	0,r0,r4
+ 	bdnzf	2,1b
+diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
+index 66577cc66dc9..2f4b33b24b3b 100644
+--- a/arch/powerpc/mm/slb.c
++++ b/arch/powerpc/mm/slb.c
+@@ -63,14 +63,14 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
+ 	 * updating it.  No write barriers are needed here, provided
+ 	 * we only update the current CPU's SLB shadow buffer.
+ 	 */
+-	p->save_area[index].esid = 0;
+-	p->save_area[index].vsid = cpu_to_be64(mk_vsid_data(ea, ssize, flags));
+-	p->save_area[index].esid = cpu_to_be64(mk_esid_data(ea, ssize, index));
++	WRITE_ONCE(p->save_area[index].esid, 0);
++	WRITE_ONCE(p->save_area[index].vsid, cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
++	WRITE_ONCE(p->save_area[index].esid, cpu_to_be64(mk_esid_data(ea, ssize, index)));
+ }
+ 
+ static inline void slb_shadow_clear(enum slb_index index)
+ {
+-	get_slb_shadow()->save_area[index].esid = 0;
++	WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+ }
+ 
+ static inline void create_shadowed_slbe(unsigned long ea, int ssize,
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 0ef3d9580e98..5299013bd9c9 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -202,25 +202,37 @@ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
+ 
+ static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func)
+ {
++	unsigned int i, ctx_idx = ctx->idx;
++
++	/* Load function address into r12 */
++	PPC_LI64(12, func);
++
++	/* For bpf-to-bpf function calls, the callee's address is unknown
++	 * until the last extra pass. As seen above, we use PPC_LI64() to
++	 * load the callee's address, but this may optimize the number of
++	 * instructions required based on the nature of the address.
++	 *
++	 * Since we don't want the number of instructions emitted to change,
++	 * we pad the optimized PPC_LI64() call with NOPs to guarantee that
++	 * we always have a five-instruction sequence, which is the maximum
++	 * that PPC_LI64() can emit.
++	 */
++	for (i = ctx->idx - ctx_idx; i < 5; i++)
++		PPC_NOP();
++
+ #ifdef PPC64_ELF_ABI_v1
+-	/* func points to the function descriptor */
+-	PPC_LI64(b2p[TMP_REG_2], func);
+-	/* Load actual entry point from function descriptor */
+-	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0);
+-	/* ... and move it to LR */
+-	PPC_MTLR(b2p[TMP_REG_1]);
+ 	/*
+ 	 * Load TOC from function descriptor at offset 8.
+ 	 * We can clobber r2 since we get called through a
+ 	 * function pointer (so caller will save/restore r2)
+ 	 * and since we don't use a TOC ourself.
+ 	 */
+-	PPC_BPF_LL(2, b2p[TMP_REG_2], 8);
+-#else
+-	/* We can clobber r12 */
+-	PPC_FUNC_ADDR(12, func);
+-	PPC_MTLR(12);
++	PPC_BPF_LL(2, 12, 8);
++	/* Load actual entry point from function descriptor */
++	PPC_BPF_LL(12, 12, 0);
+ #endif
++
++	PPC_MTLR(12);
+ 	PPC_BLRL();
+ }
+ 
+diff --git a/arch/powerpc/platforms/chrp/time.c b/arch/powerpc/platforms/chrp/time.c
+index 03d115aaa191..acde7bbe0716 100644
+--- a/arch/powerpc/platforms/chrp/time.c
++++ b/arch/powerpc/platforms/chrp/time.c
+@@ -28,6 +28,8 @@
+ #include <asm/sections.h>
+ #include <asm/time.h>
+ 
++#include <platforms/chrp/chrp.h>
++
+ extern spinlock_t rtc_lock;
+ 
+ #define NVRAM_AS0  0x74
+@@ -63,7 +65,7 @@ long __init chrp_time_init(void)
+ 	return 0;
+ }
+ 
+-int chrp_cmos_clock_read(int addr)
++static int chrp_cmos_clock_read(int addr)
+ {
+ 	if (nvram_as1 != 0)
+ 		outb(addr>>8, nvram_as1);
+@@ -71,7 +73,7 @@ int chrp_cmos_clock_read(int addr)
+ 	return (inb(nvram_data));
+ }
+ 
+-void chrp_cmos_clock_write(unsigned long val, int addr)
++static void chrp_cmos_clock_write(unsigned long val, int addr)
+ {
+ 	if (nvram_as1 != 0)
+ 		outb(addr>>8, nvram_as1);
+diff --git a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+index 89c54de88b7a..bf4a125faec6 100644
+--- a/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
++++ b/arch/powerpc/platforms/embedded6xx/hlwd-pic.c
+@@ -35,6 +35,8 @@
+  */
+ #define HW_BROADWAY_ICR		0x00
+ #define HW_BROADWAY_IMR		0x04
++#define HW_STARLET_ICR		0x08
++#define HW_STARLET_IMR		0x0c
+ 
+ 
+ /*
+@@ -74,6 +76,9 @@ static void hlwd_pic_unmask(struct irq_data *d)
+ 	void __iomem *io_base = irq_data_get_irq_chip_data(d);
+ 
+ 	setbits32(io_base + HW_BROADWAY_IMR, 1 << irq);
++
++	/* Make sure the ARM (aka. Starlet) doesn't handle this interrupt. */
++	clrbits32(io_base + HW_STARLET_IMR, 1 << irq);
+ }
+ 
+ 
+diff --git a/arch/powerpc/platforms/powermac/bootx_init.c b/arch/powerpc/platforms/powermac/bootx_init.c
+index c3c9bbb3573a..ba0964c17620 100644
+--- a/arch/powerpc/platforms/powermac/bootx_init.c
++++ b/arch/powerpc/platforms/powermac/bootx_init.c
+@@ -468,7 +468,7 @@ void __init bootx_init(unsigned long r3, unsigned long r4)
+ 	boot_infos_t *bi = (boot_infos_t *) r4;
+ 	unsigned long hdr;
+ 	unsigned long space;
+-	unsigned long ptr, x;
++	unsigned long ptr;
+ 	char *model;
+ 	unsigned long offset = reloc_offset();
+ 
+@@ -562,6 +562,8 @@ void __init bootx_init(unsigned long r3, unsigned long r4)
+ 	 * MMU switched OFF, so this should not be useful anymore.
+ 	 */
+ 	if (bi->version < 4) {
++		unsigned long x __maybe_unused;
++
+ 		bootx_printf("Touching pages...\n");
+ 
+ 		/*
+diff --git a/arch/powerpc/platforms/powermac/setup.c b/arch/powerpc/platforms/powermac/setup.c
+index ab668cb72263..8b2eab1340f4 100644
+--- a/arch/powerpc/platforms/powermac/setup.c
++++ b/arch/powerpc/platforms/powermac/setup.c
+@@ -352,6 +352,7 @@ static int pmac_late_init(void)
+ }
+ machine_late_initcall(powermac, pmac_late_init);
+ 
++void note_bootable_part(dev_t dev, int part, int goodness);
+ /*
+  * This is __ref because we check for "initializing" before
+  * touching any of the __init sensitive things and "initializing"
+diff --git a/arch/s390/include/asm/cpu_mf.h b/arch/s390/include/asm/cpu_mf.h
+index f58d17e9dd65..de023a9a88ca 100644
+--- a/arch/s390/include/asm/cpu_mf.h
++++ b/arch/s390/include/asm/cpu_mf.h
+@@ -113,7 +113,7 @@ struct hws_basic_entry {
+ 
+ struct hws_diag_entry {
+ 	unsigned int def:16;	    /* 0-15  Data Entry Format		 */
+-	unsigned int R:14;	    /* 16-19 and 20-30 reserved		 */
++	unsigned int R:15;	    /* 16-19 and 20-30 reserved		 */
+ 	unsigned int I:1;	    /* 31 entry valid or invalid	 */
+ 	u8	     data[];	    /* Machine-dependent sample data	 */
+ } __packed;
+@@ -129,7 +129,9 @@ struct hws_trailer_entry {
+ 			unsigned int f:1;	/* 0 - Block Full Indicator   */
+ 			unsigned int a:1;	/* 1 - Alert request control  */
+ 			unsigned int t:1;	/* 2 - Timestamp format	      */
+-			unsigned long long:61;	/* 3 - 63: Reserved	      */
++			unsigned int :29;	/* 3 - 31: Reserved	      */
++			unsigned int bsdes:16;	/* 32-47: size of basic SDE   */
++			unsigned int dsdes:16;	/* 48-63: size of diagnostic SDE */
+ 		};
+ 		unsigned long long flags;	/* 0 - 63: All indicators     */
+ 	};
+diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
+index 271dfc260e82..3d7d0046cf49 100644
+--- a/arch/sh/boards/mach-migor/setup.c
++++ b/arch/sh/boards/mach-migor/setup.c
+@@ -359,7 +359,7 @@ static struct gpiod_lookup_table ov7725_gpios = {
+ static struct gpiod_lookup_table tw9910_gpios = {
+ 	.dev_id		= "0-0045",
+ 	.table		= {
+-		GPIO_LOOKUP("sh7722_pfc", GPIO_PTT2, "pdn", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("sh7722_pfc", GPIO_PTT2, "pdn", GPIO_ACTIVE_LOW),
+ 		GPIO_LOOKUP("sh7722_pfc", GPIO_PTT3, "rstb", GPIO_ACTIVE_LOW),
+ 	},
+ };
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index a7956fc7ca1d..3b0f93eb3cc0 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -218,7 +218,7 @@ void uncore_perf_event_update(struct intel_uncore_box *box, struct perf_event *e
+ 	u64 prev_count, new_count, delta;
+ 	int shift;
+ 
+-	if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
++	if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
+ 		shift = 64 - uncore_fixed_ctr_bits(box);
+ 	else
+ 		shift = 64 - uncore_perf_ctr_bits(box);
+diff --git a/arch/x86/events/intel/uncore_nhmex.c b/arch/x86/events/intel/uncore_nhmex.c
+index 93e7a8397cde..173e2674be6e 100644
+--- a/arch/x86/events/intel/uncore_nhmex.c
++++ b/arch/x86/events/intel/uncore_nhmex.c
+@@ -246,7 +246,7 @@ static void nhmex_uncore_msr_enable_event(struct intel_uncore_box *box, struct p
+ {
+ 	struct hw_perf_event *hwc = &event->hw;
+ 
+-	if (hwc->idx >= UNCORE_PMC_IDX_FIXED)
++	if (hwc->idx == UNCORE_PMC_IDX_FIXED)
+ 		wrmsrl(hwc->config_base, NHMEX_PMON_CTL_EN_BIT0);
+ 	else if (box->pmu->type->event_mask & NHMEX_PMON_CTL_EN_BIT0)
+ 		wrmsrl(hwc->config_base, hwc->config | NHMEX_PMON_CTL_EN_BIT22);
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 77e201301528..08286269fd24 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -70,7 +70,7 @@ static DEFINE_MUTEX(microcode_mutex);
+ /*
+  * Serialize late loading so that CPUs get updated one-by-one.
+  */
+-static DEFINE_SPINLOCK(update_lock);
++static DEFINE_RAW_SPINLOCK(update_lock);
+ 
+ struct ucode_cpu_info		ucode_cpu_info[NR_CPUS];
+ 
+@@ -560,9 +560,9 @@ static int __reload_late(void *info)
+ 	if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
+ 		return -1;
+ 
+-	spin_lock(&update_lock);
++	raw_spin_lock(&update_lock);
+ 	apply_microcode_local(&err);
+-	spin_unlock(&update_lock);
++	raw_spin_unlock(&update_lock);
+ 
+ 	/* siblings return UCODE_OK because their engine got updated already */
+ 	if (err > UCODE_NFOUND) {
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 8494dbae41b9..030c6bb240d9 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -891,7 +891,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
+ 	if (cache->nobjs >= min)
+ 		return 0;
+ 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
+-		page = (void *)__get_free_page(GFP_KERNEL);
++		page = (void *)__get_free_page(GFP_KERNEL_ACCOUNT);
+ 		if (!page)
+ 			return -ENOMEM;
+ 		cache->objects[cache->nobjs++] = page;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 26110c202b19..ace53c2934dc 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1768,7 +1768,10 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
+ 	unsigned long npages, npinned, size;
+ 	unsigned long locked, lock_limit;
+ 	struct page **pages;
+-	int first, last;
++	unsigned long first, last;
++
++	if (ulen == 0 || uaddr + ulen < uaddr)
++		return NULL;
+ 
+ 	/* Calculate number of pages. */
+ 	first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+@@ -6947,6 +6950,9 @@ static int svm_register_enc_region(struct kvm *kvm,
+ 	if (!sev_guest(kvm))
+ 		return -ENOTTY;
+ 
++	if (range->addr > ULONG_MAX || range->size > ULONG_MAX)
++		return -EINVAL;
++
+ 	region = kzalloc(sizeof(*region), GFP_KERNEL);
+ 	if (!region)
+ 		return -ENOMEM;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 771ae9730ac6..1f0951d36424 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -1898,7 +1898,6 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+ 
+ 	if (!RB_EMPTY_NODE(&rq->rb_node))
+ 		goto end;
+-	spin_lock_irq(&bfqq->bfqd->lock);
+ 
+ 	/*
+ 	 * If next and rq belong to the same bfq_queue and next is older
+@@ -1923,7 +1922,6 @@ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+ 	bfq_remove_request(q, next);
+ 	bfqg_stats_update_io_remove(bfqq_group(bfqq), next->cmd_flags);
+ 
+-	spin_unlock_irq(&bfqq->bfqd->lock);
+ end:
+ 	bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags);
+ }
+diff --git a/block/bio.c b/block/bio.c
+index 53e0f0a1ed94..9f7fa24d8b15 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -881,16 +881,16 @@ EXPORT_SYMBOL(bio_add_page);
+  */
+ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ {
+-	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
++	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt, idx;
+ 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
+ 	struct page **pages = (struct page **)bv;
+-	size_t offset, diff;
++	size_t offset;
+ 	ssize_t size;
+ 
+ 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+ 	if (unlikely(size <= 0))
+ 		return size ? size : -EFAULT;
+-	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
++	idx = nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
+ 
+ 	/*
+ 	 * Deep magic below:  We need to walk the pinned pages backwards
+@@ -903,17 +903,15 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ 	bio->bi_iter.bi_size += size;
+ 	bio->bi_vcnt += nr_pages;
+ 
+-	diff = (nr_pages * PAGE_SIZE - offset) - size;
+-	while (nr_pages--) {
+-		bv[nr_pages].bv_page = pages[nr_pages];
+-		bv[nr_pages].bv_len = PAGE_SIZE;
+-		bv[nr_pages].bv_offset = 0;
++	while (idx--) {
++		bv[idx].bv_page = pages[idx];
++		bv[idx].bv_len = PAGE_SIZE;
++		bv[idx].bv_offset = 0;
+ 	}
+ 
+ 	bv[0].bv_offset += offset;
+ 	bv[0].bv_len -= offset;
+-	if (diff)
+-		bv[bio->bi_vcnt - 1].bv_len -= diff;
++	bv[nr_pages - 1].bv_len -= nr_pages * PAGE_SIZE - offset - size;
+ 
+ 	iov_iter_advance(iter, size);
+ 	return 0;
+@@ -1808,6 +1806,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
+ 		bio_integrity_trim(split);
+ 
+ 	bio_advance(bio, split->bi_iter.bi_size);
++	bio->bi_iter.bi_done = 0;
+ 
+ 	if (bio_flagged(bio, BIO_TRACE_COMPLETION))
+ 		bio_set_flag(split, BIO_TRACE_COMPLETION);
+diff --git a/crypto/authenc.c b/crypto/authenc.c
+index d3d6d72fe649..4fa8d40d947b 100644
+--- a/crypto/authenc.c
++++ b/crypto/authenc.c
+@@ -108,6 +108,7 @@ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
+ 				       CRYPTO_TFM_RES_MASK);
+ 
+ out:
++	memzero_explicit(&keys, sizeof(keys));
+ 	return err;
+ 
+ badkey:
+diff --git a/crypto/authencesn.c b/crypto/authencesn.c
+index 15f91ddd7f0e..50b804747e20 100644
+--- a/crypto/authencesn.c
++++ b/crypto/authencesn.c
+@@ -90,6 +90,7 @@ static int crypto_authenc_esn_setkey(struct crypto_aead *authenc_esn, const u8 *
+ 					   CRYPTO_TFM_RES_MASK);
+ 
+ out:
++	memzero_explicit(&keys, sizeof(keys));
+ 	return err;
+ 
+ badkey:
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index eb091375c873..9706613eecf9 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -70,6 +70,10 @@ ACPI_MODULE_NAME("acpi_lpss");
+ #define LPSS_SAVE_CTX			BIT(4)
+ #define LPSS_NO_D3_DELAY		BIT(5)
+ 
++/* Crystal Cove PMIC shares same ACPI ID between different platforms */
++#define BYT_CRC_HRV			2
++#define CHT_CRC_HRV			3
++
+ struct lpss_private_data;
+ 
+ struct lpss_device_desc {
+@@ -163,7 +167,7 @@ static void byt_pwm_setup(struct lpss_private_data *pdata)
+ 	if (!adev->pnp.unique_id || strcmp(adev->pnp.unique_id, "1"))
+ 		return;
+ 
+-	if (!acpi_dev_present("INT33FD", NULL, -1))
++	if (!acpi_dev_present("INT33FD", NULL, BYT_CRC_HRV))
+ 		pwm_add_table(byt_pwm_lookup, ARRAY_SIZE(byt_pwm_lookup));
+ }
+ 
+@@ -875,6 +879,7 @@ static void acpi_lpss_dismiss(struct device *dev)
+ #define LPSS_GPIODEF0_DMA_LLP		BIT(13)
+ 
+ static DEFINE_MUTEX(lpss_iosf_mutex);
++static bool lpss_iosf_d3_entered;
+ 
+ static void lpss_iosf_enter_d3_state(void)
+ {
+@@ -917,6 +922,9 @@ static void lpss_iosf_enter_d3_state(void)
+ 
+ 	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
+ 			LPSS_IOSF_GPIODEF0, value1, mask1);
++
++	lpss_iosf_d3_entered = true;
++
+ exit:
+ 	mutex_unlock(&lpss_iosf_mutex);
+ }
+@@ -931,6 +939,11 @@ static void lpss_iosf_exit_d3_state(void)
+ 
+ 	mutex_lock(&lpss_iosf_mutex);
+ 
++	if (!lpss_iosf_d3_entered)
++		goto exit;
++
++	lpss_iosf_d3_entered = false;
++
+ 	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
+ 			LPSS_IOSF_GPIODEF0, value1, mask1);
+ 
+@@ -940,13 +953,13 @@ static void lpss_iosf_exit_d3_state(void)
+ 	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE,
+ 			LPSS_IOSF_PMCSR, value2, mask2);
+ 
++exit:
+ 	mutex_unlock(&lpss_iosf_mutex);
+ }
+ 
+-static int acpi_lpss_suspend(struct device *dev, bool runtime)
++static int acpi_lpss_suspend(struct device *dev, bool wakeup)
+ {
+ 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+-	bool wakeup = runtime || device_may_wakeup(dev);
+ 	int ret;
+ 
+ 	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
+@@ -959,14 +972,14 @@ static int acpi_lpss_suspend(struct device *dev, bool runtime)
+ 	 * wrong status for devices being about to be powered off. See
+ 	 * lpss_iosf_enter_d3_state() for further information.
+ 	 */
+-	if ((runtime || !pm_suspend_via_firmware()) &&
++	if (acpi_target_system_state() == ACPI_STATE_S0 &&
+ 	    lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+ 		lpss_iosf_enter_d3_state();
+ 
+ 	return ret;
+ }
+ 
+-static int acpi_lpss_resume(struct device *dev, bool runtime)
++static int acpi_lpss_resume(struct device *dev)
+ {
+ 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+ 	int ret;
+@@ -975,8 +988,7 @@ static int acpi_lpss_resume(struct device *dev, bool runtime)
+ 	 * This call is kept first to be in symmetry with
+ 	 * acpi_lpss_runtime_suspend() one.
+ 	 */
+-	if ((runtime || !pm_resume_via_firmware()) &&
+-	    lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
++	if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+ 		lpss_iosf_exit_d3_state();
+ 
+ 	ret = acpi_dev_resume(dev);
+@@ -1000,12 +1012,12 @@ static int acpi_lpss_suspend_late(struct device *dev)
+ 		return 0;
+ 
+ 	ret = pm_generic_suspend_late(dev);
+-	return ret ? ret : acpi_lpss_suspend(dev, false);
++	return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
+ }
+ 
+ static int acpi_lpss_resume_early(struct device *dev)
+ {
+-	int ret = acpi_lpss_resume(dev, false);
++	int ret = acpi_lpss_resume(dev);
+ 
+ 	return ret ? ret : pm_generic_resume_early(dev);
+ }
+@@ -1020,7 +1032,7 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
+ 
+ static int acpi_lpss_runtime_resume(struct device *dev)
+ {
+-	int ret = acpi_lpss_resume(dev, true);
++	int ret = acpi_lpss_resume(dev);
+ 
+ 	return ret ? ret : pm_generic_runtime_resume(dev);
+ }
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index ee840be150b5..44f35ab3347d 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -709,15 +709,20 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 			} else
+ 			    if ((walk_state->
+ 				 parse_flags & ACPI_PARSE_MODULE_LEVEL)
++				&& status != AE_CTRL_TRANSFER
+ 				&& ACPI_FAILURE(status)) {
+ 				/*
+-				 * ACPI_PARSE_MODULE_LEVEL means that we are loading a table by
+-				 * executing it as a control method. However, if we encounter
+-				 * an error while loading the table, we need to keep trying to
+-				 * load the table rather than aborting the table load. Set the
+-				 * status to AE_OK to proceed with the table load. If we get a
+-				 * failure at this point, it means that the dispatcher got an
+-				 * error while processing Op (most likely an AML operand error.
++				 * ACPI_PARSE_MODULE_LEVEL flag means that we are currently
++				 * loading a table by executing it as a control method.
++				 * However, if we encounter an error while loading the table,
++				 * we need to keep trying to load the table rather than
++				 * aborting the table load (setting the status to AE_OK
++				 * continues the table load). If we get a failure at this
++				 * point, it means that the dispatcher got an error while
++				 * processing Op (most likely an AML operand error) or a
++				 * control method was called from module level and the
++				 * dispatcher returned AE_CTRL_TRANSFER. In the latter case,
++				 * leave the status alone, there's nothing wrong with it.
+ 				 */
+ 				status = AE_OK;
+ 			}
+diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
+index 0da18bde6a16..dd946b62fedd 100644
+--- a/drivers/acpi/pci_root.c
++++ b/drivers/acpi/pci_root.c
+@@ -472,9 +472,11 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm)
+ 	}
+ 
+ 	control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
+-		| OSC_PCI_EXPRESS_NATIVE_HP_CONTROL
+ 		| OSC_PCI_EXPRESS_PME_CONTROL;
+ 
++	if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
++		control |= OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
++
+ 	if (pci_aer_available()) {
+ 		if (aer_acpi_firmware_first())
+ 			dev_info(&device->dev,
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 513b260bcff1..f5942d09854c 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2218,12 +2218,16 @@ static void ata_eh_link_autopsy(struct ata_link *link)
+ 		if (qc->err_mask & ~AC_ERR_OTHER)
+ 			qc->err_mask &= ~AC_ERR_OTHER;
+ 
+-		/* SENSE_VALID trumps dev/unknown error and revalidation */
++		/*
++		 * SENSE_VALID trumps dev/unknown error and revalidation. Upper
++		 * layers will determine whether the command is worth retrying
++		 * based on the sense data and device class/type. Otherwise,
++		 * determine directly if the command is worth retrying using its
++		 * error mask and flags.
++		 */
+ 		if (qc->flags & ATA_QCFLAG_SENSE_VALID)
+ 			qc->err_mask &= ~(AC_ERR_DEV | AC_ERR_OTHER);
+-
+-		/* determine whether the command is worth retrying */
+-		if (ata_eh_worth_retry(qc))
++		else if (ata_eh_worth_retry(qc))
+ 			qc->flags |= ATA_QCFLAG_RETRY;
+ 
+ 		/* accumulate error info */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index b937cc1e2c07..836756a5c35c 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -276,6 +276,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x04ca, 0x3011), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME },
+ 	{ USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME },
++	{ USB_DEVICE(0x04ca, 0x301a), .driver_info = BTUSB_QCA_ROME },
+ 
+ 	/* Broadcom BCM2035 */
+ 	{ USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
+@@ -371,6 +372,9 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Additional Realtek 8723BU Bluetooth devices */
+ 	{ USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+ 
++	/* Additional Realtek 8723DE Bluetooth devices */
++	{ USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
++
+ 	/* Additional Realtek 8821AE Bluetooth devices */
+ 	{ USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x13d3, 0x3414), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 330e9b29e145..8b017e84db53 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -880,7 +880,7 @@ static int qca_set_baudrate(struct hci_dev *hdev, uint8_t baudrate)
+ 	 */
+ 	set_current_state(TASK_UNINTERRUPTIBLE);
+ 	schedule_timeout(msecs_to_jiffies(BAUDRATE_SETTLE_TIMEOUT_MS));
+-	set_current_state(TASK_INTERRUPTIBLE);
++	set_current_state(TASK_RUNNING);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index cd888d4ee605..bd449ad52442 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1895,14 +1895,22 @@ static int
+ write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
+ {
+ 	size_t bytes;
+-	__u32 buf[16];
++	__u32 t, buf[16];
+ 	const char __user *p = buffer;
+ 
+ 	while (count > 0) {
++		int b, i = 0;
++
+ 		bytes = min(count, sizeof(buf));
+ 		if (copy_from_user(&buf, p, bytes))
+ 			return -EFAULT;
+ 
++		for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) {
++			if (!arch_get_random_int(&t))
++				break;
++			buf[i] ^= t;
++		}
++
+ 		count -= bytes;
+ 		p += bytes;
+ 
+diff --git a/drivers/clk/clk-si544.c b/drivers/clk/clk-si544.c
+index 1c96a9f6c022..1e2a3b8f9454 100644
+--- a/drivers/clk/clk-si544.c
++++ b/drivers/clk/clk-si544.c
+@@ -207,6 +207,7 @@ static int si544_calc_muldiv(struct clk_si544_muldiv *settings,
+ 
+ 	/* And the fractional bits using the remainder */
+ 	vco = (u64)tmp << 32;
++	vco += FXO / 2; /* Round to nearest multiple */
+ 	do_div(vco, FXO);
+ 	settings->fb_div_frac = vco;
+ 
+diff --git a/drivers/clk/ingenic/jz4770-cgu.c b/drivers/clk/ingenic/jz4770-cgu.c
+index c78d369b9403..992bb2e146d6 100644
+--- a/drivers/clk/ingenic/jz4770-cgu.c
++++ b/drivers/clk/ingenic/jz4770-cgu.c
+@@ -194,9 +194,10 @@ static const struct ingenic_cgu_clk_info jz4770_cgu_clocks[] = {
+ 		.div = { CGU_REG_CPCCR, 16, 1, 4, 22, -1, -1 },
+ 	},
+ 	[JZ4770_CLK_C1CLK] = {
+-		"c1clk", CGU_CLK_DIV,
++		"c1clk", CGU_CLK_DIV | CGU_CLK_GATE,
+ 		.parents = { JZ4770_CLK_PLL0, },
+ 		.div = { CGU_REG_CPCCR, 12, 1, 4, 22, -1, -1 },
++		.gate = { CGU_REG_OPCR, 31, true }, // disable CCLK stop on idle
+ 	},
+ 	[JZ4770_CLK_PCLK] = {
+ 		"pclk", CGU_CLK_DIV,
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index 11d6419788c2..4d614d7615a4 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -1106,7 +1106,7 @@ static void *ocram_alloc_mem(size_t size, void **other)
+ 
+ static void ocram_free_mem(void *p, size_t size, void *other)
+ {
+-	gen_pool_free((struct gen_pool *)other, (u32)p, size);
++	gen_pool_free((struct gen_pool *)other, (unsigned long)p, size);
+ }
+ 
+ static const struct edac_device_prv_data ocramecc_data = {
+diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c
+index 761d8279abca..b2864bdfb30e 100644
+--- a/drivers/gpio/gpio-uniphier.c
++++ b/drivers/gpio/gpio-uniphier.c
+@@ -181,7 +181,11 @@ static int uniphier_gpio_to_irq(struct gpio_chip *chip, unsigned int offset)
+ 	fwspec.fwnode = of_node_to_fwnode(chip->parent->of_node);
+ 	fwspec.param_count = 2;
+ 	fwspec.param[0] = offset - UNIPHIER_GPIO_IRQ_OFFSET;
+-	fwspec.param[1] = IRQ_TYPE_NONE;
++	/*
++	 * IRQ_TYPE_NONE is rejected by the parent irq domain. Set LEVEL_HIGH
++	 * temporarily. Anyway, ->irq_set_type() will override it later.
++	 */
++	fwspec.param[1] = IRQ_TYPE_LEVEL_HIGH;
+ 
+ 	return irq_create_fwspec_mapping(&fwspec);
+ }
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 586d15137c03..d4411c8becf7 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -64,7 +64,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ 	 * Note that active low is the default.
+ 	 */
+ 	if (IS_ENABLED(CONFIG_REGULATOR) &&
+-	    (of_device_is_compatible(np, "reg-fixed-voltage") ||
++	    (of_device_is_compatible(np, "regulator-fixed") ||
++	     of_device_is_compatible(np, "reg-fixed-voltage") ||
+ 	     of_device_is_compatible(np, "regulator-gpio"))) {
+ 		/*
+ 		 * The regulator GPIO handles are specified such that the
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+index bd67f4cb8e6c..3b3ee737657c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+@@ -316,7 +316,7 @@ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
+ 	unsigned long end = addr + amdgpu_bo_size(bo) - 1;
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ 	struct amdgpu_mn *rmn;
+-	struct amdgpu_mn_node *node = NULL;
++	struct amdgpu_mn_node *node = NULL, *new_node;
+ 	struct list_head bos;
+ 	struct interval_tree_node *it;
+ 
+@@ -324,6 +324,10 @@ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
+ 	if (IS_ERR(rmn))
+ 		return PTR_ERR(rmn);
+ 
++	new_node = kmalloc(sizeof(*new_node), GFP_KERNEL);
++	if (!new_node)
++		return -ENOMEM;
++
+ 	INIT_LIST_HEAD(&bos);
+ 
+ 	down_write(&rmn->lock);
+@@ -337,13 +341,10 @@ int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
+ 		list_splice(&node->bos, &bos);
+ 	}
+ 
+-	if (!node) {
+-		node = kmalloc(sizeof(struct amdgpu_mn_node), GFP_KERNEL);
+-		if (!node) {
+-			up_write(&rmn->lock);
+-			return -ENOMEM;
+-		}
+-	}
++	if (!node)
++		node = new_node;
++	else
++		kfree(new_node);
+ 
+ 	bo->mn = rmn;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index b52f26e7db98..d1c4beb79ee6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -689,8 +689,12 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 		return -EINVAL;
+ 
+ 	/* A shared bo cannot be migrated to VRAM */
+-	if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
+-		return -EINVAL;
++	if (bo->prime_shared_count) {
++		if (domain & AMDGPU_GEM_DOMAIN_GTT)
++			domain = AMDGPU_GEM_DOMAIN_GTT;
++		else
++			return -EINVAL;
++	}
+ 
+ 	if (bo->pin_count) {
+ 		uint32_t mem_type = bo->tbo.mem.mem_type;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 79afffa00772..8dafb10b7832 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4037,7 +4037,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 		}
+ 		spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+ 
+-		if (!pflip_needed) {
++		if (!pflip_needed || plane->type == DRM_PLANE_TYPE_OVERLAY) {
+ 			WARN_ON(!dm_new_plane_state->dc_state);
+ 
+ 			plane_states_constructed[planes_count] = dm_new_plane_state->dc_state;
+@@ -4783,7 +4783,8 @@ static int dm_update_planes_state(struct dc *dc,
+ 
+ 		/* Remove any changed/removed planes */
+ 		if (!enable) {
+-			if (pflip_needed)
++			if (pflip_needed &&
++			    plane->type != DRM_PLANE_TYPE_OVERLAY)
+ 				continue;
+ 
+ 			if (!old_plane_crtc)
+@@ -4830,7 +4831,8 @@ static int dm_update_planes_state(struct dc *dc,
+ 			if (!dm_new_crtc_state->stream)
+ 				continue;
+ 
+-			if (pflip_needed)
++			if (pflip_needed &&
++			    plane->type != DRM_PLANE_TYPE_OVERLAY)
+ 				continue;
+ 
+ 			WARN_ON(dm_new_plane_state->dc_state);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 4be21bf54749..a910f01838ab 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -555,6 +555,9 @@ static inline int dm_irq_state(struct amdgpu_device *adev,
+ 		return 0;
+ 	}
+ 
++	if (acrtc->otg_inst == -1)
++		return 0;
++
+ 	irq_source = dal_irq_type + acrtc->otg_inst;
+ 
+ 	st = (state == AMDGPU_IRQ_STATE_ENABLE);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index d0575999f172..09c93f6ebb10 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -279,7 +279,9 @@ dce110_set_input_transfer_func(struct pipe_ctx *pipe_ctx,
+ 	build_prescale_params(&prescale_params, plane_state);
+ 	ipp->funcs->ipp_program_prescale(ipp, &prescale_params);
+ 
+-	if (plane_state->gamma_correction && dce_use_lut(plane_state->format))
++	if (plane_state->gamma_correction &&
++			!plane_state->gamma_correction->is_identity &&
++			dce_use_lut(plane_state->format))
+ 		ipp->funcs->ipp_program_input_lut(ipp, plane_state->gamma_correction);
+ 
+ 	if (tf == NULL) {
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 18b5b2ff47fe..df2dce8b8e39 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3715,14 +3715,17 @@ static int smu7_trim_dpm_states(struct pp_hwmgr *hwmgr,
+ static int smu7_generate_dpm_level_enable_mask(
+ 		struct pp_hwmgr *hwmgr, const void *input)
+ {
+-	int result;
++	int result = 0;
+ 	const struct phm_set_power_state_input *states =
+ 			(const struct phm_set_power_state_input *)input;
+ 	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+ 	const struct smu7_power_state *smu7_ps =
+ 			cast_const_phw_smu7_power_state(states->pnew_state);
+ 
+-	result = smu7_trim_dpm_states(hwmgr, smu7_ps);
++	/*skip the trim if od is enabled*/
++	if (!hwmgr->od_enabled)
++		result = smu7_trim_dpm_states(hwmgr, smu7_ps);
++
+ 	if (result)
+ 		return result;
+ 
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index c825c76edc1d..d09ee6864ac7 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -1429,7 +1429,9 @@ drm_atomic_set_crtc_for_plane(struct drm_plane_state *plane_state,
+ {
+ 	struct drm_plane *plane = plane_state->plane;
+ 	struct drm_crtc_state *crtc_state;
+-
++	/* Nothing to do for same crtc*/
++	if (plane_state->crtc == crtc)
++		return 0;
+ 	if (plane_state->crtc) {
+ 		crtc_state = drm_atomic_get_crtc_state(plane_state->state,
+ 						       plane_state->crtc);
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index c35654591c12..3448e8e44c35 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -2881,31 +2881,9 @@ commit:
+ 	return 0;
+ }
+ 
+-/**
+- * drm_atomic_helper_disable_all - disable all currently active outputs
+- * @dev: DRM device
+- * @ctx: lock acquisition context
+- *
+- * Loops through all connectors, finding those that aren't turned off and then
+- * turns them off by setting their DPMS mode to OFF and deactivating the CRTC
+- * that they are connected to.
+- *
+- * This is used for example in suspend/resume to disable all currently active
+- * functions when suspending. If you just want to shut down everything at e.g.
+- * driver unload, look at drm_atomic_helper_shutdown().
+- *
+- * Note that if callers haven't already acquired all modeset locks this might
+- * return -EDEADLK, which must be handled by calling drm_modeset_backoff().
+- *
+- * Returns:
+- * 0 on success or a negative error code on failure.
+- *
+- * See also:
+- * drm_atomic_helper_suspend(), drm_atomic_helper_resume() and
+- * drm_atomic_helper_shutdown().
+- */
+-int drm_atomic_helper_disable_all(struct drm_device *dev,
+-				  struct drm_modeset_acquire_ctx *ctx)
++static int __drm_atomic_helper_disable_all(struct drm_device *dev,
++					   struct drm_modeset_acquire_ctx *ctx,
++					   bool clean_old_fbs)
+ {
+ 	struct drm_atomic_state *state;
+ 	struct drm_connector_state *conn_state;
+@@ -2957,8 +2935,11 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
+ 			goto free;
+ 
+ 		drm_atomic_set_fb_for_plane(plane_state, NULL);
+-		plane_mask |= BIT(drm_plane_index(plane));
+-		plane->old_fb = plane->fb;
++
++		if (clean_old_fbs) {
++			plane->old_fb = plane->fb;
++			plane_mask |= BIT(drm_plane_index(plane));
++		}
+ 	}
+ 
+ 	ret = drm_atomic_commit(state);
+@@ -2969,6 +2950,34 @@ free:
+ 	return ret;
+ }
+ 
++/**
++ * drm_atomic_helper_disable_all - disable all currently active outputs
++ * @dev: DRM device
++ * @ctx: lock acquisition context
++ *
++ * Loops through all connectors, finding those that aren't turned off and then
++ * turns them off by setting their DPMS mode to OFF and deactivating the CRTC
++ * that they are connected to.
++ *
++ * This is used for example in suspend/resume to disable all currently active
++ * functions when suspending. If you just want to shut down everything at e.g.
++ * driver unload, look at drm_atomic_helper_shutdown().
++ *
++ * Note that if callers haven't already acquired all modeset locks this might
++ * return -EDEADLK, which must be handled by calling drm_modeset_backoff().
++ *
++ * Returns:
++ * 0 on success or a negative error code on failure.
++ *
++ * See also:
++ * drm_atomic_helper_suspend(), drm_atomic_helper_resume() and
++ * drm_atomic_helper_shutdown().
++ */
++int drm_atomic_helper_disable_all(struct drm_device *dev,
++				  struct drm_modeset_acquire_ctx *ctx)
++{
++	return __drm_atomic_helper_disable_all(dev, ctx, false);
++}
+ EXPORT_SYMBOL(drm_atomic_helper_disable_all);
+ 
+ /**
+@@ -2991,7 +3000,7 @@ void drm_atomic_helper_shutdown(struct drm_device *dev)
+ 	while (1) {
+ 		ret = drm_modeset_lock_all_ctx(dev, &ctx);
+ 		if (!ret)
+-			ret = drm_atomic_helper_disable_all(dev, &ctx);
++			ret = __drm_atomic_helper_disable_all(dev, &ctx, true);
+ 
+ 		if (ret != -EDEADLK)
+ 			break;
+@@ -3095,16 +3104,11 @@ int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state,
+ 	struct drm_connector_state *new_conn_state;
+ 	struct drm_crtc *crtc;
+ 	struct drm_crtc_state *new_crtc_state;
+-	unsigned plane_mask = 0;
+-	struct drm_device *dev = state->dev;
+-	int ret;
+ 
+ 	state->acquire_ctx = ctx;
+ 
+-	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
+-		plane_mask |= BIT(drm_plane_index(plane));
++	for_each_new_plane_in_state(state, plane, new_plane_state, i)
+ 		state->planes[i].old_state = plane->state;
+-	}
+ 
+ 	for_each_new_crtc_in_state(state, crtc, new_crtc_state, i)
+ 		state->crtcs[i].old_state = crtc->state;
+@@ -3112,11 +3116,7 @@ int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state,
+ 	for_each_new_connector_in_state(state, connector, new_conn_state, i)
+ 		state->connectors[i].old_state = connector->state;
+ 
+-	ret = drm_atomic_commit(state);
+-	if (plane_mask)
+-		drm_atomic_clean_old_fb(dev, plane_mask, ret);
+-
+-	return ret;
++	return drm_atomic_commit(state);
+ }
+ EXPORT_SYMBOL(drm_atomic_helper_commit_duplicated_state);
+ 
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 6fac4129e6a2..658830620ca3 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2941,12 +2941,14 @@ static void drm_dp_mst_dump_mstb(struct seq_file *m,
+ 	}
+ }
+ 
++#define DP_PAYLOAD_TABLE_SIZE		64
++
+ static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
+ 				  char *buf)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < 64; i += 16) {
++	for (i = 0; i < DP_PAYLOAD_TABLE_SIZE; i += 16) {
+ 		if (drm_dp_dpcd_read(mgr->aux,
+ 				     DP_PAYLOAD_TABLE_UPDATE_STATUS + i,
+ 				     &buf[i], 16) != 16)
+@@ -3015,7 +3017,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ 
+ 	mutex_lock(&mgr->lock);
+ 	if (mgr->mst_primary) {
+-		u8 buf[64];
++		u8 buf[DP_PAYLOAD_TABLE_SIZE];
+ 		int ret;
+ 
+ 		ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, buf, DP_RECEIVER_CAP_SIZE);
+@@ -3033,8 +3035,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
+ 		seq_printf(m, " revision: hw: %x.%x sw: %x.%x\n",
+ 			   buf[0x9] >> 4, buf[0x9] & 0xf, buf[0xa], buf[0xb]);
+ 		if (dump_dp_payload_table(mgr, buf))
+-			seq_printf(m, "payload table: %*ph\n", 63, buf);
+-
++			seq_printf(m, "payload table: %*ph\n", DP_PAYLOAD_TABLE_SIZE, buf);
+ 	}
+ 
+ 	mutex_unlock(&mgr->lock);
+diff --git a/drivers/gpu/drm/gma500/psb_intel_drv.h b/drivers/gpu/drm/gma500/psb_intel_drv.h
+index e8e4ea14b12b..e05e5399af2d 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_drv.h
++++ b/drivers/gpu/drm/gma500/psb_intel_drv.h
+@@ -255,7 +255,7 @@ extern int intelfb_remove(struct drm_device *dev,
+ extern bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder,
+ 				      const struct drm_display_mode *mode,
+ 				      struct drm_display_mode *adjusted_mode);
+-extern int psb_intel_lvds_mode_valid(struct drm_connector *connector,
++extern enum drm_mode_status psb_intel_lvds_mode_valid(struct drm_connector *connector,
+ 				     struct drm_display_mode *mode);
+ extern int psb_intel_lvds_set_property(struct drm_connector *connector,
+ 					struct drm_property *property,
+diff --git a/drivers/gpu/drm/gma500/psb_intel_lvds.c b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+index be3eefec5152..8baf6325c6e4 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_lvds.c
++++ b/drivers/gpu/drm/gma500/psb_intel_lvds.c
+@@ -343,7 +343,7 @@ static void psb_intel_lvds_restore(struct drm_connector *connector)
+ 	}
+ }
+ 
+-int psb_intel_lvds_mode_valid(struct drm_connector *connector,
++enum drm_mode_status psb_intel_lvds_mode_valid(struct drm_connector *connector,
+ 				 struct drm_display_mode *mode)
+ {
+ 	struct drm_psb_private *dev_priv = connector->dev->dev_private;
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index ce18b6cf6e68..e3ce2f448020 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -804,6 +804,7 @@ enum intel_sbi_destination {
+ #define QUIRK_BACKLIGHT_PRESENT (1<<3)
+ #define QUIRK_PIN_SWIZZLED_PAGES (1<<5)
+ #define QUIRK_INCREASE_T12_DELAY (1<<6)
++#define QUIRK_INCREASE_DDI_DISABLED_TIME (1<<7)
+ 
+ struct intel_fbdev;
+ struct intel_fbc_work;
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index 1d14ebc7480d..b752f6221731 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -1605,15 +1605,24 @@ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state)
+ 	I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp);
+ }
+ 
+-void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
+-				       enum transcoder cpu_transcoder)
++void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state)
+ {
++	struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
++	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
++	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
+ 	i915_reg_t reg = TRANS_DDI_FUNC_CTL(cpu_transcoder);
+ 	uint32_t val = I915_READ(reg);
+ 
+ 	val &= ~(TRANS_DDI_FUNC_ENABLE | TRANS_DDI_PORT_MASK | TRANS_DDI_DP_VC_PAYLOAD_ALLOC);
+ 	val |= TRANS_DDI_PORT_NONE;
+ 	I915_WRITE(reg, val);
++
++	if (dev_priv->quirks & QUIRK_INCREASE_DDI_DISABLED_TIME &&
++	    intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) {
++		DRM_DEBUG_KMS("Quirk Increase DDI disabled time\n");
++		/* Quirk time at 100ms for reliable operation */
++		msleep(100);
++	}
+ }
+ 
+ int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder,
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 84011e08adc3..f943c1049c0b 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -5685,7 +5685,7 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
+ 		intel_ddi_set_vc_payload_alloc(intel_crtc->config, false);
+ 
+ 	if (!transcoder_is_dsi(cpu_transcoder))
+-		intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder);
++		intel_ddi_disable_transcoder_func(old_crtc_state);
+ 
+ 	if (INTEL_GEN(dev_priv) >= 9)
+ 		skylake_scaler_disable(intel_crtc);
+@@ -14388,6 +14388,18 @@ static void quirk_increase_t12_delay(struct drm_device *dev)
+ 	DRM_INFO("Applying T12 delay quirk\n");
+ }
+ 
++/*
++ * GeminiLake NUC HDMI outputs require additional off time
++ * this allows the onboard retimer to correctly sync to signal
++ */
++static void quirk_increase_ddi_disabled_time(struct drm_device *dev)
++{
++	struct drm_i915_private *dev_priv = to_i915(dev);
++
++	dev_priv->quirks |= QUIRK_INCREASE_DDI_DISABLED_TIME;
++	DRM_INFO("Applying Increase DDI Disabled quirk\n");
++}
++
+ struct intel_quirk {
+ 	int device;
+ 	int subsystem_vendor;
+@@ -14474,6 +14486,13 @@ static struct intel_quirk intel_quirks[] = {
+ 
+ 	/* Toshiba Satellite P50-C-18C */
+ 	{ 0x191B, 0x1179, 0xF840, quirk_increase_t12_delay },
++
++	/* GeminiLake NUC */
++	{ 0x3185, 0x8086, 0x2072, quirk_increase_ddi_disabled_time },
++	{ 0x3184, 0x8086, 0x2072, quirk_increase_ddi_disabled_time },
++	/* ASRock ITX*/
++	{ 0x3185, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
++	{ 0x3184, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
+ };
+ 
+ static void intel_init_quirks(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index a80fbad9be0f..04d2774fe0ac 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -1368,8 +1368,7 @@ void hsw_fdi_link_train(struct intel_crtc *crtc,
+ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port);
+ bool intel_ddi_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe);
+ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state);
+-void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
+-				       enum transcoder cpu_transcoder);
++void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state);
+ void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state);
+ void intel_ddi_disable_pipe_clock(const  struct intel_crtc_state *crtc_state);
+ struct intel_encoder *
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dma.c b/drivers/gpu/drm/nouveau/nouveau_dma.c
+index 10e84f6ca2b7..e0664d28802b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dma.c
++++ b/drivers/gpu/drm/nouveau/nouveau_dma.c
+@@ -80,18 +80,10 @@ READ_GET(struct nouveau_channel *chan, uint64_t *prev_get, int *timeout)
+ }
+ 
+ void
+-nv50_dma_push(struct nouveau_channel *chan, struct nouveau_bo *bo,
+-	      int delta, int length)
++nv50_dma_push(struct nouveau_channel *chan, u64 offset, int length)
+ {
+-	struct nouveau_cli *cli = (void *)chan->user.client;
+ 	struct nouveau_bo *pb = chan->push.buffer;
+-	struct nouveau_vma *vma;
+ 	int ip = (chan->dma.ib_put * 2) + chan->dma.ib_base;
+-	u64 offset;
+-
+-	vma = nouveau_vma_find(bo, &cli->vmm);
+-	BUG_ON(!vma);
+-	offset = vma->addr + delta;
+ 
+ 	BUG_ON(chan->dma.ib_free < 1);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_dma.h b/drivers/gpu/drm/nouveau/nouveau_dma.h
+index 74e10b14a7da..89c87111bbbd 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_dma.h
++++ b/drivers/gpu/drm/nouveau/nouveau_dma.h
+@@ -31,8 +31,7 @@
+ #include "nouveau_chan.h"
+ 
+ int nouveau_dma_wait(struct nouveau_channel *, int slots, int size);
+-void nv50_dma_push(struct nouveau_channel *, struct nouveau_bo *,
+-		   int delta, int length);
++void nv50_dma_push(struct nouveau_channel *, u64 addr, int length);
+ 
+ /*
+  * There's a hw race condition where you can't jump to your PUT offset,
+@@ -151,7 +150,7 @@ FIRE_RING(struct nouveau_channel *chan)
+ 	chan->accel_done = true;
+ 
+ 	if (chan->dma.ib_max) {
+-		nv50_dma_push(chan, chan->push.buffer, chan->dma.put << 2,
++		nv50_dma_push(chan, chan->push.addr + (chan->dma.put << 2),
+ 			      (chan->dma.cur - chan->dma.put) << 2);
+ 	} else {
+ 		WRITE_PUT(chan->dma.cur);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 591d9c29ede7..f8e67ab5c598 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -116,24 +116,22 @@ nouveau_name(struct drm_device *dev)
+ }
+ 
+ static inline bool
+-nouveau_cli_work_ready(struct dma_fence *fence, bool wait)
++nouveau_cli_work_ready(struct dma_fence *fence)
+ {
+-	if (!dma_fence_is_signaled(fence)) {
+-		if (!wait)
+-			return false;
+-		WARN_ON(dma_fence_wait_timeout(fence, false, 2 * HZ) <= 0);
+-	}
++	if (!dma_fence_is_signaled(fence))
++		return false;
+ 	dma_fence_put(fence);
+ 	return true;
+ }
+ 
+ static void
+-nouveau_cli_work_flush(struct nouveau_cli *cli, bool wait)
++nouveau_cli_work(struct work_struct *w)
+ {
++	struct nouveau_cli *cli = container_of(w, typeof(*cli), work);
+ 	struct nouveau_cli_work *work, *wtmp;
+ 	mutex_lock(&cli->lock);
+ 	list_for_each_entry_safe(work, wtmp, &cli->worker, head) {
+-		if (!work->fence || nouveau_cli_work_ready(work->fence, wait)) {
++		if (!work->fence || nouveau_cli_work_ready(work->fence)) {
+ 			list_del(&work->head);
+ 			work->func(work);
+ 		}
+@@ -161,17 +159,17 @@ nouveau_cli_work_queue(struct nouveau_cli *cli, struct dma_fence *fence,
+ 	mutex_unlock(&cli->lock);
+ }
+ 
+-static void
+-nouveau_cli_work(struct work_struct *w)
+-{
+-	struct nouveau_cli *cli = container_of(w, typeof(*cli), work);
+-	nouveau_cli_work_flush(cli, false);
+-}
+-
+ static void
+ nouveau_cli_fini(struct nouveau_cli *cli)
+ {
+-	nouveau_cli_work_flush(cli, true);
++	/* All our channels are dead now, which means all the fences they
++	 * own are signalled, and all callback functions have been called.
++	 *
++	 * So, after flushing the workqueue, there should be nothing left.
++	 */
++	flush_work(&cli->work);
++	WARN_ON(!list_empty(&cli->worker));
++
+ 	usif_client_fini(cli);
+ 	nouveau_vmm_fini(&cli->vmm);
+ 	nvif_mmu_fini(&cli->mmu);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index e72a7e37eb0a..707e02c80f18 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -432,7 +432,20 @@ retry:
+ 			}
+ 		}
+ 
+-		b->user_priv = (uint64_t)(unsigned long)nvbo;
++		if (cli->vmm.vmm.object.oclass >= NVIF_CLASS_VMM_NV50) {
++			struct nouveau_vmm *vmm = &cli->vmm;
++			struct nouveau_vma *vma = nouveau_vma_find(nvbo, vmm);
++			if (!vma) {
++				NV_PRINTK(err, cli, "vma not found!\n");
++				ret = -EINVAL;
++				break;
++			}
++
++			b->user_priv = (uint64_t)(unsigned long)vma;
++		} else {
++			b->user_priv = (uint64_t)(unsigned long)nvbo;
++		}
++
+ 		nvbo->reserved_by = file_priv;
+ 		nvbo->pbbo_index = i;
+ 		if ((b->valid_domains & NOUVEAU_GEM_DOMAIN_VRAM) &&
+@@ -763,10 +776,10 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data,
+ 		}
+ 
+ 		for (i = 0; i < req->nr_push; i++) {
+-			struct nouveau_bo *nvbo = (void *)(unsigned long)
++			struct nouveau_vma *vma = (void *)(unsigned long)
+ 				bo[push[i].bo_index].user_priv;
+ 
+-			nv50_dma_push(chan, nvbo, push[i].offset,
++			nv50_dma_push(chan, vma->addr + push[i].offset,
+ 				      push[i].length);
+ 		}
+ 	} else
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
+index 84bd703dd897..8305cb67cbfc 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
+@@ -155,10 +155,10 @@ gk104_fifo_runlist_commit(struct gk104_fifo *fifo, int runl)
+ 				    (target << 28));
+ 	nvkm_wr32(device, 0x002274, (runl << 20) | nr);
+ 
+-	if (wait_event_timeout(fifo->runlist[runl].wait,
+-			       !(nvkm_rd32(device, 0x002284 + (runl * 0x08))
+-				       & 0x00100000),
+-			       msecs_to_jiffies(2000)) == 0)
++	if (nvkm_msec(device, 2000,
++		if (!(nvkm_rd32(device, 0x002284 + (runl * 0x08)) & 0x00100000))
++			break;
++	) < 0)
+ 		nvkm_error(subdev, "runlist %d update timeout\n", runl);
+ unlock:
+ 	mutex_unlock(&subdev->mutex);
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index df9469a8fdb1..2aea2bdff99b 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -852,7 +852,7 @@ static int radeon_lvds_get_modes(struct drm_connector *connector)
+ 	return ret;
+ }
+ 
+-static int radeon_lvds_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_lvds_mode_valid(struct drm_connector *connector,
+ 				  struct drm_display_mode *mode)
+ {
+ 	struct drm_encoder *encoder = radeon_best_single_encoder(connector);
+@@ -1012,7 +1012,7 @@ static int radeon_vga_get_modes(struct drm_connector *connector)
+ 	return ret;
+ }
+ 
+-static int radeon_vga_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_vga_mode_valid(struct drm_connector *connector,
+ 				  struct drm_display_mode *mode)
+ {
+ 	struct drm_device *dev = connector->dev;
+@@ -1156,7 +1156,7 @@ static int radeon_tv_get_modes(struct drm_connector *connector)
+ 	return 1;
+ }
+ 
+-static int radeon_tv_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_tv_mode_valid(struct drm_connector *connector,
+ 				struct drm_display_mode *mode)
+ {
+ 	if ((mode->hdisplay > 1024) || (mode->vdisplay > 768))
+@@ -1498,7 +1498,7 @@ static void radeon_dvi_force(struct drm_connector *connector)
+ 		radeon_connector->use_digital = true;
+ }
+ 
+-static int radeon_dvi_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_dvi_mode_valid(struct drm_connector *connector,
+ 				  struct drm_display_mode *mode)
+ {
+ 	struct drm_device *dev = connector->dev;
+@@ -1800,7 +1800,7 @@ out:
+ 	return ret;
+ }
+ 
+-static int radeon_dp_mode_valid(struct drm_connector *connector,
++static enum drm_mode_status radeon_dp_mode_valid(struct drm_connector *connector,
+ 				  struct drm_display_mode *mode)
+ {
+ 	struct drm_device *dev = connector->dev;
+diff --git a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+index 3e8bf79bea58..0259cfe894d6 100644
+--- a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
++++ b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+@@ -358,6 +358,8 @@ static void rockchip_dp_unbind(struct device *dev, struct device *master,
+ 	analogix_dp_unbind(dp->adp);
+ 	rockchip_drm_psr_unregister(&dp->encoder);
+ 	dp->encoder.funcs->destroy(&dp->encoder);
++
++	dp->adp = ERR_PTR(-ENODEV);
+ }
+ 
+ static const struct component_ops rockchip_dp_component_ops = {
+@@ -381,6 +383,7 @@ static int rockchip_dp_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	dp->dev = dev;
++	dp->adp = ERR_PTR(-ENODEV);
+ 	dp->plat_data.panel = panel;
+ 
+ 	ret = rockchip_dp_of_probe(dp);
+@@ -404,6 +407,9 @@ static int rockchip_dp_suspend(struct device *dev)
+ {
+ 	struct rockchip_dp_device *dp = dev_get_drvdata(dev);
+ 
++	if (IS_ERR(dp->adp))
++		return 0;
++
+ 	return analogix_dp_suspend(dp->adp);
+ }
+ 
+@@ -411,6 +417,9 @@ static int rockchip_dp_resume(struct device *dev)
+ {
+ 	struct rockchip_dp_device *dp = dev_get_drvdata(dev);
+ 
++	if (IS_ERR(dp->adp))
++		return 0;
++
+ 	return analogix_dp_resume(dp->adp);
+ }
+ #endif
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 1a3277e483d5..16e80308c6db 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -392,9 +392,6 @@ static void ltdc_crtc_update_clut(struct drm_crtc *crtc)
+ 	u32 val;
+ 	int i;
+ 
+-	if (!crtc || !crtc->state)
+-		return;
+-
+ 	if (!crtc->state->color_mgmt_changed || !crtc->state->gamma_lut)
+ 		return;
+ 
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 03db71173f5d..f1d5f76e9c33 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -223,10 +223,14 @@ static int host1x_probe(struct platform_device *pdev)
+ 		struct iommu_domain_geometry *geometry;
+ 		unsigned long order;
+ 
++		err = iova_cache_get();
++		if (err < 0)
++			goto put_group;
++
+ 		host->domain = iommu_domain_alloc(&platform_bus_type);
+ 		if (!host->domain) {
+ 			err = -ENOMEM;
+-			goto put_group;
++			goto put_cache;
+ 		}
+ 
+ 		err = iommu_attach_group(host->domain, host->group);
+@@ -234,6 +238,7 @@ static int host1x_probe(struct platform_device *pdev)
+ 			if (err == -ENODEV) {
+ 				iommu_domain_free(host->domain);
+ 				host->domain = NULL;
++				iova_cache_put();
+ 				iommu_group_put(host->group);
+ 				host->group = NULL;
+ 				goto skip_iommu;
+@@ -308,6 +313,9 @@ fail_detach_device:
+ fail_free_domain:
+ 	if (host->domain)
+ 		iommu_domain_free(host->domain);
++put_cache:
++	if (host->group)
++		iova_cache_put();
+ put_group:
+ 	iommu_group_put(host->group);
+ 
+@@ -328,6 +336,7 @@ static int host1x_remove(struct platform_device *pdev)
+ 		put_iova_domain(&host->iova);
+ 		iommu_detach_group(host->domain, host->group);
+ 		iommu_domain_free(host->domain);
++		iova_cache_put();
+ 		iommu_group_put(host->group);
+ 	}
+ 
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index febb21ee190e..584b10d3fc3d 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -2,7 +2,7 @@
+  *  Plantronics USB HID Driver
+  *
+  *  Copyright (c) 2014 JD Cole <jd.cole@plantronics.com>
+- *  Copyright (c) 2015 Terry Junge <terry.junge@plantronics.com>
++ *  Copyright (c) 2015-2018 Terry Junge <terry.junge@plantronics.com>
+  */
+ 
+ /*
+@@ -48,6 +48,10 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ 	unsigned short mapped_key;
+ 	unsigned long plt_type = (unsigned long)hid_get_drvdata(hdev);
+ 
++	/* special case for PTT products */
++	if (field->application == HID_GD_JOYSTICK)
++		goto defaulted;
++
+ 	/* handle volume up/down mapping */
+ 	/* non-standard types or multi-HID interfaces - plt_type is PID */
+ 	if (!(plt_type & HID_USAGE_PAGE)) {
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index a92377285034..8cc2b71c680b 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -1054,6 +1054,14 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 	pm_runtime_enable(&client->dev);
+ 	device_enable_async_suspend(&client->dev);
+ 
++	/* Make sure there is something at this address */
++	ret = i2c_smbus_read_byte(client);
++	if (ret < 0) {
++		dev_dbg(&client->dev, "nothing at this address: %d\n", ret);
++		ret = -ENXIO;
++		goto err_pm;
++	}
++
+ 	ret = i2c_hid_fetch_hid_descriptor(ihid);
+ 	if (ret < 0)
+ 		goto err_pm;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index c6915b835396..21d4efa74de2 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -32,6 +32,7 @@
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
++#include <linux/reset.h>
+ #include <linux/slab.h>
+ 
+ /* register offsets */
+@@ -111,8 +112,9 @@
+ #define ID_ARBLOST	(1 << 3)
+ #define ID_NACK		(1 << 4)
+ /* persistent flags */
++#define ID_P_NO_RXDMA	(1 << 30) /* HW forbids RXDMA sometimes */
+ #define ID_P_PM_BLOCKED	(1 << 31)
+-#define ID_P_MASK	ID_P_PM_BLOCKED
++#define ID_P_MASK	(ID_P_PM_BLOCKED | ID_P_NO_RXDMA)
+ 
+ enum rcar_i2c_type {
+ 	I2C_RCAR_GEN1,
+@@ -141,6 +143,8 @@ struct rcar_i2c_priv {
+ 	struct dma_chan *dma_rx;
+ 	struct scatterlist sg;
+ 	enum dma_data_direction dma_direction;
++
++	struct reset_control *rstc;
+ };
+ 
+ #define rcar_i2c_priv_to_dev(p)		((p)->adap.dev.parent)
+@@ -370,6 +374,11 @@ static void rcar_i2c_dma_unmap(struct rcar_i2c_priv *priv)
+ 	dma_unmap_single(chan->device->dev, sg_dma_address(&priv->sg),
+ 			 sg_dma_len(&priv->sg), priv->dma_direction);
+ 
++	/* Gen3 can only do one RXDMA per transfer and we just completed it */
++	if (priv->devtype == I2C_RCAR_GEN3 &&
++	    priv->dma_direction == DMA_FROM_DEVICE)
++		priv->flags |= ID_P_NO_RXDMA;
++
+ 	priv->dma_direction = DMA_NONE;
+ }
+ 
+@@ -407,8 +416,9 @@ static void rcar_i2c_dma(struct rcar_i2c_priv *priv)
+ 	unsigned char *buf;
+ 	int len;
+ 
+-	/* Do not use DMA if it's not available or for messages < 8 bytes */
+-	if (IS_ERR(chan) || msg->len < 8 || !(msg->flags & I2C_M_DMA_SAFE))
++	/* Do various checks to see if DMA is feasible at all */
++	if (IS_ERR(chan) || msg->len < 8 || !(msg->flags & I2C_M_DMA_SAFE) ||
++	    (read && priv->flags & ID_P_NO_RXDMA))
+ 		return;
+ 
+ 	if (read) {
+@@ -737,6 +747,25 @@ static void rcar_i2c_release_dma(struct rcar_i2c_priv *priv)
+ 	}
+ }
+ 
++/* I2C is a special case, we need to poll the status of a reset */
++static int rcar_i2c_do_reset(struct rcar_i2c_priv *priv)
++{
++	int i, ret;
++
++	ret = reset_control_reset(priv->rstc);
++	if (ret)
++		return ret;
++
++	for (i = 0; i < LOOP_TIMEOUT; i++) {
++		ret = reset_control_status(priv->rstc);
++		if (ret == 0)
++			return 0;
++		udelay(1);
++	}
++
++	return -ETIMEDOUT;
++}
++
+ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 				struct i2c_msg *msgs,
+ 				int num)
+@@ -748,6 +777,16 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 
+ 	pm_runtime_get_sync(dev);
+ 
++	/* Gen3 needs a reset before allowing RXDMA once */
++	if (priv->devtype == I2C_RCAR_GEN3) {
++		priv->flags |= ID_P_NO_RXDMA;
++		if (!IS_ERR(priv->rstc)) {
++			ret = rcar_i2c_do_reset(priv);
++			if (ret == 0)
++				priv->flags &= ~ID_P_NO_RXDMA;
++		}
++	}
++
+ 	rcar_i2c_init(priv);
+ 
+ 	ret = rcar_i2c_bus_barrier(priv);
+@@ -918,6 +957,15 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto out_pm_put;
+ 
++	if (priv->devtype == I2C_RCAR_GEN3) {
++		priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
++		if (!IS_ERR(priv->rstc)) {
++			ret = reset_control_status(priv->rstc);
++			if (ret < 0)
++				priv->rstc = ERR_PTR(-ENOTSUPP);
++		}
++	}
++
+ 	/* Stay always active when multi-master to keep arbitration working */
+ 	if (of_property_read_bool(dev->of_node, "multi-master"))
+ 		priv->flags |= ID_P_PM_BLOCKED;
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index b28452a55a08..55a224e8e798 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -1557,7 +1557,8 @@ static int add_oui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ 			    mad_reg_req->oui, 3)) {
+ 			method = &(*vendor_table)->vendor_class[
+ 						vclass]->method_table[i];
+-			BUG_ON(!*method);
++			if (!*method)
++				goto error3;
+ 			goto check_in_use;
+ 		}
+ 	}
+@@ -1567,10 +1568,12 @@ static int add_oui_reg_req(struct ib_mad_reg_req *mad_reg_req,
+ 				vclass]->oui[i])) {
+ 			method = &(*vendor_table)->vendor_class[
+ 				vclass]->method_table[i];
+-			BUG_ON(*method);
+ 			/* Allocate method table for this OUI */
+-			if ((ret = allocate_method_table(method)))
+-				goto error3;
++			if (!*method) {
++				ret = allocate_method_table(method);
++				if (ret)
++					goto error3;
++			}
+ 			memcpy((*vendor_table)->vendor_class[vclass]->oui[i],
+ 			       mad_reg_req->oui, 3);
+ 			goto check_in_use;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index eab43b17e9cf..ec8fb289621f 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -235,7 +235,7 @@ static struct ucma_multicast* ucma_alloc_multicast(struct ucma_context *ctx)
+ 		return NULL;
+ 
+ 	mutex_lock(&mut);
+-	mc->id = idr_alloc(&multicast_idr, mc, 0, 0, GFP_KERNEL);
++	mc->id = idr_alloc(&multicast_idr, NULL, 0, 0, GFP_KERNEL);
+ 	mutex_unlock(&mut);
+ 	if (mc->id < 0)
+ 		goto error;
+@@ -1421,6 +1421,10 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ 		goto err3;
+ 	}
+ 
++	mutex_lock(&mut);
++	idr_replace(&multicast_idr, mc, mc->id);
++	mutex_unlock(&mut);
++
+ 	mutex_unlock(&file->mut);
+ 	ucma_put_ctx(ctx);
+ 	return 0;
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 21a887c9523b..7a300e3eb0c2 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -3478,6 +3478,11 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
+ 		goto err_uobj;
+ 	}
+ 
++	if (qp->qp_type != IB_QPT_UD && qp->qp_type != IB_QPT_RAW_PACKET) {
++		err = -EINVAL;
++		goto err_put;
++	}
++
+ 	flow_attr = kzalloc(sizeof(*flow_attr) + cmd.flow_attr.num_of_specs *
+ 			    sizeof(union ib_flow_spec), GFP_KERNEL);
+ 	if (!flow_attr) {
+diff --git a/drivers/infiniband/sw/rdmavt/Kconfig b/drivers/infiniband/sw/rdmavt/Kconfig
+index 2b5513da7e83..98e798007f75 100644
+--- a/drivers/infiniband/sw/rdmavt/Kconfig
++++ b/drivers/infiniband/sw/rdmavt/Kconfig
+@@ -1,6 +1,6 @@
+ config INFINIBAND_RDMAVT
+ 	tristate "RDMA verbs transport library"
+-	depends on 64BIT
++	depends on 64BIT && ARCH_DMA_ADDR_T_64BIT
+ 	depends on PCI
+ 	select DMA_VIRT_OPS
+ 	---help---
+diff --git a/drivers/infiniband/sw/rxe/Kconfig b/drivers/infiniband/sw/rxe/Kconfig
+index bad4a576d7cf..67ae960ab523 100644
+--- a/drivers/infiniband/sw/rxe/Kconfig
++++ b/drivers/infiniband/sw/rxe/Kconfig
+@@ -1,6 +1,7 @@
+ config RDMA_RXE
+ 	tristate "Software RDMA over Ethernet (RoCE) driver"
+ 	depends on INET && PCI && INFINIBAND
++	depends on !64BIT || ARCH_DMA_ADDR_T_64BIT
+ 	select NET_UDP_TUNNEL
+ 	select CRYPTO_CRC32
+ 	select DMA_VIRT_OPS
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 37f954b704a6..f8f3aac944ea 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1264,6 +1264,8 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0611", 0 },
+ 	{ "ELAN0612", 0 },
+ 	{ "ELAN0618", 0 },
++	{ "ELAN061D", 0 },
++	{ "ELAN0622", 0 },
+ 	{ "ELAN1000", 0 },
+ 	{ }
+ };
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index b353d494ad40..136f6e7bf797 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -527,6 +527,13 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "N24_25BU"),
+ 		},
+ 	},
++	{
++		/* Lenovo LaVie Z */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/irqchip/irq-ls-scfg-msi.c b/drivers/irqchip/irq-ls-scfg-msi.c
+index 57e3d900f19e..1ec3bfe56693 100644
+--- a/drivers/irqchip/irq-ls-scfg-msi.c
++++ b/drivers/irqchip/irq-ls-scfg-msi.c
+@@ -21,6 +21,7 @@
+ #include <linux/of_pci.h>
+ #include <linux/of_platform.h>
+ #include <linux/spinlock.h>
++#include <linux/dma-iommu.h>
+ 
+ #define MSI_IRQS_PER_MSIR	32
+ #define MSI_MSIR_OFFSET		4
+@@ -94,6 +95,8 @@ static void ls_scfg_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
+ 
+ 	if (msi_affinity_flag)
+ 		msg->data |= cpumask_first(data->common->affinity);
++
++	iommu_dma_map_msi_msg(data->irq, msg);
+ }
+ 
+ static int ls_scfg_msi_set_affinity(struct irq_data *irq_data,
+diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
+index 94d5d97c9d8a..8c371423b111 100644
+--- a/drivers/lightnvm/pblk-core.c
++++ b/drivers/lightnvm/pblk-core.c
+@@ -278,7 +278,9 @@ void pblk_free_rqd(struct pblk *pblk, struct nvm_rq *rqd, int type)
+ 		return;
+ 	}
+ 
+-	nvm_dev_dma_free(dev->parent, rqd->meta_list, rqd->dma_meta_list);
++	if (rqd->meta_list)
++		nvm_dev_dma_free(dev->parent, rqd->meta_list,
++				rqd->dma_meta_list);
+ 	mempool_free(rqd, pool);
+ }
+ 
+@@ -316,7 +318,7 @@ int pblk_bio_add_pages(struct pblk *pblk, struct bio *bio, gfp_t flags,
+ 
+ 	return 0;
+ err:
+-	pblk_bio_free_pages(pblk, bio, 0, i - 1);
++	pblk_bio_free_pages(pblk, bio, (bio->bi_vcnt - i), i);
+ 	return -1;
+ }
+ 
+diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c
+index 52fdd85dbc97..58946ffebe81 100644
+--- a/drivers/lightnvm/pblk-rb.c
++++ b/drivers/lightnvm/pblk-rb.c
+@@ -142,10 +142,9 @@ static void clean_wctx(struct pblk_w_ctx *w_ctx)
+ {
+ 	int flags;
+ 
+-try:
+ 	flags = READ_ONCE(w_ctx->flags);
+-	if (!(flags & PBLK_SUBMITTED_ENTRY))
+-		goto try;
++	WARN_ONCE(!(flags & PBLK_SUBMITTED_ENTRY),
++			"pblk: overwriting unsubmitted data\n");
+ 
+ 	/* Release flags on context. Protect from writes and reads */
+ 	smp_store_release(&w_ctx->flags, PBLK_WRITABLE_ENTRY);
+diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
+index 9eee10f69df0..d528617c637b 100644
+--- a/drivers/lightnvm/pblk-read.c
++++ b/drivers/lightnvm/pblk-read.c
+@@ -219,7 +219,7 @@ static int pblk_partial_read_bio(struct pblk *pblk, struct nvm_rq *rqd,
+ 	new_bio = bio_alloc(GFP_KERNEL, nr_holes);
+ 
+ 	if (pblk_bio_add_pages(pblk, new_bio, GFP_KERNEL, nr_holes))
+-		goto err;
++		goto err_add_pages;
+ 
+ 	if (nr_holes != new_bio->bi_vcnt) {
+ 		pr_err("pblk: malformed bio\n");
+@@ -310,10 +310,10 @@ static int pblk_partial_read_bio(struct pblk *pblk, struct nvm_rq *rqd,
+ 	return NVM_IO_OK;
+ 
+ err:
+-	pr_err("pblk: failed to perform partial read\n");
+-
+ 	/* Free allocated pages in new bio */
+-	pblk_bio_free_pages(pblk, bio, 0, new_bio->bi_vcnt);
++	pblk_bio_free_pages(pblk, new_bio, 0, new_bio->bi_vcnt);
++err_add_pages:
++	pr_err("pblk: failed to perform partial read\n");
+ 	__pblk_end_io_read(pblk, rqd, false);
+ 	return NVM_IO_ERR;
+ }
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index bac480d75d1d..df9eb1a04f26 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6525,6 +6525,9 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
+ 	char b[BDEVNAME_SIZE];
+ 	struct md_rdev *rdev;
+ 
++	if (!mddev->pers)
++		return -ENODEV;
++
+ 	rdev = find_rdev(mddev, dev);
+ 	if (!rdev)
+ 		return -ENXIO;
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index e9e3308cb0a7..4445179aa4c8 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -2474,6 +2474,8 @@ static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
+ 		fix_read_error(conf, r1_bio->read_disk,
+ 			       r1_bio->sector, r1_bio->sectors);
+ 		unfreeze_array(conf);
++	} else if (mddev->ro == 0 && test_bit(FailFast, &rdev->flags)) {
++		md_error(mddev, rdev);
+ 	} else {
+ 		r1_bio->bios[r1_bio->read_disk] = IO_BLOCKED;
+ 	}
+diff --git a/drivers/media/cec/cec-pin-error-inj.c b/drivers/media/cec/cec-pin-error-inj.c
+index aaa899a175ce..c0088d3b8e3d 100644
+--- a/drivers/media/cec/cec-pin-error-inj.c
++++ b/drivers/media/cec/cec-pin-error-inj.c
+@@ -81,10 +81,9 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ 	u64 *error;
+ 	u8 *args;
+ 	bool has_op;
+-	u32 op;
++	u8 op;
+ 	u8 mode;
+ 	u8 pos;
+-	u8 v;
+ 
+ 	p = skip_spaces(p);
+ 	token = strsep(&p, delims);
+@@ -146,12 +145,18 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ 	comma = strchr(token, ',');
+ 	if (comma)
+ 		*comma++ = '\0';
+-	if (!strcmp(token, "any"))
+-		op = CEC_ERROR_INJ_OP_ANY;
+-	else if (!kstrtou8(token, 0, &v))
+-		op = v;
+-	else
++	if (!strcmp(token, "any")) {
++		has_op = false;
++		error = pin->error_inj + CEC_ERROR_INJ_OP_ANY;
++		args = pin->error_inj_args[CEC_ERROR_INJ_OP_ANY];
++	} else if (!kstrtou8(token, 0, &op)) {
++		has_op = true;
++		error = pin->error_inj + op;
++		args = pin->error_inj_args[op];
++	} else {
+ 		return false;
++	}
++
+ 	mode = CEC_ERROR_INJ_MODE_ONCE;
+ 	if (comma) {
+ 		if (!strcmp(comma, "off"))
+@@ -166,10 +171,6 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ 			return false;
+ 	}
+ 
+-	error = pin->error_inj + op;
+-	args = pin->error_inj_args[op];
+-	has_op = op <= 0xff;
+-
+ 	token = strsep(&p, delims);
+ 	if (p) {
+ 		p = skip_spaces(p);
+@@ -203,16 +204,18 @@ bool cec_pin_error_inj_parse_line(struct cec_adapter *adap, char *line)
+ 		mode_mask = CEC_ERROR_INJ_MODE_MASK << mode_offset;
+ 		arg_idx = cec_error_inj_cmds[i].arg_idx;
+ 
+-		if (mode_offset == CEC_ERROR_INJ_RX_ARB_LOST_OFFSET ||
+-		    mode_offset == CEC_ERROR_INJ_TX_ADD_BYTES_OFFSET)
+-			is_bit_pos = false;
+-
+ 		if (mode_offset == CEC_ERROR_INJ_RX_ARB_LOST_OFFSET) {
+ 			if (has_op)
+ 				return false;
+ 			if (!has_pos)
+ 				pos = 0x0f;
++			is_bit_pos = false;
++		} else if (mode_offset == CEC_ERROR_INJ_TX_ADD_BYTES_OFFSET) {
++			if (!has_pos || !pos)
++				return false;
++			is_bit_pos = false;
+ 		}
++
+ 		if (arg_idx >= 0 && is_bit_pos) {
+ 			if (!has_pos || pos >= 160)
+ 				return false;
+diff --git a/drivers/media/common/siano/smsendian.c b/drivers/media/common/siano/smsendian.c
+index bfe831c10b1c..b95a631f23f9 100644
+--- a/drivers/media/common/siano/smsendian.c
++++ b/drivers/media/common/siano/smsendian.c
+@@ -35,7 +35,7 @@ void smsendian_handle_tx_message(void *buffer)
+ 	switch (msg->x_msg_header.msg_type) {
+ 	case MSG_SMS_DATA_DOWNLOAD_REQ:
+ 	{
+-		msg->msg_data[0] = le32_to_cpu(msg->msg_data[0]);
++		msg->msg_data[0] = le32_to_cpu((__force __le32)(msg->msg_data[0]));
+ 		break;
+ 	}
+ 
+@@ -44,7 +44,7 @@ void smsendian_handle_tx_message(void *buffer)
+ 				sizeof(struct sms_msg_hdr))/4;
+ 
+ 		for (i = 0; i < msg_words; i++)
+-			msg->msg_data[i] = le32_to_cpu(msg->msg_data[i]);
++			msg->msg_data[i] = le32_to_cpu((__force __le32)msg->msg_data[i]);
+ 
+ 		break;
+ 	}
+@@ -64,7 +64,7 @@ void smsendian_handle_rx_message(void *buffer)
+ 	{
+ 		struct sms_version_res *ver =
+ 			(struct sms_version_res *) msg;
+-		ver->chip_model = le16_to_cpu(ver->chip_model);
++		ver->chip_model = le16_to_cpu((__force __le16)ver->chip_model);
+ 		break;
+ 	}
+ 
+@@ -81,7 +81,7 @@ void smsendian_handle_rx_message(void *buffer)
+ 				sizeof(struct sms_msg_hdr))/4;
+ 
+ 		for (i = 0; i < msg_words; i++)
+-			msg->msg_data[i] = le32_to_cpu(msg->msg_data[i]);
++			msg->msg_data[i] = le32_to_cpu((__force __le32)msg->msg_data[i]);
+ 
+ 		break;
+ 	}
+@@ -95,9 +95,9 @@ void smsendian_handle_message_header(void *msg)
+ #ifdef __BIG_ENDIAN
+ 	struct sms_msg_hdr *phdr = (struct sms_msg_hdr *)msg;
+ 
+-	phdr->msg_type = le16_to_cpu(phdr->msg_type);
+-	phdr->msg_length = le16_to_cpu(phdr->msg_length);
+-	phdr->msg_flags = le16_to_cpu(phdr->msg_flags);
++	phdr->msg_type = le16_to_cpu((__force __le16)phdr->msg_type);
++	phdr->msg_length = le16_to_cpu((__force __le16)phdr->msg_length);
++	phdr->msg_flags = le16_to_cpu((__force __le16)phdr->msg_flags);
+ #endif /* __BIG_ENDIAN */
+ }
+ EXPORT_SYMBOL_GPL(smsendian_handle_message_header);
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index d3f7bb33a54d..f32ec7342ef0 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -916,9 +916,12 @@ void vb2_buffer_done(struct vb2_buffer *vb, enum vb2_buffer_state state)
+ 	dprintk(4, "done processing on buffer %d, state: %d\n",
+ 			vb->index, state);
+ 
+-	/* sync buffers */
+-	for (plane = 0; plane < vb->num_planes; ++plane)
+-		call_void_memop(vb, finish, vb->planes[plane].mem_priv);
++	if (state != VB2_BUF_STATE_QUEUED &&
++	    state != VB2_BUF_STATE_REQUEUEING) {
++		/* sync buffers */
++		for (plane = 0; plane < vb->num_planes; ++plane)
++			call_void_memop(vb, finish, vb->planes[plane].mem_priv);
++	}
+ 
+ 	spin_lock_irqsave(&q->done_lock, flags);
+ 	if (state == VB2_BUF_STATE_QUEUED ||
+diff --git a/drivers/media/i2c/smiapp/smiapp-core.c b/drivers/media/i2c/smiapp/smiapp-core.c
+index 3b7ace395ee6..e1f8208581aa 100644
+--- a/drivers/media/i2c/smiapp/smiapp-core.c
++++ b/drivers/media/i2c/smiapp/smiapp-core.c
+@@ -1001,7 +1001,7 @@ static int smiapp_read_nvm(struct smiapp_sensor *sensor,
+ 		if (rval)
+ 			goto out;
+ 
+-		for (i = 0; i < 1000; i++) {
++		for (i = 1000; i > 0; i--) {
+ 			rval = smiapp_read(
+ 				sensor,
+ 				SMIAPP_REG_U8_DATA_TRANSFER_IF_1_STATUS, &s);
+@@ -1012,11 +1012,10 @@ static int smiapp_read_nvm(struct smiapp_sensor *sensor,
+ 			if (s & SMIAPP_DATA_TRANSFER_IF_1_STATUS_RD_READY)
+ 				break;
+ 
+-			if (--i == 0) {
+-				rval = -ETIMEDOUT;
+-				goto out;
+-			}
+-
++		}
++		if (!i) {
++			rval = -ETIMEDOUT;
++			goto out;
+ 		}
+ 
+ 		for (i = 0; i < SMIAPP_NVM_PAGE_SIZE; i++) {
+diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c
+index 35e81f7c0d2f..ae59c3177555 100644
+--- a/drivers/media/media-device.c
++++ b/drivers/media/media-device.c
+@@ -54,9 +54,10 @@ static int media_device_close(struct file *filp)
+ 	return 0;
+ }
+ 
+-static int media_device_get_info(struct media_device *dev,
+-				 struct media_device_info *info)
++static long media_device_get_info(struct media_device *dev, void *arg)
+ {
++	struct media_device_info *info = arg;
++
+ 	memset(info, 0, sizeof(*info));
+ 
+ 	if (dev->driver_name[0])
+@@ -93,9 +94,9 @@ static struct media_entity *find_entity(struct media_device *mdev, u32 id)
+ 	return NULL;
+ }
+ 
+-static long media_device_enum_entities(struct media_device *mdev,
+-				       struct media_entity_desc *entd)
++static long media_device_enum_entities(struct media_device *mdev, void *arg)
+ {
++	struct media_entity_desc *entd = arg;
+ 	struct media_entity *ent;
+ 
+ 	ent = find_entity(mdev, entd->id);
+@@ -146,9 +147,9 @@ static void media_device_kpad_to_upad(const struct media_pad *kpad,
+ 	upad->flags = kpad->flags;
+ }
+ 
+-static long media_device_enum_links(struct media_device *mdev,
+-				    struct media_links_enum *links)
++static long media_device_enum_links(struct media_device *mdev, void *arg)
+ {
++	struct media_links_enum *links = arg;
+ 	struct media_entity *entity;
+ 
+ 	entity = find_entity(mdev, links->entity);
+@@ -195,9 +196,9 @@ static long media_device_enum_links(struct media_device *mdev,
+ 	return 0;
+ }
+ 
+-static long media_device_setup_link(struct media_device *mdev,
+-				    struct media_link_desc *linkd)
++static long media_device_setup_link(struct media_device *mdev, void *arg)
+ {
++	struct media_link_desc *linkd = arg;
+ 	struct media_link *link = NULL;
+ 	struct media_entity *source;
+ 	struct media_entity *sink;
+@@ -225,9 +226,9 @@ static long media_device_setup_link(struct media_device *mdev,
+ 	return __media_entity_setup_link(link, linkd->flags);
+ }
+ 
+-static long media_device_get_topology(struct media_device *mdev,
+-				      struct media_v2_topology *topo)
++static long media_device_get_topology(struct media_device *mdev, void *arg)
+ {
++	struct media_v2_topology *topo = arg;
+ 	struct media_entity *entity;
+ 	struct media_interface *intf;
+ 	struct media_pad *pad;
+diff --git a/drivers/media/pci/saa7164/saa7164-fw.c b/drivers/media/pci/saa7164/saa7164-fw.c
+index ef4906406ebf..a50461861133 100644
+--- a/drivers/media/pci/saa7164/saa7164-fw.c
++++ b/drivers/media/pci/saa7164/saa7164-fw.c
+@@ -426,7 +426,8 @@ int saa7164_downloadfirmware(struct saa7164_dev *dev)
+ 			__func__, fw->size);
+ 
+ 		if (fw->size != fwlength) {
+-			printk(KERN_ERR "xc5000: firmware incorrect size\n");
++			printk(KERN_ERR "saa7164: firmware incorrect size %zu != %u\n",
++				fw->size, fwlength);
+ 			ret = -ENOMEM;
+ 			goto out;
+ 		}
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index c3fafa97b2d0..0ea8dd44026c 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1228,7 +1228,8 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 		vc->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+ 		vc->vidq.min_buffers_needed = 2;
+ 		vc->vidq.lock = &vc->vb_mutex;
+-		vc->vidq.gfp_flags = GFP_DMA32;
++		vc->vidq.gfp_flags = dev->dma_mode != TW686X_DMA_MODE_MEMCPY ?
++				     GFP_DMA32 : 0;
+ 		vc->vidq.dev = &dev->pci_dev->dev;
+ 
+ 		err = vb2_queue_init(&vc->vidq);
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index 8eb000e3d8fd..f2db5128d786 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -1945,6 +1945,7 @@ error_csi2:
+ 
+ static void isp_detach_iommu(struct isp_device *isp)
+ {
++	arm_iommu_detach_device(isp->dev);
+ 	arm_iommu_release_mapping(isp->mapping);
+ 	isp->mapping = NULL;
+ }
+@@ -1961,8 +1962,7 @@ static int isp_attach_iommu(struct isp_device *isp)
+ 	mapping = arm_iommu_create_mapping(&platform_bus_type, SZ_1G, SZ_2G);
+ 	if (IS_ERR(mapping)) {
+ 		dev_err(isp->dev, "failed to create ARM IOMMU mapping\n");
+-		ret = PTR_ERR(mapping);
+-		goto error;
++		return PTR_ERR(mapping);
+ 	}
+ 
+ 	isp->mapping = mapping;
+@@ -1977,7 +1977,8 @@ static int isp_attach_iommu(struct isp_device *isp)
+ 	return 0;
+ 
+ error:
+-	isp_detach_iommu(isp);
++	arm_iommu_release_mapping(isp->mapping);
++	isp->mapping = NULL;
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/platform/rcar_jpu.c b/drivers/media/platform/rcar_jpu.c
+index f6092ae45912..8b44a849ab41 100644
+--- a/drivers/media/platform/rcar_jpu.c
++++ b/drivers/media/platform/rcar_jpu.c
+@@ -1280,7 +1280,7 @@ static int jpu_open(struct file *file)
+ 		/* ...issue software reset */
+ 		ret = jpu_reset(jpu);
+ 		if (ret)
+-			goto device_prepare_rollback;
++			goto jpu_reset_rollback;
+ 	}
+ 
+ 	jpu->ref_count++;
+@@ -1288,6 +1288,8 @@ static int jpu_open(struct file *file)
+ 	mutex_unlock(&jpu->mutex);
+ 	return 0;
+ 
++jpu_reset_rollback:
++	clk_disable_unprepare(jpu->clk);
+ device_prepare_rollback:
+ 	mutex_unlock(&jpu->mutex);
+ v4l_prepare_rollback:
+diff --git a/drivers/media/platform/renesas-ceu.c b/drivers/media/platform/renesas-ceu.c
+index 6599dba5ab84..dec1b3572e9b 100644
+--- a/drivers/media/platform/renesas-ceu.c
++++ b/drivers/media/platform/renesas-ceu.c
+@@ -777,8 +777,15 @@ static int ceu_try_fmt(struct ceu_device *ceudev, struct v4l2_format *v4l2_fmt)
+ 	const struct ceu_fmt *ceu_fmt;
+ 	int ret;
+ 
++	/*
++	 * Set format on sensor sub device: bus format used to produce memory
++	 * format is selected at initialization time.
++	 */
+ 	struct v4l2_subdev_format sd_format = {
+-		.which = V4L2_SUBDEV_FORMAT_TRY,
++		.which	= V4L2_SUBDEV_FORMAT_TRY,
++		.format	= {
++			.code = ceu_sd->mbus_fmt.mbus_code,
++		},
+ 	};
+ 
+ 	switch (pix->pixelformat) {
+@@ -800,10 +807,6 @@ static int ceu_try_fmt(struct ceu_device *ceudev, struct v4l2_format *v4l2_fmt)
+ 	v4l_bound_align_image(&pix->width, 2, CEU_MAX_WIDTH, 4,
+ 			      &pix->height, 4, CEU_MAX_HEIGHT, 4, 0);
+ 
+-	/*
+-	 * Set format on sensor sub device: bus format used to produce memory
+-	 * format is selected at initialization time.
+-	 */
+ 	v4l2_fill_mbus_format_mplane(&sd_format.format, pix);
+ 	ret = v4l2_subdev_call(v4l2_sd, pad, set_fmt, &pad_cfg, &sd_format);
+ 	if (ret)
+@@ -827,8 +830,15 @@ static int ceu_set_fmt(struct ceu_device *ceudev, struct v4l2_format *v4l2_fmt)
+ 	struct v4l2_subdev *v4l2_sd = ceu_sd->v4l2_sd;
+ 	int ret;
+ 
++	/*
++	 * Set format on sensor sub device: bus format used to produce memory
++	 * format is selected at initialization time.
++	 */
+ 	struct v4l2_subdev_format format = {
+ 		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
++		.format	= {
++			.code = ceu_sd->mbus_fmt.mbus_code,
++		},
+ 	};
+ 
+ 	ret = ceu_try_fmt(ceudev, v4l2_fmt);
+diff --git a/drivers/media/radio/si470x/radio-si470x-i2c.c b/drivers/media/radio/si470x/radio-si470x-i2c.c
+index 41709b24b28f..f6d1fc3e5e1d 100644
+--- a/drivers/media/radio/si470x/radio-si470x-i2c.c
++++ b/drivers/media/radio/si470x/radio-si470x-i2c.c
+@@ -91,7 +91,7 @@ MODULE_PARM_DESC(max_rds_errors, "RDS maximum block errors: *1*");
+  */
+ int si470x_get_register(struct si470x_device *radio, int regnr)
+ {
+-	u16 buf[READ_REG_NUM];
++	__be16 buf[READ_REG_NUM];
+ 	struct i2c_msg msgs[1] = {
+ 		{
+ 			.addr = radio->client->addr,
+@@ -116,7 +116,7 @@ int si470x_get_register(struct si470x_device *radio, int regnr)
+ int si470x_set_register(struct si470x_device *radio, int regnr)
+ {
+ 	int i;
+-	u16 buf[WRITE_REG_NUM];
++	__be16 buf[WRITE_REG_NUM];
+ 	struct i2c_msg msgs[1] = {
+ 		{
+ 			.addr = radio->client->addr,
+@@ -146,7 +146,7 @@ int si470x_set_register(struct si470x_device *radio, int regnr)
+ static int si470x_get_all_registers(struct si470x_device *radio)
+ {
+ 	int i;
+-	u16 buf[READ_REG_NUM];
++	__be16 buf[READ_REG_NUM];
+ 	struct i2c_msg msgs[1] = {
+ 		{
+ 			.addr = radio->client->addr,
+diff --git a/drivers/media/rc/ir-mce_kbd-decoder.c b/drivers/media/rc/ir-mce_kbd-decoder.c
+index 5478fe08f9d3..d94f1c190f62 100644
+--- a/drivers/media/rc/ir-mce_kbd-decoder.c
++++ b/drivers/media/rc/ir-mce_kbd-decoder.c
+@@ -324,11 +324,13 @@ again:
+ 			scancode = data->body & 0xffff;
+ 			dev_dbg(&dev->dev, "keyboard data 0x%08x\n",
+ 				data->body);
+-			if (dev->timeout)
+-				delay = usecs_to_jiffies(dev->timeout / 1000);
+-			else
+-				delay = msecs_to_jiffies(100);
+-			mod_timer(&data->rx_timeout, jiffies + delay);
++			if (scancode) {
++				delay = nsecs_to_jiffies(dev->timeout) +
++					msecs_to_jiffies(100);
++				mod_timer(&data->rx_timeout, jiffies + delay);
++			} else {
++				del_timer(&data->rx_timeout);
++			}
+ 			/* Pass data to keyboard buffer parser */
+ 			ir_mce_kbd_process_keyboard_data(dev, scancode);
+ 			lsc.rc_proto = RC_PROTO_MCIR2_KBD;
+diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
+index 3f493e0b0716..5f2f61f000fc 100644
+--- a/drivers/media/usb/em28xx/em28xx-dvb.c
++++ b/drivers/media/usb/em28xx/em28xx-dvb.c
+@@ -199,6 +199,7 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ 	int rc;
+ 	struct em28xx_i2c_bus *i2c_bus = dvb->adapter.priv;
+ 	struct em28xx *dev = i2c_bus->dev;
++	struct usb_device *udev = interface_to_usbdev(dev->intf);
+ 	int dvb_max_packet_size, packet_multiplier, dvb_alt;
+ 
+ 	if (dev->dvb_xfer_bulk) {
+@@ -217,6 +218,7 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ 		dvb_alt = dev->dvb_alt_isoc;
+ 	}
+ 
++	usb_set_interface(udev, dev->ifnum, dvb_alt);
+ 	rc = em28xx_set_mode(dev, EM28XX_DIGITAL_MODE);
+ 	if (rc < 0)
+ 		return rc;
+@@ -1392,7 +1394,7 @@ static int em28174_dvb_init_hauppauge_wintv_dualhd_01595(struct em28xx *dev)
+ 
+ 	dvb->i2c_client_tuner = dvb_module_probe("si2157", NULL,
+ 						 adapter,
+-						 0x60, &si2157_config);
++						 addr, &si2157_config);
+ 	if (!dvb->i2c_client_tuner) {
+ 		dvb_module_release(dvb->i2c_client_demod);
+ 		return -ENODEV;
+diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
+index a4803ac192bb..1d49a8dd4a37 100644
+--- a/drivers/memory/tegra/mc.c
++++ b/drivers/memory/tegra/mc.c
+@@ -20,14 +20,6 @@
+ #include "mc.h"
+ 
+ #define MC_INTSTATUS 0x000
+-#define  MC_INT_DECERR_MTS (1 << 16)
+-#define  MC_INT_SECERR_SEC (1 << 13)
+-#define  MC_INT_DECERR_VPR (1 << 12)
+-#define  MC_INT_INVALID_APB_ASID_UPDATE (1 << 11)
+-#define  MC_INT_INVALID_SMMU_PAGE (1 << 10)
+-#define  MC_INT_ARBITRATION_EMEM (1 << 9)
+-#define  MC_INT_SECURITY_VIOLATION (1 << 8)
+-#define  MC_INT_DECERR_EMEM (1 << 6)
+ 
+ #define MC_INTMASK 0x004
+ 
+@@ -248,12 +240,13 @@ static const char *const error_names[8] = {
+ static irqreturn_t tegra_mc_irq(int irq, void *data)
+ {
+ 	struct tegra_mc *mc = data;
+-	unsigned long status, mask;
++	unsigned long status;
+ 	unsigned int bit;
+ 
+ 	/* mask all interrupts to avoid flooding */
+-	status = mc_readl(mc, MC_INTSTATUS);
+-	mask = mc_readl(mc, MC_INTMASK);
++	status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask;
++	if (!status)
++		return IRQ_NONE;
+ 
+ 	for_each_set_bit(bit, &status, 32) {
+ 		const char *error = status_names[bit] ?: "unknown";
+@@ -346,7 +339,6 @@ static int tegra_mc_probe(struct platform_device *pdev)
+ 	const struct of_device_id *match;
+ 	struct resource *res;
+ 	struct tegra_mc *mc;
+-	u32 value;
+ 	int err;
+ 
+ 	match = of_match_node(tegra_mc_of_match, pdev->dev.of_node);
+@@ -414,11 +406,7 @@ static int tegra_mc_probe(struct platform_device *pdev)
+ 
+ 	WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
+ 
+-	value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
+-		MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
+-		MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM;
+-
+-	mc_writel(mc, value, MC_INTMASK);
++	mc_writel(mc, mc->soc->intmask, MC_INTMASK);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/memory/tegra/mc.h b/drivers/memory/tegra/mc.h
+index ddb16676c3af..24e020b4609b 100644
+--- a/drivers/memory/tegra/mc.h
++++ b/drivers/memory/tegra/mc.h
+@@ -14,6 +14,15 @@
+ 
+ #include <soc/tegra/mc.h>
+ 
++#define MC_INT_DECERR_MTS (1 << 16)
++#define MC_INT_SECERR_SEC (1 << 13)
++#define MC_INT_DECERR_VPR (1 << 12)
++#define MC_INT_INVALID_APB_ASID_UPDATE (1 << 11)
++#define MC_INT_INVALID_SMMU_PAGE (1 << 10)
++#define MC_INT_ARBITRATION_EMEM (1 << 9)
++#define MC_INT_SECURITY_VIOLATION (1 << 8)
++#define MC_INT_DECERR_EMEM (1 << 6)
++
+ static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset)
+ {
+ 	return readl(mc->regs + offset);
+diff --git a/drivers/memory/tegra/tegra114.c b/drivers/memory/tegra/tegra114.c
+index b20e6e3e208e..7560b2f558a7 100644
+--- a/drivers/memory/tegra/tegra114.c
++++ b/drivers/memory/tegra/tegra114.c
+@@ -945,4 +945,6 @@ const struct tegra_mc_soc tegra114_mc_soc = {
+ 	.atom_size = 32,
+ 	.client_id_mask = 0x7f,
+ 	.smmu = &tegra114_smmu_soc,
++	.intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION |
++		   MC_INT_DECERR_EMEM,
+ };
+diff --git a/drivers/memory/tegra/tegra124.c b/drivers/memory/tegra/tegra124.c
+index 8b6360eabb8a..bd16555cca0f 100644
+--- a/drivers/memory/tegra/tegra124.c
++++ b/drivers/memory/tegra/tegra124.c
+@@ -1035,6 +1035,9 @@ const struct tegra_mc_soc tegra124_mc_soc = {
+ 	.smmu = &tegra124_smmu_soc,
+ 	.emem_regs = tegra124_mc_emem_regs,
+ 	.num_emem_regs = ARRAY_SIZE(tegra124_mc_emem_regs),
++	.intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
++		   MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
++		   MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
+ };
+ #endif /* CONFIG_ARCH_TEGRA_124_SOC */
+ 
+@@ -1059,5 +1062,8 @@ const struct tegra_mc_soc tegra132_mc_soc = {
+ 	.atom_size = 32,
+ 	.client_id_mask = 0x7f,
+ 	.smmu = &tegra132_smmu_soc,
++	.intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
++		   MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
++		   MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
+ };
+ #endif /* CONFIG_ARCH_TEGRA_132_SOC */
+diff --git a/drivers/memory/tegra/tegra210.c b/drivers/memory/tegra/tegra210.c
+index d398bcd3fc57..3b8d0100088c 100644
+--- a/drivers/memory/tegra/tegra210.c
++++ b/drivers/memory/tegra/tegra210.c
+@@ -1092,4 +1092,7 @@ const struct tegra_mc_soc tegra210_mc_soc = {
+ 	.atom_size = 64,
+ 	.client_id_mask = 0xff,
+ 	.smmu = &tegra210_smmu_soc,
++	.intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
++		   MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
++		   MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
+ };
+diff --git a/drivers/memory/tegra/tegra30.c b/drivers/memory/tegra/tegra30.c
+index d756c837f23e..d2ba50ed0490 100644
+--- a/drivers/memory/tegra/tegra30.c
++++ b/drivers/memory/tegra/tegra30.c
+@@ -967,4 +967,6 @@ const struct tegra_mc_soc tegra30_mc_soc = {
+ 	.atom_size = 16,
+ 	.client_id_mask = 0x7f,
+ 	.smmu = &tegra30_smmu_soc,
++	.intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION |
++		   MC_INT_DECERR_EMEM,
+ };
+diff --git a/drivers/mfd/cros_ec.c b/drivers/mfd/cros_ec.c
+index d61024141e2b..74780f2964a1 100644
+--- a/drivers/mfd/cros_ec.c
++++ b/drivers/mfd/cros_ec.c
+@@ -112,7 +112,11 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 
+ 	mutex_init(&ec_dev->lock);
+ 
+-	cros_ec_query_all(ec_dev);
++	err = cros_ec_query_all(ec_dev);
++	if (err) {
++		dev_err(dev, "Cannot identify the EC: error %d\n", err);
++		return err;
++	}
+ 
+ 	if (ec_dev->irq) {
+ 		err = request_threaded_irq(ec_dev->irq, NULL, ec_irq_thread,
+diff --git a/drivers/mmc/core/pwrseq_simple.c b/drivers/mmc/core/pwrseq_simple.c
+index 13ef162cf066..a8b9fee4d62a 100644
+--- a/drivers/mmc/core/pwrseq_simple.c
++++ b/drivers/mmc/core/pwrseq_simple.c
+@@ -40,14 +40,18 @@ static void mmc_pwrseq_simple_set_gpios_value(struct mmc_pwrseq_simple *pwrseq,
+ 	struct gpio_descs *reset_gpios = pwrseq->reset_gpios;
+ 
+ 	if (!IS_ERR(reset_gpios)) {
+-		int i;
+-		int values[reset_gpios->ndescs];
++		int i, *values;
++		int nvalues = reset_gpios->ndescs;
+ 
+-		for (i = 0; i < reset_gpios->ndescs; i++)
++		values = kmalloc_array(nvalues, sizeof(int), GFP_KERNEL);
++		if (!values)
++			return;
++
++		for (i = 0; i < nvalues; i++)
+ 			values[i] = value;
+ 
+-		gpiod_set_array_value_cansleep(
+-			reset_gpios->ndescs, reset_gpios->desc, values);
++		gpiod_set_array_value_cansleep(nvalues, reset_gpios->desc, values);
++		kfree(values);
+ 	}
+ }
+ 
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 3ee8f57fd612..80dc2fd6576c 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -1231,6 +1231,8 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
+ 	if (host->state == STATE_WAITING_CMD11_DONE)
+ 		sdmmc_cmd_bits |= SDMMC_CMD_VOLT_SWITCH;
+ 
++	slot->mmc->actual_clock = 0;
++
+ 	if (!clock) {
+ 		mci_writel(host, CLKENA, 0);
+ 		mci_send_cmd(slot, sdmmc_cmd_bits, 0);
+@@ -1289,6 +1291,8 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
+ 
+ 		/* keep the last clock value that was requested from core */
+ 		slot->__clk_old = clock;
++		slot->mmc->actual_clock = div ? ((host->bus_hz / div) >> 1) :
++					  host->bus_hz;
+ 	}
+ 
+ 	host->current_speed = clock;
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 1456abd5eeb9..e7e43f2ae224 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -916,10 +916,6 @@ static int sdhci_omap_probe(struct platform_device *pdev)
+ 		goto err_put_sync;
+ 	}
+ 
+-	ret = sdhci_omap_config_iodelay_pinctrl_state(omap_host);
+-	if (ret)
+-		goto err_put_sync;
+-
+ 	host->mmc_host_ops.get_ro = mmc_gpio_get_ro;
+ 	host->mmc_host_ops.start_signal_voltage_switch =
+ 					sdhci_omap_start_signal_voltage_switch;
+@@ -930,12 +926,23 @@ static int sdhci_omap_probe(struct platform_device *pdev)
+ 	sdhci_read_caps(host);
+ 	host->caps |= SDHCI_CAN_DO_ADMA2;
+ 
+-	ret = sdhci_add_host(host);
++	ret = sdhci_setup_host(host);
+ 	if (ret)
+ 		goto err_put_sync;
+ 
++	ret = sdhci_omap_config_iodelay_pinctrl_state(omap_host);
++	if (ret)
++		goto err_cleanup_host;
++
++	ret = __sdhci_add_host(host);
++	if (ret)
++		goto err_cleanup_host;
++
+ 	return 0;
+ 
++err_cleanup_host:
++	sdhci_cleanup_host(host);
++
+ err_put_sync:
+ 	pm_runtime_put_sync(dev);
+ 
+diff --git a/drivers/mtd/nand/raw/fsl_ifc_nand.c b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+index 61aae0224078..98aac1f2e9ae 100644
+--- a/drivers/mtd/nand/raw/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/raw/fsl_ifc_nand.c
+@@ -342,9 +342,16 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
+ 
+ 	case NAND_CMD_READID:
+ 	case NAND_CMD_PARAM: {
++		/*
++		 * For READID, read 8 bytes that are currently used.
++		 * For PARAM, read all 3 copies of 256-bytes pages.
++		 */
++		int len = 8;
+ 		int timing = IFC_FIR_OP_RB;
+-		if (command == NAND_CMD_PARAM)
++		if (command == NAND_CMD_PARAM) {
+ 			timing = IFC_FIR_OP_RBCD;
++			len = 256 * 3;
++		}
+ 
+ 		ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+ 			  (IFC_FIR_OP_UA  << IFC_NAND_FIR0_OP1_SHIFT) |
+@@ -354,12 +361,8 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
+ 			  &ifc->ifc_nand.nand_fcr0);
+ 		ifc_out32(column, &ifc->ifc_nand.row3);
+ 
+-		/*
+-		 * although currently it's 8 bytes for READID, we always read
+-		 * the maximum 256 bytes(for PARAM)
+-		 */
+-		ifc_out32(256, &ifc->ifc_nand.nand_fbcr);
+-		ifc_nand_ctrl->read_bytes = 256;
++		ifc_out32(len, &ifc->ifc_nand.nand_fbcr);
++		ifc_nand_ctrl->read_bytes = len;
+ 
+ 		set_addr(mtd, 0, 0, 0);
+ 		fsl_ifc_run_command(mtd);
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 600d5ad1fbde..18f51d5ac846 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -473,7 +473,7 @@ qca8k_set_pad_ctrl(struct qca8k_priv *priv, int port, int mode)
+ static void
+ qca8k_port_set_status(struct qca8k_priv *priv, int port, int enable)
+ {
+-	u32 mask = QCA8K_PORT_STATUS_TXMAC;
++	u32 mask = QCA8K_PORT_STATUS_TXMAC | QCA8K_PORT_STATUS_RXMAC;
+ 
+ 	/* Port 0 and 6 have no internal PHY */
+ 	if ((port > 0) && (port < 6))
+@@ -490,6 +490,7 @@ qca8k_setup(struct dsa_switch *ds)
+ {
+ 	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+ 	int ret, i, phy_mode = -1;
++	u32 mask;
+ 
+ 	/* Make sure that port 0 is the cpu port */
+ 	if (!dsa_is_cpu_port(ds, 0)) {
+@@ -515,7 +516,10 @@ qca8k_setup(struct dsa_switch *ds)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	/* Enable CPU Port */
++	/* Enable CPU Port, force it to maximum bandwidth and full-duplex */
++	mask = QCA8K_PORT_STATUS_SPEED_1000 | QCA8K_PORT_STATUS_TXFLOW |
++	       QCA8K_PORT_STATUS_RXFLOW | QCA8K_PORT_STATUS_DUPLEX;
++	qca8k_write(priv, QCA8K_REG_PORT_STATUS(QCA8K_CPU_PORT), mask);
+ 	qca8k_reg_set(priv, QCA8K_REG_GLOBAL_FW_CTRL0,
+ 		      QCA8K_GLOBAL_FW_CTRL0_CPU_PORT_EN);
+ 	qca8k_port_set_status(priv, QCA8K_CPU_PORT, 1);
+@@ -583,6 +587,47 @@ qca8k_setup(struct dsa_switch *ds)
+ 	return 0;
+ }
+ 
++static void
++qca8k_adjust_link(struct dsa_switch *ds, int port, struct phy_device *phy)
++{
++	struct qca8k_priv *priv = ds->priv;
++	u32 reg;
++
++	/* Force fixed-link setting for CPU port, skip others. */
++	if (!phy_is_pseudo_fixed_link(phy))
++		return;
++
++	/* Set port speed */
++	switch (phy->speed) {
++	case 10:
++		reg = QCA8K_PORT_STATUS_SPEED_10;
++		break;
++	case 100:
++		reg = QCA8K_PORT_STATUS_SPEED_100;
++		break;
++	case 1000:
++		reg = QCA8K_PORT_STATUS_SPEED_1000;
++		break;
++	default:
++		dev_dbg(priv->dev, "port%d link speed %dMbps not supported.\n",
++			port, phy->speed);
++		return;
++	}
++
++	/* Set duplex mode */
++	if (phy->duplex == DUPLEX_FULL)
++		reg |= QCA8K_PORT_STATUS_DUPLEX;
++
++	/* Force flow control */
++	if (dsa_is_cpu_port(ds, port))
++		reg |= QCA8K_PORT_STATUS_RXFLOW | QCA8K_PORT_STATUS_TXFLOW;
++
++	/* Force link down before changing MAC options */
++	qca8k_port_set_status(priv, port, 0);
++	qca8k_write(priv, QCA8K_REG_PORT_STATUS(port), reg);
++	qca8k_port_set_status(priv, port, 1);
++}
++
+ static int
+ qca8k_phy_read(struct dsa_switch *ds, int phy, int regnum)
+ {
+@@ -831,6 +876,7 @@ qca8k_get_tag_protocol(struct dsa_switch *ds, int port)
+ static const struct dsa_switch_ops qca8k_switch_ops = {
+ 	.get_tag_protocol	= qca8k_get_tag_protocol,
+ 	.setup			= qca8k_setup,
++	.adjust_link            = qca8k_adjust_link,
+ 	.get_strings		= qca8k_get_strings,
+ 	.phy_read		= qca8k_phy_read,
+ 	.phy_write		= qca8k_phy_write,
+@@ -862,6 +908,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
+ 		return -ENOMEM;
+ 
+ 	priv->bus = mdiodev->bus;
++	priv->dev = &mdiodev->dev;
+ 
+ 	/* read the switches ID register */
+ 	id = qca8k_read(priv, QCA8K_REG_MASK_CTRL);
+@@ -933,6 +980,7 @@ static SIMPLE_DEV_PM_OPS(qca8k_pm_ops,
+ 			 qca8k_suspend, qca8k_resume);
+ 
+ static const struct of_device_id qca8k_of_match[] = {
++	{ .compatible = "qca,qca8334" },
+ 	{ .compatible = "qca,qca8337" },
+ 	{ /* sentinel */ },
+ };
+diff --git a/drivers/net/dsa/qca8k.h b/drivers/net/dsa/qca8k.h
+index 1cf8a920d4ff..613fe5c50236 100644
+--- a/drivers/net/dsa/qca8k.h
++++ b/drivers/net/dsa/qca8k.h
+@@ -51,8 +51,10 @@
+ #define QCA8K_GOL_MAC_ADDR0				0x60
+ #define QCA8K_GOL_MAC_ADDR1				0x64
+ #define QCA8K_REG_PORT_STATUS(_i)			(0x07c + (_i) * 4)
+-#define   QCA8K_PORT_STATUS_SPEED			GENMASK(2, 0)
+-#define   QCA8K_PORT_STATUS_SPEED_S			0
++#define   QCA8K_PORT_STATUS_SPEED			GENMASK(1, 0)
++#define   QCA8K_PORT_STATUS_SPEED_10			0
++#define   QCA8K_PORT_STATUS_SPEED_100			0x1
++#define   QCA8K_PORT_STATUS_SPEED_1000			0x2
+ #define   QCA8K_PORT_STATUS_TXMAC			BIT(2)
+ #define   QCA8K_PORT_STATUS_RXMAC			BIT(3)
+ #define   QCA8K_PORT_STATUS_TXFLOW			BIT(4)
+@@ -165,6 +167,7 @@ struct qca8k_priv {
+ 	struct ar8xxx_port_status port_sts[QCA8K_NUM_PORTS];
+ 	struct dsa_switch *ds;
+ 	struct mutex reg_mutex;
++	struct device *dev;
+ };
+ 
+ struct qca8k_mib_desc {
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index 1b9d3130af4d..17f12c18d225 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -333,6 +333,7 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
+ 
+ 	memset(&io_sq->desc_addr, 0x0, sizeof(io_sq->desc_addr));
+ 
++	io_sq->dma_addr_bits = ena_dev->dma_addr_bits;
+ 	io_sq->desc_entry_size =
+ 		(io_sq->direction == ENA_COM_IO_QUEUE_DIRECTION_TX) ?
+ 		sizeof(struct ena_eth_io_tx_desc) :
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 1b45cd73a258..119777986ea4 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1128,14 +1128,14 @@ static void xgbe_phy_adjust_link(struct xgbe_prv_data *pdata)
+ 
+ 		if (pdata->tx_pause != pdata->phy.tx_pause) {
+ 			new_state = 1;
+-			pdata->hw_if.config_tx_flow_control(pdata);
+ 			pdata->tx_pause = pdata->phy.tx_pause;
++			pdata->hw_if.config_tx_flow_control(pdata);
+ 		}
+ 
+ 		if (pdata->rx_pause != pdata->phy.rx_pause) {
+ 			new_state = 1;
+-			pdata->hw_if.config_rx_flow_control(pdata);
+ 			pdata->rx_pause = pdata->phy.rx_pause;
++			pdata->hw_if.config_rx_flow_control(pdata);
+ 		}
+ 
+ 		/* Speed support */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index f83769d8047b..401e58939795 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6457,6 +6457,9 @@ static int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
+ 	}
+ 	mutex_unlock(&bp->hwrm_cmd_lock);
+ 
++	if (!BNXT_SINGLE_PF(bp))
++		return 0;
++
+ 	diff = link_info->support_auto_speeds ^ link_info->advertising;
+ 	if ((link_info->support_auto_speeds | diff) !=
+ 	    link_info->support_auto_speeds) {
+@@ -8614,8 +8617,8 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
+ 			memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN);
+ 		} else {
+ 			eth_hw_addr_random(bp->dev);
+-			rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
+ 		}
++		rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
+ #endif
+ 	}
+ 	return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index f952963d594e..e1f025b2a6bc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -914,7 +914,8 @@ static int bnxt_vf_configure_mac(struct bnxt *bp, struct bnxt_vf_info *vf)
+ 	if (req->enables & cpu_to_le32(FUNC_VF_CFG_REQ_ENABLES_DFLT_MAC_ADDR)) {
+ 		if (is_valid_ether_addr(req->dflt_mac_addr) &&
+ 		    ((vf->flags & BNXT_VF_TRUST) ||
+-		     (!is_valid_ether_addr(vf->mac_addr)))) {
++		     !is_valid_ether_addr(vf->mac_addr) ||
++		     ether_addr_equal(req->dflt_mac_addr, vf->mac_addr))) {
+ 			ether_addr_copy(vf->vf_mac_addr, req->dflt_mac_addr);
+ 			return bnxt_hwrm_exec_fwd_resp(bp, vf, msg_size);
+ 		}
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 005283c7cdfe..72c83496e01f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -3066,6 +3066,7 @@ static void cxgb_del_udp_tunnel(struct net_device *netdev,
+ 
+ 		adapter->geneve_port = 0;
+ 		t4_write_reg(adapter, MPS_RX_GENEVE_TYPE_A, 0);
++		break;
+ 	default:
+ 		return;
+ 	}
+@@ -3151,6 +3152,7 @@ static void cxgb_add_udp_tunnel(struct net_device *netdev,
+ 
+ 		t4_write_reg(adapter, MPS_RX_GENEVE_TYPE_A,
+ 			     GENEVE_V(be16_to_cpu(ti->port)) | GENEVE_EN_F);
++		break;
+ 	default:
+ 		return;
+ 	}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index 02145f2de820..618eec654bd3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -283,3 +283,4 @@ EXPORT_SYMBOL(hnae3_unregister_ae_dev);
+ MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("HNAE3(Hisilicon Network Acceleration Engine) Framework");
++MODULE_VERSION(HNAE3_MOD_VERSION);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 37ec1b3286c6..67ed70fc3f0a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -36,6 +36,8 @@
+ #include <linux/pci.h>
+ #include <linux/types.h>
+ 
++#define HNAE3_MOD_VERSION "1.0"
++
+ /* Device IDs */
+ #define HNAE3_DEV_ID_GE				0xA220
+ #define HNAE3_DEV_ID_25GE			0xA221
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 8c55965a66ac..c23ba15d5e8f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1836,6 +1836,7 @@ static void hns3_replace_buffer(struct hns3_enet_ring *ring, int i,
+ 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
+ 	ring->desc_cb[i] = *res_cb;
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma);
++	ring->desc[i].rx.bd_base_info = 0;
+ }
+ 
+ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
+@@ -1843,6 +1844,7 @@ static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
+ 	ring->desc_cb[i].reuse_flag = 0;
+ 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma
+ 		+ ring->desc_cb[i].page_offset);
++	ring->desc[i].rx.bd_base_info = 0;
+ }
+ 
+ static void hns3_nic_reclaim_one_desc(struct hns3_enet_ring *ring, int *bytes,
+@@ -3600,6 +3602,8 @@ static int __init hns3_init_module(void)
+ 
+ 	client.ops = &client_ops;
+ 
++	INIT_LIST_HEAD(&client.node);
++
+ 	ret = hnae3_register_client(&client);
+ 	if (ret)
+ 		return ret;
+@@ -3627,3 +3631,4 @@ MODULE_DESCRIPTION("HNS3: Hisilicon Ethernet Driver");
+ MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
+ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("pci:hns-nic");
++MODULE_VERSION(HNS3_MOD_VERSION);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 98cdbd3a1163..5b40f5a53761 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -14,6 +14,8 @@
+ 
+ #include "hnae3.h"
+ 
++#define HNS3_MOD_VERSION "1.0"
++
+ extern const char hns3_driver_version[];
+ 
+ enum hns3_nic_state {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 2066dd734444..553eaa476b19 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1459,8 +1459,11 @@ static int hclge_alloc_vport(struct hclge_dev *hdev)
+ 	/* We need to alloc a vport for main NIC of PF */
+ 	num_vport = hdev->num_vmdq_vport + hdev->num_req_vfs + 1;
+ 
+-	if (hdev->num_tqps < num_vport)
+-		num_vport = hdev->num_tqps;
++	if (hdev->num_tqps < num_vport) {
++		dev_err(&hdev->pdev->dev, "tqps(%d) is less than vports(%d)",
++			hdev->num_tqps, num_vport);
++		return -EINVAL;
++	}
+ 
+ 	/* Alloc the same number of TQPs for every vport */
+ 	tqp_per_vport = hdev->num_tqps / num_vport;
+@@ -3783,13 +3786,11 @@ static int hclge_ae_start(struct hnae3_handle *handle)
+ 	hclge_cfg_mac_mode(hdev, true);
+ 	clear_bit(HCLGE_STATE_DOWN, &hdev->state);
+ 	mod_timer(&hdev->service_timer, jiffies + HZ);
++	hdev->hw.mac.link = 0;
+ 
+ 	/* reset tqp stats */
+ 	hclge_reset_tqp_stats(handle);
+ 
+-	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
+-		return 0;
+-
+ 	ret = hclge_mac_start_phy(hdev);
+ 	if (ret)
+ 		return ret;
+@@ -3805,9 +3806,12 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 
+ 	del_timer_sync(&hdev->service_timer);
+ 	cancel_work_sync(&hdev->service_task);
++	clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state);
+ 
+-	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state))
++	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) {
++		hclge_mac_stop_phy(hdev);
+ 		return;
++	}
+ 
+ 	for (i = 0; i < vport->alloc_tqps; i++)
+ 		hclge_tqp_enable(hdev, i, 0, false);
+@@ -3819,7 +3823,6 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 
+ 	/* reset tqp stats */
+ 	hclge_reset_tqp_stats(handle);
+-	hclge_update_link_status(hdev);
+ }
+ 
+ static int hclge_get_mac_vlan_cmd_status(struct hclge_vport *vport,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 0f4157e71282..7c88b65353cc 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -15,7 +15,7 @@
+ #include "hclge_cmd.h"
+ #include "hnae3.h"
+ 
+-#define HCLGE_MOD_VERSION "v1.0"
++#define HCLGE_MOD_VERSION "1.0"
+ #define HCLGE_DRIVER_NAME "hclge"
+ 
+ #define HCLGE_INVALID_VPORT 0xffff
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 2b8426412cc9..7a6510314657 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1323,6 +1323,7 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
+ 	hclgevf_reset_tqp_stats(handle);
+ 	del_timer_sync(&hdev->service_timer);
+ 	cancel_work_sync(&hdev->service_task);
++	clear_bit(HCLGEVF_STATE_SERVICE_SCHED, &hdev->state);
+ 	hclgevf_update_link_status(hdev, 0);
+ }
+ 
+@@ -1441,6 +1442,8 @@ static int hclgevf_misc_irq_init(struct hclgevf_dev *hdev)
+ 		return ret;
+ 	}
+ 
++	hclgevf_clear_event_cause(hdev, 0);
++
+ 	/* enable misc. vector(vector 0) */
+ 	hclgevf_enable_vector(&hdev->misc_vector, true);
+ 
+@@ -1451,6 +1454,7 @@ static void hclgevf_misc_irq_uninit(struct hclgevf_dev *hdev)
+ {
+ 	/* disable misc vector(vector 0) */
+ 	hclgevf_enable_vector(&hdev->misc_vector, false);
++	synchronize_irq(hdev->misc_vector.vector_irq);
+ 	free_irq(hdev->misc_vector.vector_irq, hdev);
+ 	hclgevf_free_vector(hdev, 0);
+ }
+@@ -1489,10 +1493,12 @@ static int hclgevf_init_instance(struct hclgevf_dev *hdev,
+ 			return ret;
+ 		break;
+ 	case HNAE3_CLIENT_ROCE:
+-		hdev->roce_client = client;
+-		hdev->roce.client = client;
++		if (hnae3_dev_roce_supported(hdev)) {
++			hdev->roce_client = client;
++			hdev->roce.client = client;
++		}
+ 
+-		if (hdev->roce_client && hnae3_dev_roce_supported(hdev)) {
++		if (hdev->roce_client && hdev->nic_client) {
+ 			ret = hclgevf_init_roce_base_info(hdev);
+ 			if (ret)
+ 				return ret;
+@@ -1625,6 +1631,10 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ 
+ 	hclgevf_state_init(hdev);
+ 
++	ret = hclgevf_cmd_init(hdev);
++	if (ret)
++		goto err_cmd_init;
++
+ 	ret = hclgevf_misc_irq_init(hdev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed(%d) to init Misc IRQ(vector0)\n",
+@@ -1632,10 +1642,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ 		goto err_misc_irq_init;
+ 	}
+ 
+-	ret = hclgevf_cmd_init(hdev);
+-	if (ret)
+-		goto err_cmd_init;
+-
+ 	ret = hclgevf_configure(hdev);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed(%d) to fetch configuration\n", ret);
+@@ -1683,10 +1689,10 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ 	return 0;
+ 
+ err_config:
+-	hclgevf_cmd_uninit(hdev);
+-err_cmd_init:
+ 	hclgevf_misc_irq_uninit(hdev);
+ err_misc_irq_init:
++	hclgevf_cmd_uninit(hdev);
++err_cmd_init:
+ 	hclgevf_state_uninit(hdev);
+ 	hclgevf_uninit_msi(hdev);
+ err_irq_init:
+@@ -1696,9 +1702,9 @@ err_irq_init:
+ 
+ static void hclgevf_uninit_hdev(struct hclgevf_dev *hdev)
+ {
+-	hclgevf_cmd_uninit(hdev);
+-	hclgevf_misc_irq_uninit(hdev);
+ 	hclgevf_state_uninit(hdev);
++	hclgevf_misc_irq_uninit(hdev);
++	hclgevf_cmd_uninit(hdev);
+ 	hclgevf_uninit_msi(hdev);
+ 	hclgevf_pci_uninit(hdev);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index a477a7c36bbd..9763e742e6fb 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -9,7 +9,7 @@
+ #include "hclgevf_cmd.h"
+ #include "hnae3.h"
+ 
+-#define HCLGEVF_MOD_VERSION "v1.0"
++#define HCLGEVF_MOD_VERSION "1.0"
+ #define HCLGEVF_DRIVER_NAME "hclgevf"
+ 
+ #define HCLGEVF_ROCEE_VECTOR_NUM	0
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index ec4a9759a6f2..3afb1f3b6f91 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -3546,15 +3546,12 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
+ 		}
+ 		break;
+ 	case e1000_pch_spt:
+-		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
+-			/* Stable 24MHz frequency */
+-			incperiod = INCPERIOD_24MHZ;
+-			incvalue = INCVALUE_24MHZ;
+-			shift = INCVALUE_SHIFT_24MHZ;
+-			adapter->cc.shift = shift;
+-			break;
+-		}
+-		return -EINVAL;
++		/* Stable 24MHz frequency */
++		incperiod = INCPERIOD_24MHZ;
++		incvalue = INCVALUE_24MHZ;
++		shift = INCVALUE_SHIFT_24MHZ;
++		adapter->cc.shift = shift;
++		break;
+ 	case e1000_pch_cnp:
+ 		if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) {
+ 			/* Stable 24MHz frequency */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
+index a44139c1de80..12ba0b9f238b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e.h
++++ b/drivers/net/ethernet/intel/i40e/i40e.h
+@@ -608,7 +608,7 @@ struct i40e_pf {
+ 	unsigned long ptp_tx_start;
+ 	struct hwtstamp_config tstamp_config;
+ 	struct mutex tmreg_lock; /* Used to protect the SYSTIME registers. */
+-	u64 ptp_base_adj;
++	u32 ptp_adj_mult;
+ 	u32 tx_hwtstamp_timeouts;
+ 	u32 tx_hwtstamp_skipped;
+ 	u32 rx_hwtstamp_cleared;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index b974482ff630..c5e3d5f406ec 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -977,7 +977,9 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
+ 	    ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 						  10000baseCR_Full) ||
+ 	    ethtool_link_ksettings_test_link_mode(ks, advertising,
+-						  10000baseSR_Full))
++						  10000baseSR_Full) ||
++	    ethtool_link_ksettings_test_link_mode(ks, advertising,
++						  10000baseLR_Full))
+ 		config.link_speed |= I40E_LINK_SPEED_10GB;
+ 	if (ethtool_link_ksettings_test_link_mode(ks, advertising,
+ 						  20000baseKR2_Full))
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+index 5b47dd1f75a5..1a8e0cad787f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+@@ -40,9 +40,9 @@
+  * At 1Gb link, the period is multiplied by 20. (32ns)
+  * 1588 functionality is not supported at 100Mbps.
+  */
+-#define I40E_PTP_40GB_INCVAL 0x0199999999ULL
+-#define I40E_PTP_10GB_INCVAL 0x0333333333ULL
+-#define I40E_PTP_1GB_INCVAL  0x2000000000ULL
++#define I40E_PTP_40GB_INCVAL		0x0199999999ULL
++#define I40E_PTP_10GB_INCVAL_MULT	2
++#define I40E_PTP_1GB_INCVAL_MULT	20
+ 
+ #define I40E_PRTTSYN_CTL1_TSYNTYPE_V1  BIT(I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT)
+ #define I40E_PRTTSYN_CTL1_TSYNTYPE_V2  (2 << \
+@@ -130,17 +130,24 @@ static int i40e_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+ 		ppb = -ppb;
+ 	}
+ 
+-	smp_mb(); /* Force any pending update before accessing. */
+-	adj = READ_ONCE(pf->ptp_base_adj);
+-
+-	freq = adj;
++	freq = I40E_PTP_40GB_INCVAL;
+ 	freq *= ppb;
+ 	diff = div_u64(freq, 1000000000ULL);
+ 
+ 	if (neg_adj)
+-		adj -= diff;
++		adj = I40E_PTP_40GB_INCVAL - diff;
+ 	else
+-		adj += diff;
++		adj = I40E_PTP_40GB_INCVAL + diff;
++
++	/* At some link speeds, the base incval is so large that directly
++	 * multiplying by ppb would result in arithmetic overflow even when
++	 * using a u64. Avoid this by instead calculating the new incval
++	 * always in terms of the 40GbE clock rate and then multiplying by the
++	 * link speed factor afterwards. This does result in slightly lower
++	 * precision at lower link speeds, but it is fairly minor.
++	 */
++	smp_mb(); /* Force any pending update before accessing. */
++	adj *= READ_ONCE(pf->ptp_adj_mult);
+ 
+ 	wr32(hw, I40E_PRTTSYN_INC_L, adj & 0xFFFFFFFF);
+ 	wr32(hw, I40E_PRTTSYN_INC_H, adj >> 32);
+@@ -338,6 +345,8 @@ void i40e_ptp_rx_hang(struct i40e_pf *pf)
+  **/
+ void i40e_ptp_tx_hang(struct i40e_pf *pf)
+ {
++	struct sk_buff *skb;
++
+ 	if (!(pf->flags & I40E_FLAG_PTP) || !pf->ptp_tx)
+ 		return;
+ 
+@@ -350,9 +359,12 @@ void i40e_ptp_tx_hang(struct i40e_pf *pf)
+ 	 * within a second it is reasonable to assume that we never will.
+ 	 */
+ 	if (time_is_before_jiffies(pf->ptp_tx_start + HZ)) {
+-		dev_kfree_skb_any(pf->ptp_tx_skb);
++		skb = pf->ptp_tx_skb;
+ 		pf->ptp_tx_skb = NULL;
+ 		clear_bit_unlock(__I40E_PTP_TX_IN_PROGRESS, pf->state);
++
++		/* Free the skb after we clear the bitlock */
++		dev_kfree_skb_any(skb);
+ 		pf->tx_hwtstamp_timeouts++;
+ 	}
+ }
+@@ -462,6 +474,7 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ 	struct i40e_link_status *hw_link_info;
+ 	struct i40e_hw *hw = &pf->hw;
+ 	u64 incval;
++	u32 mult;
+ 
+ 	hw_link_info = &hw->phy.link_info;
+ 
+@@ -469,10 +482,10 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ 
+ 	switch (hw_link_info->link_speed) {
+ 	case I40E_LINK_SPEED_10GB:
+-		incval = I40E_PTP_10GB_INCVAL;
++		mult = I40E_PTP_10GB_INCVAL_MULT;
+ 		break;
+ 	case I40E_LINK_SPEED_1GB:
+-		incval = I40E_PTP_1GB_INCVAL;
++		mult = I40E_PTP_1GB_INCVAL_MULT;
+ 		break;
+ 	case I40E_LINK_SPEED_100MB:
+ 	{
+@@ -483,15 +496,20 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ 				 "1588 functionality is not supported at 100 Mbps. Stopping the PHC.\n");
+ 			warn_once++;
+ 		}
+-		incval = 0;
++		mult = 0;
+ 		break;
+ 	}
+ 	case I40E_LINK_SPEED_40GB:
+ 	default:
+-		incval = I40E_PTP_40GB_INCVAL;
++		mult = 1;
+ 		break;
+ 	}
+ 
++	/* The increment value is calculated by taking the base 40GbE incvalue
++	 * and multiplying it by a factor based on the link speed.
++	 */
++	incval = I40E_PTP_40GB_INCVAL * mult;
++
+ 	/* Write the new increment value into the increment register. The
+ 	 * hardware will not update the clock until both registers have been
+ 	 * written.
+@@ -500,7 +518,7 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ 	wr32(hw, I40E_PRTTSYN_INC_H, incval >> 32);
+ 
+ 	/* Update the base adjustement value. */
+-	WRITE_ONCE(pf->ptp_base_adj, incval);
++	WRITE_ONCE(pf->ptp_adj_mult, mult);
+ 	smp_mb(); /* Force the above update. */
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index cce7ada89255..9afee130c2aa 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -8763,12 +8763,17 @@ static void igb_rar_set_index(struct igb_adapter *adapter, u32 index)
+ 		if (is_valid_ether_addr(addr))
+ 			rar_high |= E1000_RAH_AV;
+ 
+-		if (hw->mac.type == e1000_82575)
++		switch (hw->mac.type) {
++		case e1000_82575:
++		case e1000_i210:
+ 			rar_high |= E1000_RAH_POOL_1 *
+ 				    adapter->mac_table[index].queue;
+-		else
++			break;
++		default:
+ 			rar_high |= E1000_RAH_POOL_1 <<
+ 				    adapter->mac_table[index].queue;
++			break;
++		}
+ 	}
+ 
+ 	wr32(E1000_RAL(index), rar_low);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+index ed4cbe94c355..4da10b44b7d3 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+@@ -618,6 +618,14 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
+ 	}
+ 
+ #endif
++	/* To support macvlan offload we have to use num_tc to
++	 * restrict the queues that can be used by the device.
++	 * By doing this we can avoid reporting a false number of
++	 * queues.
++	 */
++	if (vmdq_i > 1)
++		netdev_set_num_tc(adapter->netdev, 1);
++
+ 	/* populate TC0 for use by pool 0 */
+ 	netdev_set_tc_queue(adapter->netdev, 0,
+ 			    adapter->num_rx_queues_per_pool, 0);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index a820a6cd831a..d91a5a59c71a 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -8875,14 +8875,6 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
+ 	} else {
+ 		netdev_reset_tc(dev);
+ 
+-		/* To support macvlan offload we have to use num_tc to
+-		 * restrict the queues that can be used by the device.
+-		 * By doing this we can avoid reporting a false number of
+-		 * queues.
+-		 */
+-		if (!tc && adapter->num_rx_pools > 1)
+-			netdev_set_num_tc(dev, 1);
+-
+ 		if (adapter->hw.mac.type == ixgbe_mac_82598EB)
+ 			adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 850f8af95e49..043b695d2a61 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -4187,6 +4187,7 @@ static int ixgbevf_set_mac(struct net_device *netdev, void *p)
+ 		return -EPERM;
+ 
+ 	ether_addr_copy(hw->mac.addr, addr->sa_data);
++	ether_addr_copy(hw->mac.perm_addr, addr->sa_data);
+ 	ether_addr_copy(netdev->dev_addr, addr->sa_data);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
+index 6f410235987c..3bc5690d8376 100644
+--- a/drivers/net/ethernet/marvell/mvpp2.c
++++ b/drivers/net/ethernet/marvell/mvpp2.c
+@@ -2109,6 +2109,9 @@ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
+ 				mvpp2_prs_sram_ai_update(&pe, 0,
+ 							MVPP2_PRS_SRAM_AI_MASK);
+ 
++			/* Set result info bits to 'single vlan' */
++			mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_VLAN_SINGLE,
++						 MVPP2_PRS_RI_VLAN_MASK);
+ 			/* If packet is tagged continue check vid filtering */
+ 			mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_VID);
+ 		} else {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 1904c0323d39..692855183187 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -5882,24 +5882,24 @@ static int mlxsw_sp_router_fib_rule_event(unsigned long event,
+ 	switch (info->family) {
+ 	case AF_INET:
+ 		if (!fib4_rule_default(rule) && !rule->l3mdev)
+-			err = -1;
++			err = -EOPNOTSUPP;
+ 		break;
+ 	case AF_INET6:
+ 		if (!fib6_rule_default(rule) && !rule->l3mdev)
+-			err = -1;
++			err = -EOPNOTSUPP;
+ 		break;
+ 	case RTNL_FAMILY_IPMR:
+ 		if (!ipmr_rule_default(rule) && !rule->l3mdev)
+-			err = -1;
++			err = -EOPNOTSUPP;
+ 		break;
+ 	case RTNL_FAMILY_IP6MR:
+ 		if (!ip6mr_rule_default(rule) && !rule->l3mdev)
+-			err = -1;
++			err = -EOPNOTSUPP;
+ 		break;
+ 	}
+ 
+ 	if (err < 0)
+-		NL_SET_ERR_MSG_MOD(extack, "FIB rules not supported. Aborting offload");
++		NL_SET_ERR_MSG_MOD(extack, "FIB rules not supported");
+ 
+ 	return err;
+ }
+@@ -5926,8 +5926,8 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb,
+ 	case FIB_EVENT_RULE_DEL:
+ 		err = mlxsw_sp_router_fib_rule_event(event, info,
+ 						     router->mlxsw_sp);
+-		if (!err)
+-			return NOTIFY_DONE;
++		if (!err || info->extack)
++			return notifier_from_errno(err);
+ 	}
+ 
+ 	fib_work = kzalloc(sizeof(*fib_work), GFP_ATOMIC);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 4ed01182a82c..0ae2da9d08c7 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -1013,8 +1013,10 @@ mlxsw_sp_port_vlan_bridge_join(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan,
+ 	int err;
+ 
+ 	/* No need to continue if only VLAN flags were changed */
+-	if (mlxsw_sp_port_vlan->bridge_port)
++	if (mlxsw_sp_port_vlan->bridge_port) {
++		mlxsw_sp_port_vlan_put(mlxsw_sp_port_vlan);
+ 		return 0;
++	}
+ 
+ 	err = mlxsw_sp_port_vlan_fid_join(mlxsw_sp_port_vlan, bridge_port);
+ 	if (err)
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index 59fbf74dcada..dd963cd255f0 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -1057,7 +1057,8 @@ static int netsec_netdev_load_microcode(struct netsec_priv *priv)
+ 	return 0;
+ }
+ 
+-static int netsec_reset_hardware(struct netsec_priv *priv)
++static int netsec_reset_hardware(struct netsec_priv *priv,
++				 bool load_ucode)
+ {
+ 	u32 value;
+ 	int err;
+@@ -1102,11 +1103,14 @@ static int netsec_reset_hardware(struct netsec_priv *priv)
+ 	netsec_write(priv, NETSEC_REG_NRM_RX_CONFIG,
+ 		     1 << NETSEC_REG_DESC_ENDIAN);
+ 
+-	err = netsec_netdev_load_microcode(priv);
+-	if (err) {
+-		netif_err(priv, probe, priv->ndev,
+-			  "%s: failed to load microcode (%d)\n", __func__, err);
+-		return err;
++	if (load_ucode) {
++		err = netsec_netdev_load_microcode(priv);
++		if (err) {
++			netif_err(priv, probe, priv->ndev,
++				  "%s: failed to load microcode (%d)\n",
++				  __func__, err);
++			return err;
++		}
+ 	}
+ 
+ 	/* start DMA engines */
+@@ -1328,6 +1332,7 @@ err1:
+ 
+ static int netsec_netdev_stop(struct net_device *ndev)
+ {
++	int ret;
+ 	struct netsec_priv *priv = netdev_priv(ndev);
+ 
+ 	netif_stop_queue(priv->ndev);
+@@ -1343,12 +1348,14 @@ static int netsec_netdev_stop(struct net_device *ndev)
+ 	netsec_uninit_pkt_dring(priv, NETSEC_RING_TX);
+ 	netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+ 
++	ret = netsec_reset_hardware(priv, false);
++
+ 	phy_stop(ndev->phydev);
+ 	phy_disconnect(ndev->phydev);
+ 
+ 	pm_runtime_put_sync(priv->dev);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int netsec_netdev_init(struct net_device *ndev)
+@@ -1364,7 +1371,7 @@ static int netsec_netdev_init(struct net_device *ndev)
+ 	if (ret)
+ 		goto err1;
+ 
+-	ret = netsec_reset_hardware(priv);
++	ret = netsec_reset_hardware(priv, true);
+ 	if (ret)
+ 		goto err2;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 1e1cc5256eca..57491da89140 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -51,7 +51,7 @@
+ #include <linux/of_mdio.h>
+ #include "dwmac1000.h"
+ 
+-#define STMMAC_ALIGN(x)	L1_CACHE_ALIGN(x)
++#define	STMMAC_ALIGN(x)		__ALIGN_KERNEL(x, SMP_CACHE_BYTES)
+ #define	TSO_MAX_BUFF_SIZE	(SZ_16K - 1)
+ 
+ /* Module parameters */
+diff --git a/drivers/net/ethernet/ti/cpsw-phy-sel.c b/drivers/net/ethernet/ti/cpsw-phy-sel.c
+index 18013645e76c..0c1adad7415d 100644
+--- a/drivers/net/ethernet/ti/cpsw-phy-sel.c
++++ b/drivers/net/ethernet/ti/cpsw-phy-sel.c
+@@ -177,12 +177,18 @@ void cpsw_phy_sel(struct device *dev, phy_interface_t phy_mode, int slave)
+ 	}
+ 
+ 	dev = bus_find_device(&platform_bus_type, NULL, node, match);
+-	of_node_put(node);
++	if (!dev) {
++		dev_err(dev, "unable to find platform device for %pOF\n", node);
++		goto out;
++	}
++
+ 	priv = dev_get_drvdata(dev);
+ 
+ 	priv->cpsw_phy_sel(priv, phy_mode, slave);
+ 
+ 	put_device(dev);
++out:
++	of_node_put(node);
+ }
+ EXPORT_SYMBOL_GPL(cpsw_phy_sel);
+ 
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index eaeee3201e8f..37096bf29033 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -738,6 +738,8 @@ struct net_device_context {
+ 	struct hv_device *device_ctx;
+ 	/* netvsc_device */
+ 	struct netvsc_device __rcu *nvdev;
++	/* list of netvsc net_devices */
++	struct list_head list;
+ 	/* reconfigure work */
+ 	struct delayed_work dwork;
+ 	/* last reconfig time */
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 82c3c8e200f0..adc176943d94 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -69,6 +69,8 @@ static int debug = -1;
+ module_param(debug, int, 0444);
+ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
+ 
++static LIST_HEAD(netvsc_dev_list);
++
+ static void netvsc_change_rx_flags(struct net_device *net, int change)
+ {
+ 	struct net_device_context *ndev_ctx = netdev_priv(net);
+@@ -1779,13 +1781,10 @@ out_unlock:
+ 
+ static struct net_device *get_netvsc_bymac(const u8 *mac)
+ {
+-	struct net_device *dev;
+-
+-	ASSERT_RTNL();
++	struct net_device_context *ndev_ctx;
+ 
+-	for_each_netdev(&init_net, dev) {
+-		if (dev->netdev_ops != &device_ops)
+-			continue;	/* not a netvsc device */
++	list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
++		struct net_device *dev = hv_get_drvdata(ndev_ctx->device_ctx);
+ 
+ 		if (ether_addr_equal(mac, dev->perm_addr))
+ 			return dev;
+@@ -1796,25 +1795,18 @@ static struct net_device *get_netvsc_bymac(const u8 *mac)
+ 
+ static struct net_device *get_netvsc_byref(struct net_device *vf_netdev)
+ {
++	struct net_device_context *net_device_ctx;
+ 	struct net_device *dev;
+ 
+-	ASSERT_RTNL();
+-
+-	for_each_netdev(&init_net, dev) {
+-		struct net_device_context *net_device_ctx;
++	dev = netdev_master_upper_dev_get(vf_netdev);
++	if (!dev || dev->netdev_ops != &device_ops)
++		return NULL;	/* not a netvsc device */
+ 
+-		if (dev->netdev_ops != &device_ops)
+-			continue;	/* not a netvsc device */
++	net_device_ctx = netdev_priv(dev);
++	if (!rtnl_dereference(net_device_ctx->nvdev))
++		return NULL;	/* device is removed */
+ 
+-		net_device_ctx = netdev_priv(dev);
+-		if (!rtnl_dereference(net_device_ctx->nvdev))
+-			continue;	/* device is removed */
+-
+-		if (rtnl_dereference(net_device_ctx->vf_netdev) == vf_netdev)
+-			return dev;	/* a match */
+-	}
+-
+-	return NULL;
++	return dev;
+ }
+ 
+ /* Called when VF is injecting data into network stack.
+@@ -2094,15 +2086,19 @@ static int netvsc_probe(struct hv_device *dev,
+ 	else
+ 		net->max_mtu = ETH_DATA_LEN;
+ 
+-	ret = register_netdev(net);
++	rtnl_lock();
++	ret = register_netdevice(net);
+ 	if (ret != 0) {
+ 		pr_err("Unable to register netdev.\n");
+ 		goto register_failed;
+ 	}
+ 
+-	return ret;
++	list_add(&net_device_ctx->list, &netvsc_dev_list);
++	rtnl_unlock();
++	return 0;
+ 
+ register_failed:
++	rtnl_unlock();
+ 	rndis_filter_device_remove(dev, nvdev);
+ rndis_failed:
+ 	free_percpu(net_device_ctx->vf_stats);
+@@ -2148,6 +2144,7 @@ static int netvsc_remove(struct hv_device *dev)
+ 		rndis_filter_device_remove(dev, nvdev);
+ 
+ 	unregister_netdevice(net);
++	list_del(&ndev_ctx->list);
+ 
+ 	rtnl_unlock();
+ 	rcu_read_unlock();
+diff --git a/drivers/net/netdevsim/devlink.c b/drivers/net/netdevsim/devlink.c
+index bef7db5d129a..82f0e2663e1a 100644
+--- a/drivers/net/netdevsim/devlink.c
++++ b/drivers/net/netdevsim/devlink.c
+@@ -206,6 +206,7 @@ void nsim_devlink_teardown(struct netdevsim *ns)
+ 		struct net *net = nsim_to_net(ns);
+ 		bool *reg_devlink = net_generic(net, nsim_devlink_id);
+ 
++		devlink_resources_unregister(ns->devlink, NULL);
+ 		devlink_unregister(ns->devlink);
+ 		devlink_free(ns->devlink);
+ 		ns->devlink = NULL;
+diff --git a/drivers/net/phy/mdio-mux-bcm-iproc.c b/drivers/net/phy/mdio-mux-bcm-iproc.c
+index 0831b7142df7..0c5b68e7da51 100644
+--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
++++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
+@@ -218,7 +218,7 @@ out:
+ 
+ static int mdio_mux_iproc_remove(struct platform_device *pdev)
+ {
+-	struct iproc_mdiomux_desc *md = dev_get_platdata(&pdev->dev);
++	struct iproc_mdiomux_desc *md = platform_get_drvdata(pdev);
+ 
+ 	mdio_mux_uninit(md->mux_handle);
+ 	mdiobus_unregister(md->mii_bus);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index c582b2d7546c..18ee7546e4a8 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -612,6 +612,8 @@ void phylink_destroy(struct phylink *pl)
+ {
+ 	if (pl->sfp_bus)
+ 		sfp_unregister_upstream(pl->sfp_bus);
++	if (!IS_ERR(pl->link_gpio))
++		gpiod_put(pl->link_gpio);
+ 
+ 	cancel_work_sync(&pl->resolve);
+ 	kfree(pl);
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index fd6c23f69c2f..d437f4f5ed52 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -132,6 +132,13 @@ void sfp_parse_support(struct sfp_bus *bus, const struct sfp_eeprom_id *id,
+ 			br_max = br_nom + br_nom * id->ext.br_min / 100;
+ 			br_min = br_nom - br_nom * id->ext.br_min / 100;
+ 		}
++
++		/* When using passive cables, in case neither BR,min nor BR,max
++		 * are specified, set br_min to 0 as the nominal value is then
++		 * used as the maximum.
++		 */
++		if (br_min == br_max && id->base.sfp_ct_passive)
++			br_min = 0;
+ 	}
+ 
+ 	/* Set ethtool support from the compliance fields. */
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 8a76c1e5de8d..838df4c2b17f 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1216,6 +1216,8 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 			mod_timer(&dev->stat_monitor,
+ 				  jiffies + STAT_UPDATE_TIMER);
+ 		}
++
++		tasklet_schedule(&dev->bh);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 11a3915e92e9..6bdf01ed07ab 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -551,7 +551,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ 				     struct receive_queue *rq,
+ 				     void *buf, void *ctx,
+ 				     unsigned int len,
+-				     unsigned int *xdp_xmit)
++				     unsigned int *xdp_xmit,
++				     unsigned int *rbytes)
+ {
+ 	struct sk_buff *skb;
+ 	struct bpf_prog *xdp_prog;
+@@ -567,6 +568,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ 	int err;
+ 
+ 	len -= vi->hdr_len;
++	*rbytes += len;
+ 
+ 	rcu_read_lock();
+ 	xdp_prog = rcu_dereference(rq->xdp_prog);
+@@ -666,11 +668,13 @@ static struct sk_buff *receive_big(struct net_device *dev,
+ 				   struct virtnet_info *vi,
+ 				   struct receive_queue *rq,
+ 				   void *buf,
+-				   unsigned int len)
++				   unsigned int len,
++				   unsigned int *rbytes)
+ {
+ 	struct page *page = buf;
+ 	struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE);
+ 
++	*rbytes += len - vi->hdr_len;
+ 	if (unlikely(!skb))
+ 		goto err;
+ 
+@@ -688,7 +692,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 					 void *buf,
+ 					 void *ctx,
+ 					 unsigned int len,
+-					 unsigned int *xdp_xmit)
++					 unsigned int *xdp_xmit,
++					 unsigned int *rbytes)
+ {
+ 	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
+ 	u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+@@ -702,6 +707,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 	int err;
+ 
+ 	head_skb = NULL;
++	*rbytes += len - vi->hdr_len;
+ 
+ 	rcu_read_lock();
+ 	xdp_prog = rcu_dereference(rq->xdp_prog);
+@@ -831,6 +837,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 			goto err_buf;
+ 		}
+ 
++		*rbytes += len;
+ 		page = virt_to_head_page(buf);
+ 
+ 		truesize = mergeable_ctx_to_truesize(ctx);
+@@ -886,6 +893,7 @@ err_skb:
+ 			dev->stats.rx_length_errors++;
+ 			break;
+ 		}
++		*rbytes += len;
+ 		page = virt_to_head_page(buf);
+ 		put_page(page);
+ 	}
+@@ -896,14 +904,13 @@ xdp_xmit:
+ 	return NULL;
+ }
+ 
+-static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+-		       void *buf, unsigned int len, void **ctx,
+-		       unsigned int *xdp_xmit)
++static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
++			void *buf, unsigned int len, void **ctx,
++			unsigned int *xdp_xmit, unsigned int *rbytes)
+ {
+ 	struct net_device *dev = vi->dev;
+ 	struct sk_buff *skb;
+ 	struct virtio_net_hdr_mrg_rxbuf *hdr;
+-	int ret;
+ 
+ 	if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
+ 		pr_debug("%s: short packet %i\n", dev->name, len);
+@@ -915,23 +922,22 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+ 		} else {
+ 			put_page(virt_to_head_page(buf));
+ 		}
+-		return 0;
++		return;
+ 	}
+ 
+ 	if (vi->mergeable_rx_bufs)
+-		skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit);
++		skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
++					rbytes);
+ 	else if (vi->big_packets)
+-		skb = receive_big(dev, vi, rq, buf, len);
++		skb = receive_big(dev, vi, rq, buf, len, rbytes);
+ 	else
+-		skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit);
++		skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, rbytes);
+ 
+ 	if (unlikely(!skb))
+-		return 0;
++		return;
+ 
+ 	hdr = skb_vnet_hdr(skb);
+ 
+-	ret = skb->len;
+-
+ 	if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID)
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 
+@@ -948,12 +954,11 @@ static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+ 		 ntohs(skb->protocol), skb->len, skb->pkt_type);
+ 
+ 	napi_gro_receive(&rq->napi, skb);
+-	return ret;
++	return;
+ 
+ frame_err:
+ 	dev->stats.rx_frame_errors++;
+ 	dev_kfree_skb(skb);
+-	return 0;
+ }
+ 
+ /* Unlike mergeable buffers, all buffers are allocated to the
+@@ -1203,13 +1208,13 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
+ 
+ 		while (received < budget &&
+ 		       (buf = virtqueue_get_buf_ctx(rq->vq, &len, &ctx))) {
+-			bytes += receive_buf(vi, rq, buf, len, ctx, xdp_xmit);
++			receive_buf(vi, rq, buf, len, ctx, xdp_xmit, &bytes);
+ 			received++;
+ 		}
+ 	} else {
+ 		while (received < budget &&
+ 		       (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
+-			bytes += receive_buf(vi, rq, buf, len, NULL, xdp_xmit);
++			receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &bytes);
+ 			received++;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 8a3020dbd4cf..50b52a9e9648 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -1253,14 +1253,61 @@ out:
+ 	return ret;
+ }
+ 
++static int ath10k_core_search_bd(struct ath10k *ar,
++				 const char *boardname,
++				 const u8 *data,
++				 size_t len)
++{
++	size_t ie_len;
++	struct ath10k_fw_ie *hdr;
++	int ret = -ENOENT, ie_id;
++
++	while (len > sizeof(struct ath10k_fw_ie)) {
++		hdr = (struct ath10k_fw_ie *)data;
++		ie_id = le32_to_cpu(hdr->id);
++		ie_len = le32_to_cpu(hdr->len);
++
++		len -= sizeof(*hdr);
++		data = hdr->data;
++
++		if (len < ALIGN(ie_len, 4)) {
++			ath10k_err(ar, "invalid length for board ie_id %d ie_len %zu len %zu\n",
++				   ie_id, ie_len, len);
++			return -EINVAL;
++		}
++
++		switch (ie_id) {
++		case ATH10K_BD_IE_BOARD:
++			ret = ath10k_core_parse_bd_ie_board(ar, data, ie_len,
++							    boardname);
++			if (ret == -ENOENT)
++				/* no match found, continue */
++				break;
++
++			/* either found or error, so stop searching */
++			goto out;
++		}
++
++		/* jump over the padding */
++		ie_len = ALIGN(ie_len, 4);
++
++		len -= ie_len;
++		data += ie_len;
++	}
++
++out:
++	/* return result of parse_bd_ie_board() or -ENOENT */
++	return ret;
++}
++
+ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
+ 					      const char *boardname,
++					      const char *fallback_boardname,
+ 					      const char *filename)
+ {
+-	size_t len, magic_len, ie_len;
+-	struct ath10k_fw_ie *hdr;
++	size_t len, magic_len;
+ 	const u8 *data;
+-	int ret, ie_id;
++	int ret;
+ 
+ 	ar->normal_mode_fw.board = ath10k_fetch_fw_file(ar,
+ 							ar->hw_params.fw.dir,
+@@ -1298,69 +1345,23 @@ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
+ 	data += magic_len;
+ 	len -= magic_len;
+ 
+-	while (len > sizeof(struct ath10k_fw_ie)) {
+-		hdr = (struct ath10k_fw_ie *)data;
+-		ie_id = le32_to_cpu(hdr->id);
+-		ie_len = le32_to_cpu(hdr->len);
+-
+-		len -= sizeof(*hdr);
+-		data = hdr->data;
+-
+-		if (len < ALIGN(ie_len, 4)) {
+-			ath10k_err(ar, "invalid length for board ie_id %d ie_len %zu len %zu\n",
+-				   ie_id, ie_len, len);
+-			ret = -EINVAL;
+-			goto err;
+-		}
++	/* attempt to find boardname in the IE list */
++	ret = ath10k_core_search_bd(ar, boardname, data, len);
+ 
+-		switch (ie_id) {
+-		case ATH10K_BD_IE_BOARD:
+-			ret = ath10k_core_parse_bd_ie_board(ar, data, ie_len,
+-							    boardname);
+-			if (ret == -ENOENT && ar->id.bdf_ext[0] != '\0') {
+-				/* try default bdf if variant was not found */
+-				char *s, *v = ",variant=";
+-				char boardname2[100];
+-
+-				strlcpy(boardname2, boardname,
+-					sizeof(boardname2));
+-
+-				s = strstr(boardname2, v);
+-				if (s)
+-					*s = '\0';  /* strip ",variant=%s" */
++	/* if we didn't find it and have a fallback name, try that */
++	if (ret == -ENOENT && fallback_boardname)
++		ret = ath10k_core_search_bd(ar, fallback_boardname, data, len);
+ 
+-				ret = ath10k_core_parse_bd_ie_board(ar, data,
+-								    ie_len,
+-								    boardname2);
+-			}
+-
+-			if (ret == -ENOENT)
+-				/* no match found, continue */
+-				break;
+-			else if (ret)
+-				/* there was an error, bail out */
+-				goto err;
+-
+-			/* board data found */
+-			goto out;
+-		}
+-
+-		/* jump over the padding */
+-		ie_len = ALIGN(ie_len, 4);
+-
+-		len -= ie_len;
+-		data += ie_len;
+-	}
+-
+-out:
+-	if (!ar->normal_mode_fw.board_data || !ar->normal_mode_fw.board_len) {
++	if (ret == -ENOENT) {
+ 		ath10k_err(ar,
+ 			   "failed to fetch board data for %s from %s/%s\n",
+ 			   boardname, ar->hw_params.fw.dir, filename);
+ 		ret = -ENODATA;
+-		goto err;
+ 	}
+ 
++	if (ret)
++		goto err;
++
+ 	return 0;
+ 
+ err:
+@@ -1369,12 +1370,12 @@ err:
+ }
+ 
+ static int ath10k_core_create_board_name(struct ath10k *ar, char *name,
+-					 size_t name_len)
++					 size_t name_len, bool with_variant)
+ {
+ 	/* strlen(',variant=') + strlen(ar->id.bdf_ext) */
+ 	char variant[9 + ATH10K_SMBIOS_BDF_EXT_STR_LENGTH] = { 0 };
+ 
+-	if (ar->id.bdf_ext[0] != '\0')
++	if (with_variant && ar->id.bdf_ext[0] != '\0')
+ 		scnprintf(variant, sizeof(variant), ",variant=%s",
+ 			  ar->id.bdf_ext);
+ 
+@@ -1400,17 +1401,26 @@ out:
+ 
+ static int ath10k_core_fetch_board_file(struct ath10k *ar)
+ {
+-	char boardname[100];
++	char boardname[100], fallback_boardname[100];
+ 	int ret;
+ 
+-	ret = ath10k_core_create_board_name(ar, boardname, sizeof(boardname));
++	ret = ath10k_core_create_board_name(ar, boardname,
++					    sizeof(boardname), true);
+ 	if (ret) {
+ 		ath10k_err(ar, "failed to create board name: %d", ret);
+ 		return ret;
+ 	}
+ 
++	ret = ath10k_core_create_board_name(ar, fallback_boardname,
++					    sizeof(boardname), false);
++	if (ret) {
++		ath10k_err(ar, "failed to create fallback board name: %d", ret);
++		return ret;
++	}
++
+ 	ar->bd_api = 2;
+ 	ret = ath10k_core_fetch_board_data_api_n(ar, boardname,
++						 fallback_boardname,
+ 						 ATH10K_BOARD_API2_FILE);
+ 	if (!ret)
+ 		goto success;
+diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
+index bac832ce1873..442a6f37e45e 100644
+--- a/drivers/net/wireless/ath/ath10k/debug.c
++++ b/drivers/net/wireless/ath/ath10k/debug.c
+@@ -1519,7 +1519,13 @@ static void ath10k_tpc_stats_print(struct ath10k_tpc_stats *tpc_stats,
+ 	*len += scnprintf(buf + *len, buf_len - *len,
+ 			  "********************************\n");
+ 	*len += scnprintf(buf + *len, buf_len - *len,
+-			  "No.  Preamble Rate_code tpc_value1 tpc_value2 tpc_value3\n");
++			  "No.  Preamble Rate_code ");
++
++	for (i = 0; i < WMI_TPC_TX_N_CHAIN; i++)
++		*len += scnprintf(buf + *len, buf_len - *len,
++				  "tpc_value%d ", i);
++
++	*len += scnprintf(buf + *len, buf_len - *len, "\n");
+ 
+ 	for (i = 0; i < tpc_stats->rate_max; i++) {
+ 		*len += scnprintf(buf + *len, buf_len - *len,
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index c5e1ca5945db..fcc5aa9f3357 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4479,6 +4479,12 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 
+ 	num_tx_chain = __le32_to_cpu(ev->num_tx_chain);
+ 
++	if (num_tx_chain > WMI_TPC_TX_N_CHAIN) {
++		ath10k_warn(ar, "number of tx chain is %d greater than TPC configured tx chain %d\n",
++			    num_tx_chain, WMI_TPC_TX_N_CHAIN);
++		return;
++	}
++
+ 	ath10k_wmi_tpc_config_get_rate_code(rate_code, pream_table,
+ 					    num_tx_chain);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 6fbc84c29521..7fde22ea2ffa 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -4008,9 +4008,9 @@ struct wmi_pdev_get_tpc_config_cmd {
+ } __packed;
+ 
+ #define WMI_TPC_CONFIG_PARAM		1
+-#define WMI_TPC_RATE_MAX		160
+ #define WMI_TPC_FINAL_RATE_MAX		240
+ #define WMI_TPC_TX_N_CHAIN		4
++#define WMI_TPC_RATE_MAX               (WMI_TPC_TX_N_CHAIN * 65)
+ #define WMI_TPC_PREAM_TABLE_MAX		10
+ #define WMI_TPC_FLAG			3
+ #define WMI_TPC_BUF_SIZE		10
+diff --git a/drivers/net/wireless/ath/regd.h b/drivers/net/wireless/ath/regd.h
+index 5d80be213fac..869f276cc1d8 100644
+--- a/drivers/net/wireless/ath/regd.h
++++ b/drivers/net/wireless/ath/regd.h
+@@ -68,12 +68,14 @@ enum CountryCode {
+ 	CTRY_AUSTRALIA = 36,
+ 	CTRY_AUSTRIA = 40,
+ 	CTRY_AZERBAIJAN = 31,
++	CTRY_BAHAMAS = 44,
+ 	CTRY_BAHRAIN = 48,
+ 	CTRY_BANGLADESH = 50,
+ 	CTRY_BARBADOS = 52,
+ 	CTRY_BELARUS = 112,
+ 	CTRY_BELGIUM = 56,
+ 	CTRY_BELIZE = 84,
++	CTRY_BERMUDA = 60,
+ 	CTRY_BOLIVIA = 68,
+ 	CTRY_BOSNIA_HERZ = 70,
+ 	CTRY_BRAZIL = 76,
+@@ -159,6 +161,7 @@ enum CountryCode {
+ 	CTRY_ROMANIA = 642,
+ 	CTRY_RUSSIA = 643,
+ 	CTRY_SAUDI_ARABIA = 682,
++	CTRY_SERBIA = 688,
+ 	CTRY_SERBIA_MONTENEGRO = 891,
+ 	CTRY_SINGAPORE = 702,
+ 	CTRY_SLOVAKIA = 703,
+@@ -170,11 +173,13 @@ enum CountryCode {
+ 	CTRY_SWITZERLAND = 756,
+ 	CTRY_SYRIA = 760,
+ 	CTRY_TAIWAN = 158,
++	CTRY_TANZANIA = 834,
+ 	CTRY_THAILAND = 764,
+ 	CTRY_TRINIDAD_Y_TOBAGO = 780,
+ 	CTRY_TUNISIA = 788,
+ 	CTRY_TURKEY = 792,
+ 	CTRY_UAE = 784,
++	CTRY_UGANDA = 800,
+ 	CTRY_UKRAINE = 804,
+ 	CTRY_UNITED_KINGDOM = 826,
+ 	CTRY_UNITED_STATES = 840,
+diff --git a/drivers/net/wireless/ath/regd_common.h b/drivers/net/wireless/ath/regd_common.h
+index bdd2b4d61f2f..15bbd1e0d912 100644
+--- a/drivers/net/wireless/ath/regd_common.h
++++ b/drivers/net/wireless/ath/regd_common.h
+@@ -35,6 +35,7 @@ enum EnumRd {
+ 	FRANCE_RES = 0x31,
+ 	FCC3_FCCA = 0x3A,
+ 	FCC3_WORLD = 0x3B,
++	FCC3_ETSIC = 0x3F,
+ 
+ 	ETSI1_WORLD = 0x37,
+ 	ETSI3_ETSIA = 0x32,
+@@ -44,6 +45,7 @@ enum EnumRd {
+ 	ETSI4_ETSIC = 0x38,
+ 	ETSI5_WORLD = 0x39,
+ 	ETSI6_WORLD = 0x34,
++	ETSI8_WORLD = 0x3D,
+ 	ETSI_RESERVED = 0x33,
+ 
+ 	MKK1_MKKA = 0x40,
+@@ -59,6 +61,7 @@ enum EnumRd {
+ 	MKK1_MKKA1 = 0x4A,
+ 	MKK1_MKKA2 = 0x4B,
+ 	MKK1_MKKC = 0x4C,
++	APL2_FCCA = 0x4D,
+ 
+ 	APL3_FCCA = 0x50,
+ 	APL1_WORLD = 0x52,
+@@ -67,6 +70,7 @@ enum EnumRd {
+ 	APL1_ETSIC = 0x55,
+ 	APL2_ETSIC = 0x56,
+ 	APL5_WORLD = 0x58,
++	APL13_WORLD = 0x5A,
+ 	APL6_WORLD = 0x5B,
+ 	APL7_FCCA = 0x5C,
+ 	APL8_WORLD = 0x5D,
+@@ -168,6 +172,7 @@ static struct reg_dmn_pair_mapping regDomainPairs[] = {
+ 	{FCC2_ETSIC, CTL_FCC, CTL_ETSI},
+ 	{FCC3_FCCA, CTL_FCC, CTL_FCC},
+ 	{FCC3_WORLD, CTL_FCC, CTL_ETSI},
++	{FCC3_ETSIC, CTL_FCC, CTL_ETSI},
+ 	{FCC4_FCCA, CTL_FCC, CTL_FCC},
+ 	{FCC5_FCCA, CTL_FCC, CTL_FCC},
+ 	{FCC6_FCCA, CTL_FCC, CTL_FCC},
+@@ -179,6 +184,7 @@ static struct reg_dmn_pair_mapping regDomainPairs[] = {
+ 	{ETSI4_WORLD, CTL_ETSI, CTL_ETSI},
+ 	{ETSI5_WORLD, CTL_ETSI, CTL_ETSI},
+ 	{ETSI6_WORLD, CTL_ETSI, CTL_ETSI},
++	{ETSI8_WORLD, CTL_ETSI, CTL_ETSI},
+ 
+ 	/* XXX: For ETSI3_ETSIA, Was NO_CTL meant for the 2 GHz band ? */
+ 	{ETSI3_ETSIA, CTL_ETSI, CTL_ETSI},
+@@ -188,9 +194,11 @@ static struct reg_dmn_pair_mapping regDomainPairs[] = {
+ 	{FCC1_FCCA, CTL_FCC, CTL_FCC},
+ 	{APL1_WORLD, CTL_FCC, CTL_ETSI},
+ 	{APL2_WORLD, CTL_FCC, CTL_ETSI},
++	{APL2_FCCA, CTL_FCC, CTL_FCC},
+ 	{APL3_WORLD, CTL_FCC, CTL_ETSI},
+ 	{APL4_WORLD, CTL_FCC, CTL_ETSI},
+ 	{APL5_WORLD, CTL_FCC, CTL_ETSI},
++	{APL13_WORLD, CTL_ETSI, CTL_ETSI},
+ 	{APL6_WORLD, CTL_ETSI, CTL_ETSI},
+ 	{APL8_WORLD, CTL_ETSI, CTL_ETSI},
+ 	{APL9_WORLD, CTL_ETSI, CTL_ETSI},
+@@ -298,6 +306,7 @@ static struct country_code_to_enum_rd allCountries[] = {
+ 	{CTRY_AUSTRALIA2, FCC6_WORLD, "AU"},
+ 	{CTRY_AUSTRIA, ETSI1_WORLD, "AT"},
+ 	{CTRY_AZERBAIJAN, ETSI4_WORLD, "AZ"},
++	{CTRY_BAHAMAS, FCC3_WORLD, "BS"},
+ 	{CTRY_BAHRAIN, APL6_WORLD, "BH"},
+ 	{CTRY_BANGLADESH, NULL1_WORLD, "BD"},
+ 	{CTRY_BARBADOS, FCC2_WORLD, "BB"},
+@@ -305,6 +314,7 @@ static struct country_code_to_enum_rd allCountries[] = {
+ 	{CTRY_BELGIUM, ETSI1_WORLD, "BE"},
+ 	{CTRY_BELGIUM2, ETSI4_WORLD, "BL"},
+ 	{CTRY_BELIZE, APL1_ETSIC, "BZ"},
++	{CTRY_BERMUDA, FCC3_FCCA, "BM"},
+ 	{CTRY_BOLIVIA, APL1_ETSIC, "BO"},
+ 	{CTRY_BOSNIA_HERZ, ETSI1_WORLD, "BA"},
+ 	{CTRY_BRAZIL, FCC3_WORLD, "BR"},
+@@ -444,6 +454,7 @@ static struct country_code_to_enum_rd allCountries[] = {
+ 	{CTRY_ROMANIA, NULL1_WORLD, "RO"},
+ 	{CTRY_RUSSIA, NULL1_WORLD, "RU"},
+ 	{CTRY_SAUDI_ARABIA, NULL1_WORLD, "SA"},
++	{CTRY_SERBIA, ETSI1_WORLD, "RS"},
+ 	{CTRY_SERBIA_MONTENEGRO, ETSI1_WORLD, "CS"},
+ 	{CTRY_SINGAPORE, APL6_WORLD, "SG"},
+ 	{CTRY_SLOVAKIA, ETSI1_WORLD, "SK"},
+@@ -455,10 +466,12 @@ static struct country_code_to_enum_rd allCountries[] = {
+ 	{CTRY_SWITZERLAND, ETSI1_WORLD, "CH"},
+ 	{CTRY_SYRIA, NULL1_WORLD, "SY"},
+ 	{CTRY_TAIWAN, APL3_FCCA, "TW"},
++	{CTRY_TANZANIA, APL1_WORLD, "TZ"},
+ 	{CTRY_THAILAND, FCC3_WORLD, "TH"},
+ 	{CTRY_TRINIDAD_Y_TOBAGO, FCC3_WORLD, "TT"},
+ 	{CTRY_TUNISIA, ETSI3_WORLD, "TN"},
+ 	{CTRY_TURKEY, ETSI3_WORLD, "TR"},
++	{CTRY_UGANDA, FCC3_WORLD, "UG"},
+ 	{CTRY_UKRAINE, NULL1_WORLD, "UA"},
+ 	{CTRY_UAE, NULL1_WORLD, "AE"},
+ 	{CTRY_UNITED_KINGDOM, ETSI1_WORLD, "GB"},
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index 0b68240ec7b4..a1915411c280 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -963,6 +963,7 @@ static const struct sdio_device_id brcmf_sdmmc_ids[] = {
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43340),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43341),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43362),
++ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43364),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4335_4339),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_4339),
+ 	BRCMF_SDIO_DEVICE(SDIO_DEVICE_ID_BROADCOM_43430),
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 90f8c89ea59c..2f7b9421410f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2652,7 +2652,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ 
+ 	mutex_lock(&mvm->mutex);
+ 	/* track whether or not the station is associated */
+-	mvm_sta->associated = new_state >= IEEE80211_STA_ASSOC;
++	mvm_sta->sta_state = new_state;
+ 
+ 	if (old_state == IEEE80211_STA_NOTEXIST &&
+ 	    new_state == IEEE80211_STA_NONE) {
+@@ -2704,8 +2704,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ 			iwl_mvm_mac_ctxt_changed(mvm, vif, false, NULL);
+ 		}
+ 
+-		iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
+-				     true);
++		iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band);
+ 		ret = iwl_mvm_update_sta(mvm, vif, sta);
+ 	} else if (old_state == IEEE80211_STA_ASSOC &&
+ 		   new_state == IEEE80211_STA_AUTHORIZED) {
+@@ -2721,8 +2720,7 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
+ 		/* enable beacon filtering */
+ 		WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
+ 
+-		iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
+-				     false);
++		iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band);
+ 
+ 		ret = 0;
+ 	} else if (old_state == IEEE80211_STA_AUTHORIZED &&
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 5d776ec1840f..36f27981165c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -3,6 +3,7 @@
+  * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of version 2 of the GNU General Public License as
+@@ -13,10 +14,6 @@
+  * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+  * more details.
+  *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+  * The full GNU General Public License is included in this distribution in the
+  * file called LICENSE.
+  *
+@@ -651,9 +648,10 @@ static void rs_tl_turn_on_agg(struct iwl_mvm *mvm, struct iwl_mvm_sta *mvmsta,
+ 	}
+ 
+ 	tid_data = &mvmsta->tid_data[tid];
+-	if ((tid_data->state == IWL_AGG_OFF) &&
++	if (mvmsta->sta_state >= IEEE80211_STA_AUTHORIZED &&
++	    tid_data->state == IWL_AGG_OFF &&
+ 	    (lq_sta->tx_agg_tid_en & BIT(tid)) &&
+-	    (tid_data->tx_count_last >= IWL_MVM_RS_AGG_START_THRESHOLD)) {
++	    tid_data->tx_count_last >= IWL_MVM_RS_AGG_START_THRESHOLD) {
+ 		IWL_DEBUG_RATE(mvm, "try to aggregate tid %d\n", tid);
+ 		if (rs_tl_turn_on_agg_for_tid(mvm, lq_sta, tid, sta) == 0)
+ 			tid_data->state = IWL_AGG_QUEUED;
+@@ -1257,7 +1255,7 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 		       (unsigned long)(lq_sta->last_tx +
+ 				       (IWL_MVM_RS_IDLE_TIMEOUT * HZ)))) {
+ 		IWL_DEBUG_RATE(mvm, "Tx idle for too long. reinit rs\n");
+-		iwl_mvm_rs_rate_init(mvm, sta, info->band, false);
++		iwl_mvm_rs_rate_init(mvm, sta, info->band);
+ 		return;
+ 	}
+ 	lq_sta->last_tx = jiffies;
+@@ -2684,9 +2682,9 @@ static void rs_get_initial_rate(struct iwl_mvm *mvm,
+ 				struct ieee80211_sta *sta,
+ 				struct iwl_lq_sta *lq_sta,
+ 				enum nl80211_band band,
+-				struct rs_rate *rate,
+-				bool init)
++				struct rs_rate *rate)
+ {
++	struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ 	int i, nentries;
+ 	unsigned long active_rate;
+ 	s8 best_rssi = S8_MIN;
+@@ -2748,7 +2746,8 @@ static void rs_get_initial_rate(struct iwl_mvm *mvm,
+ 		 * bandwidth rate, and after authorization, when the phy context
+ 		 * is already up-to-date, re-init rs with the correct bw.
+ 		 */
+-		u32 bw = init ? RATE_MCS_CHAN_WIDTH_20 : rs_bw_from_sta_bw(sta);
++		u32 bw = mvmsta->sta_state < IEEE80211_STA_AUTHORIZED ?
++				RATE_MCS_CHAN_WIDTH_20 : rs_bw_from_sta_bw(sta);
+ 
+ 		switch (bw) {
+ 		case RATE_MCS_CHAN_WIDTH_40:
+@@ -2833,9 +2832,9 @@ void rs_update_last_rssi(struct iwl_mvm *mvm,
+ static void rs_initialize_lq(struct iwl_mvm *mvm,
+ 			     struct ieee80211_sta *sta,
+ 			     struct iwl_lq_sta *lq_sta,
+-			     enum nl80211_band band,
+-			     bool init)
++			     enum nl80211_band band)
+ {
++	struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ 	struct iwl_scale_tbl_info *tbl;
+ 	struct rs_rate *rate;
+ 	u8 active_tbl = 0;
+@@ -2851,7 +2850,7 @@ static void rs_initialize_lq(struct iwl_mvm *mvm,
+ 	tbl = &(lq_sta->lq_info[active_tbl]);
+ 	rate = &tbl->rate;
+ 
+-	rs_get_initial_rate(mvm, sta, lq_sta, band, rate, init);
++	rs_get_initial_rate(mvm, sta, lq_sta, band, rate);
+ 	rs_init_optimal_rate(mvm, sta, lq_sta);
+ 
+ 	WARN_ONCE(rate->ant != ANT_A && rate->ant != ANT_B,
+@@ -2864,7 +2863,8 @@ static void rs_initialize_lq(struct iwl_mvm *mvm,
+ 	rs_set_expected_tpt_table(lq_sta, tbl);
+ 	rs_fill_lq_cmd(mvm, sta, lq_sta, rate);
+ 	/* TODO restore station should remember the lq cmd */
+-	iwl_mvm_send_lq_cmd(mvm, &lq_sta->lq, init);
++	iwl_mvm_send_lq_cmd(mvm, &lq_sta->lq,
++			    mvmsta->sta_state < IEEE80211_STA_AUTHORIZED);
+ }
+ 
+ static void rs_drv_get_rate(void *mvm_r, struct ieee80211_sta *sta,
+@@ -3117,7 +3117,7 @@ void iwl_mvm_update_frame_stats(struct iwl_mvm *mvm, u32 rate, bool agg)
+  * Called after adding a new station to initialize rate scaling
+  */
+ static void rs_drv_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+-			     enum nl80211_band band, bool init)
++			     enum nl80211_band band)
+ {
+ 	int i, j;
+ 	struct ieee80211_hw *hw = mvm->hw;
+@@ -3196,7 +3196,7 @@ static void rs_drv_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+ 	iwl_mvm_reset_frame_stats(mvm);
+ #endif
+-	rs_initialize_lq(mvm, sta, lq_sta, band, init);
++	rs_initialize_lq(mvm, sta, lq_sta, band);
+ }
+ 
+ static void rs_drv_rate_update(void *mvm_r,
+@@ -3216,7 +3216,7 @@ static void rs_drv_rate_update(void *mvm_r,
+ 	for (tid = 0; tid < IWL_MAX_TID_COUNT; tid++)
+ 		ieee80211_stop_tx_ba_session(sta, tid);
+ 
+-	iwl_mvm_rs_rate_init(mvm, sta, sband->band, false);
++	iwl_mvm_rs_rate_init(mvm, sta, sband->band);
+ }
+ 
+ #ifdef CONFIG_MAC80211_DEBUGFS
+@@ -4062,12 +4062,12 @@ static const struct rate_control_ops rs_mvm_ops_drv = {
+ };
+ 
+ void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+-			  enum nl80211_band band, bool init)
++			  enum nl80211_band band)
+ {
+ 	if (iwl_mvm_has_tlc_offload(mvm))
+ 		rs_fw_rate_init(mvm, sta, band);
+ 	else
+-		rs_drv_rate_init(mvm, sta, band, init);
++		rs_drv_rate_init(mvm, sta, band);
+ }
+ 
+ int iwl_mvm_rate_control_register(void)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+index fb18cb8c233d..f9b272236021 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+@@ -3,6 +3,7 @@
+  * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2017 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of version 2 of the GNU General Public License as
+@@ -13,10 +14,6 @@
+  * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+  * more details.
+  *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc.,
+- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
+- *
+  * The full GNU General Public License is included in this distribution in the
+  * file called LICENSE.
+  *
+@@ -410,7 +407,7 @@ struct iwl_lq_sta {
+ 
+ /* Initialize station's rate scaling information after adding station */
+ void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+-			  enum nl80211_band band, bool init);
++			  enum nl80211_band band);
+ 
+ /* Notify RS about Tx status */
+ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 80067eb9ea05..fdc8ba319c1f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -214,7 +214,7 @@ int iwl_mvm_sta_send_to_fw(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 		cpu_to_le32(agg_size << STA_FLG_MAX_AGG_SIZE_SHIFT);
+ 	add_sta_cmd.station_flags |=
+ 		cpu_to_le32(mpdu_dens << STA_FLG_AGG_MPDU_DENS_SHIFT);
+-	if (mvm_sta->associated)
++	if (mvm_sta->sta_state >= IEEE80211_STA_ASSOC)
+ 		add_sta_cmd.assoc_id = cpu_to_le16(sta->aid);
+ 
+ 	if (sta->wme) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+index 5ffd6adbc383..d0fa0be31b0d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+@@ -8,6 +8,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+  * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -18,11 +19,6 @@
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  * General Public License for more details.
+  *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110,
+- * USA
+- *
+  * The full GNU General Public License is included in this distribution
+  * in the file called COPYING.
+  *
+@@ -35,6 +31,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
+  * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
++ * Copyright(c) 2018 Intel Corporation
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -376,6 +373,7 @@ struct iwl_mvm_rxq_dup_data {
+  *	tid.
+  * @max_agg_bufsize: the maximal size of the AGG buffer for this station
+  * @sta_type: station type
++ * @sta_state: station state according to enum %ieee80211_sta_state
+  * @bt_reduced_txpower: is reduced tx power enabled for this station
+  * @next_status_eosp: the next reclaimed packet is a PS-Poll response and
+  *	we need to signal the EOSP
+@@ -414,6 +412,7 @@ struct iwl_mvm_sta {
+ 	u16 tid_disable_agg;
+ 	u8 max_agg_bufsize;
+ 	enum iwl_sta_type sta_type;
++	enum ieee80211_sta_state sta_state;
+ 	bool bt_reduced_txpower;
+ 	bool next_status_eosp;
+ 	spinlock_t lock;
+@@ -438,7 +437,6 @@ struct iwl_mvm_sta {
+ 	bool disable_tx;
+ 	bool tlc_amsdu;
+ 	bool sleeping;
+-	bool associated;
+ 	u8 agg_tids;
+ 	u8 sleep_tx_count;
+ 	u8 avg_energy;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index f25ce3a1ea50..d57f2a08ca88 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -901,6 +901,8 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
+ 	}
+ 	def_rxq = trans_pcie->rxq;
+ 
++	cancel_work_sync(&rba->rx_alloc);
++
+ 	spin_lock(&rba->lock);
+ 	atomic_set(&rba->req_pending, 0);
+ 	atomic_set(&rba->req_ready, 0);
+diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c
+index 4bc244801636..26ca670584c0 100644
+--- a/drivers/net/wireless/marvell/mwifiex/usb.c
++++ b/drivers/net/wireless/marvell/mwifiex/usb.c
+@@ -644,6 +644,9 @@ static void mwifiex_usb_disconnect(struct usb_interface *intf)
+ 					 MWIFIEX_FUNC_SHUTDOWN);
+ 	}
+ 
++	if (adapter->workqueue)
++		flush_workqueue(adapter->workqueue);
++
+ 	mwifiex_usb_free(card);
+ 
+ 	mwifiex_dbg(adapter, FATAL,
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index 0cd68ffc2c74..51ccf10f4413 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -708,12 +708,14 @@ void mwifiex_hist_data_set(struct mwifiex_private *priv, u8 rx_rate, s8 snr,
+ 			   s8 nflr)
+ {
+ 	struct mwifiex_histogram_data *phist_data = priv->hist_data;
++	s8 nf   = -nflr;
++	s8 rssi = snr - nflr;
+ 
+ 	atomic_inc(&phist_data->num_samples);
+ 	atomic_inc(&phist_data->rx_rate[rx_rate]);
+-	atomic_inc(&phist_data->snr[snr]);
+-	atomic_inc(&phist_data->noise_flr[128 + nflr]);
+-	atomic_inc(&phist_data->sig_str[nflr - snr]);
++	atomic_inc(&phist_data->snr[snr + 128]);
++	atomic_inc(&phist_data->noise_flr[nf + 128]);
++	atomic_inc(&phist_data->sig_str[rssi + 128]);
+ }
+ 
+ /* function to reset histogram data during init/reset */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c
+index 934c331d995e..1932414e5088 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c
+@@ -482,7 +482,10 @@ void mt76x2_set_tx_ackto(struct mt76x2_dev *dev)
+ {
+ 	u8 ackto, sifs, slottime = dev->slottime;
+ 
++	/* As defined by IEEE 802.11-2007 17.3.8.6 */
+ 	slottime += 3 * dev->coverage_class;
++	mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG,
++		       MT_BKOFF_SLOT_CFG_SLOTTIME, slottime);
+ 
+ 	sifs = mt76_get_field(dev, MT_XIFS_TIME_CFG,
+ 			      MT_XIFS_TIME_CFG_OFDM_SIFS);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2_main.c
+index 73c127f92613..f66b6ff92ae0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_main.c
+@@ -247,8 +247,7 @@ mt76x2_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		int slottime = info->use_short_slot ? 9 : 20;
+ 
+ 		dev->slottime = slottime;
+-		mt76_rmw_field(dev, MT_BKOFF_SLOT_CFG,
+-			       MT_BKOFF_SLOT_CFG_SLOTTIME, slottime);
++		mt76x2_set_tx_ackto(dev);
+ 	}
+ 
+ 	mutex_unlock(&dev->mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c
+index fcc37eb7ce0b..6b4fa7be573e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_phy.c
+@@ -492,8 +492,10 @@ mt76x2_phy_update_channel_gain(struct mt76x2_dev *dev)
+ 	u8 gain_delta;
+ 	int low_gain;
+ 
+-	dev->cal.avg_rssi[0] = (dev->cal.avg_rssi[0] * 15) / 16 + (rssi0 << 8);
+-	dev->cal.avg_rssi[1] = (dev->cal.avg_rssi[1] * 15) / 16 + (rssi1 << 8);
++	dev->cal.avg_rssi[0] = (dev->cal.avg_rssi[0] * 15) / 16 +
++			       (rssi0 << 8) / 16;
++	dev->cal.avg_rssi[1] = (dev->cal.avg_rssi[1] * 15) / 16 +
++			       (rssi1 << 8) / 16;
+ 	dev->cal.avg_rssi_all = (dev->cal.avg_rssi[0] +
+ 				 dev->cal.avg_rssi[1]) / 512;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 4eef69bd8a9e..6aca794cf998 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -422,12 +422,14 @@ void mt76_txq_schedule(struct mt76_dev *dev, struct mt76_queue *hwq)
+ {
+ 	int len;
+ 
++	rcu_read_lock();
+ 	do {
+ 		if (hwq->swq_queued >= 4 || list_empty(&hwq->swq))
+ 			break;
+ 
+ 		len = mt76_txq_schedule_list(dev, hwq);
+ 	} while (len > 0);
++	rcu_read_unlock();
+ }
+ EXPORT_SYMBOL_GPL(mt76_txq_schedule);
+ 
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+index 0398bece5782..a5f0306a7e29 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+@@ -651,28 +651,35 @@ qtnf_disconnect(struct wiphy *wiphy, struct net_device *dev,
+ {
+ 	struct qtnf_wmac *mac = wiphy_priv(wiphy);
+ 	struct qtnf_vif *vif;
+-	int ret;
++	int ret = 0;
+ 
+ 	vif = qtnf_mac_get_base_vif(mac);
+ 	if (!vif) {
+ 		pr_err("MAC%u: primary VIF is not configured\n", mac->macid);
+-		return -EFAULT;
++		ret = -EFAULT;
++		goto out;
+ 	}
+ 
+-	if (vif->wdev.iftype != NL80211_IFTYPE_STATION)
+-		return -EOPNOTSUPP;
++	if (vif->wdev.iftype != NL80211_IFTYPE_STATION) {
++		ret = -EOPNOTSUPP;
++		goto out;
++	}
+ 
+ 	if (vif->sta_state == QTNF_STA_DISCONNECTED)
+-		return 0;
++		goto out;
+ 
+ 	ret = qtnf_cmd_send_disconnect(vif, reason_code);
+ 	if (ret) {
+ 		pr_err("VIF%u.%u: failed to disconnect\n", mac->macid,
+ 		       vif->vifid);
+-		return ret;
++		goto out;
+ 	}
+ 
+-	return 0;
++out:
++	if (vif->sta_state == QTNF_STA_CONNECTING)
++		vif->sta_state = QTNF_STA_DISCONNECTED;
++
++	return ret;
+ }
+ 
+ static int
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
+index bcd415f96412..77ee6439ec6e 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
+@@ -198,11 +198,9 @@ qtnf_event_handle_bss_leave(struct qtnf_vif *vif,
+ 		return -EPROTO;
+ 	}
+ 
+-	if (vif->sta_state != QTNF_STA_CONNECTED) {
+-		pr_err("VIF%u.%u: BSS_LEAVE event when STA is not connected\n",
+-		       vif->mac->macid, vif->vifid);
+-		return -EPROTO;
+-	}
++	if (vif->sta_state != QTNF_STA_CONNECTED)
++		pr_warn("VIF%u.%u: BSS_LEAVE event when STA is not connected\n",
++			vif->mac->macid, vif->vifid);
+ 
+ 	pr_debug("VIF%u.%u: disconnected\n", vif->mac->macid, vif->vifid);
+ 
+diff --git a/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c b/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c
+index f117904d9120..6c1e139bb8f7 100644
+--- a/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c
++++ b/drivers/net/wireless/quantenna/qtnfmac/pearl/pcie.c
+@@ -1185,6 +1185,10 @@ static void qtnf_fw_work_handler(struct work_struct *work)
+ 	if (qtnf_poll_state(&priv->bda->bda_ep_state, QTN_EP_FW_LOADRDY,
+ 			    QTN_FW_DL_TIMEOUT_MS)) {
+ 		pr_err("card is not ready\n");
++
++		if (!flashboot)
++			release_firmware(fw);
++
+ 		goto fw_load_fail;
+ 	}
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index de608ae365a4..5425726d509b 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -616,28 +616,32 @@ static int bl_write_header(struct rsi_hw *adapter, u8 *flash_content,
+ 			   u32 content_size)
+ {
+ 	struct rsi_host_intf_ops *hif_ops = adapter->host_intf_ops;
+-	struct bl_header bl_hdr;
++	struct bl_header *bl_hdr;
+ 	u32 write_addr, write_len;
+ 	int status;
+ 
+-	bl_hdr.flags = 0;
+-	bl_hdr.image_no = cpu_to_le32(adapter->priv->coex_mode);
+-	bl_hdr.check_sum = cpu_to_le32(
+-				*(u32 *)&flash_content[CHECK_SUM_OFFSET]);
+-	bl_hdr.flash_start_address = cpu_to_le32(
+-					*(u32 *)&flash_content[ADDR_OFFSET]);
+-	bl_hdr.flash_len = cpu_to_le32(*(u32 *)&flash_content[LEN_OFFSET]);
++	bl_hdr = kzalloc(sizeof(*bl_hdr), GFP_KERNEL);
++	if (!bl_hdr)
++		return -ENOMEM;
++
++	bl_hdr->flags = 0;
++	bl_hdr->image_no = cpu_to_le32(adapter->priv->coex_mode);
++	bl_hdr->check_sum =
++		cpu_to_le32(*(u32 *)&flash_content[CHECK_SUM_OFFSET]);
++	bl_hdr->flash_start_address =
++		cpu_to_le32(*(u32 *)&flash_content[ADDR_OFFSET]);
++	bl_hdr->flash_len = cpu_to_le32(*(u32 *)&flash_content[LEN_OFFSET]);
+ 	write_len = sizeof(struct bl_header);
+ 
+ 	if (adapter->rsi_host_intf == RSI_HOST_INTF_USB) {
+ 		write_addr = PING_BUFFER_ADDRESS;
+ 		status = hif_ops->write_reg_multiple(adapter, write_addr,
+-						 (u8 *)&bl_hdr, write_len);
++						 (u8 *)bl_hdr, write_len);
+ 		if (status < 0) {
+ 			rsi_dbg(ERR_ZONE,
+ 				"%s: Failed to load Version/CRC structure\n",
+ 				__func__);
+-			return status;
++			goto fail;
+ 		}
+ 	} else {
+ 		write_addr = PING_BUFFER_ADDRESS >> 16;
+@@ -646,20 +650,23 @@ static int bl_write_header(struct rsi_hw *adapter, u8 *flash_content,
+ 			rsi_dbg(ERR_ZONE,
+ 				"%s: Unable to set ms word to common reg\n",
+ 				__func__);
+-			return status;
++			goto fail;
+ 		}
+ 		write_addr = RSI_SD_REQUEST_MASTER |
+ 			     (PING_BUFFER_ADDRESS & 0xFFFF);
+ 		status = hif_ops->write_reg_multiple(adapter, write_addr,
+-						 (u8 *)&bl_hdr, write_len);
++						 (u8 *)bl_hdr, write_len);
+ 		if (status < 0) {
+ 			rsi_dbg(ERR_ZONE,
+ 				"%s: Failed to load Version/CRC structure\n",
+ 				__func__);
+-			return status;
++			goto fail;
+ 		}
+ 	}
+-	return 0;
++	status = 0;
++fail:
++	kfree(bl_hdr);
++	return status;
+ }
+ 
+ static u32 read_flash_capacity(struct rsi_hw *adapter)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index 32f5cb46fd4f..8f83303365c8 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -1788,10 +1788,15 @@ int rsi_config_wowlan(struct rsi_hw *adapter, struct cfg80211_wowlan *wowlan)
+ 	struct rsi_common *common = adapter->priv;
+ 	u16 triggers = 0;
+ 	u16 rx_filter_word = 0;
+-	struct ieee80211_bss_conf *bss = &adapter->vifs[0]->bss_conf;
++	struct ieee80211_bss_conf *bss = NULL;
+ 
+ 	rsi_dbg(INFO_ZONE, "Config WoWLAN to device\n");
+ 
++	if (!adapter->vifs[0])
++		return -EINVAL;
++
++	bss = &adapter->vifs[0]->bss_conf;
++
+ 	if (WARN_ON(!wowlan)) {
+ 		rsi_dbg(ERR_ZONE, "WoW triggers not enabled\n");
+ 		return -EINVAL;
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index d76e69c0beaa..ffea376260eb 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -170,7 +170,6 @@ static void rsi_reset_card(struct sdio_func *pfunction)
+ 	int err;
+ 	struct mmc_card *card = pfunction->card;
+ 	struct mmc_host *host = card->host;
+-	s32 bit = (fls(host->ocr_avail) - 1);
+ 	u8 cmd52_resp;
+ 	u32 clock, resp, i;
+ 	u16 rca;
+@@ -190,7 +189,6 @@ static void rsi_reset_card(struct sdio_func *pfunction)
+ 	msleep(20);
+ 
+ 	/* Initialize the SDIO card */
+-	host->ios.vdd = bit;
+ 	host->ios.chip_select = MMC_CS_DONTCARE;
+ 	host->ios.bus_mode = MMC_BUSMODE_OPENDRAIN;
+ 	host->ios.power_mode = MMC_POWER_UP;
+@@ -1042,17 +1040,21 @@ static void ulp_read_write(struct rsi_hw *adapter, u16 addr, u32 data,
+ /*This function resets and re-initializes the chip.*/
+ static void rsi_reset_chip(struct rsi_hw *adapter)
+ {
+-	__le32 data;
++	u8 *data;
+ 	u8 sdio_interrupt_status = 0;
+ 	u8 request = 1;
+ 	int ret;
+ 
++	data = kzalloc(sizeof(u32), GFP_KERNEL);
++	if (!data)
++		return;
++
+ 	rsi_dbg(INFO_ZONE, "Writing disable to wakeup register\n");
+ 	ret =  rsi_sdio_write_register(adapter, 0, SDIO_WAKEUP_REG, &request);
+ 	if (ret < 0) {
+ 		rsi_dbg(ERR_ZONE,
+ 			"%s: Failed to write SDIO wakeup register\n", __func__);
+-		return;
++		goto err;
+ 	}
+ 	msleep(20);
+ 	ret =  rsi_sdio_read_register(adapter, RSI_FN1_INT_REGISTER,
+@@ -1060,7 +1062,7 @@ static void rsi_reset_chip(struct rsi_hw *adapter)
+ 	if (ret < 0) {
+ 		rsi_dbg(ERR_ZONE, "%s: Failed to Read Intr Status Register\n",
+ 			__func__);
+-		return;
++		goto err;
+ 	}
+ 	rsi_dbg(INFO_ZONE, "%s: Intr Status Register value = %d\n",
+ 		__func__, sdio_interrupt_status);
+@@ -1070,17 +1072,17 @@ static void rsi_reset_chip(struct rsi_hw *adapter)
+ 		rsi_dbg(ERR_ZONE,
+ 			"%s: Unable to set ms word to common reg\n",
+ 			__func__);
+-		return;
++		goto err;
+ 	}
+ 
+-	data = TA_HOLD_THREAD_VALUE;
++	put_unaligned_le32(TA_HOLD_THREAD_VALUE, data);
+ 	if (rsi_sdio_write_register_multiple(adapter, TA_HOLD_THREAD_REG |
+ 					     RSI_SD_REQUEST_MASTER,
+-					     (u8 *)&data, 4)) {
++					     data, 4)) {
+ 		rsi_dbg(ERR_ZONE,
+ 			"%s: Unable to hold Thread-Arch processor threads\n",
+ 			__func__);
+-		return;
++		goto err;
+ 	}
+ 
+ 	/* This msleep will ensure Thread-Arch processor to go to hold
+@@ -1101,6 +1103,9 @@ static void rsi_reset_chip(struct rsi_hw *adapter)
+ 	 * read write operations to complete for chip reset.
+ 	 */
+ 	msleep(500);
++err:
++	kfree(data);
++	return;
+ }
+ 
+ /**
+diff --git a/drivers/net/wireless/rsi/rsi_sdio.h b/drivers/net/wireless/rsi/rsi_sdio.h
+index ead8e7c4df3a..353dbdf31e75 100644
+--- a/drivers/net/wireless/rsi/rsi_sdio.h
++++ b/drivers/net/wireless/rsi/rsi_sdio.h
+@@ -87,7 +87,7 @@ enum sdio_interrupt_type {
+ #define TA_SOFT_RST_CLR              0
+ #define TA_SOFT_RST_SET              BIT(0)
+ #define TA_PC_ZERO                   0
+-#define TA_HOLD_THREAD_VALUE         cpu_to_le32(0xF)
++#define TA_HOLD_THREAD_VALUE         0xF
+ #define TA_RELEASE_THREAD_VALUE      cpu_to_le32(0xF)
+ #define TA_BASE_ADDR                 0x2200
+ #define MISC_CFG_BASE_ADDR           0x4105
+diff --git a/drivers/net/wireless/ti/wlcore/sdio.c b/drivers/net/wireless/ti/wlcore/sdio.c
+index 1f727babbea0..5de8305a6fd6 100644
+--- a/drivers/net/wireless/ti/wlcore/sdio.c
++++ b/drivers/net/wireless/ti/wlcore/sdio.c
+@@ -406,6 +406,11 @@ static int wl1271_suspend(struct device *dev)
+ 	mmc_pm_flag_t sdio_flags;
+ 	int ret = 0;
+ 
++	if (!wl) {
++		dev_err(dev, "no wilink module was probed\n");
++		goto out;
++	}
++
+ 	dev_dbg(dev, "wl1271 suspend. wow_enabled: %d\n",
+ 		wl->wow_enabled);
+ 
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 1d5082d30187..42e93cb4eca7 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -87,6 +87,7 @@ struct netfront_cb {
+ /* IRQ name is queue name with "-tx" or "-rx" appended */
+ #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+ 
++static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
+ static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
+ 
+ struct netfront_stats {
+@@ -239,7 +240,7 @@ static void rx_refill_timeout(struct timer_list *t)
+ static int netfront_tx_slot_available(struct netfront_queue *queue)
+ {
+ 	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
+-		(NET_TX_RING_SIZE - MAX_SKB_FRAGS - 2);
++		(NET_TX_RING_SIZE - XEN_NETIF_NR_SLOTS_MIN - 1);
+ }
+ 
+ static void xennet_maybe_wake_tx(struct netfront_queue *queue)
+@@ -790,7 +791,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
+ 	RING_IDX cons = queue->rx.rsp_cons;
+ 	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+ 	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
+-	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
++	int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
+ 	int slots = 1;
+ 	int err = 0;
+ 	unsigned long ret;
+@@ -1330,6 +1331,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
+ 	netif_carrier_off(netdev);
+ 
+ 	xenbus_switch_state(dev, XenbusStateInitialising);
++	wait_event(module_load_q,
++			   xenbus_read_driver_state(dev->otherend) !=
++			   XenbusStateClosed &&
++			   xenbus_read_driver_state(dev->otherend) !=
++			   XenbusStateUnknown);
+ 	return netdev;
+ 
+  exit:
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 5dbb0f0c02ef..0483c33a3567 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2679,19 +2679,15 @@ static pci_ers_result_t nvme_slot_reset(struct pci_dev *pdev)
+ 
+ 	dev_info(dev->ctrl.device, "restart after slot reset\n");
+ 	pci_restore_state(pdev);
+-	nvme_reset_ctrl_sync(&dev->ctrl);
+-
+-	switch (dev->ctrl.state) {
+-	case NVME_CTRL_LIVE:
+-	case NVME_CTRL_ADMIN_ONLY:
+-		return PCI_ERS_RESULT_RECOVERED;
+-	default:
+-		return PCI_ERS_RESULT_DISCONNECT;
+-	}
++	nvme_reset_ctrl(&dev->ctrl);
++	return PCI_ERS_RESULT_RECOVERED;
+ }
+ 
+ static void nvme_error_resume(struct pci_dev *pdev)
+ {
++	struct nvme_dev *dev = pci_get_drvdata(pdev);
++
++	flush_work(&dev->ctrl.reset_work);
+ 	pci_cleanup_aer_uncorrect_error_status(pdev);
+ }
+ 
+@@ -2735,6 +2731,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ 		.driver_data = NVME_QUIRK_LIGHTNVM, },
+ 	{ PCI_DEVICE(0x1d1d, 0x2807),	/* CNEX WL */
+ 		.driver_data = NVME_QUIRK_LIGHTNVM, },
++	{ PCI_DEVICE(0x1d1d, 0x2601),	/* CNEX Granby */
++		.driver_data = NVME_QUIRK_LIGHTNVM, },
+ 	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 1eb4438a8763..2181299ce8f5 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -778,7 +778,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ 	if (error) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"prop_get NVME_REG_CAP failed\n");
+-		goto out_cleanup_queue;
++		goto out_stop_queue;
+ 	}
+ 
+ 	ctrl->ctrl.sqsize =
+@@ -786,23 +786,25 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ 
+ 	error = nvme_enable_ctrl(&ctrl->ctrl, ctrl->ctrl.cap);
+ 	if (error)
+-		goto out_cleanup_queue;
++		goto out_stop_queue;
+ 
+ 	ctrl->ctrl.max_hw_sectors =
+ 		(ctrl->max_fr_pages - 1) << (ilog2(SZ_4K) - 9);
+ 
+ 	error = nvme_init_identify(&ctrl->ctrl);
+ 	if (error)
+-		goto out_cleanup_queue;
++		goto out_stop_queue;
+ 
+ 	error = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
+ 			&ctrl->async_event_sqe, sizeof(struct nvme_command),
+ 			DMA_TO_DEVICE);
+ 	if (error)
+-		goto out_cleanup_queue;
++		goto out_stop_queue;
+ 
+ 	return 0;
+ 
++out_stop_queue:
++	nvme_rdma_stop_queue(&ctrl->queues[0]);
+ out_cleanup_queue:
+ 	if (new)
+ 		blk_cleanup_queue(ctrl->ctrl.admin_q);
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 33ee8d3145f8..9fb28e076c26 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -58,8 +58,8 @@ struct nvmet_fc_ls_iod {
+ 	struct work_struct		work;
+ } __aligned(sizeof(unsigned long long));
+ 
++/* desired maximum for a single sequence - if sg list allows it */
+ #define NVMET_FC_MAX_SEQ_LENGTH		(256 * 1024)
+-#define NVMET_FC_MAX_XFR_SGENTS		(NVMET_FC_MAX_SEQ_LENGTH / PAGE_SIZE)
+ 
+ enum nvmet_fcp_datadir {
+ 	NVMET_FCP_NODATA,
+@@ -74,6 +74,7 @@ struct nvmet_fc_fcp_iod {
+ 	struct nvme_fc_cmd_iu		cmdiubuf;
+ 	struct nvme_fc_ersp_iu		rspiubuf;
+ 	dma_addr_t			rspdma;
++	struct scatterlist		*next_sg;
+ 	struct scatterlist		*data_sg;
+ 	int				data_sg_cnt;
+ 	u32				offset;
+@@ -1025,8 +1026,7 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
+ 	INIT_LIST_HEAD(&newrec->assoc_list);
+ 	kref_init(&newrec->ref);
+ 	ida_init(&newrec->assoc_cnt);
+-	newrec->max_sg_cnt = min_t(u32, NVMET_FC_MAX_XFR_SGENTS,
+-					template->max_sgl_segments);
++	newrec->max_sg_cnt = template->max_sgl_segments;
+ 
+ 	ret = nvmet_fc_alloc_ls_iodlist(newrec);
+ 	if (ret) {
+@@ -1722,6 +1722,7 @@ nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
+ 				((fod->io_dir == NVMET_FCP_WRITE) ?
+ 					DMA_FROM_DEVICE : DMA_TO_DEVICE));
+ 				/* note: write from initiator perspective */
++	fod->next_sg = fod->data_sg;
+ 
+ 	return 0;
+ 
+@@ -1866,24 +1867,49 @@ nvmet_fc_transfer_fcp_data(struct nvmet_fc_tgtport *tgtport,
+ 				struct nvmet_fc_fcp_iod *fod, u8 op)
+ {
+ 	struct nvmefc_tgt_fcp_req *fcpreq = fod->fcpreq;
++	struct scatterlist *sg = fod->next_sg;
+ 	unsigned long flags;
+-	u32 tlen;
++	u32 remaininglen = fod->req.transfer_len - fod->offset;
++	u32 tlen = 0;
+ 	int ret;
+ 
+ 	fcpreq->op = op;
+ 	fcpreq->offset = fod->offset;
+ 	fcpreq->timeout = NVME_FC_TGTOP_TIMEOUT_SEC;
+ 
+-	tlen = min_t(u32, tgtport->max_sg_cnt * PAGE_SIZE,
+-			(fod->req.transfer_len - fod->offset));
++	/*
++	 * for next sequence:
++	 *  break at a sg element boundary
++	 *  attempt to keep sequence length capped at
++	 *    NVMET_FC_MAX_SEQ_LENGTH but allow sequence to
++	 *    be longer if a single sg element is larger
++	 *    than that amount. This is done to avoid creating
++	 *    a new sg list to use for the tgtport api.
++	 */
++	fcpreq->sg = sg;
++	fcpreq->sg_cnt = 0;
++	while (tlen < remaininglen &&
++	       fcpreq->sg_cnt < tgtport->max_sg_cnt &&
++	       tlen + sg_dma_len(sg) < NVMET_FC_MAX_SEQ_LENGTH) {
++		fcpreq->sg_cnt++;
++		tlen += sg_dma_len(sg);
++		sg = sg_next(sg);
++	}
++	if (tlen < remaininglen && fcpreq->sg_cnt == 0) {
++		fcpreq->sg_cnt++;
++		tlen += min_t(u32, sg_dma_len(sg), remaininglen);
++		sg = sg_next(sg);
++	}
++	if (tlen < remaininglen)
++		fod->next_sg = sg;
++	else
++		fod->next_sg = NULL;
++
+ 	fcpreq->transfer_length = tlen;
+ 	fcpreq->transferred_length = 0;
+ 	fcpreq->fcp_error = 0;
+ 	fcpreq->rsplen = 0;
+ 
+-	fcpreq->sg = &fod->data_sg[fod->offset / PAGE_SIZE];
+-	fcpreq->sg_cnt = DIV_ROUND_UP(tlen, PAGE_SIZE);
+-
+ 	/*
+ 	 * If the last READDATA request: check if LLDD supports
+ 	 * combined xfr with response.
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index b05aa8e81303..1e28597138c8 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1107,6 +1107,8 @@ static void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell,
+ 
+ 		/* setup the first byte with lsb bits from nvmem */
+ 		rc = nvmem_reg_read(nvmem, cell->offset, &v, 1);
++		if (rc)
++			goto err;
+ 		*b++ |= GENMASK(bit_offset - 1, 0) & v;
+ 
+ 		/* setup rest of the byte if any */
+@@ -1125,11 +1127,16 @@ static void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell,
+ 		/* setup the last byte with msb bits from nvmem */
+ 		rc = nvmem_reg_read(nvmem,
+ 				    cell->offset + cell->bytes - 1, &v, 1);
++		if (rc)
++			goto err;
+ 		*p |= GENMASK(7, (nbits + bit_offset) % BITS_PER_BYTE) & v;
+ 
+ 	}
+ 
+ 	return buf;
++err:
++	kfree(buf);
++	return ERR_PTR(rc);
+ }
+ 
+ /**
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 366d93af051d..788a200fb2dc 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -288,13 +288,16 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+-	if (!val) {
+-		if (pci_is_enabled(pdev))
+-			pci_disable_device(pdev);
+-		else
+-			result = -EIO;
+-	} else
++	device_lock(dev);
++	if (dev->driver)
++		result = -EBUSY;
++	else if (val)
+ 		result = pci_enable_device(pdev);
++	else if (pci_is_enabled(pdev))
++		pci_disable_device(pdev);
++	else
++		result = -EIO;
++	device_unlock(dev);
+ 
+ 	return result < 0 ? result : count;
+ }
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index f76eb7704f64..c687c817b47d 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -400,6 +400,15 @@ static void pcie_get_aspm_reg(struct pci_dev *pdev,
+ 		info->l1ss_cap = 0;
+ 		return;
+ 	}
++
++	/*
++	 * If we don't have LTR for the entire path from the Root Complex
++	 * to this device, we can't use ASPM L1.2 because it relies on the
++	 * LTR_L1.2_THRESHOLD.  See PCIe r4.0, secs 5.5.4, 6.18.
++	 */
++	if (!pdev->ltr_path)
++		info->l1ss_cap &= ~PCI_L1SS_CAP_ASPM_L1_2;
++
+ 	pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL1,
+ 			      &info->l1ss_ctl1);
+ 	pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL2,
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index 8c57d607e603..74562dbacbf1 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -113,7 +113,7 @@ static void dpc_work(struct work_struct *work)
+ 	}
+ 
+ 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+-		PCI_EXP_DPC_STATUS_TRIGGER | PCI_EXP_DPC_STATUS_INTERRUPT);
++			      PCI_EXP_DPC_STATUS_TRIGGER);
+ 
+ 	pci_read_config_word(pdev, cap + PCI_EXP_DPC_CTL, &ctl);
+ 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_CTL,
+@@ -223,6 +223,9 @@ static irqreturn_t dpc_irq(int irq, void *context)
+ 	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
+ 		dpc_process_rp_pio_error(dpc);
+ 
++	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
++			      PCI_EXP_DPC_STATUS_INTERRUPT);
++
+ 	schedule_work(&dpc->work);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 73ac02796ba9..d21686ad3ce5 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -526,12 +526,14 @@ static void devm_pci_release_host_bridge_dev(struct device *dev)
+ 
+ 	if (bridge->release_fn)
+ 		bridge->release_fn(bridge);
++
++	pci_free_resource_list(&bridge->windows);
+ }
+ 
+ static void pci_release_host_bridge_dev(struct device *dev)
+ {
+ 	devm_pci_release_host_bridge_dev(dev);
+-	pci_free_host_bridge(to_pci_host_bridge(dev));
++	kfree(to_pci_host_bridge(dev));
+ }
+ 
+ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+diff --git a/drivers/perf/arm-cci.c b/drivers/perf/arm-cci.c
+index 383b2d3dcbc6..687ae8e674db 100644
+--- a/drivers/perf/arm-cci.c
++++ b/drivers/perf/arm-cci.c
+@@ -120,9 +120,9 @@ enum cci_models {
+ 
+ static void pmu_write_counters(struct cci_pmu *cci_pmu,
+ 				 unsigned long *mask);
+-static ssize_t cci_pmu_format_show(struct device *dev,
++static ssize_t __maybe_unused cci_pmu_format_show(struct device *dev,
+ 			struct device_attribute *attr, char *buf);
+-static ssize_t cci_pmu_event_show(struct device *dev,
++static ssize_t __maybe_unused cci_pmu_event_show(struct device *dev,
+ 			struct device_attribute *attr, char *buf);
+ 
+ #define CCI_EXT_ATTR_ENTRY(_name, _func, _config) 				\
+@@ -1466,7 +1466,7 @@ static int cci_pmu_offline_cpu(unsigned int cpu)
+ 	return 0;
+ }
+ 
+-static struct cci_pmu_model cci_pmu_models[] = {
++static __maybe_unused struct cci_pmu_model cci_pmu_models[] = {
+ #ifdef CONFIG_ARM_CCI400_PMU
+ 	[CCI400_R0] = {
+ 		.name = "CCI_400",
+diff --git a/drivers/perf/arm-ccn.c b/drivers/perf/arm-ccn.c
+index 65b7e4042ece..07771e28f572 100644
+--- a/drivers/perf/arm-ccn.c
++++ b/drivers/perf/arm-ccn.c
+@@ -736,7 +736,7 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ 	ccn = pmu_to_arm_ccn(event->pmu);
+ 
+ 	if (hw->sample_period) {
+-		dev_warn(ccn->dev, "Sampling not supported!\n");
++		dev_dbg(ccn->dev, "Sampling not supported!\n");
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+@@ -744,12 +744,12 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ 			event->attr.exclude_kernel || event->attr.exclude_hv ||
+ 			event->attr.exclude_idle || event->attr.exclude_host ||
+ 			event->attr.exclude_guest) {
+-		dev_warn(ccn->dev, "Can't exclude execution levels!\n");
++		dev_dbg(ccn->dev, "Can't exclude execution levels!\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	if (event->cpu < 0) {
+-		dev_warn(ccn->dev, "Can't provide per-task data!\n");
++		dev_dbg(ccn->dev, "Can't provide per-task data!\n");
+ 		return -EOPNOTSUPP;
+ 	}
+ 	/*
+@@ -771,13 +771,13 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ 	switch (type) {
+ 	case CCN_TYPE_MN:
+ 		if (node_xp != ccn->mn_id) {
+-			dev_warn(ccn->dev, "Invalid MN ID %d!\n", node_xp);
++			dev_dbg(ccn->dev, "Invalid MN ID %d!\n", node_xp);
+ 			return -EINVAL;
+ 		}
+ 		break;
+ 	case CCN_TYPE_XP:
+ 		if (node_xp >= ccn->num_xps) {
+-			dev_warn(ccn->dev, "Invalid XP ID %d!\n", node_xp);
++			dev_dbg(ccn->dev, "Invalid XP ID %d!\n", node_xp);
+ 			return -EINVAL;
+ 		}
+ 		break;
+@@ -785,11 +785,11 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ 		break;
+ 	default:
+ 		if (node_xp >= ccn->num_nodes) {
+-			dev_warn(ccn->dev, "Invalid node ID %d!\n", node_xp);
++			dev_dbg(ccn->dev, "Invalid node ID %d!\n", node_xp);
+ 			return -EINVAL;
+ 		}
+ 		if (!arm_ccn_pmu_type_eq(type, ccn->node[node_xp].type)) {
+-			dev_warn(ccn->dev, "Invalid type 0x%x for node %d!\n",
++			dev_dbg(ccn->dev, "Invalid type 0x%x for node %d!\n",
+ 					type, node_xp);
+ 			return -EINVAL;
+ 		}
+@@ -808,19 +808,19 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
+ 		if (event_id != e->event)
+ 			continue;
+ 		if (e->num_ports && port >= e->num_ports) {
+-			dev_warn(ccn->dev, "Invalid port %d for node/XP %d!\n",
++			dev_dbg(ccn->dev, "Invalid port %d for node/XP %d!\n",
+ 					port, node_xp);
+ 			return -EINVAL;
+ 		}
+ 		if (e->num_vcs && vc >= e->num_vcs) {
+-			dev_warn(ccn->dev, "Invalid vc %d for node/XP %d!\n",
++			dev_dbg(ccn->dev, "Invalid vc %d for node/XP %d!\n",
+ 					vc, node_xp);
+ 			return -EINVAL;
+ 		}
+ 		valid = 1;
+ 	}
+ 	if (!valid) {
+-		dev_warn(ccn->dev, "Invalid event 0x%x for node/XP %d!\n",
++		dev_dbg(ccn->dev, "Invalid event 0x%x for node/XP %d!\n",
+ 				event_id, node_xp);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 4b57a13758a4..bafb3d40545e 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -576,8 +576,10 @@ static int atmel_pctl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 		for_each_child_of_node(np_config, np) {
+ 			ret = atmel_pctl_dt_subnode_to_map(pctldev, np, map,
+ 						    &reserved_maps, num_maps);
+-			if (ret < 0)
++			if (ret < 0) {
++				of_node_put(np);
+ 				break;
++			}
+ 		}
+ 	}
+ 
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index ad80a17c9990..ace2bfbf1bee 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -890,11 +890,24 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ 		return ret;
+ 	}
+ 
+-	ret = gpiochip_add_pin_range(&pctrl->chip, dev_name(pctrl->dev), 0, 0, chip->ngpio);
+-	if (ret) {
+-		dev_err(pctrl->dev, "Failed to add pin range\n");
+-		gpiochip_remove(&pctrl->chip);
+-		return ret;
++	/*
++	 * For DeviceTree-supported systems, the gpio core checks the
++	 * pinctrl's device node for the "gpio-ranges" property.
++	 * If it is present, it takes care of adding the pin ranges
++	 * for the driver. In this case the driver can skip ahead.
++	 *
++	 * In order to remain compatible with older, existing DeviceTree
++	 * files which don't set the "gpio-ranges" property or systems that
++	 * utilize ACPI the driver has to call gpiochip_add_pin_range().
++	 */
++	if (!of_property_read_bool(pctrl->dev->of_node, "gpio-ranges")) {
++		ret = gpiochip_add_pin_range(&pctrl->chip,
++			dev_name(pctrl->dev), 0, 0, chip->ngpio);
++		if (ret) {
++			dev_err(pctrl->dev, "Failed to add pin range\n");
++			gpiochip_remove(&pctrl->chip);
++			return ret;
++		}
+ 	}
+ 
+ 	ret = gpiochip_irqchip_add(chip,
+diff --git a/drivers/platform/x86/dell-smbios-base.c b/drivers/platform/x86/dell-smbios-base.c
+index 33fb2a20458a..9dc282ed5a9e 100644
+--- a/drivers/platform/x86/dell-smbios-base.c
++++ b/drivers/platform/x86/dell-smbios-base.c
+@@ -555,11 +555,10 @@ static void free_group(struct platform_device *pdev)
+ 
+ static int __init dell_smbios_init(void)
+ {
+-	const struct dmi_device *valid;
+ 	int ret, wmi, smm;
+ 
+-	valid = dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL);
+-	if (!valid) {
++	if (!dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL) &&
++	    !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "www.dell.com", NULL)) {
+ 		pr_err("Unable to run on non-Dell system\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/regulator/cpcap-regulator.c b/drivers/regulator/cpcap-regulator.c
+index f541b80f1b54..bd910fe123d9 100644
+--- a/drivers/regulator/cpcap-regulator.c
++++ b/drivers/regulator/cpcap-regulator.c
+@@ -222,7 +222,7 @@ static unsigned int cpcap_map_mode(unsigned int mode)
+ 	case CPCAP_BIT_AUDIO_LOW_PWR:
+ 		return REGULATOR_MODE_STANDBY;
+ 	default:
+-		return -EINVAL;
++		return REGULATOR_MODE_INVALID;
+ 	}
+ }
+ 
+diff --git a/drivers/regulator/internal.h b/drivers/regulator/internal.h
+index abfd56e8c78a..24fde1e08f3a 100644
+--- a/drivers/regulator/internal.h
++++ b/drivers/regulator/internal.h
+@@ -56,14 +56,19 @@ static inline struct regulator_dev *dev_to_rdev(struct device *dev)
+ 	return container_of(dev, struct regulator_dev, dev);
+ }
+ 
+-struct regulator_dev *of_find_regulator_by_node(struct device_node *np);
+-
+ #ifdef CONFIG_OF
++struct regulator_dev *of_find_regulator_by_node(struct device_node *np);
+ struct regulator_init_data *regulator_of_get_init_data(struct device *dev,
+ 			         const struct regulator_desc *desc,
+ 				 struct regulator_config *config,
+ 				 struct device_node **node);
+ #else
++static inline struct regulator_dev *
++of_find_regulator_by_node(struct device_node *np)
++{
++	return NULL;
++}
++
+ static inline struct regulator_init_data *
+ regulator_of_get_init_data(struct device *dev,
+ 			   const struct regulator_desc *desc,
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index f47264fa1940..0d3f73eacb99 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -31,6 +31,7 @@ static void of_get_regulation_constraints(struct device_node *np,
+ 	struct regulation_constraints *constraints = &(*init_data)->constraints;
+ 	struct regulator_state *suspend_state;
+ 	struct device_node *suspend_np;
++	unsigned int mode;
+ 	int ret, i;
+ 	u32 pval;
+ 
+@@ -124,11 +125,11 @@ static void of_get_regulation_constraints(struct device_node *np,
+ 
+ 	if (!of_property_read_u32(np, "regulator-initial-mode", &pval)) {
+ 		if (desc && desc->of_map_mode) {
+-			ret = desc->of_map_mode(pval);
+-			if (ret == -EINVAL)
++			mode = desc->of_map_mode(pval);
++			if (mode == REGULATOR_MODE_INVALID)
+ 				pr_err("%s: invalid mode %u\n", np->name, pval);
+ 			else
+-				constraints->initial_mode = ret;
++				constraints->initial_mode = mode;
+ 		} else {
+ 			pr_warn("%s: mapping for mode %d not defined\n",
+ 				np->name, pval);
+@@ -163,12 +164,12 @@ static void of_get_regulation_constraints(struct device_node *np,
+ 		if (!of_property_read_u32(suspend_np, "regulator-mode",
+ 					  &pval)) {
+ 			if (desc && desc->of_map_mode) {
+-				ret = desc->of_map_mode(pval);
+-				if (ret == -EINVAL)
++				mode = desc->of_map_mode(pval);
++				if (mode == REGULATOR_MODE_INVALID)
+ 					pr_err("%s: invalid mode %u\n",
+ 					       np->name, pval);
+ 				else
+-					suspend_state->mode = ret;
++					suspend_state->mode = mode;
+ 			} else {
+ 				pr_warn("%s: mapping for mode %d not defined\n",
+ 					np->name, pval);
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index 63922a2167e5..659e516455be 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -158,6 +158,7 @@ static const struct regulator_ops pfuze100_sw_regulator_ops = {
+ static const struct regulator_ops pfuze100_swb_regulator_ops = {
+ 	.enable = regulator_enable_regmap,
+ 	.disable = regulator_disable_regmap,
++	.is_enabled = regulator_is_enabled_regmap,
+ 	.list_voltage = regulator_list_voltage_table,
+ 	.map_voltage = regulator_map_voltage_ascend,
+ 	.set_voltage_sel = regulator_set_voltage_sel_regmap,
+diff --git a/drivers/regulator/twl-regulator.c b/drivers/regulator/twl-regulator.c
+index a4456db5849d..884c7505ed91 100644
+--- a/drivers/regulator/twl-regulator.c
++++ b/drivers/regulator/twl-regulator.c
+@@ -274,7 +274,7 @@ static inline unsigned int twl4030reg_map_mode(unsigned int mode)
+ 	case RES_STATE_SLEEP:
+ 		return REGULATOR_MODE_STANDBY;
+ 	default:
+-		return -EINVAL;
++		return REGULATOR_MODE_INVALID;
+ 	}
+ }
+ 
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index 7cbdc9228dd5..6d4012dd6922 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -441,6 +441,11 @@ int rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+ {
+ 	int err;
+ 
++	if (!rtc->ops)
++		return -ENODEV;
++	else if (!rtc->ops->set_alarm)
++		return -EINVAL;
++
+ 	err = rtc_valid_tm(&alarm->time);
+ 	if (err != 0)
+ 		return err;
+diff --git a/drivers/rtc/rtc-tps6586x.c b/drivers/rtc/rtc-tps6586x.c
+index d7785ae0a2b4..1144fe07503e 100644
+--- a/drivers/rtc/rtc-tps6586x.c
++++ b/drivers/rtc/rtc-tps6586x.c
+@@ -276,14 +276,15 @@ static int tps6586x_rtc_probe(struct platform_device *pdev)
+ 	device_init_wakeup(&pdev->dev, 1);
+ 
+ 	platform_set_drvdata(pdev, rtc);
+-	rtc->rtc = devm_rtc_device_register(&pdev->dev, dev_name(&pdev->dev),
+-				       &tps6586x_rtc_ops, THIS_MODULE);
++	rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
+ 	if (IS_ERR(rtc->rtc)) {
+ 		ret = PTR_ERR(rtc->rtc);
+-		dev_err(&pdev->dev, "RTC device register: ret %d\n", ret);
++		dev_err(&pdev->dev, "RTC allocate device: ret %d\n", ret);
+ 		goto fail_rtc_register;
+ 	}
+ 
++	rtc->rtc->ops = &tps6586x_rtc_ops;
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, rtc->irq, NULL,
+ 				tps6586x_rtc_irq,
+ 				IRQF_ONESHOT,
+@@ -294,6 +295,13 @@ static int tps6586x_rtc_probe(struct platform_device *pdev)
+ 		goto fail_rtc_register;
+ 	}
+ 	disable_irq(rtc->irq);
++
++	ret = rtc_register_device(rtc->rtc);
++	if (ret) {
++		dev_err(&pdev->dev, "RTC device register: ret %d\n", ret);
++		goto fail_rtc_register;
++	}
++
+ 	return 0;
+ 
+ fail_rtc_register:
+diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c
+index d0244d7979fc..a56b526db89a 100644
+--- a/drivers/rtc/rtc-tps65910.c
++++ b/drivers/rtc/rtc-tps65910.c
+@@ -380,6 +380,10 @@ static int tps65910_rtc_probe(struct platform_device *pdev)
+ 	if (!tps_rtc)
+ 		return -ENOMEM;
+ 
++	tps_rtc->rtc = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(tps_rtc->rtc))
++		return PTR_ERR(tps_rtc->rtc);
++
+ 	/* Clear pending interrupts */
+ 	ret = regmap_read(tps65910->regmap, TPS65910_RTC_STATUS, &rtc_reg);
+ 	if (ret < 0)
+@@ -421,10 +425,10 @@ static int tps65910_rtc_probe(struct platform_device *pdev)
+ 	tps_rtc->irq = irq;
+ 	device_set_wakeup_capable(&pdev->dev, 1);
+ 
+-	tps_rtc->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+-		&tps65910_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(tps_rtc->rtc)) {
+-		ret = PTR_ERR(tps_rtc->rtc);
++	tps_rtc->rtc->ops = &tps65910_rtc_ops;
++
++	ret = rtc_register_device(tps_rtc->rtc);
++	if (ret) {
+ 		dev_err(&pdev->dev, "RTC device register: err %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/rtc/rtc-vr41xx.c b/drivers/rtc/rtc-vr41xx.c
+index 7ce22967fd16..7ed010714f29 100644
+--- a/drivers/rtc/rtc-vr41xx.c
++++ b/drivers/rtc/rtc-vr41xx.c
+@@ -292,13 +292,14 @@ static int rtc_probe(struct platform_device *pdev)
+ 		goto err_rtc1_iounmap;
+ 	}
+ 
+-	rtc = devm_rtc_device_register(&pdev->dev, rtc_name, &vr41xx_rtc_ops,
+-					THIS_MODULE);
++	rtc = devm_rtc_allocate_device(&pdev->dev);
+ 	if (IS_ERR(rtc)) {
+ 		retval = PTR_ERR(rtc);
+ 		goto err_iounmap_all;
+ 	}
+ 
++	rtc->ops = &vr41xx_rtc_ops;
++
+ 	rtc->max_user_freq = MAX_PERIODIC_RATE;
+ 
+ 	spin_lock_irq(&rtc_lock);
+@@ -340,6 +341,10 @@ static int rtc_probe(struct platform_device *pdev)
+ 
+ 	dev_info(&pdev->dev, "Real Time Clock of NEC VR4100 series\n");
+ 
++	retval = rtc_register_device(rtc);
++	if (retval)
++		goto err_iounmap_all;
++
+ 	return 0;
+ 
+ err_iounmap_all:
+diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
+index b415ba42ca73..599447032e50 100644
+--- a/drivers/s390/scsi/zfcp_dbf.c
++++ b/drivers/s390/scsi/zfcp_dbf.c
+@@ -285,6 +285,8 @@ void zfcp_dbf_rec_trig(char *tag, struct zfcp_adapter *adapter,
+ 	struct list_head *entry;
+ 	unsigned long flags;
+ 
++	lockdep_assert_held(&adapter->erp_lock);
++
+ 	if (unlikely(!debug_level_enabled(dbf->rec, level)))
+ 		return;
+ 
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index b42c9c479d4b..99ba4a770406 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -882,6 +882,11 @@ static int twa_chrdev_open(struct inode *inode, struct file *file)
+ 	unsigned int minor_number;
+ 	int retval = TW_IOCTL_ERROR_OS_ENODEV;
+ 
++	if (!capable(CAP_SYS_ADMIN)) {
++		retval = -EACCES;
++		goto out;
++	}
++
+ 	minor_number = iminor(inode);
+ 	if (minor_number >= twa_device_extension_count)
+ 		goto out;
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index 33261b690774..f6179e3d6953 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -1033,6 +1033,9 @@ static int tw_chrdev_open(struct inode *inode, struct file *file)
+ 
+ 	dprintk(KERN_WARNING "3w-xxxx: tw_ioctl_open()\n");
+ 
++	if (!capable(CAP_SYS_ADMIN))
++		return -EACCES;
++
+ 	minor_number = iminor(inode);
+ 	if (minor_number >= tw_device_extension_count)
+ 		return -ENODEV;
+diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c
+index d8fe7ab870b8..f97f44b4b706 100644
+--- a/drivers/scsi/cxlflash/main.c
++++ b/drivers/scsi/cxlflash/main.c
+@@ -946,9 +946,9 @@ static void cxlflash_remove(struct pci_dev *pdev)
+ 		return;
+ 	}
+ 
+-	/* If a Task Management Function is active, wait for it to complete
+-	 * before continuing with remove.
+-	 */
++	/* Yield to running recovery threads before continuing with remove */
++	wait_event(cfg->reset_waitq, cfg->state != STATE_RESET &&
++				     cfg->state != STATE_PROBING);
+ 	spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
+ 	if (cfg->tmf_active)
+ 		wait_event_interruptible_lock_irq(cfg->tmf_waitq,
+@@ -1303,7 +1303,10 @@ static void afu_err_intr_init(struct afu *afu)
+ 	for (i = 0; i < afu->num_hwqs; i++) {
+ 		hwq = get_hwq(afu, i);
+ 
+-		writeq_be(SISL_MSI_SYNC_ERROR, &hwq->host_map->ctx_ctrl);
++		reg = readq_be(&hwq->host_map->ctx_ctrl);
++		WARN_ON((reg & SISL_CTX_CTRL_LISN_MASK) != 0);
++		reg |= SISL_MSI_SYNC_ERROR;
++		writeq_be(reg, &hwq->host_map->ctx_ctrl);
+ 		writeq_be(SISL_ISTATUS_MASK, &hwq->host_map->intr_mask);
+ 	}
+ }
+diff --git a/drivers/scsi/cxlflash/sislite.h b/drivers/scsi/cxlflash/sislite.h
+index bedf1ce2f33c..d8940f1ae219 100644
+--- a/drivers/scsi/cxlflash/sislite.h
++++ b/drivers/scsi/cxlflash/sislite.h
+@@ -284,6 +284,7 @@ struct sisl_host_map {
+ 	__be64 cmd_room;
+ 	__be64 ctx_ctrl;	/* least significant byte or b56:63 is LISN# */
+ #define SISL_CTX_CTRL_UNMAP_SECTOR	0x8000000000000000ULL /* b0 */
++#define SISL_CTX_CTRL_LISN_MASK		(0xFFULL)
+ 	__be64 mbox_w;		/* restricted use */
+ 	__be64 sq_start;	/* Submission Queue (R/W): write sequence and */
+ 	__be64 sq_end;		/* inclusion semantics are the same as RRQ    */
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 6f3e5ba6b472..3d3aa47bab69 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -348,10 +348,11 @@ struct hisi_sas_err_record_v3 {
+ #define DIR_TO_DEVICE 2
+ #define DIR_RESERVED 3
+ 
+-#define CMD_IS_UNCONSTRAINT(cmd) \
+-	((cmd == ATA_CMD_READ_LOG_EXT) || \
+-	(cmd == ATA_CMD_READ_LOG_DMA_EXT) || \
+-	(cmd == ATA_CMD_DEV_RESET))
++#define FIS_CMD_IS_UNCONSTRAINED(fis) \
++	((fis.command == ATA_CMD_READ_LOG_EXT) || \
++	(fis.command == ATA_CMD_READ_LOG_DMA_EXT) || \
++	((fis.command == ATA_CMD_DEV_RESET) && \
++	((fis.control & ATA_SRST) != 0)))
+ 
+ static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off)
+ {
+@@ -1046,7 +1047,7 @@ static int prep_ata_v3_hw(struct hisi_hba *hisi_hba,
+ 		<< CMD_HDR_FRAME_TYPE_OFF;
+ 	dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF;
+ 
+-	if (CMD_IS_UNCONSTRAINT(task->ata_task.fis.command))
++	if (FIS_CMD_IS_UNCONSTRAINED(task->ata_task.fis))
+ 		dw1 |= 1 << CMD_HDR_UNCON_CMD_OFF;
+ 
+ 	hdr->dw1 = cpu_to_le32(dw1);
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index 7195cff51d4c..9b6f5d024dba 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4199,6 +4199,9 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	int irq, i, j;
+ 	int error = -ENODEV;
+ 
++	if (hba_count >= MAX_CONTROLLERS)
++		goto out;
++
+ 	if (pci_enable_device(pdev))
+ 		goto out;
+ 	pci_set_master(pdev);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index f4d988dd1e9d..35497abb0e81 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -2981,6 +2981,9 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,
+ 		pRAID_Context->timeout_value = cpu_to_le16(os_timeout_value);
+ 		pRAID_Context->virtual_disk_tgt_id = cpu_to_le16(device_id);
+ 	} else {
++		if (os_timeout_value)
++			os_timeout_value++;
++
+ 		/* system pd Fast Path */
+ 		io_request->Function = MPI2_FUNCTION_SCSI_IO_REQUEST;
+ 		timeout_limit = (scmd->device->type == TYPE_DISK) ?
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 284ccb566b19..5015b8fbbfc5 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1647,6 +1647,15 @@ static int qedf_vport_destroy(struct fc_vport *vport)
+ 	struct Scsi_Host *shost = vport_to_shost(vport);
+ 	struct fc_lport *n_port = shost_priv(shost);
+ 	struct fc_lport *vn_port = vport->dd_data;
++	struct qedf_ctx *qedf = lport_priv(vn_port);
++
++	if (!qedf) {
++		QEDF_ERR(NULL, "qedf is NULL.\n");
++		goto out;
++	}
++
++	/* Set unloading bit on vport qedf_ctx to prevent more I/O */
++	set_bit(QEDF_UNLOADING, &qedf->flags);
+ 
+ 	mutex_lock(&n_port->lp_mutex);
+ 	list_del(&vn_port->list);
+@@ -1673,6 +1682,7 @@ static int qedf_vport_destroy(struct fc_vport *vport)
+ 	if (vn_port->host)
+ 		scsi_host_put(vn_port->host);
+ 
++out:
+ 	return 0;
+ }
+ 
+diff --git a/drivers/scsi/scsi_dh.c b/drivers/scsi/scsi_dh.c
+index 188f30572aa1..5a58cbf3a75d 100644
+--- a/drivers/scsi/scsi_dh.c
++++ b/drivers/scsi/scsi_dh.c
+@@ -58,7 +58,10 @@ static const struct scsi_dh_blist scsi_dh_blist[] = {
+ 	{"IBM", "3526",			"rdac", },
+ 	{"IBM", "3542",			"rdac", },
+ 	{"IBM", "3552",			"rdac", },
+-	{"SGI", "TP9",			"rdac", },
++	{"SGI", "TP9300",		"rdac", },
++	{"SGI", "TP9400",		"rdac", },
++	{"SGI", "TP9500",		"rdac", },
++	{"SGI", "TP9700",		"rdac", },
+ 	{"SGI", "IS",			"rdac", },
+ 	{"STK", "OPENstorage",		"rdac", },
+ 	{"STK", "FLEXLINE 380",		"rdac", },
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 00e79057f870..15c394d95445 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -4969,6 +4969,7 @@ static void ufshcd_exception_event_handler(struct work_struct *work)
+ 	hba = container_of(work, struct ufs_hba, eeh_work);
+ 
+ 	pm_runtime_get_sync(hba->dev);
++	scsi_block_requests(hba->host);
+ 	err = ufshcd_get_ee_status(hba, &status);
+ 	if (err) {
+ 		dev_err(hba->dev, "%s: failed to get exception status %d\n",
+@@ -4982,6 +4983,7 @@ static void ufshcd_exception_event_handler(struct work_struct *work)
+ 		ufshcd_bkops_exception_event_handler(hba);
+ 
+ out:
++	scsi_unblock_requests(hba->host);
+ 	pm_runtime_put_sync(hba->dev);
+ 	return;
+ }
+@@ -6799,9 +6801,16 @@ static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+ 	if (list_empty(head))
+ 		goto out;
+ 
+-	ret = ufshcd_vops_setup_clocks(hba, on, PRE_CHANGE);
+-	if (ret)
+-		return ret;
++	/*
++	 * vendor specific setup_clocks ops may depend on clocks managed by
++	 * this standard driver hence call the vendor specific setup_clocks
++	 * before disabling the clocks managed here.
++	 */
++	if (!on) {
++		ret = ufshcd_vops_setup_clocks(hba, on, PRE_CHANGE);
++		if (ret)
++			return ret;
++	}
+ 
+ 	list_for_each_entry(clki, head, list) {
+ 		if (!IS_ERR_OR_NULL(clki->clk)) {
+@@ -6825,9 +6834,16 @@ static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
+ 		}
+ 	}
+ 
+-	ret = ufshcd_vops_setup_clocks(hba, on, POST_CHANGE);
+-	if (ret)
+-		return ret;
++	/*
++	 * vendor specific setup_clocks ops may depend on clocks managed by
++	 * this standard driver hence call the vendor specific setup_clocks
++	 * after enabling the clocks managed here.
++	 */
++	if (on) {
++		ret = ufshcd_vops_setup_clocks(hba, on, POST_CHANGE);
++		if (ret)
++			return ret;
++	}
+ 
+ out:
+ 	if (ret) {
+diff --git a/drivers/soc/imx/gpcv2.c b/drivers/soc/imx/gpcv2.c
+index afc7ecc3c187..f4e3bd40c72e 100644
+--- a/drivers/soc/imx/gpcv2.c
++++ b/drivers/soc/imx/gpcv2.c
+@@ -155,7 +155,7 @@ static int imx7_gpc_pu_pgc_sw_pdn_req(struct generic_pm_domain *genpd)
+ 	return imx7_gpc_pu_pgc_sw_pxx_req(genpd, false);
+ }
+ 
+-static struct imx7_pgc_domain imx7_pgc_domains[] = {
++static const struct imx7_pgc_domain imx7_pgc_domains[] = {
+ 	[IMX7_POWER_DOMAIN_MIPI_PHY] = {
+ 		.genpd = {
+ 			.name      = "mipi-phy",
+@@ -321,11 +321,6 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
+ 			continue;
+ 		}
+ 
+-		domain = &imx7_pgc_domains[domain_index];
+-		domain->regmap = regmap;
+-		domain->genpd.power_on  = imx7_gpc_pu_pgc_sw_pup_req;
+-		domain->genpd.power_off = imx7_gpc_pu_pgc_sw_pdn_req;
+-
+ 		pd_pdev = platform_device_alloc("imx7-pgc-domain",
+ 						domain_index);
+ 		if (!pd_pdev) {
+@@ -334,7 +329,20 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
+ 			return -ENOMEM;
+ 		}
+ 
+-		pd_pdev->dev.platform_data = domain;
++		ret = platform_device_add_data(pd_pdev,
++					       &imx7_pgc_domains[domain_index],
++					       sizeof(imx7_pgc_domains[domain_index]));
++		if (ret) {
++			platform_device_put(pd_pdev);
++			of_node_put(np);
++			return ret;
++		}
++
++		domain = pd_pdev->dev.platform_data;
++		domain->regmap = regmap;
++		domain->genpd.power_on  = imx7_gpc_pu_pgc_sw_pup_req;
++		domain->genpd.power_off = imx7_gpc_pu_pgc_sw_pdn_req;
++
+ 		pd_pdev->dev.parent = dev;
+ 		pd_pdev->dev.of_node = np;
+ 
+diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
+index 321982277697..938ca41c56cd 100644
+--- a/drivers/soc/qcom/qmi_interface.c
++++ b/drivers/soc/qcom/qmi_interface.c
+@@ -639,10 +639,11 @@ int qmi_handle_init(struct qmi_handle *qmi, size_t recv_buf_size,
+ 	if (ops)
+ 		qmi->ops = *ops;
+ 
++	/* Make room for the header */
++	recv_buf_size += sizeof(struct qmi_header);
++	/* Must also be sufficient to hold a control packet */
+ 	if (recv_buf_size < sizeof(struct qrtr_ctrl_pkt))
+ 		recv_buf_size = sizeof(struct qrtr_ctrl_pkt);
+-	else
+-		recv_buf_size += sizeof(struct qmi_header);
+ 
+ 	qmi->recv_buf_size = recv_buf_size;
+ 	qmi->recv_buf = kzalloc(recv_buf_size, GFP_KERNEL);
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index 0b94d62fad2b..493865977e3d 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -362,13 +362,8 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ 	cached = phdr_to_last_cached_entry(phdr);
+ 
+ 	while (hdr < end) {
+-		if (hdr->canary != SMEM_PRIVATE_CANARY) {
+-			dev_err(smem->dev,
+-				"Found invalid canary in hosts %d:%d partition\n",
+-				phdr->host0, phdr->host1);
+-			return -EINVAL;
+-		}
+-
++		if (hdr->canary != SMEM_PRIVATE_CANARY)
++			goto bad_canary;
+ 		if (le16_to_cpu(hdr->item) == item)
+ 			return -EEXIST;
+ 
+@@ -397,6 +392,11 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ 	le32_add_cpu(&phdr->offset_free_uncached, alloc_size);
+ 
+ 	return 0;
++bad_canary:
++	dev_err(smem->dev, "Found invalid canary in hosts %hu:%hu partition\n",
++		le16_to_cpu(phdr->host0), le16_to_cpu(phdr->host1));
++
++	return -EINVAL;
+ }
+ 
+ static int qcom_smem_alloc_global(struct qcom_smem *smem,
+@@ -560,8 +560,8 @@ static void *qcom_smem_get_private(struct qcom_smem *smem,
+ 	return ERR_PTR(-ENOENT);
+ 
+ invalid_canary:
+-	dev_err(smem->dev, "Found invalid canary in hosts %d:%d partition\n",
+-			phdr->host0, phdr->host1);
++	dev_err(smem->dev, "Found invalid canary in hosts %hu:%hu partition\n",
++			le16_to_cpu(phdr->host0), le16_to_cpu(phdr->host1));
+ 
+ 	return ERR_PTR(-EINVAL);
+ }
+@@ -695,9 +695,10 @@ static u32 qcom_smem_get_item_count(struct qcom_smem *smem)
+ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ {
+ 	struct smem_partition_header *header;
+-	struct smem_ptable_entry *entry = NULL;
++	struct smem_ptable_entry *entry;
+ 	struct smem_ptable *ptable;
+ 	u32 host0, host1, size;
++	bool found = false;
+ 	int i;
+ 
+ 	ptable = qcom_smem_get_ptable(smem);
+@@ -709,11 +710,13 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ 		host0 = le16_to_cpu(entry->host0);
+ 		host1 = le16_to_cpu(entry->host1);
+ 
+-		if (host0 == SMEM_GLOBAL_HOST && host0 == host1)
++		if (host0 == SMEM_GLOBAL_HOST && host0 == host1) {
++			found = true;
+ 			break;
++		}
+ 	}
+ 
+-	if (!entry) {
++	if (!found) {
+ 		dev_err(smem->dev, "Missing entry for global partition\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index d9fcdb592b39..3e3d12ce4587 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -559,22 +559,28 @@ EXPORT_SYMBOL(tegra_powergate_remove_clamping);
+ int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk,
+ 				      struct reset_control *rst)
+ {
+-	struct tegra_powergate pg;
++	struct tegra_powergate *pg;
+ 	int err;
+ 
+ 	if (!tegra_powergate_is_available(id))
+ 		return -EINVAL;
+ 
+-	pg.id = id;
+-	pg.clks = &clk;
+-	pg.num_clks = 1;
+-	pg.reset = rst;
+-	pg.pmc = pmc;
++	pg = kzalloc(sizeof(*pg), GFP_KERNEL);
++	if (!pg)
++		return -ENOMEM;
+ 
+-	err = tegra_powergate_power_up(&pg, false);
++	pg->id = id;
++	pg->clks = &clk;
++	pg->num_clks = 1;
++	pg->reset = rst;
++	pg->pmc = pmc;
++
++	err = tegra_powergate_power_up(pg, false);
+ 	if (err)
+ 		pr_err("failed to turn on partition %d: %d\n", id, err);
+ 
++	kfree(pg);
++
+ 	return err;
+ }
+ EXPORT_SYMBOL(tegra_powergate_sequence_power_up);
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index 5c82910e3480..7fe4488ace57 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -574,10 +574,15 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 		master->max_speed_hz = rate >> 2;
+ 
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+-	if (!ret)
+-		return 0;
++	if (ret) {
++		dev_err(&pdev->dev, "spi master registration failed\n");
++		goto out_clk;
++	}
+ 
+-	dev_err(&pdev->dev, "spi master registration failed\n");
++	return 0;
++
++out_clk:
++	clk_disable_unprepare(spicc->core);
+ 
+ out_master:
+ 	spi_master_put(master);
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index baa3a9fa2638..92e57e35418b 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -1260,8 +1260,6 @@ static int s3c64xx_spi_resume(struct device *dev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	s3c64xx_spi_hwinit(sdd, sdd->port_id);
+-
+ 	return spi_master_resume(master);
+ }
+ #endif /* CONFIG_PM_SLEEP */
+@@ -1299,6 +1297,8 @@ static int s3c64xx_spi_runtime_resume(struct device *dev)
+ 	if (ret != 0)
+ 		goto err_disable_src_clk;
+ 
++	s3c64xx_spi_hwinit(sdd, sdd->port_id);
++
+ 	return 0;
+ 
+ err_disable_src_clk:
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 8171eedbfc90..c75641b9df79 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -564,14 +564,16 @@ static int sh_msiof_spi_setup(struct spi_device *spi)
+ 
+ 	/* Configure native chip select mode/polarity early */
+ 	clr = MDR1_SYNCMD_MASK;
+-	set = MDR1_TRMD | TMDR1_PCON | MDR1_SYNCMD_SPI;
++	set = MDR1_SYNCMD_SPI;
+ 	if (spi->mode & SPI_CS_HIGH)
+ 		clr |= BIT(MDR1_SYNCAC_SHIFT);
+ 	else
+ 		set |= BIT(MDR1_SYNCAC_SHIFT);
+ 	pm_runtime_get_sync(&p->pdev->dev);
+ 	tmp = sh_msiof_read(p, TMDR1) & ~clr;
+-	sh_msiof_write(p, TMDR1, tmp | set);
++	sh_msiof_write(p, TMDR1, tmp | set | MDR1_TRMD | TMDR1_PCON);
++	tmp = sh_msiof_read(p, RMDR1) & ~clr;
++	sh_msiof_write(p, RMDR1, tmp | set);
+ 	pm_runtime_put(&p->pdev->dev);
+ 	p->native_cs_high = spi->mode & SPI_CS_HIGH;
+ 	p->native_cs_inited = true;
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 7b213faa0a2b..91e76c776037 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1222,6 +1222,7 @@ static void __spi_pump_messages(struct spi_controller *ctlr, bool in_kthread)
+ 	if (!was_busy && ctlr->auto_runtime_pm) {
+ 		ret = pm_runtime_get_sync(ctlr->dev.parent);
+ 		if (ret < 0) {
++			pm_runtime_put_noidle(ctlr->dev.parent);
+ 			dev_err(&ctlr->dev, "Failed to power device: %d\n",
+ 				ret);
+ 			mutex_unlock(&ctlr->io_mutex);
+diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c
+index b8f55a11ee1c..7391bba405ae 100644
+--- a/drivers/staging/ks7010/ks7010_sdio.c
++++ b/drivers/staging/ks7010/ks7010_sdio.c
+@@ -657,8 +657,11 @@ static int ks7010_upload_firmware(struct ks_sdio_card *card)
+ 
+ 	/* Firmware running ? */
+ 	ret = ks7010_sdio_readb(priv, GCR_A, &byte);
++	if (ret)
++		goto release_host_and_free;
+ 	if (byte == GCR_A_RUN) {
+ 		netdev_dbg(priv->net_dev, "MAC firmware running ...\n");
++		ret = -EBUSY;
+ 		goto release_host_and_free;
+ 	}
+ 
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+index 7ae2955c4db6..355c81651a65 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+@@ -1702,7 +1702,7 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
+ 				return 0;
+ 			}
+ 			spin_unlock(&fps->fps_lock);
+-			rc = -EBUSY;
++			rc = -EAGAIN;
+ 		}
+ 
+ 		spin_lock(&fps->fps_lock);
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+index 6690a6cd4e34..5828ee96d74c 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+@@ -48,7 +48,7 @@ static int kiblnd_init_rdma(struct kib_conn *conn, struct kib_tx *tx, int type,
+ 			    __u64 dstcookie);
+ static void kiblnd_queue_tx_locked(struct kib_tx *tx, struct kib_conn *conn);
+ static void kiblnd_queue_tx(struct kib_tx *tx, struct kib_conn *conn);
+-static void kiblnd_unmap_tx(struct lnet_ni *ni, struct kib_tx *tx);
++static void kiblnd_unmap_tx(struct kib_tx *tx);
+ static void kiblnd_check_sends_locked(struct kib_conn *conn);
+ 
+ static void
+@@ -66,7 +66,7 @@ kiblnd_tx_done(struct lnet_ni *ni, struct kib_tx *tx)
+ 	LASSERT(!tx->tx_waiting);	      /* mustn't be awaiting peer response */
+ 	LASSERT(tx->tx_pool);
+ 
+-	kiblnd_unmap_tx(ni, tx);
++	kiblnd_unmap_tx(tx);
+ 
+ 	/* tx may have up to 2 lnet msgs to finalise */
+ 	lntmsg[0] = tx->tx_lntmsg[0]; tx->tx_lntmsg[0] = NULL;
+@@ -591,13 +591,9 @@ kiblnd_fmr_map_tx(struct kib_net *net, struct kib_tx *tx, struct kib_rdma_desc *
+ 	return 0;
+ }
+ 
+-static void kiblnd_unmap_tx(struct lnet_ni *ni, struct kib_tx *tx)
++static void kiblnd_unmap_tx(struct kib_tx *tx)
+ {
+-	struct kib_net *net = ni->ni_data;
+-
+-	LASSERT(net);
+-
+-	if (net->ibn_fmr_ps)
++	if (tx->fmr.fmr_pfmr || tx->fmr.fmr_frd)
+ 		kiblnd_fmr_pool_unmap(&tx->fmr, tx->tx_status);
+ 
+ 	if (tx->tx_nfrags) {
+@@ -1290,11 +1286,6 @@ kiblnd_connect_peer(struct kib_peer *peer)
+ 		goto failed2;
+ 	}
+ 
+-	LASSERT(cmid->device);
+-	CDEBUG(D_NET, "%s: connection bound to %s:%pI4h:%s\n",
+-	       libcfs_nid2str(peer->ibp_nid), dev->ibd_ifname,
+-	       &dev->ibd_ifip, cmid->device->name);
+-
+ 	return;
+ 
+  failed2:
+@@ -2996,8 +2987,19 @@ kiblnd_cm_callback(struct rdma_cm_id *cmid, struct rdma_cm_event *event)
+ 		} else {
+ 			rc = rdma_resolve_route(
+ 				cmid, *kiblnd_tunables.kib_timeout * 1000);
+-			if (!rc)
++			if (!rc) {
++				struct kib_net *net = peer->ibp_ni->ni_data;
++				struct kib_dev *dev = net->ibn_dev;
++
++				CDEBUG(D_NET, "%s: connection bound to "\
++				       "%s:%pI4h:%s\n",
++				       libcfs_nid2str(peer->ibp_nid),
++				       dev->ibd_ifname,
++				       &dev->ibd_ifip, cmid->device->name);
++
+ 				return 0;
++			}
++
+ 			/* Can't initiate route resolution */
+ 			CERROR("Can't resolve route for %s: %d\n",
+ 			       libcfs_nid2str(peer->ibp_nid), rc);
+diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+index 95bea351d21d..59d6259f2c14 100644
+--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
++++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+@@ -1565,8 +1565,10 @@ struct ldlm_lock *ldlm_lock_create(struct ldlm_namespace *ns,
+ 		return ERR_CAST(res);
+ 
+ 	lock = ldlm_lock_new(res);
+-	if (!lock)
++	if (!lock) {
++		ldlm_resource_putref(res);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	lock->l_req_mode = mode;
+ 	lock->l_ast_data = data;
+@@ -1609,6 +1611,8 @@ out:
+ 	return ERR_PTR(rc);
+ }
+ 
++
++
+ /**
+  * Enqueue (request) a lock.
+  * On the client this is called from ldlm_cli_enqueue_fini
+diff --git a/drivers/staging/lustre/lustre/llite/xattr.c b/drivers/staging/lustre/lustre/llite/xattr.c
+index 2d78432963dc..5caccfef9c62 100644
+--- a/drivers/staging/lustre/lustre/llite/xattr.c
++++ b/drivers/staging/lustre/lustre/llite/xattr.c
+@@ -94,7 +94,11 @@ ll_xattr_set_common(const struct xattr_handler *handler,
+ 	__u64 valid;
+ 	int rc;
+ 
+-	if (flags == XATTR_REPLACE) {
++	/* When setxattr() is called with a size of 0 the value is
++	 * unconditionally replaced by "". When removexattr() is
++	 * called we get a NULL value and XATTR_REPLACE for flags.
++	 */
++	if (!value && flags == XATTR_REPLACE) {
+ 		ll_stats_ops_tally(ll_i2sbi(inode), LPROC_LL_REMOVEXATTR, 1);
+ 		valid = OBD_MD_FLXATTRRM;
+ 	} else {
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c b/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
+index c0849299d592..bba3d1745908 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
+@@ -397,14 +397,13 @@ static long __ov2680_set_exposure(struct v4l2_subdev *sd, int coarse_itg,
+ {
+ 	struct i2c_client *client = v4l2_get_subdevdata(sd);
+ 	struct ov2680_device *dev = to_ov2680_sensor(sd);
+-	u16 vts,hts;
++	u16 vts;
+ 	int ret,exp_val;
+ 
+ 	dev_dbg(&client->dev,
+ 		"+++++++__ov2680_set_exposure coarse_itg %d, gain %d, digitgain %d++\n",
+ 		coarse_itg, gain, digitgain);
+ 
+-	hts = ov2680_res[dev->fmt_idx].pixels_per_line;
+ 	vts = ov2680_res[dev->fmt_idx].lines_per_frame;
+ 
+ 	/* group hold */
+@@ -1185,7 +1184,8 @@ static int ov2680_detect(struct i2c_client *client)
+ 					OV2680_SC_CMMN_SUB_ID, &high);
+ 	revision = (u8) high & 0x0f;
+ 
+-	dev_info(&client->dev, "sensor_revision id = 0x%x\n", id);
++	dev_info(&client->dev, "sensor_revision id = 0x%x, rev= %d\n",
++		 id, revision);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/staging/media/atomisp/i2c/gc2235.h b/drivers/staging/media/atomisp/i2c/gc2235.h
+index 0e805bcfa4d8..54bf7812b27a 100644
+--- a/drivers/staging/media/atomisp/i2c/gc2235.h
++++ b/drivers/staging/media/atomisp/i2c/gc2235.h
+@@ -33,6 +33,11 @@
+ 
+ #include "../include/linux/atomisp_platform.h"
+ 
++/*
++ * FIXME: non-preview resolutions are currently broken
++ */
++#define ENABLE_NON_PREVIEW     0
++
+ /* Defines for register writes and register array processing */
+ #define I2C_MSG_LENGTH		0x2
+ #define I2C_RETRY_COUNT		5
+@@ -284,6 +289,7 @@ static struct gc2235_reg const gc2235_init_settings[] = {
+ /*
+  * Register settings for various resolution
+  */
++#if ENABLE_NON_PREVIEW
+ static struct gc2235_reg const gc2235_1296_736_30fps[] = {
+ 	{ GC2235_8BIT, 0x8b, 0xa0 },
+ 	{ GC2235_8BIT, 0x8c, 0x02 },
+@@ -387,6 +393,7 @@ static struct gc2235_reg const gc2235_960_640_30fps[] = {
+ 	{ GC2235_8BIT, 0xfe, 0x00 }, /* switch to P0 */
+ 	{ GC2235_TOK_TERM, 0, 0 }
+ };
++#endif
+ 
+ static struct gc2235_reg const gc2235_1600_900_30fps[] = {
+ 	{ GC2235_8BIT, 0x8b, 0xa0 },
+@@ -578,7 +585,7 @@ static struct gc2235_resolution gc2235_res_preview[] = {
+  * Disable non-preview configurations until the configuration selection is
+  * improved.
+  */
+-#if 0
++#if ENABLE_NON_PREVIEW
+ static struct gc2235_resolution gc2235_res_still[] = {
+ 	{
+ 		.desc = "gc2235_1600_900_30fps",
+diff --git a/drivers/staging/media/atomisp/i2c/ov2680.h b/drivers/staging/media/atomisp/i2c/ov2680.h
+index cb38e6e79409..58d6be07d986 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2680.h
++++ b/drivers/staging/media/atomisp/i2c/ov2680.h
+@@ -295,6 +295,7 @@ struct ov2680_format {
+ 	};
+ 
+ 
++#if 0 /* None of the definitions below are used currently */
+ 	/*
+ 	 * 176x144 30fps  VBlanking 1lane 10Bit (binning)
+ 	 */
+@@ -513,7 +514,6 @@ struct ov2680_format {
+ 	{OV2680_8BIT, 0x5081, 0x41},
+     {OV2680_TOK_TERM, 0, 0}
+ 	};
+-
+ 	/*
+ 	* 800x600 30fps  VBlanking 1lane 10Bit (binning)
+ 	*/
+@@ -685,6 +685,7 @@ struct ov2680_format {
+     // {OV2680_8BIT, 0x5090, 0x0c},
+ 	{OV2680_TOK_TERM, 0, 0}
+ 	};
++#endif
+ 
+ 	/*
+ 	 *1616x916  30fps  VBlanking 1lane 10bit
+@@ -734,6 +735,7 @@ struct ov2680_format {
+ 	/*
+ 	 * 1612x1212 30fps VBlanking 1lane 10Bit
+ 	 */
++#if 0
+ 	static struct ov2680_reg const ov2680_1616x1082_30fps[] = {
+        {OV2680_8BIT, 0x3086, 0x00},
+        {OV2680_8BIT, 0x3501, 0x48},
+@@ -773,6 +775,7 @@ struct ov2680_format {
+        {OV2680_8BIT, 0x5081, 0x41},
+ 		{OV2680_TOK_TERM, 0, 0}
+         };
++#endif
+ 	/*
+ 	 * 1616x1216 30fps VBlanking 1lane 10Bit
+ 	 */
+diff --git a/drivers/staging/media/atomisp/i2c/ov2722.h b/drivers/staging/media/atomisp/i2c/ov2722.h
+index 757b37613ccc..d99188a5c9d0 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2722.h
++++ b/drivers/staging/media/atomisp/i2c/ov2722.h
+@@ -254,6 +254,7 @@ struct ov2722_write_ctrl {
+ /*
+  * Register settings for various resolution
+  */
++#if 0
+ static struct ov2722_reg const ov2722_QVGA_30fps[] = {
+ 	{OV2722_8BIT, 0x3718, 0x10},
+ 	{OV2722_8BIT, 0x3702, 0x0c},
+@@ -581,6 +582,7 @@ static struct ov2722_reg const ov2722_VGA_30fps[] = {
+ 	{OV2722_8BIT, 0x3509, 0x10},
+ 	{OV2722_TOK_TERM, 0, 0},
+ };
++#endif
+ 
+ static struct ov2722_reg const ov2722_1632_1092_30fps[] = {
+ 	{OV2722_8BIT, 0x3021, 0x03}, /* For stand wait for
+@@ -784,6 +786,7 @@ static struct ov2722_reg const ov2722_1452_1092_30fps[] = {
+ 	{OV2722_8BIT, 0x3509, 0x00},
+ 	{OV2722_TOK_TERM, 0, 0}
+ };
++#if 0
+ static struct ov2722_reg const ov2722_1M3_30fps[] = {
+ 	{OV2722_8BIT, 0x3718, 0x10},
+ 	{OV2722_8BIT, 0x3702, 0x24},
+@@ -890,6 +893,7 @@ static struct ov2722_reg const ov2722_1M3_30fps[] = {
+ 	{OV2722_8BIT, 0x3509, 0x10},
+ 	{OV2722_TOK_TERM, 0, 0},
+ };
++#endif
+ 
+ static struct ov2722_reg const ov2722_1080p_30fps[] = {
+ 	{OV2722_8BIT, 0x3021, 0x03}, /* For stand wait for a whole
+@@ -996,6 +1000,7 @@ static struct ov2722_reg const ov2722_1080p_30fps[] = {
+ 	{OV2722_TOK_TERM, 0, 0}
+ };
+ 
++#if 0 /* Currently unused */
+ static struct ov2722_reg const ov2722_720p_30fps[] = {
+ 	{OV2722_8BIT, 0x3021, 0x03},
+ 	{OV2722_8BIT, 0x3718, 0x10},
+@@ -1095,6 +1100,7 @@ static struct ov2722_reg const ov2722_720p_30fps[] = {
+ 	{OV2722_8BIT, 0x3509, 0x00},
+ 	{OV2722_TOK_TERM, 0, 0},
+ };
++#endif
+ 
+ static struct ov2722_resolution ov2722_res_preview[] = {
+ 	{
+diff --git a/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h b/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h
+index 9058a82455a6..bba99406785e 100644
+--- a/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h
++++ b/drivers/staging/media/atomisp/i2c/ov5693/ov5693.h
+@@ -31,6 +31,12 @@
+ 
+ #include "../../include/linux/atomisp_platform.h"
+ 
++/*
++ * FIXME: non-preview resolutions are currently broken
++ */
++#define ENABLE_NON_PREVIEW	0
++
++
+ #define OV5693_POWER_UP_RETRY_NUM 5
+ 
+ /* Defines for register writes and register array processing */
+@@ -503,6 +509,7 @@ static struct ov5693_reg const ov5693_global_setting[] = {
+ 	{OV5693_TOK_TERM, 0, 0}
+ };
+ 
++#if ENABLE_NON_PREVIEW
+ /*
+  * 654x496 30fps 17ms VBlanking 2lane 10Bit (Scaling)
+  */
+@@ -695,6 +702,7 @@ static struct ov5693_reg const ov5693_736x496[] = {
+ 	{OV5693_8BIT, 0x0100, 0x01},
+ 	{OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+ 
+ /*
+ static struct ov5693_reg const ov5693_736x496[] = {
+@@ -727,6 +735,7 @@ static struct ov5693_reg const ov5693_736x496[] = {
+ /*
+  * 976x556 30fps 8.8ms VBlanking 2lane 10Bit (Scaling)
+  */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_976x556[] = {
+ 	{OV5693_8BIT, 0x3501, 0x7b},
+ 	{OV5693_8BIT, 0x3502, 0x00},
+@@ -819,6 +828,7 @@ static struct ov5693_reg const ov5693_1636p_30fps[] = {
+ 	{OV5693_8BIT, 0x0100, 0x01},
+ 	{OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+ 
+ static struct ov5693_reg const ov5693_1616x1216_30fps[] = {
+ 	{OV5693_8BIT, 0x3501, 0x7b},
+@@ -859,6 +869,7 @@ static struct ov5693_reg const ov5693_1616x1216_30fps[] = {
+ /*
+  * 1940x1096 30fps 8.8ms VBlanking 2lane 10bit (Scaling)
+  */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_1940x1096[] = {
+ 	{OV5693_8BIT, 0x3501, 0x7b},
+ 	{OV5693_8BIT, 0x3502, 0x00},
+@@ -916,6 +927,7 @@ static struct ov5693_reg const ov5693_2592x1456_30fps[] = {
+ 	{OV5693_8BIT, 0x5002, 0x00},
+ 	{OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+ 
+ static struct ov5693_reg const ov5693_2576x1456_30fps[] = {
+ 	{OV5693_8BIT, 0x3501, 0x7b},
+@@ -951,6 +963,7 @@ static struct ov5693_reg const ov5693_2576x1456_30fps[] = {
+ /*
+  * 2592x1944 30fps 0.6ms VBlanking 2lane 10Bit
+  */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_2592x1944_30fps[] = {
+ 	{OV5693_8BIT, 0x3501, 0x7b},
+ 	{OV5693_8BIT, 0x3502, 0x00},
+@@ -977,6 +990,7 @@ static struct ov5693_reg const ov5693_2592x1944_30fps[] = {
+ 	{OV5693_8BIT, 0x0100, 0x01},
+ 	{OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+ 
+ /*
+  * 11:9 Full FOV Output, expected FOV Res: 2346x1920
+@@ -985,6 +999,7 @@ static struct ov5693_reg const ov5693_2592x1944_30fps[] = {
+  *
+  * WA: Left Offset: 8, Hor scal: 64
+  */
++#if ENABLE_NON_PREVIEW
+ static struct ov5693_reg const ov5693_1424x1168_30fps[] = {
+ 	{OV5693_8BIT, 0x3501, 0x3b}, /* long exposure[15:8] */
+ 	{OV5693_8BIT, 0x3502, 0x80}, /* long exposure[7:0] */
+@@ -1019,6 +1034,7 @@ static struct ov5693_reg const ov5693_1424x1168_30fps[] = {
+ 	{OV5693_8BIT, 0x0100, 0x01},
+ 	{OV5693_TOK_TERM, 0, 0}
+ };
++#endif
+ 
+ /*
+  * 3:2 Full FOV Output, expected FOV Res: 2560x1706
+@@ -1151,7 +1167,7 @@ static struct ov5693_resolution ov5693_res_preview[] = {
+  * Disable non-preview configurations until the configuration selection is
+  * improved.
+  */
+-#if 0
++#if ENABLE_NON_PREVIEW
+ struct ov5693_resolution ov5693_res_still[] = {
+ 	{
+ 		.desc = "ov5693_736x496_30fps",
+diff --git a/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c b/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c
+index 44c21813a06e..2d008590e26e 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c
++++ b/drivers/staging/media/atomisp/pci/atomisp2/atomisp_compat_ioctl32.c
+@@ -77,7 +77,7 @@ static int get_v4l2_framebuffer32(struct v4l2_framebuffer *kp,
+ 		get_user(kp->flags, &up->flags))
+ 			return -EFAULT;
+ 
+-	kp->base = compat_ptr(tmp);
++	kp->base = (void __force *)compat_ptr(tmp);
+ 	get_v4l2_pix_format((struct v4l2_pix_format *)&kp->fmt, &up->fmt);
+ 	return 0;
+ }
+@@ -228,10 +228,10 @@ static int get_atomisp_dvs_6axis_config32(struct atomisp_dvs_6axis_config *kp,
+ 		get_user(ycoords_uv, &up->ycoords_uv))
+ 			return -EFAULT;
+ 
+-	kp->xcoords_y = compat_ptr(xcoords_y);
+-	kp->ycoords_y = compat_ptr(ycoords_y);
+-	kp->xcoords_uv = compat_ptr(xcoords_uv);
+-	kp->ycoords_uv = compat_ptr(ycoords_uv);
++	kp->xcoords_y = (void __force *)compat_ptr(xcoords_y);
++	kp->ycoords_y = (void __force *)compat_ptr(ycoords_y);
++	kp->xcoords_uv = (void __force *)compat_ptr(xcoords_uv);
++	kp->ycoords_uv = (void __force *)compat_ptr(ycoords_uv);
+ 	return 0;
+ }
+ 
+@@ -292,7 +292,7 @@ static int get_atomisp_metadata_stat32(struct atomisp_metadata *kp,
+ 			return -EFAULT;
+ 
+ 	kp->data = compat_ptr(data);
+-	kp->effective_width = compat_ptr(effective_width);
++	kp->effective_width = (void __force *)compat_ptr(effective_width);
+ 	return 0;
+ }
+ 
+@@ -356,7 +356,7 @@ static int get_atomisp_metadata_by_type_stat32(
+ 			return -EFAULT;
+ 
+ 	kp->data = compat_ptr(data);
+-	kp->effective_width = compat_ptr(effective_width);
++	kp->effective_width = (void __force *)compat_ptr(effective_width);
+ 	return 0;
+ }
+ 
+@@ -433,7 +433,7 @@ static int get_atomisp_overlay32(struct atomisp_overlay *kp,
+ 		get_user(kp->overlay_start_x, &up->overlay_start_y))
+ 			return -EFAULT;
+ 
+-	kp->frame = compat_ptr(frame);
++	kp->frame = (void __force *)compat_ptr(frame);
+ 	return 0;
+ }
+ 
+@@ -477,7 +477,7 @@ static int get_atomisp_calibration_group32(
+ 		get_user(calb_grp_values, &up->calb_grp_values))
+ 			return -EFAULT;
+ 
+-	kp->calb_grp_values = compat_ptr(calb_grp_values);
++	kp->calb_grp_values = (void __force *)compat_ptr(calb_grp_values);
+ 	return 0;
+ }
+ 
+@@ -699,8 +699,8 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ 			return -EFAULT;
+ 
+ 	while (n >= 0) {
+-		compat_uptr_t *src = (compat_uptr_t *)up + n;
+-		uintptr_t *dst = (uintptr_t *)kp + n;
++		compat_uptr_t __user *src = ((compat_uptr_t __user *)up) + n;
++		uintptr_t *dst = ((uintptr_t *)kp) + n;
+ 
+ 		if (get_user((*dst), src))
+ 			return -EFAULT;
+@@ -747,12 +747,12 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ 				return -EFAULT;
+ 
+-			kp->shading_table = user_ptr + offset;
++			kp->shading_table = (void __force *)user_ptr + offset;
+ 			offset = sizeof(struct atomisp_shading_table);
+ 			if (!kp->shading_table)
+ 				return -EFAULT;
+ 
+-			if (copy_to_user(kp->shading_table,
++			if (copy_to_user((void __user *)kp->shading_table,
+ 					 &karg.shading_table,
+ 					 sizeof(struct atomisp_shading_table)))
+ 				return -EFAULT;
+@@ -773,13 +773,14 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ 				return -EFAULT;
+ 
+-			kp->morph_table = user_ptr + offset;
++			kp->morph_table = (void __force *)user_ptr + offset;
+ 			offset += sizeof(struct atomisp_morph_table);
+ 			if (!kp->morph_table)
+ 				return -EFAULT;
+ 
+-			if (copy_to_user(kp->morph_table, &karg.morph_table,
+-					   sizeof(struct atomisp_morph_table)))
++			if (copy_to_user((void __user *)kp->morph_table,
++					 &karg.morph_table,
++					 sizeof(struct atomisp_morph_table)))
+ 				return -EFAULT;
+ 		}
+ 
+@@ -798,13 +799,14 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ 				return -EFAULT;
+ 
+-			kp->dvs2_coefs = user_ptr + offset;
++			kp->dvs2_coefs = (void __force *)user_ptr + offset;
+ 			offset += sizeof(struct atomisp_dis_coefficients);
+ 			if (!kp->dvs2_coefs)
+ 				return -EFAULT;
+ 
+-			if (copy_to_user(kp->dvs2_coefs, &karg.dvs2_coefs,
+-				sizeof(struct atomisp_dis_coefficients)))
++			if (copy_to_user((void __user *)kp->dvs2_coefs,
++					 &karg.dvs2_coefs,
++					 sizeof(struct atomisp_dis_coefficients)))
+ 				return -EFAULT;
+ 		}
+ 		/* handle dvs 6axis configuration */
+@@ -822,13 +824,14 @@ static int get_atomisp_parameters32(struct atomisp_parameters *kp,
+ #endif
+ 				return -EFAULT;
+ 
+-			kp->dvs_6axis_config = user_ptr + offset;
++			kp->dvs_6axis_config = (void __force *)user_ptr + offset;
+ 			offset += sizeof(struct atomisp_dvs_6axis_config);
+ 			if (!kp->dvs_6axis_config)
+ 				return -EFAULT;
+ 
+-			if (copy_to_user(kp->dvs_6axis_config, &karg.dvs_6axis_config,
+-				sizeof(struct atomisp_dvs_6axis_config)))
++			if (copy_to_user((void __user *)kp->dvs_6axis_config,
++					 &karg.dvs_6axis_config,
++					 sizeof(struct atomisp_dvs_6axis_config)))
+ 				return -EFAULT;
+ 		}
+ 	}
+@@ -887,7 +890,7 @@ static int get_atomisp_sensor_ae_bracketing_lut(
+ 		get_user(lut, &up->lut))
+ 			return -EFAULT;
+ 
+-	kp->lut = compat_ptr(lut);
++	kp->lut = (void __force *)compat_ptr(lut);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/most/cdev/cdev.c b/drivers/staging/most/cdev/cdev.c
+index 4d7fce8731fe..dfa8e4db2239 100644
+--- a/drivers/staging/most/cdev/cdev.c
++++ b/drivers/staging/most/cdev/cdev.c
+@@ -18,6 +18,8 @@
+ #include <linux/idr.h>
+ #include "most/core.h"
+ 
++#define CHRDEV_REGION_SIZE 50
++
+ static struct cdev_component {
+ 	dev_t devno;
+ 	struct ida minor_id;
+@@ -513,7 +515,7 @@ static int __init mod_init(void)
+ 	spin_lock_init(&ch_list_lock);
+ 	ida_init(&comp.minor_id);
+ 
+-	err = alloc_chrdev_region(&comp.devno, 0, 50, "cdev");
++	err = alloc_chrdev_region(&comp.devno, 0, CHRDEV_REGION_SIZE, "cdev");
+ 	if (err < 0)
+ 		goto dest_ida;
+ 	comp.major = MAJOR(comp.devno);
+@@ -523,7 +525,7 @@ static int __init mod_init(void)
+ 	return 0;
+ 
+ free_cdev:
+-	unregister_chrdev_region(comp.devno, 1);
++	unregister_chrdev_region(comp.devno, CHRDEV_REGION_SIZE);
+ dest_ida:
+ 	ida_destroy(&comp.minor_id);
+ 	class_destroy(comp.class);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 5d28fff46557..80f6168f06f6 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -601,6 +601,7 @@ reserve_space(VCHIQ_STATE_T *state, size_t space, int is_blocking)
+ 		}
+ 
+ 		if (tx_pos == (state->slot_queue_available * VCHIQ_SLOT_SIZE)) {
++			up(&state->slot_available_event);
+ 			pr_warn("%s: invalid tx_pos: %d\n", __func__, tx_pos);
+ 			return NULL;
+ 		}
+diff --git a/drivers/thermal/samsung/exynos_tmu.c b/drivers/thermal/samsung/exynos_tmu.c
+index ac83f721db24..d60069b5dc98 100644
+--- a/drivers/thermal/samsung/exynos_tmu.c
++++ b/drivers/thermal/samsung/exynos_tmu.c
+@@ -598,6 +598,7 @@ static int exynos5433_tmu_initialize(struct platform_device *pdev)
+ 		threshold_code = temp_to_code(data, temp);
+ 
+ 		rising_threshold = readl(data->base + rising_reg_offset);
++		rising_threshold &= ~(0xff << j * 8);
+ 		rising_threshold |= (threshold_code << j * 8);
+ 		writel(rising_threshold, data->base + rising_reg_offset);
+ 
+diff --git a/drivers/tty/hvc/hvc_opal.c b/drivers/tty/hvc/hvc_opal.c
+index 2ed07ca6389e..9645c0062a90 100644
+--- a/drivers/tty/hvc/hvc_opal.c
++++ b/drivers/tty/hvc/hvc_opal.c
+@@ -318,7 +318,6 @@ static void udbg_init_opal_common(void)
+ 	udbg_putc = udbg_opal_putc;
+ 	udbg_getc = udbg_opal_getc;
+ 	udbg_getc_poll = udbg_opal_getc_poll;
+-	tb_ticks_per_usec = 0x200; /* Make udelay not suck */
+ }
+ 
+ void __init hvc_opal_init_early(void)
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index 6c7151edd715..b0e2c4847a5d 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -110,16 +110,19 @@ static void pty_unthrottle(struct tty_struct *tty)
+ static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c)
+ {
+ 	struct tty_struct *to = tty->link;
++	unsigned long flags;
+ 
+ 	if (tty->stopped)
+ 		return 0;
+ 
+ 	if (c > 0) {
++		spin_lock_irqsave(&to->port->lock, flags);
+ 		/* Stuff the data into the input queue of the other end */
+ 		c = tty_insert_flip_string(to->port, buf, c);
+ 		/* And shovel */
+ 		if (c)
+ 			tty_flip_buffer_push(to->port);
++		spin_unlock_irqrestore(&to->port->lock, flags);
+ 	}
+ 	return c;
+ }
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 40c2d9878190..c75c1532ca73 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3380,6 +3380,10 @@ static int wait_for_connected(struct usb_device *udev,
+ 	while (delay_ms < 2000) {
+ 		if (status || *portstatus & USB_PORT_STAT_CONNECTION)
+ 			break;
++		if (!port_is_power_on(hub, *portstatus)) {
++			status = -ENODEV;
++			break;
++		}
+ 		msleep(20);
+ 		delay_ms += 20;
+ 		status = hub_port_status(hub, *port1, portstatus, portchange);
+diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c
+index 126991046eb7..0212f0ee8aea 100644
+--- a/drivers/vfio/mdev/mdev_core.c
++++ b/drivers/vfio/mdev/mdev_core.c
+@@ -66,34 +66,6 @@ uuid_le mdev_uuid(struct mdev_device *mdev)
+ }
+ EXPORT_SYMBOL(mdev_uuid);
+ 
+-static int _find_mdev_device(struct device *dev, void *data)
+-{
+-	struct mdev_device *mdev;
+-
+-	if (!dev_is_mdev(dev))
+-		return 0;
+-
+-	mdev = to_mdev_device(dev);
+-
+-	if (uuid_le_cmp(mdev->uuid, *(uuid_le *)data) == 0)
+-		return 1;
+-
+-	return 0;
+-}
+-
+-static bool mdev_device_exist(struct mdev_parent *parent, uuid_le uuid)
+-{
+-	struct device *dev;
+-
+-	dev = device_find_child(parent->dev, &uuid, _find_mdev_device);
+-	if (dev) {
+-		put_device(dev);
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+ /* Should be called holding parent_list_lock */
+ static struct mdev_parent *__find_parent_device(struct device *dev)
+ {
+@@ -221,7 +193,6 @@ int mdev_register_device(struct device *dev, const struct mdev_parent_ops *ops)
+ 	}
+ 
+ 	kref_init(&parent->ref);
+-	mutex_init(&parent->lock);
+ 
+ 	parent->dev = dev;
+ 	parent->ops = ops;
+@@ -297,6 +268,10 @@ static void mdev_device_release(struct device *dev)
+ {
+ 	struct mdev_device *mdev = to_mdev_device(dev);
+ 
++	mutex_lock(&mdev_list_lock);
++	list_del(&mdev->next);
++	mutex_unlock(&mdev_list_lock);
++
+ 	dev_dbg(&mdev->dev, "MDEV: destroying\n");
+ 	kfree(mdev);
+ }
+@@ -304,7 +279,7 @@ static void mdev_device_release(struct device *dev)
+ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
+ {
+ 	int ret;
+-	struct mdev_device *mdev;
++	struct mdev_device *mdev, *tmp;
+ 	struct mdev_parent *parent;
+ 	struct mdev_type *type = to_mdev_type(kobj);
+ 
+@@ -312,21 +287,28 @@ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
+ 	if (!parent)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&parent->lock);
++	mutex_lock(&mdev_list_lock);
+ 
+ 	/* Check for duplicate */
+-	if (mdev_device_exist(parent, uuid)) {
+-		ret = -EEXIST;
+-		goto create_err;
++	list_for_each_entry(tmp, &mdev_list, next) {
++		if (!uuid_le_cmp(tmp->uuid, uuid)) {
++			mutex_unlock(&mdev_list_lock);
++			ret = -EEXIST;
++			goto mdev_fail;
++		}
+ 	}
+ 
+ 	mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ 	if (!mdev) {
++		mutex_unlock(&mdev_list_lock);
+ 		ret = -ENOMEM;
+-		goto create_err;
++		goto mdev_fail;
+ 	}
+ 
+ 	memcpy(&mdev->uuid, &uuid, sizeof(uuid_le));
++	list_add(&mdev->next, &mdev_list);
++	mutex_unlock(&mdev_list_lock);
++
+ 	mdev->parent = parent;
+ 	kref_init(&mdev->ref);
+ 
+@@ -338,35 +320,28 @@ int mdev_device_create(struct kobject *kobj, struct device *dev, uuid_le uuid)
+ 	ret = device_register(&mdev->dev);
+ 	if (ret) {
+ 		put_device(&mdev->dev);
+-		goto create_err;
++		goto mdev_fail;
+ 	}
+ 
+ 	ret = mdev_device_create_ops(kobj, mdev);
+ 	if (ret)
+-		goto create_failed;
++		goto create_fail;
+ 
+ 	ret = mdev_create_sysfs_files(&mdev->dev, type);
+ 	if (ret) {
+ 		mdev_device_remove_ops(mdev, true);
+-		goto create_failed;
++		goto create_fail;
+ 	}
+ 
+ 	mdev->type_kobj = kobj;
++	mdev->active = true;
+ 	dev_dbg(&mdev->dev, "MDEV: created\n");
+ 
+-	mutex_unlock(&parent->lock);
+-
+-	mutex_lock(&mdev_list_lock);
+-	list_add(&mdev->next, &mdev_list);
+-	mutex_unlock(&mdev_list_lock);
+-
+-	return ret;
++	return 0;
+ 
+-create_failed:
++create_fail:
+ 	device_unregister(&mdev->dev);
+-
+-create_err:
+-	mutex_unlock(&parent->lock);
++mdev_fail:
+ 	mdev_put_parent(parent);
+ 	return ret;
+ }
+@@ -377,44 +352,39 @@ int mdev_device_remove(struct device *dev, bool force_remove)
+ 	struct mdev_parent *parent;
+ 	struct mdev_type *type;
+ 	int ret;
+-	bool found = false;
+ 
+ 	mdev = to_mdev_device(dev);
+ 
+ 	mutex_lock(&mdev_list_lock);
+ 	list_for_each_entry(tmp, &mdev_list, next) {
+-		if (tmp == mdev) {
+-			found = true;
++		if (tmp == mdev)
+ 			break;
+-		}
+ 	}
+ 
+-	if (found)
+-		list_del(&mdev->next);
++	if (tmp != mdev) {
++		mutex_unlock(&mdev_list_lock);
++		return -ENODEV;
++	}
+ 
+-	mutex_unlock(&mdev_list_lock);
++	if (!mdev->active) {
++		mutex_unlock(&mdev_list_lock);
++		return -EAGAIN;
++	}
+ 
+-	if (!found)
+-		return -ENODEV;
++	mdev->active = false;
++	mutex_unlock(&mdev_list_lock);
+ 
+ 	type = to_mdev_type(mdev->type_kobj);
+ 	parent = mdev->parent;
+-	mutex_lock(&parent->lock);
+ 
+ 	ret = mdev_device_remove_ops(mdev, force_remove);
+ 	if (ret) {
+-		mutex_unlock(&parent->lock);
+-
+-		mutex_lock(&mdev_list_lock);
+-		list_add(&mdev->next, &mdev_list);
+-		mutex_unlock(&mdev_list_lock);
+-
++		mdev->active = true;
+ 		return ret;
+ 	}
+ 
+ 	mdev_remove_sysfs_files(dev, type);
+ 	device_unregister(dev);
+-	mutex_unlock(&parent->lock);
+ 	mdev_put_parent(parent);
+ 
+ 	return 0;
+diff --git a/drivers/vfio/mdev/mdev_private.h b/drivers/vfio/mdev/mdev_private.h
+index a9cefd70a705..b5819b7d7ef7 100644
+--- a/drivers/vfio/mdev/mdev_private.h
++++ b/drivers/vfio/mdev/mdev_private.h
+@@ -20,7 +20,6 @@ struct mdev_parent {
+ 	struct device *dev;
+ 	const struct mdev_parent_ops *ops;
+ 	struct kref ref;
+-	struct mutex lock;
+ 	struct list_head next;
+ 	struct kset *mdev_types_kset;
+ 	struct list_head type_list;
+@@ -34,6 +33,7 @@ struct mdev_device {
+ 	struct kref ref;
+ 	struct list_head next;
+ 	struct kobject *type_kobj;
++	bool active;
+ };
+ 
+ #define to_mdev_device(dev)	container_of(dev, struct mdev_device, dev)
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index 4c27f4be3c3d..aa9e792110e3 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -681,18 +681,23 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
+ 	group = vfio_iommu_group_get(dev);
+ 	if (!group) {
+ 		pr_err("VFIO: No IOMMU group for device %s\n", vdev->name);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_reset;
+ 	}
+ 
+ 	ret = vfio_add_group_dev(dev, &vfio_platform_ops, vdev);
+-	if (ret) {
+-		vfio_iommu_group_put(group, dev);
+-		return ret;
+-	}
++	if (ret)
++		goto put_iommu;
+ 
+ 	mutex_init(&vdev->igate);
+ 
+ 	return 0;
++
++put_iommu:
++	vfio_iommu_group_put(group, dev);
++put_reset:
++	vfio_platform_put_reset(vdev);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(vfio_platform_probe_common);
+ 
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 0586ad5eb590..3e5b17710a4f 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -83,6 +83,7 @@ struct vfio_dma {
+ 	size_t			size;		/* Map size (bytes) */
+ 	int			prot;		/* IOMMU_READ/WRITE */
+ 	bool			iommu_mapped;
++	bool			lock_cap;	/* capable(CAP_IPC_LOCK) */
+ 	struct task_struct	*task;
+ 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
+ };
+@@ -253,29 +254,25 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
+ 	return ret;
+ }
+ 
+-static int vfio_lock_acct(struct task_struct *task, long npage, bool *lock_cap)
++static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
+ {
+ 	struct mm_struct *mm;
+-	bool is_current;
+ 	int ret;
+ 
+ 	if (!npage)
+ 		return 0;
+ 
+-	is_current = (task->mm == current->mm);
+-
+-	mm = is_current ? task->mm : get_task_mm(task);
++	mm = async ? get_task_mm(dma->task) : dma->task->mm;
+ 	if (!mm)
+ 		return -ESRCH; /* process exited */
+ 
+ 	ret = down_write_killable(&mm->mmap_sem);
+ 	if (!ret) {
+ 		if (npage > 0) {
+-			if (lock_cap ? !*lock_cap :
+-			    !has_capability(task, CAP_IPC_LOCK)) {
++			if (!dma->lock_cap) {
+ 				unsigned long limit;
+ 
+-				limit = task_rlimit(task,
++				limit = task_rlimit(dma->task,
+ 						RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+ 
+ 				if (mm->locked_vm + npage > limit)
+@@ -289,7 +286,7 @@ static int vfio_lock_acct(struct task_struct *task, long npage, bool *lock_cap)
+ 		up_write(&mm->mmap_sem);
+ 	}
+ 
+-	if (!is_current)
++	if (async)
+ 		mmput(mm);
+ 
+ 	return ret;
+@@ -398,7 +395,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+  */
+ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 				  long npage, unsigned long *pfn_base,
+-				  bool lock_cap, unsigned long limit)
++				  unsigned long limit)
+ {
+ 	unsigned long pfn = 0;
+ 	long ret, pinned = 0, lock_acct = 0;
+@@ -421,7 +418,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 	 * pages are already counted against the user.
+ 	 */
+ 	if (!rsvd && !vfio_find_vpfn(dma, iova)) {
+-		if (!lock_cap && current->mm->locked_vm + 1 > limit) {
++		if (!dma->lock_cap && current->mm->locked_vm + 1 > limit) {
+ 			put_pfn(*pfn_base, dma->prot);
+ 			pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n", __func__,
+ 					limit << PAGE_SHIFT);
+@@ -447,7 +444,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 		}
+ 
+ 		if (!rsvd && !vfio_find_vpfn(dma, iova)) {
+-			if (!lock_cap &&
++			if (!dma->lock_cap &&
+ 			    current->mm->locked_vm + lock_acct + 1 > limit) {
+ 				put_pfn(pfn, dma->prot);
+ 				pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
+@@ -460,7 +457,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
+ 	}
+ 
+ out:
+-	ret = vfio_lock_acct(current, lock_acct, &lock_cap);
++	ret = vfio_lock_acct(dma, lock_acct, false);
+ 
+ unpin_out:
+ 	if (ret) {
+@@ -491,7 +488,7 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
+ 	}
+ 
+ 	if (do_accounting)
+-		vfio_lock_acct(dma->task, locked - unlocked, NULL);
++		vfio_lock_acct(dma, locked - unlocked, true);
+ 
+ 	return unlocked;
+ }
+@@ -508,7 +505,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
+ 
+ 	ret = vaddr_get_pfn(mm, vaddr, dma->prot, pfn_base);
+ 	if (!ret && do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
+-		ret = vfio_lock_acct(dma->task, 1, NULL);
++		ret = vfio_lock_acct(dma, 1, true);
+ 		if (ret) {
+ 			put_pfn(*pfn_base, dma->prot);
+ 			if (ret == -ENOMEM)
+@@ -535,7 +532,7 @@ static int vfio_unpin_page_external(struct vfio_dma *dma, dma_addr_t iova,
+ 	unlocked = vfio_iova_put_vfio_pfn(dma, vpfn);
+ 
+ 	if (do_accounting)
+-		vfio_lock_acct(dma->task, -unlocked, NULL);
++		vfio_lock_acct(dma, -unlocked, true);
+ 
+ 	return unlocked;
+ }
+@@ -827,7 +824,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ 		unlocked += vfio_sync_unpin(dma, domain, &unmapped_region_list);
+ 
+ 	if (do_accounting) {
+-		vfio_lock_acct(dma->task, -unlocked, NULL);
++		vfio_lock_acct(dma, -unlocked, true);
+ 		return 0;
+ 	}
+ 	return unlocked;
+@@ -1042,14 +1039,12 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
+ 	size_t size = map_size;
+ 	long npage;
+ 	unsigned long pfn, limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+-	bool lock_cap = capable(CAP_IPC_LOCK);
+ 	int ret = 0;
+ 
+ 	while (size) {
+ 		/* Pin a contiguous chunk of memory */
+ 		npage = vfio_pin_pages_remote(dma, vaddr + dma->size,
+-					      size >> PAGE_SHIFT, &pfn,
+-					      lock_cap, limit);
++					      size >> PAGE_SHIFT, &pfn, limit);
+ 		if (npage <= 0) {
+ 			WARN_ON(!npage);
+ 			ret = (int)npage;
+@@ -1124,8 +1119,36 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ 	dma->iova = iova;
+ 	dma->vaddr = vaddr;
+ 	dma->prot = prot;
+-	get_task_struct(current);
+-	dma->task = current;
++
++	/*
++	 * We need to be able to both add to a task's locked memory and test
++	 * against the locked memory limit and we need to be able to do both
++	 * outside of this call path as pinning can be asynchronous via the
++	 * external interfaces for mdev devices.  RLIMIT_MEMLOCK requires a
++	 * task_struct and VM locked pages requires an mm_struct, however
++	 * task_struct and VM locked pages require an mm_struct; however,
++	 * only hold a reference to a task.  We could hold a reference to
++	 * current, however QEMU uses this call path through vCPU threads,
++	 * which can be killed resulting in a NULL mm and failure in the unmap
++	 * path when called via a different thread.  Avoid this problem by
++	 * using the group_leader as threads within the same group require
++	 * both CLONE_THREAD and CLONE_VM and will therefore use the same
++	 * mm_struct.
++	 *
++	 * Previously we also used the task for testing CAP_IPC_LOCK at the
++	 * time of pinning and accounting, however has_capability() makes use
++	 * of real_cred, a copy-on-write field, so we can't guarantee that it
++	 * matches group_leader, or in fact that it might not change by the
++	 * time it's evaluated.  If a process were to call MAP_DMA with
++	 * CAP_IPC_LOCK but later drop it, it shouldn't see different results
++	 * for an iommu_mapped vfio_dma vs an externally mapped one.
++	 * Therefore track CAP_IPC_LOCK in vfio_dma at the
++	 * time of calling MAP_DMA.
++	 */
++	get_task_struct(current->group_leader);
++	dma->task = current->group_leader;
++	dma->lock_cap = capable(CAP_IPC_LOCK);
++
+ 	dma->pfn_list = RB_ROOT;
+ 
+ 	/* Insert zero-sized and grow as we map chunks of it */
+@@ -1160,7 +1183,6 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ 	struct vfio_domain *d;
+ 	struct rb_node *n;
+ 	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+-	bool lock_cap = capable(CAP_IPC_LOCK);
+ 	int ret;
+ 
+ 	/* Arbitrarily pick the first domain in the list for lookups */
+@@ -1207,8 +1229,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
+ 
+ 				npage = vfio_pin_pages_remote(dma, vaddr,
+ 							      n >> PAGE_SHIFT,
+-							      &pfn, lock_cap,
+-							      limit);
++							      &pfn, limit);
+ 				if (npage <= 0) {
+ 					WARN_ON(!npage);
+ 					ret = (int)npage;
+@@ -1485,7 +1506,7 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu)
+ 			if (!is_invalid_reserved_pfn(vpfn->pfn))
+ 				locked++;
+ 		}
+-		vfio_lock_acct(dma->task, locked - unlocked, NULL);
++		vfio_lock_acct(dma, locked - unlocked, true);
+ 	}
+ }
+ 
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index 1c2289ddd555..0fa7d2bd0e48 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -301,14 +301,14 @@ static int pwm_backlight_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * If the GPIO is not known to be already configured as output, that
+-	 * is, if gpiod_get_direction returns either GPIOF_DIR_IN or -EINVAL,
+-	 * change the direction to output and set the GPIO as active.
++	 * is, if gpiod_get_direction returns either 1 or -EINVAL, change the
++	 * direction to output and set the GPIO as active.
+ 	 * Do not force the GPIO to active when it was already output as it
+ 	 * could cause backlight flickering or we would enable the backlight too
+ 	 * early. Leave the decision of the initial backlight state for later.
+ 	 */
+ 	if (pb->enable_gpio &&
+-	    gpiod_get_direction(pb->enable_gpio) != GPIOF_DIR_OUT)
++	    gpiod_get_direction(pb->enable_gpio) != 0)
+ 		gpiod_direction_output(pb->enable_gpio, 1);
+ 
+ 	pb->power_supply = devm_regulator_get(&pdev->dev, "power");
+diff --git a/drivers/watchdog/da9063_wdt.c b/drivers/watchdog/da9063_wdt.c
+index b17ac1bb1f28..87fb9ab603fa 100644
+--- a/drivers/watchdog/da9063_wdt.c
++++ b/drivers/watchdog/da9063_wdt.c
+@@ -99,10 +99,23 @@ static int da9063_wdt_set_timeout(struct watchdog_device *wdd,
+ {
+ 	struct da9063 *da9063 = watchdog_get_drvdata(wdd);
+ 	unsigned int selector;
+-	int ret;
++	int ret = 0;
+ 
+ 	selector = da9063_wdt_timeout_to_sel(timeout);
+-	ret = _da9063_wdt_set_timeout(da9063, selector);
++
++	/*
++	 * There are two cases when a set_timeout() will be called:
++	 * 1. The watchdog is off and someone wants to set the timeout for
++	 *    later use.
++	 * 2. The watchdog is already running and a new timeout value should be
++	 *    set.
++	 *
++	 * The watchdog can't store a non-zero timeout value without
++	 * enabling the watchdog, so the timeout must be buffered by the driver.
++	 */
++	if (watchdog_active(wdd))
++		ret = _da9063_wdt_set_timeout(da9063, selector);
++
+ 	if (ret)
+ 		dev_err(da9063->dev, "Failed to set watchdog timeout (err = %d)\n",
+ 			ret);
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 7ec920e27065..9bfece2e3c88 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -219,7 +219,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
+ 
+ 	ret = bio_iov_iter_get_pages(&bio, iter);
+ 	if (unlikely(ret))
+-		return ret;
++		goto out;
+ 	ret = bio.bi_iter.bi_size;
+ 
+ 	if (iov_iter_rw(iter) == READ) {
+@@ -248,12 +248,13 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
+ 		put_page(bvec->bv_page);
+ 	}
+ 
+-	if (vecs != inline_vecs)
+-		kfree(vecs);
+-
+ 	if (unlikely(bio.bi_status))
+ 		ret = blk_status_to_errno(bio.bi_status);
+ 
++out:
++	if (vecs != inline_vecs)
++		kfree(vecs);
++
+ 	bio_uninit(&bio);
+ 
+ 	return ret;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b54a55497216..bd400cf2756f 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -3160,6 +3160,9 @@ out:
+ 	/* once for the tree */
+ 	btrfs_put_ordered_extent(ordered_extent);
+ 
++	/* Try to release some metadata so we don't get an OOM but don't wait */
++	btrfs_btree_balance_dirty_nodelay(fs_info);
++
+ 	return ret;
+ }
+ 
+@@ -4668,7 +4671,10 @@ delete:
+ 						extent_num_bytes, 0,
+ 						btrfs_header_owner(leaf),
+ 						ino, extent_offset);
+-			BUG_ON(ret);
++			if (ret) {
++				btrfs_abort_transaction(trans, ret);
++				break;
++			}
+ 			if (btrfs_should_throttle_delayed_refs(trans, fs_info))
+ 				btrfs_async_run_delayed_refs(fs_info,
+ 					trans->delayed_ref_updates * 2,
+@@ -5423,13 +5429,18 @@ void btrfs_evict_inode(struct inode *inode)
+ 		trans->block_rsv = rsv;
+ 
+ 		ret = btrfs_truncate_inode_items(trans, root, inode, 0, 0);
+-		if (ret != -ENOSPC && ret != -EAGAIN)
++		if (ret) {
++			trans->block_rsv = &fs_info->trans_block_rsv;
++			btrfs_end_transaction(trans);
++			btrfs_btree_balance_dirty(fs_info);
++			if (ret != -ENOSPC && ret != -EAGAIN) {
++				btrfs_orphan_del(NULL, BTRFS_I(inode));
++				btrfs_free_block_rsv(fs_info, rsv);
++				goto no_delete;
++			}
++		} else {
+ 			break;
+-
+-		trans->block_rsv = &fs_info->trans_block_rsv;
+-		btrfs_end_transaction(trans);
+-		trans = NULL;
+-		btrfs_btree_balance_dirty(fs_info);
++		}
+ 	}
+ 
+ 	btrfs_free_block_rsv(fs_info, rsv);
+@@ -5438,12 +5449,8 @@ void btrfs_evict_inode(struct inode *inode)
+ 	 * Errors here aren't a big deal, it just means we leave orphan items
+ 	 * in the tree.  They will be cleaned up on the next mount.
+ 	 */
+-	if (ret == 0) {
+-		trans->block_rsv = root->orphan_block_rsv;
+-		btrfs_orphan_del(trans, BTRFS_I(inode));
+-	} else {
+-		btrfs_orphan_del(NULL, BTRFS_I(inode));
+-	}
++	trans->block_rsv = root->orphan_block_rsv;
++	btrfs_orphan_del(trans, BTRFS_I(inode));
+ 
+ 	trans->block_rsv = &fs_info->trans_block_rsv;
+ 	if (!(root == fs_info->tree_root ||
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 9fb758d5077a..d0aba20e0843 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2579,6 +2579,21 @@ out:
+ 	spin_unlock(&fs_info->qgroup_lock);
+ }
+ 
++/*
++ * Check if the leaf is the last leaf, which means all node pointers
++ * are at their last position.
++ */
++static bool is_last_leaf(struct btrfs_path *path)
++{
++	int i;
++
++	for (i = 1; i < BTRFS_MAX_LEVEL && path->nodes[i]; i++) {
++		if (path->slots[i] != btrfs_header_nritems(path->nodes[i]) - 1)
++			return false;
++	}
++	return true;
++}
++
+ /*
+  * returns < 0 on error, 0 when more leafs are to be scanned.
+  * returns 1 when done.
+@@ -2592,6 +2607,7 @@ qgroup_rescan_leaf(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
+ 	struct ulist *roots = NULL;
+ 	struct seq_list tree_mod_seq_elem = SEQ_LIST_INIT(tree_mod_seq_elem);
+ 	u64 num_bytes;
++	bool done;
+ 	int slot;
+ 	int ret;
+ 
+@@ -2620,6 +2636,7 @@ qgroup_rescan_leaf(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
+ 		mutex_unlock(&fs_info->qgroup_rescan_lock);
+ 		return ret;
+ 	}
++	done = is_last_leaf(path);
+ 
+ 	btrfs_item_key_to_cpu(path->nodes[0], &found,
+ 			      btrfs_header_nritems(path->nodes[0]) - 1);
+@@ -2666,6 +2683,8 @@ out:
+ 	}
+ 	btrfs_put_tree_mod_seq(fs_info, &tree_mod_seq_elem);
+ 
++	if (done && !ret)
++		ret = 1;
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 8f23a94dab77..2009cea65d89 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3116,8 +3116,11 @@ out_wake_log_root:
+ 	mutex_unlock(&log_root_tree->log_mutex);
+ 
+ 	/*
+-	 * The barrier before waitqueue_active is implied by mutex_unlock
++	 * The barrier before waitqueue_active is needed so all the updates
++	 * above are seen by the woken threads. It might not be necessary, but
++	 * proving that seems to be hard.
+ 	 */
++	smp_mb();
+ 	if (waitqueue_active(&log_root_tree->log_commit_wait[index2]))
+ 		wake_up(&log_root_tree->log_commit_wait[index2]);
+ out:
+@@ -3128,8 +3131,11 @@ out:
+ 	mutex_unlock(&root->log_mutex);
+ 
+ 	/*
+-	 * The barrier before waitqueue_active is implied by mutex_unlock
++	 * The barrier before waitqueue_active is needed so all the updates
++	 * above are seen by the woken threads. It might not be necessary, but
++	 * proving that seems to be hard.
+ 	 */
++	smp_mb();
+ 	if (waitqueue_active(&root->log_commit_wait[index1]))
+ 		wake_up(&root->log_commit_wait[index1]);
+ 	return ret;
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index b33082e6878f..6f9b4cfbc33d 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -45,7 +45,7 @@ static void ceph_put_super(struct super_block *s)
+ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ {
+ 	struct ceph_fs_client *fsc = ceph_inode_to_client(d_inode(dentry));
+-	struct ceph_monmap *monmap = fsc->client->monc.monmap;
++	struct ceph_mon_client *monc = &fsc->client->monc;
+ 	struct ceph_statfs st;
+ 	u64 fsid;
+ 	int err;
+@@ -58,7 +58,7 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	}
+ 
+ 	dout("statfs\n");
+-	err = ceph_monc_do_statfs(&fsc->client->monc, data_pool, &st);
++	err = ceph_monc_do_statfs(monc, data_pool, &st);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -94,8 +94,11 @@ static int ceph_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_namelen = NAME_MAX;
+ 
+ 	/* Must convert the fsid, for consistent values across arches */
+-	fsid = le64_to_cpu(*(__le64 *)(&monmap->fsid)) ^
+-	       le64_to_cpu(*((__le64 *)&monmap->fsid + 1));
++	mutex_lock(&monc->mutex);
++	fsid = le64_to_cpu(*(__le64 *)(&monc->monmap->fsid)) ^
++	       le64_to_cpu(*((__le64 *)&monc->monmap->fsid + 1));
++	mutex_unlock(&monc->mutex);
++
+ 	buf->f_fsid.val[0] = fsid & 0xffffffff;
+ 	buf->f_fsid.val[1] = fsid >> 32;
+ 
+@@ -268,7 +271,7 @@ static int parse_fsopt_token(char *c, void *private)
+ 	case Opt_rasize:
+ 		if (intval < 0)
+ 			return -EINVAL;
+-		fsopt->rasize = ALIGN(intval + PAGE_SIZE - 1, PAGE_SIZE);
++		fsopt->rasize = ALIGN(intval, PAGE_SIZE);
+ 		break;
+ 	case Opt_caps_wanted_delay_min:
+ 		if (intval < 1)
+diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
+index ce654526c0fb..984e190f9b89 100644
+--- a/fs/crypto/crypto.c
++++ b/fs/crypto/crypto.c
+@@ -427,8 +427,17 @@ fail:
+  */
+ static int __init fscrypt_init(void)
+ {
++	/*
++	 * Use an unbound workqueue to allow bios to be decrypted in parallel
++	 * even when they happen to complete on the same CPU.  This sacrifices
++	 * locality, but it's worthwhile since decryption is CPU-intensive.
++	 *
++	 * Also use a high-priority workqueue to prioritize decryption work,
++	 * which blocks reads from completing, over regular application tasks.
++	 */
+ 	fscrypt_read_workqueue = alloc_workqueue("fscrypt_read_queue",
+-							WQ_HIGHPRI, 0);
++						 WQ_UNBOUND | WQ_HIGHPRI,
++						 num_online_cpus());
+ 	if (!fscrypt_read_workqueue)
+ 		goto fail;
+ 
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index f8b5635f0396..e4eab3a38e7c 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -379,6 +379,8 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ 		return -EFSCORRUPTED;
+ 
+ 	ext4_lock_group(sb, block_group);
++	if (buffer_verified(bh))
++		goto verified;
+ 	if (unlikely(!ext4_block_bitmap_csum_verify(sb, block_group,
+ 			desc, bh))) {
+ 		ext4_unlock_group(sb, block_group);
+@@ -401,6 +403,7 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
+ 		return -EFSCORRUPTED;
+ 	}
+ 	set_buffer_verified(bh);
++verified:
+ 	ext4_unlock_group(sb, block_group);
+ 	return 0;
+ }
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 478b8f21c814..257388a8032b 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -91,6 +91,8 @@ static int ext4_validate_inode_bitmap(struct super_block *sb,
+ 		return -EFSCORRUPTED;
+ 
+ 	ext4_lock_group(sb, block_group);
++	if (buffer_verified(bh))
++		goto verified;
+ 	blk = ext4_inode_bitmap(sb, desc);
+ 	if (!ext4_inode_bitmap_csum_verify(sb, block_group, desc, bh,
+ 					   EXT4_INODES_PER_GROUP(sb) / 8)) {
+@@ -108,6 +110,7 @@ static int ext4_validate_inode_bitmap(struct super_block *sb,
+ 		return -EFSBADCRC;
+ 	}
+ 	set_buffer_verified(bh);
++verified:
+ 	ext4_unlock_group(sb, block_group);
+ 	return 0;
+ }
+@@ -1392,7 +1395,10 @@ int ext4_init_inode_table(struct super_block *sb, ext4_group_t group,
+ 			    ext4_itable_unused_count(sb, gdp)),
+ 			    sbi->s_inodes_per_block);
+ 
+-	if ((used_blks < 0) || (used_blks > sbi->s_itb_per_group)) {
++	if ((used_blks < 0) || (used_blks > sbi->s_itb_per_group) ||
++	    ((group == 0) && ((EXT4_INODES_PER_GROUP(sb) -
++			       ext4_itable_unused_count(sb, gdp)) <
++			      EXT4_FIRST_INO(sb)))) {
+ 		ext4_error(sb, "Something is wrong with group %u: "
+ 			   "used itable blocks: %d; "
+ 			   "itable unused count: %u",
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 851bc552d849..716adc635506 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -682,6 +682,10 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
+ 		goto convert;
+ 	}
+ 
++	ret = ext4_journal_get_write_access(handle, iloc.bh);
++	if (ret)
++		goto out;
++
+ 	flags |= AOP_FLAG_NOFS;
+ 
+ 	page = grab_cache_page_write_begin(mapping, 0, flags);
+@@ -710,7 +714,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
+ out_up_read:
+ 	up_read(&EXT4_I(inode)->xattr_sem);
+ out:
+-	if (handle)
++	if (handle && (ret != 1))
+ 		ext4_journal_stop(handle);
+ 	brelse(iloc.bh);
+ 	return ret;
+@@ -752,6 +756,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 
+ 	ext4_write_unlock_xattr(inode, &no_expand);
+ 	brelse(iloc.bh);
++	mark_inode_dirty(inode);
+ out:
+ 	return copied;
+ }
+@@ -898,7 +903,6 @@ retry_journal:
+ 		goto out;
+ 	}
+ 
+-
+ 	page = grab_cache_page_write_begin(mapping, 0, flags);
+ 	if (!page) {
+ 		ret = -ENOMEM;
+@@ -916,6 +920,9 @@ retry_journal:
+ 		if (ret < 0)
+ 			goto out_release_page;
+ 	}
++	ret = ext4_journal_get_write_access(handle, iloc.bh);
++	if (ret)
++		goto out_release_page;
+ 
+ 	up_read(&EXT4_I(inode)->xattr_sem);
+ 	*pagep = page;
+@@ -936,7 +943,6 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ 				  unsigned len, unsigned copied,
+ 				  struct page *page)
+ {
+-	int i_size_changed = 0;
+ 	int ret;
+ 
+ 	ret = ext4_write_inline_data_end(inode, pos, len, copied, page);
+@@ -954,10 +960,8 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ 	 * But it's important to update i_size while still holding page lock:
+ 	 * page writeout could otherwise come in and zero beyond i_size.
+ 	 */
+-	if (pos+copied > inode->i_size) {
++	if (pos+copied > inode->i_size)
+ 		i_size_write(inode, pos+copied);
+-		i_size_changed = 1;
+-	}
+ 	unlock_page(page);
+ 	put_page(page);
+ 
+@@ -967,8 +971,7 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ 	 * ordering of page lock and transaction start for journaling
+ 	 * filesystems.
+ 	 */
+-	if (i_size_changed)
+-		mark_inode_dirty(inode);
++	mark_inode_dirty(inode);
+ 
+ 	return copied;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 06b963d2fc36..afb22e01f009 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1389,9 +1389,10 @@ static int ext4_write_end(struct file *file,
+ 	loff_t old_size = inode->i_size;
+ 	int ret = 0, ret2;
+ 	int i_size_changed = 0;
++	int inline_data = ext4_has_inline_data(inode);
+ 
+ 	trace_ext4_write_end(inode, pos, len, copied);
+-	if (ext4_has_inline_data(inode)) {
++	if (inline_data) {
+ 		ret = ext4_write_inline_data_end(inode, pos, len,
+ 						 copied, page);
+ 		if (ret < 0) {
+@@ -1419,7 +1420,7 @@ static int ext4_write_end(struct file *file,
+ 	 * ordering of page lock and transaction start for journaling
+ 	 * filesystems.
+ 	 */
+-	if (i_size_changed)
++	if (i_size_changed || inline_data)
+ 		ext4_mark_inode_dirty(handle, inode);
+ 
+ 	if (pos + len > inode->i_size && ext4_can_truncate(inode))
+@@ -1493,6 +1494,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 	int partial = 0;
+ 	unsigned from, to;
+ 	int size_changed = 0;
++	int inline_data = ext4_has_inline_data(inode);
+ 
+ 	trace_ext4_journalled_write_end(inode, pos, len, copied);
+ 	from = pos & (PAGE_SIZE - 1);
+@@ -1500,7 +1502,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 
+ 	BUG_ON(!ext4_handle_valid(handle));
+ 
+-	if (ext4_has_inline_data(inode)) {
++	if (inline_data) {
+ 		ret = ext4_write_inline_data_end(inode, pos, len,
+ 						 copied, page);
+ 		if (ret < 0) {
+@@ -1531,7 +1533,7 @@ static int ext4_journalled_write_end(struct file *file,
+ 	if (old_size < pos)
+ 		pagecache_isize_extended(inode, old_size, pos);
+ 
+-	if (size_changed) {
++	if (size_changed || inline_data) {
+ 		ret2 = ext4_mark_inode_dirty(handle, inode);
+ 		if (!ret)
+ 			ret = ret2;
+@@ -2028,11 +2030,7 @@ static int __ext4_journalled_writepage(struct page *page,
+ 	}
+ 
+ 	if (inline_data) {
+-		BUFFER_TRACE(inode_bh, "get write access");
+-		ret = ext4_journal_get_write_access(handle, inode_bh);
+-
+-		err = ext4_handle_dirty_metadata(handle, inode, inode_bh);
+-
++		ret = ext4_mark_inode_dirty(handle, inode);
+ 	} else {
+ 		ret = ext4_walk_page_buffers(handle, page_bufs, 0, len, NULL,
+ 					     do_journal_get_write_access);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 74a6d884ede4..d20cf383f2c1 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2307,7 +2307,7 @@ static int ext4_check_descriptors(struct super_block *sb,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	ext4_fsblk_t first_block = le32_to_cpu(sbi->s_es->s_first_data_block);
+ 	ext4_fsblk_t last_block;
+-	ext4_fsblk_t last_bg_block = sb_block + ext4_bg_num_gdb(sb, 0) + 1;
++	ext4_fsblk_t last_bg_block = sb_block + ext4_bg_num_gdb(sb, 0);
+ 	ext4_fsblk_t block_bitmap;
+ 	ext4_fsblk_t inode_bitmap;
+ 	ext4_fsblk_t inode_table;
+@@ -3106,14 +3106,8 @@ static ext4_group_t ext4_has_uninit_itable(struct super_block *sb)
+ 		if (!gdp)
+ 			continue;
+ 
+-		if (gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED))
+-			continue;
+-		if (group != 0)
++		if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED)))
+ 			break;
+-		ext4_error(sb, "Inode table for bg 0 marked as "
+-			   "needing zeroing");
+-		if (sb_rdonly(sb))
+-			return ngroups;
+ 	}
+ 
+ 	return group;
+@@ -4050,14 +4044,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 			goto failed_mount2;
+ 		}
+ 	}
++	sbi->s_gdb_count = db_count;
+ 	if (!ext4_check_descriptors(sb, logical_sb_block, &first_not_zeroed)) {
+ 		ext4_msg(sb, KERN_ERR, "group descriptors corrupted!");
+ 		ret = -EFSCORRUPTED;
+ 		goto failed_mount2;
+ 	}
+ 
+-	sbi->s_gdb_count = db_count;
+-
+ 	timer_setup(&sbi->s_err_report, print_daily_error_info, 0);
+ 
+ 	/* Register extent status tree shrinker */
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 02237d4d91f5..a9fec79dc3dd 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1745,6 +1745,12 @@ static int __write_data_page(struct page *page, bool *submitted,
+ 	/* we should bypass data pages to proceed the kworkder jobs */
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		mapping_set_error(page->mapping, -EIO);
++		/*
++		 * don't drop any dirty dentry pages, to keep the latest
++		 * directory structure.
++		 */
++		if (S_ISDIR(inode->i_mode))
++			goto redirty_out;
+ 		goto out;
+ 	}
+ 
+@@ -1842,7 +1848,13 @@ out:
+ 
+ redirty_out:
+ 	redirty_page_for_writepage(wbc, page);
+-	if (!err)
++	/*
++	 * pageout() in MM translates EAGAIN, so it calls handle_write_error()
++	 * -> mapping_set_error() -> set_bit(AS_EIO, ...).
++	 * file_write_and_wait_range() will see EIO error, which is critical
++	 * to return value of fsync() followed by atomic_write failure to user.
++	 */
++	if (!err || wbc->for_reclaim)
+ 		return AOP_WRITEPAGE_ACTIVATE;
+ 	unlock_page(page);
+ 	return err;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 20149b8771d9..f6cd5850be75 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1602,7 +1602,7 @@ static inline bool f2fs_has_xattr_block(unsigned int ofs)
+ }
+ 
+ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
+-					struct inode *inode)
++					struct inode *inode, bool cap)
+ {
+ 	if (!inode)
+ 		return true;
+@@ -1615,7 +1615,7 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
+ 	if (!gid_eq(F2FS_OPTION(sbi).s_resgid, GLOBAL_ROOT_GID) &&
+ 					in_group_p(F2FS_OPTION(sbi).s_resgid))
+ 		return true;
+-	if (capable(CAP_SYS_RESOURCE))
++	if (cap && capable(CAP_SYS_RESOURCE))
+ 		return true;
+ 	return false;
+ }
+@@ -1650,7 +1650,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 	avail_user_block_count = sbi->user_block_count -
+ 					sbi->current_reserved_blocks;
+ 
+-	if (!__allow_reserved_blocks(sbi, inode))
++	if (!__allow_reserved_blocks(sbi, inode, true))
+ 		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
+ 
+ 	if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
+@@ -1857,7 +1857,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
+ 	valid_block_count = sbi->total_valid_block_count +
+ 					sbi->current_reserved_blocks + 1;
+ 
+-	if (!__allow_reserved_blocks(sbi, inode))
++	if (!__allow_reserved_blocks(sbi, inode, false))
+ 		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
+ 
+ 	if (unlikely(valid_block_count > sbi->user_block_count)) {
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6b94f19b3fa8..04c95812e5c9 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1670,6 +1670,8 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 
+ 	inode_lock(inode);
+ 
++	down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
++
+ 	if (f2fs_is_atomic_file(inode))
+ 		goto out;
+ 
+@@ -1699,6 +1701,7 @@ inc_stat:
+ 	stat_inc_atomic_write(inode);
+ 	stat_update_max_atomic_write(inode);
+ out:
++	up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+ 	inode_unlock(inode);
+ 	mnt_drop_write_file(filp);
+ 	return ret;
+@@ -1851,9 +1854,11 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 	if (get_user(in, (__u32 __user *)arg))
+ 		return -EFAULT;
+ 
+-	ret = mnt_want_write_file(filp);
+-	if (ret)
+-		return ret;
++	if (in != F2FS_GOING_DOWN_FULLSYNC) {
++		ret = mnt_want_write_file(filp);
++		if (ret)
++			return ret;
++	}
+ 
+ 	switch (in) {
+ 	case F2FS_GOING_DOWN_FULLSYNC:
+@@ -1894,7 +1899,8 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 
+ 	f2fs_update_time(sbi, REQ_TIME);
+ out:
+-	mnt_drop_write_file(filp);
++	if (in != F2FS_GOING_DOWN_FULLSYNC)
++		mnt_drop_write_file(filp);
+ 	return ret;
+ }
+ 
+@@ -2568,7 +2574,9 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
+ 	}
+ 	f2fs_put_page(ipage, 1);
+ 
+-	dquot_initialize(inode);
++	err = dquot_initialize(inode);
++	if (err)
++		goto out_unlock;
+ 
+ 	transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
+ 	if (!IS_ERR(transfer_to[PRJQUOTA])) {
+@@ -2924,6 +2932,8 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 						iov_iter_count(from)) ||
+ 					f2fs_has_inline_data(inode) ||
+ 					f2fs_force_buffered_io(inode, WRITE)) {
++						clear_inode_flag(inode,
++								FI_NO_PREALLOC);
+ 						inode_unlock(inode);
+ 						return -EAGAIN;
+ 				}
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9327411fd93b..6aecdd5b97d0 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -778,9 +778,14 @@ retry:
+ 		set_cold_data(page);
+ 
+ 		err = do_write_data_page(&fio);
+-		if (err == -ENOMEM && is_dirty) {
+-			congestion_wait(BLK_RW_ASYNC, HZ/50);
+-			goto retry;
++		if (err) {
++			clear_cold_data(page);
++			if (err == -ENOMEM) {
++				congestion_wait(BLK_RW_ASYNC, HZ/50);
++				goto retry;
++			}
++			if (is_dirty)
++				set_page_dirty(page);
+ 		}
+ 	}
+ out:
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index cffaf842f4e7..c06489634655 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -230,6 +230,8 @@ static int __revoke_inmem_pages(struct inode *inode,
+ 
+ 		lock_page(page);
+ 
++		f2fs_wait_on_page_writeback(page, DATA, true);
++
+ 		if (recover) {
+ 			struct dnode_of_data dn;
+ 			struct node_info ni;
+@@ -478,6 +480,9 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+ 
+ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
+ {
++	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
++		return;
++
+ 	/* try to shrink extent cache when there is no enough memory */
+ 	if (!available_free_memory(sbi, EXTENT_CACHE))
+ 		f2fs_shrink_extent_tree(sbi, EXTENT_CACHE_SHRINK_NUMBER);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 42d564c5ccd0..cad77fbb1f14 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3063,6 +3063,12 @@ static int __init init_f2fs_fs(void)
+ {
+ 	int err;
+ 
++	if (PAGE_SIZE != F2FS_BLKSIZE) {
++		printk("F2FS not supported on PAGE_SIZE(%lu) != %d\n",
++				PAGE_SIZE, F2FS_BLKSIZE);
++		return -EINVAL;
++	}
++
+ 	f2fs_build_trace_ios();
+ 
+ 	err = init_inodecache();
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index bd15d0b57626..6e70445213e7 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1629,7 +1629,8 @@ int nfs_post_op_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 	nfs_fattr_set_barrier(fattr);
+ 	status = nfs_post_op_update_inode_locked(inode, fattr,
+ 			NFS_INO_INVALID_CHANGE
+-			| NFS_INO_INVALID_CTIME);
++			| NFS_INO_INVALID_CTIME
++			| NFS_INO_REVAL_FORCED);
+ 	spin_unlock(&inode->i_lock);
+ 
+ 	return status;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 409acdda70dd..2d94eb9cd386 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -746,6 +746,13 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ 			slot->slot_nr,
+ 			slot->seq_nr);
+ 		goto out_retry;
++	case -NFS4ERR_RETRY_UNCACHED_REP:
++	case -NFS4ERR_SEQ_FALSE_RETRY:
++		/*
++		 * The server thinks we tried to replay a request.
++		 * Retry the call after bumping the sequence ID.
++		 */
++		goto retry_new_seq;
+ 	case -NFS4ERR_BADSLOT:
+ 		/*
+ 		 * The slot id we used was probably retired. Try again
+@@ -770,10 +777,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ 			goto retry_nowait;
+ 		}
+ 		goto session_recover;
+-	case -NFS4ERR_SEQ_FALSE_RETRY:
+-		if (interrupted)
+-			goto retry_new_seq;
+-		goto session_recover;
+ 	default:
+ 		/* Just update the slot sequence no. */
+ 		slot->seq_done = 1;
+@@ -2804,7 +2807,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ 	if (ret != 0)
+ 		goto out;
+ 
+-	state = nfs4_opendata_to_nfs4_state(opendata);
++	state = _nfs4_opendata_to_nfs4_state(opendata);
+ 	ret = PTR_ERR(state);
+ 	if (IS_ERR(state))
+ 		goto out;
+@@ -2840,6 +2843,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ 			nfs4_schedule_stateid_recovery(server, state);
+ 	}
+ out:
++	nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index ee723aa153a3..b35d55e4851a 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1144,7 +1144,7 @@ _pnfs_return_layout(struct inode *ino)
+ 	LIST_HEAD(tmp_list);
+ 	nfs4_stateid stateid;
+ 	int status = 0;
+-	bool send;
++	bool send, valid_layout;
+ 
+ 	dprintk("NFS: %s for inode %lu\n", __func__, ino->i_ino);
+ 
+@@ -1165,6 +1165,7 @@ _pnfs_return_layout(struct inode *ino)
+ 			goto out_put_layout_hdr;
+ 		spin_lock(&ino->i_lock);
+ 	}
++	valid_layout = pnfs_layout_is_valid(lo);
+ 	pnfs_clear_layoutcommit(ino, &tmp_list);
+ 	pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL, 0);
+ 
+@@ -1178,7 +1179,8 @@ _pnfs_return_layout(struct inode *ino)
+ 	}
+ 
+ 	/* Don't send a LAYOUTRETURN if list was initially empty */
+-	if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags)) {
++	if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) ||
++			!valid_layout) {
+ 		spin_unlock(&ino->i_lock);
+ 		dprintk("NFS: %s no layout segments to return\n", __func__);
+ 		goto out_put_layout_hdr;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fc74d6f46bd5..3b40d1b57613 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4378,8 +4378,11 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 	spin_unlock(&state_lock);
+ 
+ 	if (status)
+-		destroy_unhashed_deleg(dp);
++		goto out_unlock;
++
+ 	return dp;
++out_unlock:
++	vfs_setlease(fp->fi_deleg_file, F_UNLCK, NULL, (void **)&dp);
+ out_clnt_odstate:
+ 	put_clnt_odstate(dp->dl_clnt_odstate);
+ out_stid:
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index cfe535c286c3..59d471025949 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -1585,6 +1585,8 @@ nfsd4_decode_getdeviceinfo(struct nfsd4_compoundargs *argp,
+ 	gdev->gd_maxcount = be32_to_cpup(p++);
+ 	num = be32_to_cpup(p++);
+ 	if (num) {
++		if (num > 1000)
++			goto xdr_error;
+ 		READ_BUF(4 * num);
+ 		gdev->gd_notify_types = be32_to_cpup(p++);
+ 		for (i = 1; i < num; i++) {
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index b0abfe02beab..308d64e72515 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -673,13 +673,13 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
+ 		[ilog2(VM_MERGEABLE)]	= "mg",
+ 		[ilog2(VM_UFFD_MISSING)]= "um",
+ 		[ilog2(VM_UFFD_WP)]	= "uw",
+-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
++#ifdef CONFIG_ARCH_HAS_PKEYS
+ 		/* These come out via ProtectionKey: */
+ 		[ilog2(VM_PKEY_BIT0)]	= "",
+ 		[ilog2(VM_PKEY_BIT1)]	= "",
+ 		[ilog2(VM_PKEY_BIT2)]	= "",
+ 		[ilog2(VM_PKEY_BIT3)]	= "",
+-#endif
++#endif /* CONFIG_ARCH_HAS_PKEYS */
+ 	};
+ 	size_t i;
+ 
+@@ -1259,8 +1259,9 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
+ 		if (pte_swp_soft_dirty(pte))
+ 			flags |= PM_SOFT_DIRTY;
+ 		entry = pte_to_swp_entry(pte);
+-		frame = swp_type(entry) |
+-			(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
++		if (pm->show_pfn)
++			frame = swp_type(entry) |
++				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
+ 		flags |= PM_SWAP;
+ 		if (is_migration_entry(entry))
+ 			page = migration_entry_to_page(entry);
+@@ -1311,11 +1312,14 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ 		else if (is_swap_pmd(pmd)) {
+ 			swp_entry_t entry = pmd_to_swp_entry(pmd);
+-			unsigned long offset = swp_offset(entry);
++			unsigned long offset;
+ 
+-			offset += (addr & ~PMD_MASK) >> PAGE_SHIFT;
+-			frame = swp_type(entry) |
+-				(offset << MAX_SWAPFILES_SHIFT);
++			if (pm->show_pfn) {
++				offset = swp_offset(entry) +
++					((addr & ~PMD_MASK) >> PAGE_SHIFT);
++				frame = swp_type(entry) |
++					(offset << MAX_SWAPFILES_SHIFT);
++			}
+ 			flags |= PM_SWAP;
+ 			if (pmd_swp_soft_dirty(pmd))
+ 				flags |= PM_SOFT_DIRTY;
+@@ -1333,10 +1337,12 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ 			err = add_to_pagemap(addr, &pme, pm);
+ 			if (err)
+ 				break;
+-			if (pm->show_pfn && (flags & PM_PRESENT))
+-				frame++;
+-			else if (flags & PM_SWAP)
+-				frame += (1 << MAX_SWAPFILES_SHIFT);
++			if (pm->show_pfn) {
++				if (flags & PM_PRESENT)
++					frame++;
++				else if (flags & PM_SWAP)
++					frame += (1 << MAX_SWAPFILES_SHIFT);
++			}
+ 		}
+ 		spin_unlock(ptl);
+ 		return err;
+diff --git a/fs/squashfs/cache.c b/fs/squashfs/cache.c
+index 23813c078cc9..0839efa720b3 100644
+--- a/fs/squashfs/cache.c
++++ b/fs/squashfs/cache.c
+@@ -350,6 +350,9 @@ int squashfs_read_metadata(struct super_block *sb, void *buffer,
+ 
+ 	TRACE("Entered squashfs_read_metadata [%llx:%x]\n", *block, *offset);
+ 
++	if (unlikely(length < 0))
++		return -EIO;
++
+ 	while (length) {
+ 		entry = squashfs_cache_get(sb, msblk->block_cache, *block, 0);
+ 		if (entry->error) {
+diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
+index 13d80947bf9e..fcff2e0487fe 100644
+--- a/fs/squashfs/file.c
++++ b/fs/squashfs/file.c
+@@ -194,7 +194,11 @@ static long long read_indexes(struct super_block *sb, int n,
+ 		}
+ 
+ 		for (i = 0; i < blocks; i++) {
+-			int size = le32_to_cpu(blist[i]);
++			int size = squashfs_block_size(blist[i]);
++			if (size < 0) {
++				err = size;
++				goto failure;
++			}
+ 			block += SQUASHFS_COMPRESSED_SIZE_BLOCK(size);
+ 		}
+ 		n -= blocks;
+@@ -367,7 +371,7 @@ static int read_blocklist(struct inode *inode, int index, u64 *block)
+ 			sizeof(size));
+ 	if (res < 0)
+ 		return res;
+-	return le32_to_cpu(size);
++	return squashfs_block_size(size);
+ }
+ 
+ /* Copy data into page cache  */
+diff --git a/fs/squashfs/fragment.c b/fs/squashfs/fragment.c
+index 0ed6edbc5c71..86ad9a4b8c36 100644
+--- a/fs/squashfs/fragment.c
++++ b/fs/squashfs/fragment.c
+@@ -61,9 +61,7 @@ int squashfs_frag_lookup(struct super_block *sb, unsigned int fragment,
+ 		return size;
+ 
+ 	*fragment_block = le64_to_cpu(fragment_entry.start_block);
+-	size = le32_to_cpu(fragment_entry.size);
+-
+-	return size;
++	return squashfs_block_size(fragment_entry.size);
+ }
+ 
+ 
+diff --git a/fs/squashfs/squashfs_fs.h b/fs/squashfs/squashfs_fs.h
+index 24d12fd14177..4e6853f084d0 100644
+--- a/fs/squashfs/squashfs_fs.h
++++ b/fs/squashfs/squashfs_fs.h
+@@ -129,6 +129,12 @@
+ 
+ #define SQUASHFS_COMPRESSED_BLOCK(B)	(!((B) & SQUASHFS_COMPRESSED_BIT_BLOCK))
+ 
++static inline int squashfs_block_size(__le32 raw)
++{
++	u32 size = le32_to_cpu(raw);
++	return (size >> 25) ? -EIO : size;
++}
++
+ /*
+  * Inode number ops.  Inodes consist of a compressed block number, and an
+  * uncompressed offset within that block
+diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
+index 62903bae0221..0bac0c7d0dec 100644
+--- a/include/drm/drm_dp_helper.h
++++ b/include/drm/drm_dp_helper.h
+@@ -478,6 +478,7 @@
+ # define DP_PSR_FRAME_CAPTURE		    (1 << 3)
+ # define DP_PSR_SELECTIVE_UPDATE	    (1 << 4)
+ # define DP_PSR_IRQ_HPD_WITH_CRC_ERRORS     (1 << 5)
++# define DP_PSR_ENABLE_PSR2		    (1 << 6) /* eDP 1.4a */
+ 
+ #define DP_ADAPTER_CTRL			    0x1a0
+ # define DP_ADAPTER_CTRL_FORCE_LOAD_SENSE   (1 << 0)
+diff --git a/include/linux/delayacct.h b/include/linux/delayacct.h
+index 5e335b6203f4..31c865d1842e 100644
+--- a/include/linux/delayacct.h
++++ b/include/linux/delayacct.h
+@@ -29,7 +29,7 @@
+ 
+ #ifdef CONFIG_TASK_DELAY_ACCT
+ struct task_delay_info {
+-	spinlock_t	lock;
++	raw_spinlock_t	lock;
+ 	unsigned int	flags;	/* Private per-task flags */
+ 
+ 	/* For each stat XXX, add following, aligned appropriately
+@@ -124,7 +124,7 @@ static inline void delayacct_blkio_start(void)
+ 
+ static inline void delayacct_blkio_end(struct task_struct *p)
+ {
+-	if (current->delays)
++	if (p->delays)
+ 		__delayacct_blkio_end(p);
+ 	delayacct_clear_flag(DELAYACCT_PF_BLKIO);
+ }
+diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
+index 92f20832fd28..e8ca5e654277 100644
+--- a/include/linux/dma-iommu.h
++++ b/include/linux/dma-iommu.h
+@@ -17,6 +17,7 @@
+ #define __DMA_IOMMU_H
+ 
+ #ifdef __KERNEL__
++#include <linux/types.h>
+ #include <asm/errno.h>
+ 
+ #ifdef CONFIG_IOMMU_DMA
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index d99b71bc2c66..091690119144 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -158,6 +158,15 @@ enum memcg_kmem_state {
+ 	KMEM_ONLINE,
+ };
+ 
++#if defined(CONFIG_SMP)
++struct memcg_padding {
++	char x[0];
++} ____cacheline_internodealigned_in_smp;
++#define MEMCG_PADDING(name)      struct memcg_padding name;
++#else
++#define MEMCG_PADDING(name)
++#endif
++
+ /*
+  * The memory controller data structure. The memory controller controls both
+  * page cache and RSS per cgroup. We would eventually like to provide
+@@ -205,7 +214,6 @@ struct mem_cgroup {
+ 	int		oom_kill_disable;
+ 
+ 	/* memory.events */
+-	atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
+ 	struct cgroup_file events_file;
+ 
+ 	/* protect arrays of thresholds */
+@@ -225,19 +233,26 @@ struct mem_cgroup {
+ 	 * mem_cgroup ? And what type of charges should we move ?
+ 	 */
+ 	unsigned long move_charge_at_immigrate;
++	/* taken only while moving_account > 0 */
++	spinlock_t		move_lock;
++	unsigned long		move_lock_flags;
++
++	MEMCG_PADDING(_pad1_);
++
+ 	/*
+ 	 * set > 0 if pages under this cgroup are moving to other cgroup.
+ 	 */
+ 	atomic_t		moving_account;
+-	/* taken only while moving_account > 0 */
+-	spinlock_t		move_lock;
+ 	struct task_struct	*move_lock_task;
+-	unsigned long		move_lock_flags;
+ 
+ 	/* memory.stat */
+ 	struct mem_cgroup_stat_cpu __percpu *stat_cpu;
++
++	MEMCG_PADDING(_pad2_);
++
+ 	atomic_long_t		stat[MEMCG_NR_STAT];
+ 	atomic_long_t		events[NR_VM_EVENT_ITEMS];
++	atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
+ 
+ 	unsigned long		socket_pressure;
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index d14261d6b213..edab43d2bec8 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -228,15 +228,16 @@ extern unsigned int kobjsize(const void *objp);
+ #define VM_HIGH_ARCH_4	BIT(VM_HIGH_ARCH_BIT_4)
+ #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
+ 
+-#if defined(CONFIG_X86)
+-# define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
+-#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
++#ifdef CONFIG_ARCH_HAS_PKEYS
+ # define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
+ # define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
+ # define VM_PKEY_BIT1	VM_HIGH_ARCH_1
+ # define VM_PKEY_BIT2	VM_HIGH_ARCH_2
+ # define VM_PKEY_BIT3	VM_HIGH_ARCH_3
+-#endif
++#endif /* CONFIG_ARCH_HAS_PKEYS */
++
++#if defined(CONFIG_X86)
++# define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
+ #elif defined(CONFIG_PPC)
+ # define VM_SAO		VM_ARCH_1	/* Strong Access Ordering (powerpc) */
+ #elif defined(CONFIG_PARISC)
+diff --git a/include/linux/mmc/sdio_ids.h b/include/linux/mmc/sdio_ids.h
+index cdd66a5fbd5e..0a7abe8a407f 100644
+--- a/include/linux/mmc/sdio_ids.h
++++ b/include/linux/mmc/sdio_ids.h
+@@ -35,6 +35,7 @@
+ #define SDIO_DEVICE_ID_BROADCOM_4335_4339	0x4335
+ #define SDIO_DEVICE_ID_BROADCOM_4339		0x4339
+ #define SDIO_DEVICE_ID_BROADCOM_43362		0xa962
++#define SDIO_DEVICE_ID_BROADCOM_43364		0xa9a4
+ #define SDIO_DEVICE_ID_BROADCOM_43430		0xa9a6
+ #define SDIO_DEVICE_ID_BROADCOM_4345		0x4345
+ #define SDIO_DEVICE_ID_BROADCOM_43455		0xa9bf
+diff --git a/include/linux/netfilter/ipset/ip_set_timeout.h b/include/linux/netfilter/ipset/ip_set_timeout.h
+index bfb3531fd88a..7ad8ddf9ca8a 100644
+--- a/include/linux/netfilter/ipset/ip_set_timeout.h
++++ b/include/linux/netfilter/ipset/ip_set_timeout.h
+@@ -65,8 +65,14 @@ ip_set_timeout_set(unsigned long *timeout, u32 value)
+ static inline u32
+ ip_set_timeout_get(const unsigned long *timeout)
+ {
+-	return *timeout == IPSET_ELEM_PERMANENT ? 0 :
+-		jiffies_to_msecs(*timeout - jiffies)/MSEC_PER_SEC;
++	u32 t;
++
++	if (*timeout == IPSET_ELEM_PERMANENT)
++		return 0;
++
++	t = jiffies_to_msecs(*timeout - jiffies)/MSEC_PER_SEC;
++	/* Zero value in userspace means no timeout */
++	return t == 0 ? 1 : t;
+ }
+ 
+ #endif	/* __KERNEL__ */
+diff --git a/include/linux/regulator/consumer.h b/include/linux/regulator/consumer.h
+index df176d7c2b87..25602afd4844 100644
+--- a/include/linux/regulator/consumer.h
++++ b/include/linux/regulator/consumer.h
+@@ -80,6 +80,7 @@ struct regmap;
+  * These modes can be OR'ed together to make up a mask of valid register modes.
+  */
+ 
++#define REGULATOR_MODE_INVALID			0x0
+ #define REGULATOR_MODE_FAST			0x1
+ #define REGULATOR_MODE_NORMAL			0x2
+ #define REGULATOR_MODE_IDLE			0x4
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index b4c9fda9d833..3361cc8eb635 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -348,7 +348,8 @@ struct earlycon_device {
+ };
+ 
+ struct earlycon_id {
+-	char	name[16];
++	char	name[15];
++	char	name_term;	/* In case compiler didn't '\0' term name */
+ 	char	compatible[128];
+ 	int	(*setup)(struct earlycon_device *, const char *options);
+ };
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 9cf770150539..5ccc4ec646cb 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -342,7 +342,7 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
+ 			struct pipe_inode_info *pipe, size_t len,
+ 			unsigned int flags);
+ 
+-void tcp_enter_quickack_mode(struct sock *sk);
++void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks);
+ static inline void tcp_dec_quickack_mode(struct sock *sk,
+ 					 const unsigned int pkts)
+ {
+diff --git a/include/soc/tegra/mc.h b/include/soc/tegra/mc.h
+index 233bae954970..be6e49124c6d 100644
+--- a/include/soc/tegra/mc.h
++++ b/include/soc/tegra/mc.h
+@@ -108,6 +108,8 @@ struct tegra_mc_soc {
+ 	u8 client_id_mask;
+ 
+ 	const struct tegra_smmu_soc *smmu;
++
++	u32 intmask;
+ };
+ 
+ struct tegra_mc {
+diff --git a/include/uapi/sound/asoc.h b/include/uapi/sound/asoc.h
+index 69c37ecbff7e..f3c4b46e39d8 100644
+--- a/include/uapi/sound/asoc.h
++++ b/include/uapi/sound/asoc.h
+@@ -139,6 +139,11 @@
+ #define SND_SOC_TPLG_DAI_FLGBIT_SYMMETRIC_CHANNELS      (1 << 1)
+ #define SND_SOC_TPLG_DAI_FLGBIT_SYMMETRIC_SAMPLEBITS    (1 << 2)
+ 
++/* DAI clock gating */
++#define SND_SOC_TPLG_DAI_CLK_GATE_UNDEFINED	0
++#define SND_SOC_TPLG_DAI_CLK_GATE_GATED	1
++#define SND_SOC_TPLG_DAI_CLK_GATE_CONT		2
++
+ /* DAI physical PCM data formats.
+  * Add new formats to the end of the list.
+  */
+@@ -160,6 +165,18 @@
+ #define SND_SOC_TPLG_LNK_FLGBIT_SYMMETRIC_SAMPLEBITS    (1 << 2)
+ #define SND_SOC_TPLG_LNK_FLGBIT_VOICE_WAKEUP            (1 << 3)
+ 
++/* DAI topology BCLK parameter
++ * For backward compatibility, the codec is bclk master by default
++ */
++#define SND_SOC_TPLG_BCLK_CM         0 /* codec is bclk master */
++#define SND_SOC_TPLG_BCLK_CS         1 /* codec is bclk slave */
++
++/* DAI topology FSYNC parameter
++ * For backward compatibility, the codec is fsync master by default
++ */
++#define SND_SOC_TPLG_FSYNC_CM         0 /* codec is fsync master */
++#define SND_SOC_TPLG_FSYNC_CS         1 /* codec is fsync slave */
++
+ /*
+  * Block Header.
+  * This header precedes all object and object arrays below.
+@@ -312,11 +329,11 @@ struct snd_soc_tplg_hw_config {
+ 	__le32 size;            /* in bytes of this structure */
+ 	__le32 id;		/* unique ID - - used to match */
+ 	__le32 fmt;		/* SND_SOC_DAI_FORMAT_ format value */
+-	__u8 clock_gated;	/* 1 if clock can be gated to save power */
++	__u8 clock_gated;	/* SND_SOC_TPLG_DAI_CLK_GATE_ value */
+ 	__u8 invert_bclk;	/* 1 for inverted BCLK, 0 for normal */
+ 	__u8 invert_fsync;	/* 1 for inverted frame clock, 0 for normal */
+-	__u8 bclk_master;	/* 1 for master of BCLK, 0 for slave */
+-	__u8 fsync_master;	/* 1 for master of FSYNC, 0 for slave */
++	__u8 bclk_master;	/* SND_SOC_TPLG_BCLK_ value */
++	__u8 fsync_master;	/* SND_SOC_TPLG_FSYNC_ value */
+ 	__u8 mclk_direction;    /* 0 for input, 1 for output */
+ 	__le16 reserved;	/* for 32bit alignment */
+ 	__le32 mclk_rate;	/* MCLK or SYSCLK freqency in Hz */
+diff --git a/ipc/msg.c b/ipc/msg.c
+index 56fd1c73eedc..574f76c9a2ff 100644
+--- a/ipc/msg.c
++++ b/ipc/msg.c
+@@ -758,7 +758,7 @@ static inline int pipelined_send(struct msg_queue *msq, struct msg_msg *msg,
+ 				WRITE_ONCE(msr->r_msg, ERR_PTR(-E2BIG));
+ 			} else {
+ 				ipc_update_pid(&msq->q_lrpid, task_pid(msr->r_tsk));
+-				msq->q_rtime = get_seconds();
++				msq->q_rtime = ktime_get_real_seconds();
+ 
+ 				wake_q_add(wake_q, msr->r_tsk);
+ 				WRITE_ONCE(msr->r_msg, msg);
+@@ -859,7 +859,7 @@ static long do_msgsnd(int msqid, long mtype, void __user *mtext,
+ 	}
+ 
+ 	ipc_update_pid(&msq->q_lspid, task_tgid(current));
+-	msq->q_stime = get_seconds();
++	msq->q_stime = ktime_get_real_seconds();
+ 
+ 	if (!pipelined_send(msq, msg, &wake_q)) {
+ 		/* no one is waiting for this message, enqueue it */
+@@ -1087,7 +1087,7 @@ static long do_msgrcv(int msqid, void __user *buf, size_t bufsz, long msgtyp, in
+ 
+ 			list_del(&msg->m_list);
+ 			msq->q_qnum--;
+-			msq->q_rtime = get_seconds();
++			msq->q_rtime = ktime_get_real_seconds();
+ 			ipc_update_pid(&msq->q_lrpid, task_tgid(current));
+ 			msq->q_cbytes -= msg->m_ts;
+ 			atomic_sub(msg->m_ts, &ns->msg_bytes);
+diff --git a/ipc/sem.c b/ipc/sem.c
+index 06be75d9217a..c6a8a971769d 100644
+--- a/ipc/sem.c
++++ b/ipc/sem.c
+@@ -104,7 +104,7 @@ struct sem {
+ 					/* that alter the semaphore */
+ 	struct list_head pending_const; /* pending single-sop operations */
+ 					/* that do not alter the semaphore*/
+-	time_t	sem_otime;	/* candidate for sem_otime */
++	time64_t	 sem_otime;	/* candidate for sem_otime */
+ } ____cacheline_aligned_in_smp;
+ 
+ /* One sem_array data structure for each set of semaphores in the system. */
+@@ -984,10 +984,10 @@ again:
+ static void set_semotime(struct sem_array *sma, struct sembuf *sops)
+ {
+ 	if (sops == NULL) {
+-		sma->sems[0].sem_otime = get_seconds();
++		sma->sems[0].sem_otime = ktime_get_real_seconds();
+ 	} else {
+ 		sma->sems[sops[0].sem_num].sem_otime =
+-							get_seconds();
++						ktime_get_real_seconds();
+ 	}
+ }
+ 
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index d7a807e81451..a0c5a3ec6e60 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -426,7 +426,7 @@ static int audit_field_valid(struct audit_entry *entry, struct audit_field *f)
+ 			return -EINVAL;
+ 		break;
+ 	case AUDIT_EXE:
+-		if (f->op != Audit_equal)
++		if (f->op != Audit_not_equal && f->op != Audit_equal)
+ 			return -EINVAL;
+ 		if (entry->rule.listnr != AUDIT_FILTER_EXIT)
+ 			return -EINVAL;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 4e0a4ac803db..479c031ec54c 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -471,6 +471,8 @@ static int audit_filter_rules(struct task_struct *tsk,
+ 			break;
+ 		case AUDIT_EXE:
+ 			result = audit_exe_compare(tsk, rule->exe);
++			if (f->op == Audit_not_equal)
++				result = !result;
+ 			break;
+ 		case AUDIT_UID:
+ 			result = audit_uid_comparator(cred->uid, f->op, f->uid);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 74fa60b4b438..4ed4613ed362 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1946,13 +1946,44 @@ static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,
+ 	 * for offload.
+ 	 */
+ 	ulen = info.jited_prog_len;
+-	info.jited_prog_len = prog->jited_len;
++	if (prog->aux->func_cnt) {
++		u32 i;
++
++		info.jited_prog_len = 0;
++		for (i = 0; i < prog->aux->func_cnt; i++)
++			info.jited_prog_len += prog->aux->func[i]->jited_len;
++	} else {
++		info.jited_prog_len = prog->jited_len;
++	}
++
+ 	if (info.jited_prog_len && ulen) {
+ 		if (bpf_dump_raw_ok()) {
+ 			uinsns = u64_to_user_ptr(info.jited_prog_insns);
+ 			ulen = min_t(u32, info.jited_prog_len, ulen);
+-			if (copy_to_user(uinsns, prog->bpf_func, ulen))
+-				return -EFAULT;
++
++			/* for multi-function programs, copy the JITed
++			 * instructions for all the functions
++			 */
++			if (prog->aux->func_cnt) {
++				u32 len, free, i;
++				u8 *img;
++
++				free = ulen;
++				for (i = 0; i < prog->aux->func_cnt; i++) {
++					len = prog->aux->func[i]->jited_len;
++					len = min_t(u32, len, free);
++					img = (u8 *) prog->aux->func[i]->bpf_func;
++					if (copy_to_user(uinsns, img, len))
++						return -EFAULT;
++					uinsns += len;
++					free -= len;
++					if (!free)
++						break;
++				}
++			} else {
++				if (copy_to_user(uinsns, prog->bpf_func, ulen))
++					return -EFAULT;
++			}
+ 		} else {
+ 			info.jited_prog_insns = 0;
+ 		}
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1b586f31cbfd..23d187ec33ea 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5065,7 +5065,7 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
+ 			/* hold the map. If the program is rejected by verifier,
+ 			 * the map will be released by release_maps() or it
+ 			 * will be used by the valid program until it's unloaded
+-			 * and all maps are released in free_bpf_prog_info()
++			 * and all maps are released in free_used_maps()
+ 			 */
+ 			map = bpf_map_inc(map, false);
+ 			if (IS_ERR(map)) {
+@@ -5856,7 +5856,7 @@ skip_full_check:
+ err_release_maps:
+ 	if (!env->prog->aux->used_maps)
+ 		/* if we didn't copy map pointers into bpf_prog_info, release
+-		 * them now. Otherwise free_bpf_prog_info() will release them.
++		 * them now. Otherwise free_used_maps() will release them.
+ 		 */
+ 		release_maps(env);
+ 	*prog = env->prog;
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index e2764d767f18..ca8ac2824f0b 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -44,23 +44,24 @@ void __delayacct_tsk_init(struct task_struct *tsk)
+ {
+ 	tsk->delays = kmem_cache_zalloc(delayacct_cache, GFP_KERNEL);
+ 	if (tsk->delays)
+-		spin_lock_init(&tsk->delays->lock);
++		raw_spin_lock_init(&tsk->delays->lock);
+ }
+ 
+ /*
+  * Finish delay accounting for a statistic using its timestamps (@start),
+  * accumalator (@total) and @count
+  */
+-static void delayacct_end(spinlock_t *lock, u64 *start, u64 *total, u32 *count)
++static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total,
++			  u32 *count)
+ {
+ 	s64 ns = ktime_get_ns() - *start;
+ 	unsigned long flags;
+ 
+ 	if (ns > 0) {
+-		spin_lock_irqsave(lock, flags);
++		raw_spin_lock_irqsave(lock, flags);
+ 		*total += ns;
+ 		(*count)++;
+-		spin_unlock_irqrestore(lock, flags);
++		raw_spin_unlock_irqrestore(lock, flags);
+ 	}
+ }
+ 
+@@ -127,7 +128,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 
+ 	/* zero XXX_total, non-zero XXX_count implies XXX stat overflowed */
+ 
+-	spin_lock_irqsave(&tsk->delays->lock, flags);
++	raw_spin_lock_irqsave(&tsk->delays->lock, flags);
+ 	tmp = d->blkio_delay_total + tsk->delays->blkio_delay;
+ 	d->blkio_delay_total = (tmp < d->blkio_delay_total) ? 0 : tmp;
+ 	tmp = d->swapin_delay_total + tsk->delays->swapin_delay;
+@@ -137,7 +138,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	d->blkio_count += tsk->delays->blkio_count;
+ 	d->swapin_count += tsk->delays->swapin_count;
+ 	d->freepages_count += tsk->delays->freepages_count;
+-	spin_unlock_irqrestore(&tsk->delays->lock, flags);
++	raw_spin_unlock_irqrestore(&tsk->delays->lock, flags);
+ 
+ 	return 0;
+ }
+@@ -147,10 +148,10 @@ __u64 __delayacct_blkio_ticks(struct task_struct *tsk)
+ 	__u64 ret;
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&tsk->delays->lock, flags);
++	raw_spin_lock_irqsave(&tsk->delays->lock, flags);
+ 	ret = nsec_to_clock_t(tsk->delays->blkio_delay +
+ 				tsk->delays->swapin_delay);
+-	spin_unlock_irqrestore(&tsk->delays->lock, flags);
++	raw_spin_unlock_irqrestore(&tsk->delays->lock, flags);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/fork.c b/kernel/fork.c
+index a5d21c42acfc..5ad558e6f8fe 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -440,6 +440,14 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ 			continue;
+ 		}
+ 		charge = 0;
++		/*
++		 * Don't duplicate many vmas if we've been oom-killed (for
++		 * example)
++		 */
++		if (fatal_signal_pending(current)) {
++			retval = -EINTR;
++			goto out;
++		}
+ 		if (mpnt->vm_flags & VM_ACCOUNT) {
+ 			unsigned long len = vma_pages(mpnt);
+ 
+diff --git a/kernel/hung_task.c b/kernel/hung_task.c
+index 751593ed7c0b..32b479468e4d 100644
+--- a/kernel/hung_task.c
++++ b/kernel/hung_task.c
+@@ -44,6 +44,7 @@ int __read_mostly sysctl_hung_task_warnings = 10;
+ 
+ static int __read_mostly did_panic;
+ static bool hung_task_show_lock;
++static bool hung_task_call_panic;
+ 
+ static struct task_struct *watchdog_task;
+ 
+@@ -127,10 +128,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
+ 	touch_nmi_watchdog();
+ 
+ 	if (sysctl_hung_task_panic) {
+-		if (hung_task_show_lock)
+-			debug_show_all_locks();
+-		trigger_all_cpu_backtrace();
+-		panic("hung_task: blocked tasks");
++		hung_task_show_lock = true;
++		hung_task_call_panic = true;
+ 	}
+ }
+ 
+@@ -193,6 +192,10 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
+ 	rcu_read_unlock();
+ 	if (hung_task_show_lock)
+ 		debug_show_all_locks();
++	if (hung_task_call_panic) {
++		trigger_all_cpu_backtrace();
++		panic("hung_task: blocked tasks");
++	}
+ }
+ 
+ static long hung_timeout_jiffies(unsigned long last_checked,
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index 2c16f1ab5e10..5be9a60a959f 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -241,7 +241,8 @@ static void kcov_put(struct kcov *kcov)
+ 
+ void kcov_task_init(struct task_struct *t)
+ {
+-	t->kcov_mode = KCOV_MODE_DISABLED;
++	WRITE_ONCE(t->kcov_mode, KCOV_MODE_DISABLED);
++	barrier();
+ 	t->kcov_size = 0;
+ 	t->kcov_area = NULL;
+ 	t->kcov = NULL;
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 481951bf091d..1a481ae12dec 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -319,8 +319,14 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data),
+ 	task = create->result;
+ 	if (!IS_ERR(task)) {
+ 		static const struct sched_param param = { .sched_priority = 0 };
++		char name[TASK_COMM_LEN];
+ 
+-		vsnprintf(task->comm, sizeof(task->comm), namefmt, args);
++		/*
++		 * task is already visible to other tasks, so updating
++		 * COMM must be protected.
++		 */
++		vsnprintf(name, sizeof(name), namefmt, args);
++		set_task_comm(task, name);
+ 		/*
+ 		 * root may have changed our (kthreadd's) priority or CPU mask.
+ 		 * The kernel thread should not inherit these properties.
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 895e6b76b25e..1a63739f48e8 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -348,10 +348,27 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+ 	unsigned long pfn, pgoff, order;
+ 	pgprot_t pgprot = PAGE_KERNEL;
+ 	int error, nid, is_ram;
++	struct dev_pagemap *conflict_pgmap;
+ 
+ 	align_start = res->start & ~(SECTION_SIZE - 1);
+ 	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
+ 		- align_start;
++	align_end = align_start + align_size - 1;
++
++	conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_start), NULL);
++	if (conflict_pgmap) {
++		dev_WARN(dev, "Conflicting mapping in same section\n");
++		put_dev_pagemap(conflict_pgmap);
++		return ERR_PTR(-ENOMEM);
++	}
++
++	conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_end), NULL);
++	if (conflict_pgmap) {
++		dev_WARN(dev, "Conflicting mapping in same section\n");
++		put_dev_pagemap(conflict_pgmap);
++		return ERR_PTR(-ENOMEM);
++	}
++
+ 	is_ram = region_intersects(align_start, align_size,
+ 		IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
+ 
+@@ -371,7 +388,6 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
+ 
+ 	mutex_lock(&pgmap_lock);
+ 	error = 0;
+-	align_end = align_start + align_size - 1;
+ 
+ 	foreach_order_pgoff(res, order, pgoff) {
+ 		error = __radix_tree_insert(&pgmap_radix,
+diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
+index 4c10be0f4843..34238a7d48f6 100644
+--- a/kernel/power/suspend.c
++++ b/kernel/power/suspend.c
+@@ -60,7 +60,7 @@ static const struct platform_s2idle_ops *s2idle_ops;
+ static DECLARE_WAIT_QUEUE_HEAD(s2idle_wait_head);
+ 
+ enum s2idle_states __read_mostly s2idle_state;
+-static DEFINE_SPINLOCK(s2idle_lock);
++static DEFINE_RAW_SPINLOCK(s2idle_lock);
+ 
+ void s2idle_set_ops(const struct platform_s2idle_ops *ops)
+ {
+@@ -78,12 +78,12 @@ static void s2idle_enter(void)
+ {
+ 	trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, true);
+ 
+-	spin_lock_irq(&s2idle_lock);
++	raw_spin_lock_irq(&s2idle_lock);
+ 	if (pm_wakeup_pending())
+ 		goto out;
+ 
+ 	s2idle_state = S2IDLE_STATE_ENTER;
+-	spin_unlock_irq(&s2idle_lock);
++	raw_spin_unlock_irq(&s2idle_lock);
+ 
+ 	get_online_cpus();
+ 	cpuidle_resume();
+@@ -97,11 +97,11 @@ static void s2idle_enter(void)
+ 	cpuidle_pause();
+ 	put_online_cpus();
+ 
+-	spin_lock_irq(&s2idle_lock);
++	raw_spin_lock_irq(&s2idle_lock);
+ 
+  out:
+ 	s2idle_state = S2IDLE_STATE_NONE;
+-	spin_unlock_irq(&s2idle_lock);
++	raw_spin_unlock_irq(&s2idle_lock);
+ 
+ 	trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_TO_IDLE, false);
+ }
+@@ -156,12 +156,12 @@ void s2idle_wake(void)
+ {
+ 	unsigned long flags;
+ 
+-	spin_lock_irqsave(&s2idle_lock, flags);
++	raw_spin_lock_irqsave(&s2idle_lock, flags);
+ 	if (s2idle_state > S2IDLE_STATE_NONE) {
+ 		s2idle_state = S2IDLE_STATE_WAKE;
+ 		wake_up(&s2idle_wait_head);
+ 	}
+-	spin_unlock_irqrestore(&s2idle_lock, flags);
++	raw_spin_unlock_irqrestore(&s2idle_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(s2idle_wake);
+ 
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index 449d67edfa4b..d7d091309054 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -281,7 +281,7 @@ void printk_safe_flush_on_panic(void)
+ 	 * Make sure that we could access the main ring buffer.
+ 	 * Do not risk a double release when more CPUs are up.
+ 	 */
+-	if (in_nmi() && raw_spin_is_locked(&logbuf_lock)) {
++	if (raw_spin_is_locked(&logbuf_lock)) {
+ 		if (num_online_cpus() > 1)
+ 			return;
+ 
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index e13df951aca7..28592b62b1d5 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -183,22 +183,21 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ static unsigned long sugov_aggregate_util(struct sugov_cpu *sg_cpu)
+ {
+ 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+-	unsigned long util;
+ 
+-	if (rq->rt.rt_nr_running) {
+-		util = sg_cpu->max;
+-	} else {
+-		util = sg_cpu->util_dl;
+-		if (rq->cfs.h_nr_running)
+-			util += sg_cpu->util_cfs;
+-	}
++	if (rq->rt.rt_nr_running)
++		return sg_cpu->max;
+ 
+ 	/*
++	 * Utilization required by DEADLINE must always be granted while, for
++	 * FAIR, we use blocked utilization of IDLE CPUs as a mechanism to
++	 * gracefully reduce the frequency when no tasks show up for longer
++	 * periods of time.
++	 *
+ 	 * Ideally we would like to set util_dl as min/guaranteed freq and
+ 	 * util_cfs + util_dl as requested freq. However, cpufreq is not yet
+ 	 * ready for such an interface. So, we only do the latter for now.
+ 	 */
+-	return min(util, sg_cpu->max);
++	return min(sg_cpu->max, (sg_cpu->util_dl + sg_cpu->util_cfs));
+ }
+ 
+ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time, unsigned int flags)
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 2f6fa95de2d8..1ff523dae6e2 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -37,7 +37,7 @@ struct cpu_stop_done {
+ struct cpu_stopper {
+ 	struct task_struct	*thread;
+ 
+-	spinlock_t		lock;
++	raw_spinlock_t		lock;
+ 	bool			enabled;	/* is this stopper enabled? */
+ 	struct list_head	works;		/* list of pending works */
+ 
+@@ -81,13 +81,13 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	unsigned long flags;
+ 	bool enabled;
+ 
+-	spin_lock_irqsave(&stopper->lock, flags);
++	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 	enabled = stopper->enabled;
+ 	if (enabled)
+ 		__cpu_stop_queue_work(stopper, work, &wakeq);
+ 	else if (work->done)
+ 		cpu_stop_signal_done(work->done);
+-	spin_unlock_irqrestore(&stopper->lock, flags);
++	raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ 
+ 	wake_up_q(&wakeq);
+ 
+@@ -237,8 +237,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 	DEFINE_WAKE_Q(wakeq);
+ 	int err;
+ retry:
+-	spin_lock_irq(&stopper1->lock);
+-	spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
++	raw_spin_lock_irq(&stopper1->lock);
++	raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
+ 
+ 	err = -ENOENT;
+ 	if (!stopper1->enabled || !stopper2->enabled)
+@@ -261,8 +261,8 @@ retry:
+ 	__cpu_stop_queue_work(stopper1, work1, &wakeq);
+ 	__cpu_stop_queue_work(stopper2, work2, &wakeq);
+ unlock:
+-	spin_unlock(&stopper2->lock);
+-	spin_unlock_irq(&stopper1->lock);
++	raw_spin_unlock(&stopper2->lock);
++	raw_spin_unlock_irq(&stopper1->lock);
+ 
+ 	if (unlikely(err == -EDEADLK)) {
+ 		while (stop_cpus_in_progress)
+@@ -461,9 +461,9 @@ static int cpu_stop_should_run(unsigned int cpu)
+ 	unsigned long flags;
+ 	int run;
+ 
+-	spin_lock_irqsave(&stopper->lock, flags);
++	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 	run = !list_empty(&stopper->works);
+-	spin_unlock_irqrestore(&stopper->lock, flags);
++	raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ 	return run;
+ }
+ 
+@@ -474,13 +474,13 @@ static void cpu_stopper_thread(unsigned int cpu)
+ 
+ repeat:
+ 	work = NULL;
+-	spin_lock_irq(&stopper->lock);
++	raw_spin_lock_irq(&stopper->lock);
+ 	if (!list_empty(&stopper->works)) {
+ 		work = list_first_entry(&stopper->works,
+ 					struct cpu_stop_work, list);
+ 		list_del_init(&work->list);
+ 	}
+-	spin_unlock_irq(&stopper->lock);
++	raw_spin_unlock_irq(&stopper->lock);
+ 
+ 	if (work) {
+ 		cpu_stop_fn_t fn = work->fn;
+@@ -554,7 +554,7 @@ static int __init cpu_stop_init(void)
+ 	for_each_possible_cpu(cpu) {
+ 		struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
+ 
+-		spin_lock_init(&stopper->lock);
++		raw_spin_lock_init(&stopper->lock);
+ 		INIT_LIST_HEAD(&stopper->works);
+ 	}
+ 
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 84f37420fcf5..e0ff8f94f237 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -453,8 +453,8 @@ static inline int __clocksource_watchdog_kthread(void) { return 0; }
+ static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
+ void clocksource_mark_unstable(struct clocksource *cs) { }
+ 
+-static void inline clocksource_watchdog_lock(unsigned long *flags) { }
+-static void inline clocksource_watchdog_unlock(unsigned long *flags) { }
++static inline void clocksource_watchdog_lock(unsigned long *flags) { }
++static inline void clocksource_watchdog_unlock(unsigned long *flags) { }
+ 
+ #endif /* CONFIG_CLOCKSOURCE_WATCHDOG */
+ 
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index 8b5bdcf64871..f14a547f6303 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -681,6 +681,8 @@ event_trigger_callback(struct event_command *cmd_ops,
+ 		goto out_free;
+ 
+  out_reg:
++	/* Up the trigger_data count to make sure reg doesn't free it on failure */
++	event_trigger_init(trigger_ops, trigger_data);
+ 	ret = cmd_ops->reg(glob, trigger_ops, trigger_data, file);
+ 	/*
+ 	 * The above returns on success the # of functions enabled,
+@@ -688,11 +690,13 @@ event_trigger_callback(struct event_command *cmd_ops,
+ 	 * Consider no functions a failure too.
+ 	 */
+ 	if (!ret) {
++		cmd_ops->unreg(glob, trigger_ops, trigger_data, file);
+ 		ret = -ENOENT;
+-		goto out_free;
+-	} else if (ret < 0)
+-		goto out_free;
+-	ret = 0;
++	} else if (ret > 0)
++		ret = 0;
++
++	/* Down the counter of trigger_data or free it if not used anymore */
++	event_trigger_free(trigger_ops, trigger_data);
+  out:
+ 	return ret;
+ 
+@@ -1418,6 +1422,9 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+ 		goto out;
+ 	}
+ 
++	/* Up the trigger_data count to make sure nothing frees it on failure */
++	event_trigger_init(trigger_ops, trigger_data);
++
+ 	if (trigger) {
+ 		number = strsep(&trigger, ":");
+ 
+@@ -1468,6 +1475,7 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+ 		goto out_disable;
+ 	/* Just return zero, not the number of enabled functions */
+ 	ret = 0;
++	event_trigger_free(trigger_ops, trigger_data);
+  out:
+ 	return ret;
+ 
+@@ -1478,7 +1486,7 @@ int event_enable_trigger_func(struct event_command *cmd_ops,
+  out_free:
+ 	if (cmd_ops->set_filter)
+ 		cmd_ops->set_filter(NULL, trigger_data, NULL);
+-	kfree(trigger_data);
++	event_trigger_free(trigger_ops, trigger_data);
+ 	kfree(enable_data);
+ 	goto out;
+ }
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index eebc7c92f6d0..dd88cc0af065 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -400,11 +400,10 @@ static struct trace_kprobe *find_trace_kprobe(const char *event,
+ static int
+ enable_trace_kprobe(struct trace_kprobe *tk, struct trace_event_file *file)
+ {
++	struct event_file_link *link = NULL;
+ 	int ret = 0;
+ 
+ 	if (file) {
+-		struct event_file_link *link;
+-
+ 		link = kmalloc(sizeof(*link), GFP_KERNEL);
+ 		if (!link) {
+ 			ret = -ENOMEM;
+@@ -424,6 +423,18 @@ enable_trace_kprobe(struct trace_kprobe *tk, struct trace_event_file *file)
+ 		else
+ 			ret = enable_kprobe(&tk->rp.kp);
+ 	}
++
++	if (ret) {
++		if (file) {
++			/* Notice the if is true on not WARN() */
++			if (!WARN_ON_ONCE(!link))
++				list_del_rcu(&link->list);
++			kfree(link);
++			tk->tp.flags &= ~TP_FLAG_TRACE;
++		} else {
++			tk->tp.flags &= ~TP_FLAG_PROFILE;
++		}
++	}
+  out:
+ 	return ret;
+ }
+diff --git a/lib/dma-direct.c b/lib/dma-direct.c
+index bbfb229aa067..970d39155618 100644
+--- a/lib/dma-direct.c
++++ b/lib/dma-direct.c
+@@ -84,6 +84,13 @@ again:
+ 		__free_pages(page, page_order);
+ 		page = NULL;
+ 
++		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
++		    dev->coherent_dma_mask < DMA_BIT_MASK(64) &&
++		    !(gfp & (GFP_DMA32 | GFP_DMA))) {
++			gfp |= GFP_DMA32;
++			goto again;
++		}
++
+ 		if (IS_ENABLED(CONFIG_ZONE_DMA) &&
+ 		    dev->coherent_dma_mask < DMA_BIT_MASK(32) &&
+ 		    !(gfp & GFP_DMA)) {
+diff --git a/mm/slub.c b/mm/slub.c
+index 613c8dc2f409..067db0ff7496 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -711,7 +711,7 @@ void object_err(struct kmem_cache *s, struct page *page,
+ 	print_trailer(s, page, object);
+ }
+ 
+-static void slab_err(struct kmem_cache *s, struct page *page,
++static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
+ 			const char *fmt, ...)
+ {
+ 	va_list args;
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index ebff729cc956..9ff21a12ea00 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -1519,7 +1519,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
+ 			addr))
+ 		return;
+ 
+-	area = remove_vm_area(addr);
++	area = find_vmap_area((unsigned long)addr)->vm;
+ 	if (unlikely(!area)) {
+ 		WARN(1, KERN_ERR "Trying to vfree() nonexistent vm area (%p)\n",
+ 				addr);
+@@ -1529,6 +1529,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
+ 	debug_check_no_locks_freed(addr, get_vm_area_size(area));
+ 	debug_check_no_obj_freed(addr, get_vm_area_size(area));
+ 
++	remove_vm_area(addr);
+ 	if (deallocate_pages) {
+ 		int i;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2af787e8b130..1ccc2a2ac2e9 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -7113,16 +7113,19 @@ int dev_change_tx_queue_len(struct net_device *dev, unsigned long new_len)
+ 		dev->tx_queue_len = new_len;
+ 		res = call_netdevice_notifiers(NETDEV_CHANGE_TX_QUEUE_LEN, dev);
+ 		res = notifier_to_errno(res);
+-		if (res) {
+-			netdev_err(dev,
+-				   "refused to change device tx_queue_len\n");
+-			dev->tx_queue_len = orig_len;
+-			return res;
+-		}
+-		return dev_qdisc_change_tx_queue_len(dev);
++		if (res)
++			goto err_rollback;
++		res = dev_qdisc_change_tx_queue_len(dev);
++		if (res)
++			goto err_rollback;
+ 	}
+ 
+ 	return 0;
++
++err_rollback:
++	netdev_err(dev, "refused to change device tx_queue_len\n");
++	dev->tx_queue_len = orig_len;
++	return res;
+ }
+ 
+ /**
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 511d6748ea5f..6901349f07d7 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -292,19 +292,19 @@ __be32 fib_compute_spec_dst(struct sk_buff *skb)
+ 		return ip_hdr(skb)->daddr;
+ 
+ 	in_dev = __in_dev_get_rcu(dev);
+-	BUG_ON(!in_dev);
+ 
+ 	net = dev_net(dev);
+ 
+ 	scope = RT_SCOPE_UNIVERSE;
+ 	if (!ipv4_is_zeronet(ip_hdr(skb)->saddr)) {
++		bool vmark = in_dev && IN_DEV_SRC_VMARK(in_dev);
+ 		struct flowi4 fl4 = {
+ 			.flowi4_iif = LOOPBACK_IFINDEX,
+ 			.flowi4_oif = l3mdev_master_ifindex_rcu(dev),
+ 			.daddr = ip_hdr(skb)->saddr,
+ 			.flowi4_tos = RT_TOS(ip_hdr(skb)->tos),
+ 			.flowi4_scope = scope,
+-			.flowi4_mark = IN_DEV_SRC_VMARK(in_dev) ? skb->mark : 0,
++			.flowi4_mark = vmark ? skb->mark : 0,
+ 		};
+ 		if (!fib_lookup(net, &fl4, &res, 0))
+ 			return FIB_RES_PREFSRC(net, res);
+diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
+index 43f620feb1c4..13722462d99b 100644
+--- a/net/ipv4/ipconfig.c
++++ b/net/ipv4/ipconfig.c
+@@ -748,6 +748,11 @@ static void __init ic_bootp_init_ext(u8 *e)
+  */
+ static inline void __init ic_bootp_init(void)
+ {
++	/* Re-initialise all name servers to NONE, in case any were set via the
++	 * "ip=" or "nfsaddrs=" kernel command line parameters: any IP addresses
++	 * specified there will already have been decoded but are no longer
++	 * needed
++	 */
+ 	ic_nameservers_predef();
+ 
+ 	dev_add_pack(&bootp_packet_type);
+@@ -1368,6 +1373,13 @@ static int __init ip_auto_config(void)
+ 	int err;
+ 	unsigned int i;
+ 
++	/* Initialise all name servers to NONE (but only if the "ip=" or
++	 * "nfsaddrs=" kernel command line parameters weren't decoded, otherwise
++	 * we'll overwrite the IP addresses specified there)
++	 */
++	if (ic_set_manually == 0)
++		ic_nameservers_predef();
++
+ #ifdef CONFIG_PROC_FS
+ 	proc_create("pnp", 0444, init_net.proc_net, &pnp_seq_fops);
+ #endif /* CONFIG_PROC_FS */
+@@ -1588,6 +1600,7 @@ static int __init ip_auto_config_setup(char *addrs)
+ 		return 1;
+ 	}
+ 
++	/* Initialise all name servers to NONE */
+ 	ic_nameservers_predef();
+ 
+ 	/* Parse string for static IP assignment.  */
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 58e2f479ffb4..4bfff3c87e8e 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -354,6 +354,10 @@ static u32 bbr_target_cwnd(struct sock *sk, u32 bw, int gain)
+ 	/* Reduce delayed ACKs by rounding up cwnd to the next even number. */
+ 	cwnd = (cwnd + 1) & ~1U;
+ 
++	/* Ensure gain cycling gets inflight above BDP even for small BDPs. */
++	if (bbr->mode == BBR_PROBE_BW && gain > BBR_UNIT)
++		cwnd += 2;
++
+ 	return cwnd;
+ }
+ 
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index c78fb53988a1..1a9b88c8cf72 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -138,7 +138,7 @@ static void dctcp_ce_state_0_to_1(struct sock *sk)
+ 		 */
+ 		if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
+ 			__tcp_send_ack(sk, ca->prior_rcv_nxt);
+-		tcp_enter_quickack_mode(sk);
++		tcp_enter_quickack_mode(sk, 1);
+ 	}
+ 
+ 	ca->prior_rcv_nxt = tp->rcv_nxt;
+@@ -159,7 +159,7 @@ static void dctcp_ce_state_1_to_0(struct sock *sk)
+ 		 */
+ 		if (inet_csk(sk)->icsk_ack.pending & ICSK_ACK_TIMER)
+ 			__tcp_send_ack(sk, ca->prior_rcv_nxt);
+-		tcp_enter_quickack_mode(sk);
++		tcp_enter_quickack_mode(sk, 1);
+ 	}
+ 
+ 	ca->prior_rcv_nxt = tp->rcv_nxt;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 0f5e9510c3fa..4f115830f6a8 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -184,21 +184,23 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
+ 	}
+ }
+ 
+-static void tcp_incr_quickack(struct sock *sk)
++static void tcp_incr_quickack(struct sock *sk, unsigned int max_quickacks)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	unsigned int quickacks = tcp_sk(sk)->rcv_wnd / (2 * icsk->icsk_ack.rcv_mss);
+ 
+ 	if (quickacks == 0)
+ 		quickacks = 2;
++	quickacks = min(quickacks, max_quickacks);
+ 	if (quickacks > icsk->icsk_ack.quick)
+-		icsk->icsk_ack.quick = min(quickacks, TCP_MAX_QUICKACKS);
++		icsk->icsk_ack.quick = quickacks;
+ }
+ 
+-void tcp_enter_quickack_mode(struct sock *sk)
++void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+-	tcp_incr_quickack(sk);
++
++	tcp_incr_quickack(sk, max_quickacks);
+ 	icsk->icsk_ack.pingpong = 0;
+ 	icsk->icsk_ack.ato = TCP_ATO_MIN;
+ }
+@@ -225,8 +227,15 @@ static void tcp_ecn_queue_cwr(struct tcp_sock *tp)
+ 
+ static void tcp_ecn_accept_cwr(struct tcp_sock *tp, const struct sk_buff *skb)
+ {
+-	if (tcp_hdr(skb)->cwr)
++	if (tcp_hdr(skb)->cwr) {
+ 		tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
++
++		/* If the sender is telling us it has entered CWR, then its
++		 * cwnd may be very low (even just 1 packet), so we should ACK
++		 * immediately.
++		 */
++		tcp_enter_quickack_mode((struct sock *)tp, 2);
++	}
+ }
+ 
+ static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp)
+@@ -234,8 +243,10 @@ static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp)
+ 	tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
+ }
+ 
+-static void __tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
++static void __tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
+ {
++	struct tcp_sock *tp = tcp_sk(sk);
++
+ 	switch (TCP_SKB_CB(skb)->ip_dsfield & INET_ECN_MASK) {
+ 	case INET_ECN_NOT_ECT:
+ 		/* Funny extension: if ECT is not set on a segment,
+@@ -243,31 +254,31 @@ static void __tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
+ 		 * it is probably a retransmit.
+ 		 */
+ 		if (tp->ecn_flags & TCP_ECN_SEEN)
+-			tcp_enter_quickack_mode((struct sock *)tp);
++			tcp_enter_quickack_mode(sk, 2);
+ 		break;
+ 	case INET_ECN_CE:
+-		if (tcp_ca_needs_ecn((struct sock *)tp))
+-			tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_IS_CE);
++		if (tcp_ca_needs_ecn(sk))
++			tcp_ca_event(sk, CA_EVENT_ECN_IS_CE);
+ 
+ 		if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {
+ 			/* Better not delay acks, sender can have a very low cwnd */
+-			tcp_enter_quickack_mode((struct sock *)tp);
++			tcp_enter_quickack_mode(sk, 2);
+ 			tp->ecn_flags |= TCP_ECN_DEMAND_CWR;
+ 		}
+ 		tp->ecn_flags |= TCP_ECN_SEEN;
+ 		break;
+ 	default:
+-		if (tcp_ca_needs_ecn((struct sock *)tp))
+-			tcp_ca_event((struct sock *)tp, CA_EVENT_ECN_NO_CE);
++		if (tcp_ca_needs_ecn(sk))
++			tcp_ca_event(sk, CA_EVENT_ECN_NO_CE);
+ 		tp->ecn_flags |= TCP_ECN_SEEN;
+ 		break;
+ 	}
+ }
+ 
+-static void tcp_ecn_check_ce(struct tcp_sock *tp, const struct sk_buff *skb)
++static void tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
+ {
+-	if (tp->ecn_flags & TCP_ECN_OK)
+-		__tcp_ecn_check_ce(tp, skb);
++	if (tcp_sk(sk)->ecn_flags & TCP_ECN_OK)
++		__tcp_ecn_check_ce(sk, skb);
+ }
+ 
+ static void tcp_ecn_rcv_synack(struct tcp_sock *tp, const struct tcphdr *th)
+@@ -666,7 +677,7 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
+ 		/* The _first_ data packet received, initialize
+ 		 * delayed ACK engine.
+ 		 */
+-		tcp_incr_quickack(sk);
++		tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
+ 		icsk->icsk_ack.ato = TCP_ATO_MIN;
+ 	} else {
+ 		int m = now - icsk->icsk_ack.lrcvtime;
+@@ -682,13 +693,13 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
+ 			/* Too long gap. Apparently sender failed to
+ 			 * restart window, so that we send ACKs quickly.
+ 			 */
+-			tcp_incr_quickack(sk);
++			tcp_incr_quickack(sk, TCP_MAX_QUICKACKS);
+ 			sk_mem_reclaim(sk);
+ 		}
+ 	}
+ 	icsk->icsk_ack.lrcvtime = now;
+ 
+-	tcp_ecn_check_ce(tp, skb);
++	tcp_ecn_check_ce(sk, skb);
+ 
+ 	if (skb->len >= 128)
+ 		tcp_grow_window(sk, skb);
+@@ -4136,7 +4147,7 @@ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
+ 	if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
+ 	    before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
+-		tcp_enter_quickack_mode(sk);
++		tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+ 
+ 		if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) {
+ 			u32 end_seq = TCP_SKB_CB(skb)->end_seq;
+@@ -4404,7 +4415,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
+ 	u32 seq, end_seq;
+ 	bool fragstolen;
+ 
+-	tcp_ecn_check_ce(tp, skb);
++	tcp_ecn_check_ce(sk, skb);
+ 
+ 	if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);
+@@ -4667,7 +4678,7 @@ queue_and_out:
+ 		tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
+ 
+ out_of_window:
+-		tcp_enter_quickack_mode(sk);
++		tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+ 		inet_csk_schedule_ack(sk);
+ drop:
+ 		tcp_drop(sk, skb);
+@@ -4678,8 +4689,6 @@ drop:
+ 	if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt + tcp_receive_window(tp)))
+ 		goto out_of_window;
+ 
+-	tcp_enter_quickack_mode(sk);
+-
+ 	if (before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
+ 		/* Partial packet, seq < rcv_next < end_seq */
+ 		SOCK_DEBUG(sk, "partial packet: rcv_next %X seq %X - %X\n",
+@@ -5746,7 +5755,7 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
+ 			 * to stand against the temptation 8)     --ANK
+ 			 */
+ 			inet_csk_schedule_ack(sk);
+-			tcp_enter_quickack_mode(sk);
++			tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS);
+ 			inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK,
+ 						  TCP_DELACK_MAX, TCP_RTO_MAX);
+ 
+diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h
+index bbad940c0137..8a33dac4e805 100644
+--- a/net/netfilter/ipset/ip_set_hash_gen.h
++++ b/net/netfilter/ipset/ip_set_hash_gen.h
+@@ -1234,7 +1234,10 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set,
+ 	pr_debug("Create set %s with family %s\n",
+ 		 set->name, set->family == NFPROTO_IPV4 ? "inet" : "inet6");
+ 
+-#ifndef IP_SET_PROTO_UNDEF
++#ifdef IP_SET_PROTO_UNDEF
++	if (set->family != NFPROTO_UNSPEC)
++		return -IPSET_ERR_INVALID_FAMILY;
++#else
+ 	if (!(set->family == NFPROTO_IPV4 || set->family == NFPROTO_IPV6))
+ 		return -IPSET_ERR_INVALID_FAMILY;
+ #endif
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 501e48a7965b..8d8dfe417014 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2728,12 +2728,13 @@ static struct nft_set *nf_tables_set_lookup_byid(const struct net *net,
+ 	u32 id = ntohl(nla_get_be32(nla));
+ 
+ 	list_for_each_entry(trans, &net->nft.commit_list, list) {
+-		struct nft_set *set = nft_trans_set(trans);
++		if (trans->msg_type == NFT_MSG_NEWSET) {
++			struct nft_set *set = nft_trans_set(trans);
+ 
+-		if (trans->msg_type == NFT_MSG_NEWSET &&
+-		    id == nft_trans_set_id(trans) &&
+-		    nft_active_genmask(set, genmask))
+-			return set;
++			if (id == nft_trans_set_id(trans) &&
++			    nft_active_genmask(set, genmask))
++				return set;
++		}
+ 	}
+ 	return ERR_PTR(-ENOENT);
+ }
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 2e2dd88fc79f..890f22f90344 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1009,6 +1009,11 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
+ 			return err;
+ 	}
+ 
++	if (nlk->ngroups == 0)
++		groups = 0;
++	else
++		groups &= (1ULL << nlk->ngroups) - 1;
++
+ 	bound = nlk->bound;
+ 	if (bound) {
+ 		/* Ensure nlk->portid is up-to-date. */
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index 48332a6ed738..d152e48ea371 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -344,6 +344,11 @@ struct rds_ib_mr *rds_ib_reg_frmr(struct rds_ib_device *rds_ibdev,
+ 	struct rds_ib_frmr *frmr;
+ 	int ret;
+ 
++	if (!ic) {
++		/* TODO: Add FRWR support for RDS_GET_MR using proxy qp*/
++		return ERR_PTR(-EOPNOTSUPP);
++	}
++
+ 	do {
+ 		if (ibmr)
+ 			rds_ib_free_frmr(ibmr, true);
+diff --git a/net/rds/ib_mr.h b/net/rds/ib_mr.h
+index 0ea4ab017a8c..655f01d427fe 100644
+--- a/net/rds/ib_mr.h
++++ b/net/rds/ib_mr.h
+@@ -115,7 +115,8 @@ void rds_ib_get_mr_info(struct rds_ib_device *rds_ibdev,
+ 			struct rds_info_rdma_connection *iinfo);
+ void rds_ib_destroy_mr_pool(struct rds_ib_mr_pool *);
+ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+-		    struct rds_sock *rs, u32 *key_ret);
++		    struct rds_sock *rs, u32 *key_ret,
++		    struct rds_connection *conn);
+ void rds_ib_sync_mr(void *trans_private, int dir);
+ void rds_ib_free_mr(void *trans_private, int invalidate);
+ void rds_ib_flush_mrs(void);
+diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
+index e678699268a2..2e49a40a5e11 100644
+--- a/net/rds/ib_rdma.c
++++ b/net/rds/ib_rdma.c
+@@ -537,11 +537,12 @@ void rds_ib_flush_mrs(void)
+ }
+ 
+ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+-		    struct rds_sock *rs, u32 *key_ret)
++		    struct rds_sock *rs, u32 *key_ret,
++		    struct rds_connection *conn)
+ {
+ 	struct rds_ib_device *rds_ibdev;
+ 	struct rds_ib_mr *ibmr = NULL;
+-	struct rds_ib_connection *ic = rs->rs_conn->c_transport_data;
++	struct rds_ib_connection *ic = NULL;
+ 	int ret;
+ 
+ 	rds_ibdev = rds_ib_get_device(rs->rs_bound_addr);
+@@ -550,6 +551,9 @@ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+ 		goto out;
+ 	}
+ 
++	if (conn)
++		ic = conn->c_transport_data;
++
+ 	if (!rds_ibdev->mr_8k_pool || !rds_ibdev->mr_1m_pool) {
+ 		ret = -ENODEV;
+ 		goto out;
+@@ -559,17 +563,18 @@ void *rds_ib_get_mr(struct scatterlist *sg, unsigned long nents,
+ 		ibmr = rds_ib_reg_frmr(rds_ibdev, ic, sg, nents, key_ret);
+ 	else
+ 		ibmr = rds_ib_reg_fmr(rds_ibdev, sg, nents, key_ret);
+-	if (ibmr)
+-		rds_ibdev = NULL;
+-
+- out:
+-	if (!ibmr)
++	if (IS_ERR(ibmr)) {
++		ret = PTR_ERR(ibmr);
+ 		pr_warn("RDS/IB: rds_ib_get_mr failed (errno=%d)\n", ret);
++	} else {
++		return ibmr;
++	}
+ 
++ out:
+ 	if (rds_ibdev)
+ 		rds_ib_dev_put(rds_ibdev);
+ 
+-	return ibmr;
++	return ERR_PTR(ret);
+ }
+ 
+ void rds_ib_destroy_mr_pool(struct rds_ib_mr_pool *pool)
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index 634cfcb7bba6..80920e47f2c7 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -170,7 +170,8 @@ static int rds_pin_pages(unsigned long user_addr, unsigned int nr_pages,
+ }
+ 
+ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+-				u64 *cookie_ret, struct rds_mr **mr_ret)
++			  u64 *cookie_ret, struct rds_mr **mr_ret,
++			  struct rds_conn_path *cp)
+ {
+ 	struct rds_mr *mr = NULL, *found;
+ 	unsigned int nr_pages;
+@@ -269,7 +270,8 @@ static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
+ 	 * Note that dma_map() implies that pending writes are
+ 	 * flushed to RAM, so no dma_sync is needed here. */
+ 	trans_private = rs->rs_transport->get_mr(sg, nents, rs,
+-						 &mr->r_key);
++						 &mr->r_key,
++						 cp ? cp->cp_conn : NULL);
+ 
+ 	if (IS_ERR(trans_private)) {
+ 		for (i = 0 ; i < nents; i++)
+@@ -330,7 +332,7 @@ int rds_get_mr(struct rds_sock *rs, char __user *optval, int optlen)
+ 			   sizeof(struct rds_get_mr_args)))
+ 		return -EFAULT;
+ 
+-	return __rds_rdma_map(rs, &args, NULL, NULL);
++	return __rds_rdma_map(rs, &args, NULL, NULL, NULL);
+ }
+ 
+ int rds_get_mr_for_dest(struct rds_sock *rs, char __user *optval, int optlen)
+@@ -354,7 +356,7 @@ int rds_get_mr_for_dest(struct rds_sock *rs, char __user *optval, int optlen)
+ 	new_args.cookie_addr = args.cookie_addr;
+ 	new_args.flags = args.flags;
+ 
+-	return __rds_rdma_map(rs, &new_args, NULL, NULL);
++	return __rds_rdma_map(rs, &new_args, NULL, NULL, NULL);
+ }
+ 
+ /*
+@@ -782,7 +784,8 @@ int rds_cmsg_rdma_map(struct rds_sock *rs, struct rds_message *rm,
+ 	    rm->m_rdma_cookie != 0)
+ 		return -EINVAL;
+ 
+-	return __rds_rdma_map(rs, CMSG_DATA(cmsg), &rm->m_rdma_cookie, &rm->rdma.op_rdma_mr);
++	return __rds_rdma_map(rs, CMSG_DATA(cmsg), &rm->m_rdma_cookie,
++			      &rm->rdma.op_rdma_mr, rm->m_conn_path);
+ }
+ 
+ /*
+diff --git a/net/rds/rds.h b/net/rds/rds.h
+index f2272fb8cd45..60b3b787fbdb 100644
+--- a/net/rds/rds.h
++++ b/net/rds/rds.h
+@@ -464,6 +464,8 @@ struct rds_message {
+ 			struct scatterlist	*op_sg;
+ 		} data;
+ 	};
++
++	struct rds_conn_path *m_conn_path;
+ };
+ 
+ /*
+@@ -544,7 +546,8 @@ struct rds_transport {
+ 					unsigned int avail);
+ 	void (*exit)(void);
+ 	void *(*get_mr)(struct scatterlist *sg, unsigned long nr_sg,
+-			struct rds_sock *rs, u32 *key_ret);
++			struct rds_sock *rs, u32 *key_ret,
++			struct rds_connection *conn);
+ 	void (*sync_mr)(void *trans_private, int direction);
+ 	void (*free_mr)(void *trans_private, int invalidate);
+ 	void (*flush_mrs)(void);
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 94c7f74909be..59f17a2335f4 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1169,6 +1169,13 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 		rs->rs_conn = conn;
+ 	}
+ 
++	if (conn->c_trans->t_mp_capable)
++		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
++	else
++		cpath = &conn->c_path[0];
++
++	rm->m_conn_path = cpath;
++
+ 	/* Parse any control messages the user may have included. */
+ 	ret = rds_cmsg_send(rs, rm, msg, &allocated_mr);
+ 	if (ret) {
+@@ -1192,11 +1199,6 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 		goto out;
+ 	}
+ 
+-	if (conn->c_trans->t_mp_capable)
+-		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
+-	else
+-		cpath = &conn->c_path[0];
+-
+ 	if (rds_destroy_pending(conn)) {
+ 		ret = -EAGAIN;
+ 		goto out;
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 1350f1be8037..8229a52c2acd 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -70,7 +70,7 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 	iov[2].iov_len	= sizeof(ack_info);
+ 
+ 	pkt.whdr.epoch		= htonl(conn->proto.epoch);
+-	pkt.whdr.cid		= htonl(conn->proto.cid);
++	pkt.whdr.cid		= htonl(conn->proto.cid | channel);
+ 	pkt.whdr.callNumber	= htonl(call_id);
+ 	pkt.whdr.seq		= 0;
+ 	pkt.whdr.type		= chan->last_type;
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 74d0bd7e76d7..1309b5509ef2 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -449,6 +449,7 @@ int ima_read_file(struct file *file, enum kernel_read_file_id read_id)
+ 
+ static int read_idmap[READING_MAX_ID] = {
+ 	[READING_FIRMWARE] = FIRMWARE_CHECK,
++	[READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
+ 	[READING_MODULE] = MODULE_CHECK,
+ 	[READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
+ 	[READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
+diff --git a/sound/pci/emu10k1/emupcm.c b/sound/pci/emu10k1/emupcm.c
+index cefe613ef7b7..a68c7554f30f 100644
+--- a/sound/pci/emu10k1/emupcm.c
++++ b/sound/pci/emu10k1/emupcm.c
+@@ -1858,7 +1858,9 @@ int snd_emu10k1_pcm_efx(struct snd_emu10k1 *emu, int device)
+ 	if (!kctl)
+ 		return -ENOMEM;
+ 	kctl->id.device = device;
+-	snd_ctl_add(emu->card, kctl);
++	err = snd_ctl_add(emu->card, kctl);
++	if (err < 0)
++		return err;
+ 
+ 	snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(emu->pci), 64*1024, 64*1024);
+ 
+diff --git a/sound/pci/emu10k1/memory.c b/sound/pci/emu10k1/memory.c
+index 5865f3b90b34..dbc7d8d0e1c4 100644
+--- a/sound/pci/emu10k1/memory.c
++++ b/sound/pci/emu10k1/memory.c
+@@ -248,13 +248,13 @@ __found_pages:
+ static int is_valid_page(struct snd_emu10k1 *emu, dma_addr_t addr)
+ {
+ 	if (addr & ~emu->dma_mask) {
+-		dev_err(emu->card->dev,
++		dev_err_ratelimited(emu->card->dev,
+ 			"max memory size is 0x%lx (addr = 0x%lx)!!\n",
+ 			emu->dma_mask, (unsigned long)addr);
+ 		return 0;
+ 	}
+ 	if (addr & (EMUPAGESIZE-1)) {
+-		dev_err(emu->card->dev, "page is not aligned\n");
++		dev_err_ratelimited(emu->card->dev, "page is not aligned\n");
+ 		return 0;
+ 	}
+ 	return 1;
+@@ -345,7 +345,7 @@ snd_emu10k1_alloc_pages(struct snd_emu10k1 *emu, struct snd_pcm_substream *subst
+ 		else
+ 			addr = snd_pcm_sgbuf_get_addr(substream, ofs);
+ 		if (! is_valid_page(emu, addr)) {
+-			dev_err(emu->card->dev,
++			dev_err_ratelimited(emu->card->dev,
+ 				"emu: failure page = %d\n", idx);
+ 			mutex_unlock(&hdr->block_mutex);
+ 			return NULL;
+diff --git a/sound/pci/fm801.c b/sound/pci/fm801.c
+index 73a67bc3586b..e3fb9c61017c 100644
+--- a/sound/pci/fm801.c
++++ b/sound/pci/fm801.c
+@@ -1068,11 +1068,19 @@ static int snd_fm801_mixer(struct fm801 *chip)
+ 		if ((err = snd_ac97_mixer(chip->ac97_bus, &ac97, &chip->ac97_sec)) < 0)
+ 			return err;
+ 	}
+-	for (i = 0; i < FM801_CONTROLS; i++)
+-		snd_ctl_add(chip->card, snd_ctl_new1(&snd_fm801_controls[i], chip));
++	for (i = 0; i < FM801_CONTROLS; i++) {
++		err = snd_ctl_add(chip->card,
++			snd_ctl_new1(&snd_fm801_controls[i], chip));
++		if (err < 0)
++			return err;
++	}
+ 	if (chip->multichannel) {
+-		for (i = 0; i < FM801_CONTROLS_MULTI; i++)
+-			snd_ctl_add(chip->card, snd_ctl_new1(&snd_fm801_controls_multi[i], chip));
++		for (i = 0; i < FM801_CONTROLS_MULTI; i++) {
++			err = snd_ctl_add(chip->card,
++				snd_ctl_new1(&snd_fm801_controls_multi[i], chip));
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index 768ea8651993..84261ef02c93 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -39,6 +39,10 @@
+ /* Enable this to see controls for tuning purpose. */
+ /*#define ENABLE_TUNING_CONTROLS*/
+ 
++#ifdef ENABLE_TUNING_CONTROLS
++#include <sound/tlv.h>
++#endif
++
+ #define FLOAT_ZERO	0x00000000
+ #define FLOAT_ONE	0x3f800000
+ #define FLOAT_TWO	0x40000000
+@@ -3068,8 +3072,8 @@ static int equalizer_ctl_put(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static const DECLARE_TLV_DB_SCALE(voice_focus_db_scale, 2000, 100, 0);
+-static const DECLARE_TLV_DB_SCALE(eq_db_scale, -2400, 100, 0);
++static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(voice_focus_db_scale, 2000, 100, 0);
++static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(eq_db_scale, -2400, 100, 0);
+ 
+ static int add_tuning_control(struct hda_codec *codec,
+ 				hda_nid_t pnid, hda_nid_t nid,
+diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
+index 89df2d9f63d7..1544166631e3 100644
+--- a/sound/soc/fsl/fsl_ssi.c
++++ b/sound/soc/fsl/fsl_ssi.c
+@@ -385,8 +385,7 @@ static irqreturn_t fsl_ssi_isr(int irq, void *dev_id)
+ {
+ 	struct fsl_ssi *ssi = dev_id;
+ 	struct regmap *regs = ssi->regs;
+-	__be32 sisr;
+-	__be32 sisr2;
++	u32 sisr, sisr2;
+ 
+ 	regmap_read(regs, REG_SSI_SISR, &sisr);
+ 
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 82402688bd8e..948505f74229 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -33,7 +33,7 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
+ 	struct snd_soc_component *component;
+ 	struct snd_soc_rtdcom_list *rtdcom;
+ 	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+-	int ret = 0, __ret;
++	int ret;
+ 
+ 	mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
+ 
+@@ -68,16 +68,15 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
+ 		    !component->driver->compr_ops->open)
+ 			continue;
+ 
+-		__ret = component->driver->compr_ops->open(cstream);
+-		if (__ret < 0) {
++		ret = component->driver->compr_ops->open(cstream);
++		if (ret < 0) {
+ 			dev_err(component->dev,
+ 				"Compress ASoC: can't open platform %s: %d\n",
+-				component->name, __ret);
+-			ret = __ret;
++				component->name, ret);
++			goto machine_err;
+ 		}
+ 	}
+-	if (ret < 0)
+-		goto machine_err;
++	component = NULL;
+ 
+ 	if (rtd->dai_link->compr_ops && rtd->dai_link->compr_ops->startup) {
+ 		ret = rtd->dai_link->compr_ops->startup(cstream);
+@@ -97,17 +96,20 @@ static int soc_compr_open(struct snd_compr_stream *cstream)
+ 
+ machine_err:
+ 	for_each_rtdcom(rtd, rtdcom) {
+-		component = rtdcom->component;
++		struct snd_soc_component *err_comp = rtdcom->component;
++
++		if (err_comp == component)
++			break;
+ 
+ 		/* ignore duplication for now */
+-		if (platform && (component == &platform->component))
++		if (platform && (err_comp == &platform->component))
+ 			continue;
+ 
+-		if (!component->driver->compr_ops ||
+-		    !component->driver->compr_ops->free)
++		if (!err_comp->driver->compr_ops ||
++		    !err_comp->driver->compr_ops->free)
+ 			continue;
+ 
+-		component->driver->compr_ops->free(cstream);
++		err_comp->driver->compr_ops->free(cstream);
+ 	}
+ 
+ 	if (platform && platform->driver->compr_ops && platform->driver->compr_ops->free)
+@@ -132,7 +134,7 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ 	struct snd_soc_dpcm *dpcm;
+ 	struct snd_soc_dapm_widget_list *list;
+ 	int stream;
+-	int ret = 0, __ret;
++	int ret;
+ 
+ 	if (cstream->direction == SND_COMPRESS_PLAYBACK)
+ 		stream = SNDRV_PCM_STREAM_PLAYBACK;
+@@ -172,16 +174,15 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ 		    !component->driver->compr_ops->open)
+ 			continue;
+ 
+-		__ret = component->driver->compr_ops->open(cstream);
+-		if (__ret < 0) {
++		ret = component->driver->compr_ops->open(cstream);
++		if (ret < 0) {
+ 			dev_err(component->dev,
+ 				"Compress ASoC: can't open platform %s: %d\n",
+-				component->name, __ret);
+-			ret = __ret;
++				component->name, ret);
++			goto machine_err;
+ 		}
+ 	}
+-	if (ret < 0)
+-		goto machine_err;
++	component = NULL;
+ 
+ 	if (fe->dai_link->compr_ops && fe->dai_link->compr_ops->startup) {
+ 		ret = fe->dai_link->compr_ops->startup(cstream);
+@@ -236,17 +237,20 @@ fe_err:
+ 		fe->dai_link->compr_ops->shutdown(cstream);
+ machine_err:
+ 	for_each_rtdcom(fe, rtdcom) {
+-		component = rtdcom->component;
++		struct snd_soc_component *err_comp = rtdcom->component;
++
++		if (err_comp == component)
++			break;
+ 
+ 		/* ignore duplication for now */
+-		if (platform && (component == &platform->component))
++		if (platform && (err_comp == &platform->component))
+ 			continue;
+ 
+-		if (!component->driver->compr_ops ||
+-		    !component->driver->compr_ops->free)
++		if (!err_comp->driver->compr_ops ||
++		    !err_comp->driver->compr_ops->free)
+ 			continue;
+ 
+-		component->driver->compr_ops->free(cstream);
++		err_comp->driver->compr_ops->free(cstream);
+ 	}
+ 
+ 	if (platform && platform->driver->compr_ops && platform->driver->compr_ops->free)
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 68d9dc930096..d800b99ba5cc 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1965,8 +1965,10 @@ int dpcm_be_dai_shutdown(struct snd_soc_pcm_runtime *fe, int stream)
+ 			continue;
+ 
+ 		if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_FREE) &&
+-		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_OPEN))
+-			continue;
++		    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_OPEN)) {
++			soc_pcm_hw_free(be_substream);
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_FREE;
++		}
+ 
+ 		dev_dbg(be->dev, "ASoC: close BE %s\n",
+ 			be->dai_link->name);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 986b8b2f90fb..f1b4e3099513 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -2006,6 +2006,13 @@ static void set_link_hw_format(struct snd_soc_dai_link *link,
+ 
+ 		link->dai_fmt = hw_config->fmt & SND_SOC_DAIFMT_FORMAT_MASK;
+ 
++		/* clock gating */
++		if (hw_config->clock_gated == SND_SOC_TPLG_DAI_CLK_GATE_GATED)
++			link->dai_fmt |= SND_SOC_DAIFMT_GATED;
++		else if (hw_config->clock_gated ==
++			 SND_SOC_TPLG_DAI_CLK_GATE_CONT)
++			link->dai_fmt |= SND_SOC_DAIFMT_CONT;
++
+ 		/* clock signal polarity */
+ 		invert_bclk = hw_config->invert_bclk;
+ 		invert_fsync = hw_config->invert_fsync;
+@@ -2019,13 +2026,15 @@ static void set_link_hw_format(struct snd_soc_dai_link *link,
+ 			link->dai_fmt |= SND_SOC_DAIFMT_IB_IF;
+ 
+ 		/* clock masters */
+-		bclk_master = hw_config->bclk_master;
+-		fsync_master = hw_config->fsync_master;
+-		if (!bclk_master && !fsync_master)
++		bclk_master = (hw_config->bclk_master ==
++			       SND_SOC_TPLG_BCLK_CM);
++		fsync_master = (hw_config->fsync_master ==
++				SND_SOC_TPLG_FSYNC_CM);
++		if (bclk_master && fsync_master)
+ 			link->dai_fmt |= SND_SOC_DAIFMT_CBM_CFM;
+-		else if (bclk_master && !fsync_master)
+-			link->dai_fmt |= SND_SOC_DAIFMT_CBS_CFM;
+ 		else if (!bclk_master && fsync_master)
++			link->dai_fmt |= SND_SOC_DAIFMT_CBS_CFM;
++		else if (bclk_master && !fsync_master)
+ 			link->dai_fmt |= SND_SOC_DAIFMT_CBM_CFS;
+ 		else
+ 			link->dai_fmt |= SND_SOC_DAIFMT_CBS_CFS;
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 3cbfae6604f9..d8a46d46bcd2 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -1311,7 +1311,7 @@ static void retire_capture_urb(struct snd_usb_substream *subs,
+ 		if (bytes % (runtime->sample_bits >> 3) != 0) {
+ 			int oldbytes = bytes;
+ 			bytes = frames * stride;
+-			dev_warn(&subs->dev->dev,
++			dev_warn_ratelimited(&subs->dev->dev,
+ 				 "Corrected urb data len. %d->%d\n",
+ 							oldbytes, bytes);
+ 		}
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index e37608a87dba..155d2570274f 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -73,6 +73,7 @@ static void inc_group_count(struct list_head *list,
+ %type <num> value_sym
+ %type <head> event_config
+ %type <head> opt_event_config
++%type <head> opt_pmu_config
+ %type <term> event_term
+ %type <head> event_pmu
+ %type <head> event_legacy_symbol
+@@ -224,7 +225,7 @@ event_def: event_pmu |
+ 	   event_bpf_file
+ 
+ event_pmu:
+-PE_NAME opt_event_config
++PE_NAME opt_pmu_config
+ {
+ 	struct list_head *list, *orig_terms, *terms;
+ 
+@@ -496,6 +497,17 @@ opt_event_config:
+ 	$$ = NULL;
+ }
+ 
++opt_pmu_config:
++'/' event_config '/'
++{
++	$$ = $2;
++}
++|
++'/' '/'
++{
++	$$ = NULL;
++}
++
+ start_terms: event_config
+ {
+ 	struct parse_events_state *parse_state = _parse_state;
+diff --git a/tools/testing/selftests/filesystems/Makefile b/tools/testing/selftests/filesystems/Makefile
+index 5c7d7001ad37..129880fb42d3 100644
+--- a/tools/testing/selftests/filesystems/Makefile
++++ b/tools/testing/selftests/filesystems/Makefile
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ 
++CFLAGS += -I../../../../usr/include/
+ TEST_GEN_PROGS := devpts_pts
+ TEST_GEN_PROGS_EXTENDED := dnotify_test
+ 
+diff --git a/tools/testing/selftests/filesystems/devpts_pts.c b/tools/testing/selftests/filesystems/devpts_pts.c
+index b9055e974289..a425840dc30c 100644
+--- a/tools/testing/selftests/filesystems/devpts_pts.c
++++ b/tools/testing/selftests/filesystems/devpts_pts.c
+@@ -8,9 +8,10 @@
+ #include <stdlib.h>
+ #include <string.h>
+ #include <unistd.h>
+-#include <sys/ioctl.h>
++#include <asm/ioctls.h>
+ #include <sys/mount.h>
+ #include <sys/wait.h>
++#include "../kselftest.h"
+ 
+ static bool terminal_dup2(int duplicate, int original)
+ {
+@@ -125,10 +126,12 @@ static int do_tiocgptpeer(char *ptmx, char *expected_procfd_contents)
+ 		if (errno == EINVAL) {
+ 			fprintf(stderr, "TIOCGPTPEER is not supported. "
+ 					"Skipping test.\n");
+-			fret = EXIT_SUCCESS;
++			fret = KSFT_SKIP;
++		} else {
++			fprintf(stderr,
++				"Failed to perform TIOCGPTPEER ioctl\n");
++			fret = EXIT_FAILURE;
+ 		}
+-
+-		fprintf(stderr, "Failed to perform TIOCGPTPEER ioctl\n");
+ 		goto do_cleanup;
+ 	}
+ 
+@@ -281,7 +284,7 @@ int main(int argc, char *argv[])
+ 	if (!isatty(STDIN_FILENO)) {
+ 		fprintf(stderr, "Standard input file desciptor is not attached "
+ 				"to a terminal. Skipping test\n");
+-		exit(EXIT_FAILURE);
++		exit(KSFT_SKIP);
+ 	}
+ 
+ 	ret = unshare(CLONE_NEWNS);
+diff --git a/tools/testing/selftests/intel_pstate/run.sh b/tools/testing/selftests/intel_pstate/run.sh
+index c670359becc6..928978804342 100755
+--- a/tools/testing/selftests/intel_pstate/run.sh
++++ b/tools/testing/selftests/intel_pstate/run.sh
+@@ -30,9 +30,12 @@
+ 
+ EVALUATE_ONLY=0
+ 
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ if ! uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ | grep -q x86; then
+ 	echo "$0 # Skipped: Test can only run on x86 architectures."
+-	exit 0
++	exit $ksft_skip
+ fi
+ 
+ max_cpus=$(($(nproc)-1))
+@@ -48,11 +51,12 @@ function run_test () {
+ 
+ 	echo "sleeping for 5 seconds"
+ 	sleep 5
+-	num_freqs=$(cat /proc/cpuinfo | grep MHz | sort -u | wc -l)
+-	if [ $num_freqs -le 2 ]; then
+-		cat /proc/cpuinfo | grep MHz | sort -u | tail -1 > /tmp/result.$1
++	grep MHz /proc/cpuinfo | sort -u > /tmp/result.freqs
++	num_freqs=$(wc -l /tmp/result.freqs | awk ' { print $1 } ')
++	if [ $num_freqs -ge 2 ]; then
++		tail -n 1 /tmp/result.freqs > /tmp/result.$1
+ 	else
+-		cat /proc/cpuinfo | grep MHz | sort -u > /tmp/result.$1
++		cp /tmp/result.freqs /tmp/result.$1
+ 	fi
+ 	./msr 0 >> /tmp/result.$1
+ 
+@@ -82,21 +86,20 @@ _max_freq=$(cpupower frequency-info -l | tail -1 | awk ' { print $2 } ')
+ max_freq=$(($_max_freq / 1000))
+ 
+ 
+-for freq in `seq $max_freq -100 $min_freq`
++[ $EVALUATE_ONLY -eq 0 ] && for freq in `seq $max_freq -100 $min_freq`
+ do
+ 	echo "Setting maximum frequency to $freq"
+ 	cpupower frequency-set -g powersave --max=${freq}MHz >& /dev/null
+-	[ $EVALUATE_ONLY -eq 0 ] && run_test $freq
++	run_test $freq
+ done
+ 
+-echo "=============================================================================="
++[ $EVALUATE_ONLY -eq 0 ] && cpupower frequency-set -g powersave --max=${max_freq}MHz >& /dev/null
+ 
++echo "=============================================================================="
+ echo "The marketing frequency of the cpu is $mkt_freq MHz"
+ echo "The maximum frequency of the cpu is $max_freq MHz"
+ echo "The minimum frequency of the cpu is $min_freq MHz"
+ 
+-cpupower frequency-set -g powersave --max=${max_freq}MHz >& /dev/null
+-
+ # make a pretty table
+ echo "Target      Actual      Difference     MSR(0x199)     max_perf_pct"
+ for freq in `seq $max_freq -100 $min_freq`
+@@ -104,10 +107,6 @@ do
+ 	result_freq=$(cat /tmp/result.${freq} | grep "cpu MHz" | awk ' { print $4 } ' | awk -F "." ' { print $1 } ')
+ 	msr=$(cat /tmp/result.${freq} | grep "msr" | awk ' { print $3 } ')
+ 	max_perf_pct=$(cat /tmp/result.${freq} | grep "max_perf_pct" | awk ' { print $2 } ' )
+-	if [ $result_freq -eq $freq ]; then
+-		echo " $freq        $result_freq             0          $msr         $(($max_perf_pct*3300))"
+-	else
+-		echo " $freq        $result_freq          $(($result_freq-$freq))          $msr          $(($max_perf_pct*$max_freq))"
+-	fi
++	echo " $freq        $result_freq          $(($result_freq-$freq))          $msr          $(($max_perf_pct*$max_freq))"
+ done
+ exit 0
+diff --git a/tools/testing/selftests/kvm/lib/assert.c b/tools/testing/selftests/kvm/lib/assert.c
+index c9f5b7d4ce38..cd01144d27c8 100644
+--- a/tools/testing/selftests/kvm/lib/assert.c
++++ b/tools/testing/selftests/kvm/lib/assert.c
+@@ -13,6 +13,8 @@
+ #include <execinfo.h>
+ #include <sys/syscall.h>
+ 
++#include "../../kselftest.h"
++
+ /* Dumps the current stack trace to stderr. */
+ static void __attribute__((noinline)) test_dump_stack(void);
+ static void test_dump_stack(void)
+@@ -70,8 +72,9 @@ test_assert(bool exp, const char *exp_str,
+ 
+ 		fprintf(stderr, "==== Test Assertion Failure ====\n"
+ 			"  %s:%u: %s\n"
+-			"  pid=%d tid=%d\n",
+-			file, line, exp_str, getpid(), gettid());
++			"  pid=%d tid=%d - %s\n",
++			file, line, exp_str, getpid(), gettid(),
++			strerror(errno));
+ 		test_dump_stack();
+ 		if (fmt) {
+ 			fputs("  ", stderr);
+@@ -80,6 +83,8 @@ test_assert(bool exp, const char *exp_str,
+ 		}
+ 		va_end(ap);
+ 
++		if (errno == EACCES)
++			ksft_exit_skip("Access denied - Exiting.\n");
+ 		exit(254);
+ 	}
+ 
+diff --git a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
+index aaa633263b2c..d7cb7944a42e 100644
+--- a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
++++ b/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
+@@ -28,6 +28,8 @@
+ #include <string.h>
+ #include <sys/ioctl.h>
+ 
++#include "../kselftest.h"
++
+ #ifndef MSR_IA32_TSC_ADJUST
+ #define MSR_IA32_TSC_ADJUST 0x3b
+ #endif
+diff --git a/tools/testing/selftests/memfd/run_tests.sh b/tools/testing/selftests/memfd/run_tests.sh
+index c2d41ed81b24..2013f195e623 100755
+--- a/tools/testing/selftests/memfd/run_tests.sh
++++ b/tools/testing/selftests/memfd/run_tests.sh
+@@ -1,6 +1,9 @@
+ #!/bin/bash
+ # please run as root
+ 
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++
+ #
+ # Normal tests requiring no special resources
+ #
+@@ -29,12 +32,13 @@ if [ -n "$freepgs" ] && [ $freepgs -lt $hpages_test ]; then
+ 	nr_hugepgs=`cat /proc/sys/vm/nr_hugepages`
+ 	hpages_needed=`expr $hpages_test - $freepgs`
+ 
++	if [ $UID != 0 ]; then
++		echo "Please run memfd with hugetlbfs test as root"
++		exit $ksft_skip
++	fi
++
+ 	echo 3 > /proc/sys/vm/drop_caches
+ 	echo $(( $hpages_needed + $nr_hugepgs )) > /proc/sys/vm/nr_hugepages
+-	if [ $? -ne 0 ]; then
+-		echo "Please run this test as root"
+-		exit 1
+-	fi
+ 	while read name size unit; do
+ 		if [ "$name" = "HugePages_Free:" ]; then
+ 			freepgs=$size
+@@ -53,7 +57,7 @@ if [ $freepgs -lt $hpages_test ]; then
+ 	fi
+ 	printf "Not enough huge pages available (%d < %d)\n" \
+ 		$freepgs $needpgs
+-	exit 1
++	exit $ksft_skip
+ fi
+ 
+ #
+diff --git a/tools/usb/usbip/libsrc/vhci_driver.c b/tools/usb/usbip/libsrc/vhci_driver.c
+index c9c81614a66a..4204359c9fee 100644
+--- a/tools/usb/usbip/libsrc/vhci_driver.c
++++ b/tools/usb/usbip/libsrc/vhci_driver.c
+@@ -135,11 +135,11 @@ static int refresh_imported_device_list(void)
+ 	return 0;
+ }
+ 
+-static int get_nports(void)
++static int get_nports(struct udev_device *hc_device)
+ {
+ 	const char *attr_nports;
+ 
+-	attr_nports = udev_device_get_sysattr_value(vhci_driver->hc_device, "nports");
++	attr_nports = udev_device_get_sysattr_value(hc_device, "nports");
+ 	if (!attr_nports) {
+ 		err("udev_device_get_sysattr_value nports failed");
+ 		return -1;
+@@ -242,35 +242,41 @@ static int read_record(int rhport, char *host, unsigned long host_len,
+ 
+ int usbip_vhci_driver_open(void)
+ {
++	int nports;
++	struct udev_device *hc_device;
++
+ 	udev_context = udev_new();
+ 	if (!udev_context) {
+ 		err("udev_new failed");
+ 		return -1;
+ 	}
+ 
+-	vhci_driver = calloc(1, sizeof(struct usbip_vhci_driver));
+-
+ 	/* will be freed in usbip_driver_close() */
+-	vhci_driver->hc_device =
++	hc_device =
+ 		udev_device_new_from_subsystem_sysname(udev_context,
+ 						       USBIP_VHCI_BUS_TYPE,
+ 						       USBIP_VHCI_DEVICE_NAME);
+-	if (!vhci_driver->hc_device) {
++	if (!hc_device) {
+ 		err("udev_device_new_from_subsystem_sysname failed");
+ 		goto err;
+ 	}
+ 
+-	vhci_driver->nports = get_nports();
+-	dbg("available ports: %d", vhci_driver->nports);
+-
+-	if (vhci_driver->nports <= 0) {
++	nports = get_nports(hc_device);
++	if (nports <= 0) {
+ 		err("no available ports");
+ 		goto err;
+-	} else if (vhci_driver->nports > MAXNPORT) {
+-		err("port number exceeds %d", MAXNPORT);
++	}
++	dbg("available ports: %d", nports);
++
++	vhci_driver = calloc(1, sizeof(struct usbip_vhci_driver) +
++			nports * sizeof(struct usbip_imported_device));
++	if (!vhci_driver) {
++		err("vhci_driver allocation failed");
+ 		goto err;
+ 	}
+ 
++	vhci_driver->nports = nports;
++	vhci_driver->hc_device = hc_device;
+ 	vhci_driver->ncontrollers = get_ncontrollers();
+ 	dbg("available controllers: %d", vhci_driver->ncontrollers);
+ 
+@@ -285,7 +291,7 @@ int usbip_vhci_driver_open(void)
+ 	return 0;
+ 
+ err:
+-	udev_device_unref(vhci_driver->hc_device);
++	udev_device_unref(hc_device);
+ 
+ 	if (vhci_driver)
+ 		free(vhci_driver);
+diff --git a/tools/usb/usbip/libsrc/vhci_driver.h b/tools/usb/usbip/libsrc/vhci_driver.h
+index 418b404d5121..6c9aca216705 100644
+--- a/tools/usb/usbip/libsrc/vhci_driver.h
++++ b/tools/usb/usbip/libsrc/vhci_driver.h
+@@ -13,7 +13,6 @@
+ 
+ #define USBIP_VHCI_BUS_TYPE "platform"
+ #define USBIP_VHCI_DEVICE_NAME "vhci_hcd.0"
+-#define MAXNPORT 128
+ 
+ enum hub_speed {
+ 	HUB_SPEED_HIGH = 0,
+@@ -41,7 +40,7 @@ struct usbip_vhci_driver {
+ 
+ 	int ncontrollers;
+ 	int nports;
+-	struct usbip_imported_device idev[MAXNPORT];
++	struct usbip_imported_device idev[];
+ };
+ 
+ 
+diff --git a/tools/usb/usbip/src/usbip_detach.c b/tools/usb/usbip/src/usbip_detach.c
+index 9db9d21bb2ec..6a8db858caa5 100644
+--- a/tools/usb/usbip/src/usbip_detach.c
++++ b/tools/usb/usbip/src/usbip_detach.c
+@@ -43,7 +43,7 @@ void usbip_detach_usage(void)
+ 
+ static int detach_port(char *port)
+ {
+-	int ret;
++	int ret = 0;
+ 	uint8_t portnum;
+ 	char path[PATH_MAX+1];
+ 
+@@ -73,9 +73,12 @@ static int detach_port(char *port)
+ 	}
+ 
+ 	ret = usbip_vhci_detach_device(portnum);
+-	if (ret < 0)
+-		return -1;
++	if (ret < 0) {
++		ret = -1;
++		goto call_driver_close;
++	}
+ 
++call_driver_close:
+ 	usbip_vhci_driver_close();
+ 
+ 	return ret;



Thread overview: 30+ messages
2018-08-03 12:19 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2018-08-24 11:45 [gentoo-commits] proj/linux-patches:4.17 commit in: / Mike Pagano
2018-08-22  9:56 Alice Ferrazzi
2018-08-18 18:10 Mike Pagano
2018-08-17 19:40 Mike Pagano
2018-08-17 19:27 Mike Pagano
2018-08-16 11:47 Mike Pagano
2018-08-15 16:35 Mike Pagano
2018-08-09 10:55 Mike Pagano
2018-08-07 18:10 Mike Pagano
2018-07-28 10:41 Mike Pagano
2018-07-25 12:19 Mike Pagano
2018-07-25 10:28 Mike Pagano
2018-07-22 15:12 Mike Pagano
2018-07-18 11:18 Mike Pagano
2018-07-17 16:18 Mike Pagano
2018-07-12 15:15 Alice Ferrazzi
2018-07-09 15:01 Alice Ferrazzi
2018-07-03 13:36 Mike Pagano
2018-07-03 13:19 Mike Pagano
2018-06-29 23:18 Mike Pagano
2018-06-26 16:29 Alice Ferrazzi
2018-06-25 12:40 Mike Pagano
2018-06-20 17:47 Mike Pagano
2018-06-19 23:30 Mike Pagano
2018-06-16 15:46 Mike Pagano
2018-06-11 21:50 Mike Pagano
2018-06-08 23:11 Mike Pagano
2018-06-03 22:19 Mike Pagano
2018-05-23 18:47 Mike Pagano

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the mbox file for this message, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1533298727.a435a0a68c5f50f33231d974dc564c153c825d1f.mpagano@gentoo \
    --to=mpagano@gentoo.org \
    --cc=gentoo-commits@lists.gentoo.org \
    --cc=gentoo-dev@lists.gentoo.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see the mirroring instructions for how to clone and mirror all data and code used for this inbox.