public inbox for gentoo-commits@lists.gentoo.org
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Wed, 22 Dec 2021 14:05:40 +0000 (UTC)
Message-ID: <1640181930.a65240864c0cc8dd0f787643c5f01b65e5dd5685.mpagano@gentoo>

commit:     a65240864c0cc8dd0f787643c5f01b65e5dd5685
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 22 14:05:30 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 22 14:05:30 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a6524086

Linux patch 5.10.88

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1087_linux-5.10.88.patch | 3012 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3016 insertions(+)

diff --git a/0000_README b/0000_README
index b743d7ac..a6cd68ff 100644
--- a/0000_README
+++ b/0000_README
@@ -391,6 +391,10 @@ Patch:  1086_linux-5.10.87.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.10.87
 
+Patch:  1087_linux-5.10.88.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.10.88
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1087_linux-5.10.88.patch b/1087_linux-5.10.88.patch
new file mode 100644
index 00000000..2deec06c
--- /dev/null
+++ b/1087_linux-5.10.88.patch
@@ -0,0 +1,3012 @@
+diff --git a/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst b/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
+index f1d5233e5e510..0a233b17c664e 100644
+--- a/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
++++ b/Documentation/networking/device_drivers/ethernet/intel/ixgbe.rst
+@@ -440,6 +440,22 @@ NOTE: For 82599-based network connections, if you are enabling jumbo frames in
+ a virtual function (VF), jumbo frames must first be enabled in the physical
+ function (PF). The VF MTU setting cannot be larger than the PF MTU.
+ 
++NBASE-T Support
++---------------
++The ixgbe driver supports NBASE-T on some devices. However, the advertisement
++of NBASE-T speeds is suppressed by default, to accommodate broken network
++switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
++command to enable advertising NBASE-T speeds on devices which support it::
++
++  ethtool -s eth? advertise 0x1800000001028
++
++On Linux systems with INTERFACES(5), this can be specified as a pre-up command
++in /etc/network/interfaces so that the interface is always brought up with
++NBASE-T support, e.g.::
++
++  iface eth? inet dhcp
++       pre-up ethtool -s eth? advertise 0x1800000001028 || true
++
+ Generic Receive Offload, aka GRO
+ --------------------------------
+ The driver supports the in-kernel software implementation of GRO. GRO has
+diff --git a/Makefile b/Makefile
+index d627f4ae5af56..0b74b414f4e57 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 87
++SUBLEVEL = 88
+ EXTRAVERSION =
+ NAME = Dare mighty things
+ 
+diff --git a/arch/arm/boot/dts/imx6ull-pinfunc.h b/arch/arm/boot/dts/imx6ull-pinfunc.h
+index eb025a9d47592..7328d4ef8559f 100644
+--- a/arch/arm/boot/dts/imx6ull-pinfunc.h
++++ b/arch/arm/boot/dts/imx6ull-pinfunc.h
+@@ -82,6 +82,6 @@
+ #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS                         0x01F4 0x0480 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK                        0x01F8 0x0484 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0                       0x01FC 0x0488 0x0000 0x9 0x0
+-#define MX6ULL_PAD_CSI_DATA07__ESAI_T0                            0x0200 0x048C 0x0000 0x9 0x0
++#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0                           0x0200 0x048C 0x0000 0x9 0x0
+ 
+ #endif /* __DTS_IMX6ULL_PINFUNC_H */
+diff --git a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
+index 2b645642b9352..2a745522404d6 100644
+--- a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
++++ b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts
+@@ -12,7 +12,7 @@
+ 	flash0: n25q00@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00aa";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_arria5_socdk.dts b/arch/arm/boot/dts/socfpga_arria5_socdk.dts
+index 90e676e7019f2..1b02d46496a85 100644
+--- a/arch/arm/boot/dts/socfpga_arria5_socdk.dts
++++ b/arch/arm/boot/dts/socfpga_arria5_socdk.dts
+@@ -119,7 +119,7 @@
+ 	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q256a";
++		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
+index 6f138b2b26163..51bb436784e24 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts
+@@ -124,7 +124,7 @@
+ 	flash0: n25q00@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <0>;	/* chip select */
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+index c155ff02eb6e0..cae9ddd5ed38b 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+@@ -169,7 +169,7 @@
+ 	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+index 8d5d3996f6f27..ca18b959e6559 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+@@ -80,7 +80,7 @@
+ 	flash: flash@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q256a";
++		compatible = "micron,n25q256a", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 		m25p,fast-read;
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
+index 99a71757cdf46..3f7aa7bf0863a 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts
+@@ -116,7 +116,7 @@
+ 	flash0: n25q512a@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q512a";
++		compatible = "micron,n25q512a", "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-max-frequency = <100000000>;
+ 
+diff --git a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
+index a060718758b67..25874e1b9c829 100644
+--- a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
++++ b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
+@@ -224,7 +224,7 @@
+ 	n25q128@0 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q128";
++		compatible = "micron,n25q128", "jedec,spi-nor";
+ 		reg = <0>;		/* chip select */
+ 		spi-max-frequency = <100000000>;
+ 		m25p,fast-read;
+@@ -241,7 +241,7 @@
+ 	n25q00@1 {
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+-		compatible = "n25q00";
++		compatible = "micron,mt25qu02g", "jedec,spi-nor";
+ 		reg = <1>;		/* chip select */
+ 		spi-max-frequency = <100000000>;
+ 		m25p,fast-read;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+index 05ee062548e4f..f4d7bb75707df 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+@@ -866,11 +866,12 @@
+ 				assigned-clocks = <&clk IMX8MM_CLK_ENET_AXI>,
+ 						  <&clk IMX8MM_CLK_ENET_TIMER>,
+ 						  <&clk IMX8MM_CLK_ENET_REF>,
+-						  <&clk IMX8MM_CLK_ENET_TIMER>;
++						  <&clk IMX8MM_CLK_ENET_PHY_REF>;
+ 				assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_266M>,
+ 							 <&clk IMX8MM_SYS_PLL2_100M>,
+-							 <&clk IMX8MM_SYS_PLL2_125M>;
+-				assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++							 <&clk IMX8MM_SYS_PLL2_125M>,
++							 <&clk IMX8MM_SYS_PLL2_50M>;
++				assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ 				fsl,num-tx-queues = <3>;
+ 				fsl,num-rx-queues = <3>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index 16c7202885d70..aea723eb2ba3f 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -753,11 +753,12 @@
+ 				assigned-clocks = <&clk IMX8MN_CLK_ENET_AXI>,
+ 						  <&clk IMX8MN_CLK_ENET_TIMER>,
+ 						  <&clk IMX8MN_CLK_ENET_REF>,
+-						  <&clk IMX8MN_CLK_ENET_TIMER>;
++						  <&clk IMX8MN_CLK_ENET_PHY_REF>;
+ 				assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
+ 							 <&clk IMX8MN_SYS_PLL2_100M>,
+-							 <&clk IMX8MN_SYS_PLL2_125M>;
+-				assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++							 <&clk IMX8MN_SYS_PLL2_125M>,
++							 <&clk IMX8MN_SYS_PLL2_50M>;
++				assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ 				fsl,num-tx-queues = <3>;
+ 				fsl,num-rx-queues = <3>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+index ad66f1286d95c..c13b4a02d12f8 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
+@@ -62,6 +62,8 @@
+ 			reg = <1>;
+ 			eee-broken-1000t;
+ 			reset-gpios = <&gpio4 2 GPIO_ACTIVE_LOW>;
++			reset-assert-us = <10000>;
++			reset-deassert-us = <80000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index 03ef0e5f909e4..acee71ca32d83 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -725,11 +725,12 @@
+ 				assigned-clocks = <&clk IMX8MP_CLK_ENET_AXI>,
+ 						  <&clk IMX8MP_CLK_ENET_TIMER>,
+ 						  <&clk IMX8MP_CLK_ENET_REF>,
+-						  <&clk IMX8MP_CLK_ENET_TIMER>;
++						  <&clk IMX8MP_CLK_ENET_PHY_REF>;
+ 				assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_266M>,
+ 							 <&clk IMX8MP_SYS_PLL2_100M>,
+-							 <&clk IMX8MP_SYS_PLL2_125M>;
+-				assigned-clock-rates = <0>, <0>, <125000000>, <100000000>;
++							 <&clk IMX8MP_SYS_PLL2_125M>,
++							 <&clk IMX8MP_SYS_PLL2_50M>;
++				assigned-clock-rates = <0>, <100000000>, <125000000>, <0>;
+ 				fsl,num-tx-queues = <3>;
+ 				fsl,num-rx-queues = <3>;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index bce6f8b7db436..fbcb9531cc70d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -91,7 +91,7 @@
+ 		regulator-max-microvolt = <3300000>;
+ 		regulator-always-on;
+ 		regulator-boot-on;
+-		vim-supply = <&vcc_io>;
++		vin-supply = <&vcc_io>;
+ 	};
+ 
+ 	vdd_core: vdd-core {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
+index 635afdd99122f..2c644ac1f84b9 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi
+@@ -699,7 +699,6 @@
+ &sdhci {
+ 	bus-width = <8>;
+ 	mmc-hs400-1_8v;
+-	mmc-hs400-enhanced-strobe;
+ 	non-removable;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
+index 1fa80ac15464b..88984b5e67b6e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts
+@@ -49,7 +49,7 @@
+ 		regulator-boot-on;
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+-		vim-supply = <&vcc3v3_sys>;
++		vin-supply = <&vcc3v3_sys>;
+ 	};
+ 
+ 	vcc3v3_sys: vcc3v3-sys {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+index 678a336010bf8..f121203081b97 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+@@ -459,7 +459,7 @@
+ 	status = "okay";
+ 
+ 	bt656-supply = <&vcc_3v0>;
+-	audio-supply = <&vcc_3v0>;
++	audio-supply = <&vcc1v8_codec>;
+ 	sdmmc-supply = <&vcc_sdio>;
+ 	gpio1830-supply = <&vcc_3v0>;
+ };
+diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
+index 83f4a6389a282..d7081e9af65c7 100644
+--- a/arch/powerpc/platforms/85xx/smp.c
++++ b/arch/powerpc/platforms/85xx/smp.c
+@@ -220,7 +220,7 @@ static int smp_85xx_start_cpu(int cpu)
+ 	local_irq_save(flags);
+ 	hard_irq_disable();
+ 
+-	if (qoriq_pm_ops)
++	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
+ 		qoriq_pm_ops->cpu_up_prepare(cpu);
+ 
+ 	/* if cpu is not spinning, reset it */
+@@ -292,7 +292,7 @@ static int smp_85xx_kick_cpu(int nr)
+ 		booting_thread_hwid = cpu_thread_in_core(nr);
+ 		primary = cpu_first_thread_sibling(nr);
+ 
+-		if (qoriq_pm_ops)
++		if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
+ 			qoriq_pm_ops->cpu_up_prepare(nr);
+ 
+ 		/*
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index e7435f3a3d2d2..76cd09879eaf4 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -277,6 +277,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ {
+ 	Elf_Rela *relas;
+ 	int i, r_type;
++	int ret;
+ 
+ 	relas = (void *)pi->ehdr + relsec->sh_offset;
+ 
+@@ -311,7 +312,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ 		addr = section->sh_addr + relas[i].r_offset;
+ 
+ 		r_type = ELF64_R_TYPE(relas[i].r_info);
+-		arch_kexec_do_relocs(r_type, loc, val, addr);
++		ret = arch_kexec_do_relocs(r_type, loc, val, addr);
++		if (ret) {
++			pr_err("Unknown rela relocation: %d\n", r_type);
++			return -ENOEXEC;
++		}
+ 	}
+ 	return 0;
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b885063dc393f..4f828cac0273e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3065,7 +3065,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 
+ 		if (!msr_info->host_initiated)
+ 			return 1;
+-		if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent))
++		if (kvm_get_msr_feature(&msr_ent))
+ 			return 1;
+ 		if (data & ~msr_ent.data)
+ 			return 1;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index e95b93f72bd5c..9af32b44b7173 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2246,7 +2246,14 @@ static void ioc_timer_fn(struct timer_list *timer)
+ 			hwm = current_hweight_max(iocg);
+ 			new_hwi = hweight_after_donation(iocg, old_hwi, hwm,
+ 							 usage, &now);
+-			if (new_hwi < hwm) {
++			/*
++			 * Donation calculation assumes hweight_after_donation
++			 * to be positive, a condition that a donor w/ hwa < 2
++			 * can't meet. Don't bother with donation if hwa is
++			 * below 2. It's not gonna make a meaningful difference
++			 * anyway.
++			 */
++			if (new_hwi < hwm && hwa >= 2) {
+ 				iocg->hweight_donating = hwa;
+ 				iocg->hweight_after_donation = new_hwi;
+ 				list_add(&iocg->surplus_list, &surpluses);
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 48b8934970f36..a0e788b648214 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2870,8 +2870,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
+ 		goto invalid_fld;
+ 	}
+ 
+-	if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)
+-		tf->protocol = ATA_PROT_NCQ_NODATA;
++	if ((cdb[2 + cdb_offset] & 0x3) == 0) {
++		/*
++		 * When T_LENGTH is zero (No data is transferred), dir should
++		 * be DMA_NONE.
++		 */
++		if (scmd->sc_data_direction != DMA_NONE) {
++			fp = 2 + cdb_offset;
++			goto invalid_fld;
++		}
++
++		if (ata_is_ncq(tf->protocol))
++			tf->protocol = ATA_PROT_NCQ_NODATA;
++	}
+ 
+ 	/* enable LBA */
+ 	tf->flags |= ATA_TFLAG_LBA;
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index ff7b62597b525..22842d2938c28 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1573,9 +1573,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 	unsigned long flags;
+ 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
+ 	struct blkfront_info *info = rinfo->dev_info;
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
+ 
+-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
++	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
++		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
+ 		return IRQ_HANDLED;
++	}
+ 
+ 	spin_lock_irqsave(&rinfo->ring_lock, flags);
+  again:
+@@ -1591,6 +1594,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 		unsigned long id;
+ 		unsigned int op;
+ 
++		eoiflag = 0;
++
+ 		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
+ 		id = bret.id;
+ 
+@@ -1707,6 +1712,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 
+ 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ 
++	xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ 
+  err:
+@@ -1714,6 +1721,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
+ 
+ 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ 
++	/* No EOI in order to avoid further interrupts. */
++
+ 	pr_alert("%s disabled for further use\n", info->gd->disk_name);
+ 	return IRQ_HANDLED;
+ }
+@@ -1753,8 +1762,8 @@ static int setup_blkring(struct xenbus_device *dev,
+ 	if (err)
+ 		goto fail;
+ 
+-	err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
+-					"blkif", rinfo);
++	err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
++						0, "blkif", rinfo);
+ 	if (err <= 0) {
+ 		xenbus_dev_fatal(dev, err,
+ 				 "bind_evtchn_to_irqhandler failed");
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 43603dc9da430..18f0650c5d405 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -2443,12 +2443,11 @@ static void sysc_reinit_modules(struct sysc_soc_info *soc)
+ 	struct sysc_module *module;
+ 	struct list_head *pos;
+ 	struct sysc *ddata;
+-	int error = 0;
+ 
+ 	list_for_each(pos, &sysc_soc->restored_modules) {
+ 		module = list_entry(pos, struct sysc_module, node);
+ 		ddata = module->ddata;
+-		error = sysc_reinit_module(ddata, ddata->enabled);
++		sysc_reinit_module(ddata, ddata->enabled);
+ 	}
+ }
+ 
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 61c78714c0957..515ef39c4610c 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3389,6 +3389,14 @@ static int __clk_core_init(struct clk_core *core)
+ 
+ 	clk_prepare_lock();
+ 
++	/*
++	 * Set hw->core after grabbing the prepare_lock to synchronize with
++	 * callers of clk_core_fill_parent_index() where we treat hw->core
++	 * being NULL as the clk not being registered yet. This is crucial so
++	 * that clks aren't parented until their parent is fully registered.
++	 */
++	core->hw->core = core;
++
+ 	ret = clk_pm_runtime_get(core);
+ 	if (ret)
+ 		goto unlock;
+@@ -3557,8 +3565,10 @@ static int __clk_core_init(struct clk_core *core)
+ out:
+ 	clk_pm_runtime_put(core);
+ unlock:
+-	if (ret)
++	if (ret) {
+ 		hlist_del_init(&core->child_node);
++		core->hw->core = NULL;
++	}
+ 
+ 	clk_prepare_unlock();
+ 
+@@ -3804,7 +3814,6 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 	core->num_parents = init->num_parents;
+ 	core->min_rate = 0;
+ 	core->max_rate = ULONG_MAX;
+-	hw->core = core;
+ 
+ 	ret = clk_core_populate_parent_map(core, init);
+ 	if (ret)
+@@ -3822,7 +3831,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ 		goto fail_create_clk;
+ 	}
+ 
+-	clk_core_link_consumer(hw->core, hw->clk);
++	clk_core_link_consumer(core, hw->clk);
+ 
+ 	ret = __clk_core_init(core);
+ 	if (!ret)
+diff --git a/drivers/dma/st_fdma.c b/drivers/dma/st_fdma.c
+index 962b6e05287b5..d95c421877fb7 100644
+--- a/drivers/dma/st_fdma.c
++++ b/drivers/dma/st_fdma.c
+@@ -874,4 +874,4 @@ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver");
+ MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@st.com>");
+ MODULE_AUTHOR("Peter Griffin <peter.griffin@linaro.org>");
+-MODULE_ALIAS("platform: " DRIVER_NAME);
++MODULE_ALIAS("platform:" DRIVER_NAME);
+diff --git a/drivers/firmware/scpi_pm_domain.c b/drivers/firmware/scpi_pm_domain.c
+index 51201600d789b..800673910b511 100644
+--- a/drivers/firmware/scpi_pm_domain.c
++++ b/drivers/firmware/scpi_pm_domain.c
+@@ -16,7 +16,6 @@ struct scpi_pm_domain {
+ 	struct generic_pm_domain genpd;
+ 	struct scpi_ops *ops;
+ 	u32 domain;
+-	char name[30];
+ };
+ 
+ /*
+@@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev)
+ 
+ 		scpi_pd->domain = i;
+ 		scpi_pd->ops = scpi_ops;
+-		sprintf(scpi_pd->name, "%pOFn.%d", np, i);
+-		scpi_pd->genpd.name = scpi_pd->name;
++		scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL,
++						     "%pOFn.%d", np, i);
++		if (!scpi_pd->genpd.name) {
++			dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n",
++				np, i);
++			continue;
++		}
+ 		scpi_pd->genpd.power_off = scpi_pd_power_off;
+ 		scpi_pd->genpd.power_on = scpi_pd_power_on;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index bea451a39d601..b19f7bd37781f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3002,8 +3002,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
+ 			      AMD_PG_SUPPORT_CP |
+ 			      AMD_PG_SUPPORT_GDS |
+ 			      AMD_PG_SUPPORT_RLC_SMU_HS)) {
+-		WREG32(mmRLC_JUMP_TABLE_RESTORE,
+-		       adev->gfx.rlc.cp_table_gpu_addr >> 8);
++		WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE,
++			     adev->gfx.rlc.cp_table_gpu_addr >> 8);
+ 		gfx_v9_0_init_gfx_power_gating(adev);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+index 7907c9e0b5dec..b938fd12da4d5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
+@@ -187,6 +187,9 @@ int smu_v12_0_fini_smc_tables(struct smu_context *smu)
+ 	kfree(smu_table->watermarks_table);
+ 	smu_table->watermarks_table = NULL;
+ 
++	kfree(smu_table->gpu_metrics_table);
++	smu_table->gpu_metrics_table = NULL;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index a3c2f76668abe..d27f2840b9555 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -857,7 +857,10 @@ static void ast_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		crtc->funcs->atomic_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++	if (ast_state)
++		__drm_atomic_helper_crtc_reset(crtc, &ast_state->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ static struct drm_crtc_state *
+diff --git a/drivers/input/touchscreen/of_touchscreen.c b/drivers/input/touchscreen/of_touchscreen.c
+index 97342e14b4f18..8719a8b0e8682 100644
+--- a/drivers/input/touchscreen/of_touchscreen.c
++++ b/drivers/input/touchscreen/of_touchscreen.c
+@@ -79,27 +79,27 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
+ 
+ 	data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-x",
+ 						input_abs_get_min(input, axis_x),
+-						&minimum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-size-x",
+-						input_abs_get_max(input,
+-								  axis_x) + 1,
+-						&maximum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
+-						input_abs_get_fuzz(input, axis_x),
+-						&fuzz);
++						&minimum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-x",
++						 input_abs_get_max(input,
++								   axis_x) + 1,
++						 &maximum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
++						 input_abs_get_fuzz(input, axis_x),
++						 &fuzz);
+ 	if (data_present)
+ 		touchscreen_set_params(input, axis_x, minimum, maximum - 1, fuzz);
+ 
+ 	data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-y",
+ 						input_abs_get_min(input, axis_y),
+-						&minimum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-size-y",
+-						input_abs_get_max(input,
+-								  axis_y) + 1,
+-						&maximum) |
+-		       touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
+-						input_abs_get_fuzz(input, axis_y),
+-						&fuzz);
++						&minimum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-y",
++						 input_abs_get_max(input,
++								   axis_y) + 1,
++						 &maximum);
++	data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
++						 input_abs_get_fuzz(input, axis_y),
++						 &fuzz);
+ 	if (data_present)
+ 		touchscreen_set_params(input, axis_y, minimum, maximum - 1, fuzz);
+ 
+@@ -107,11 +107,11 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
+ 	data_present = touchscreen_get_prop_u32(dev,
+ 						"touchscreen-max-pressure",
+ 						input_abs_get_max(input, axis),
+-						&maximum) |
+-		       touchscreen_get_prop_u32(dev,
+-						"touchscreen-fuzz-pressure",
+-						input_abs_get_fuzz(input, axis),
+-						&fuzz);
++						&maximum);
++	data_present |= touchscreen_get_prop_u32(dev,
++						 "touchscreen-fuzz-pressure",
++						 input_abs_get_fuzz(input, axis),
++						 &fuzz);
+ 	if (data_present)
+ 		touchscreen_set_params(input, axis, 0, maximum, fuzz);
+ 
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
+index 9e4d1212f4c16..63f2baed3c8a6 100644
+--- a/drivers/md/persistent-data/dm-btree-remove.c
++++ b/drivers/md/persistent-data/dm-btree-remove.c
+@@ -423,9 +423,9 @@ static int rebalance_children(struct shadow_spine *s,
+ 
+ 		memcpy(n, dm_block_data(child),
+ 		       dm_bm_block_size(dm_tm_get_bm(info->tm)));
+-		dm_tm_unlock(info->tm, child);
+ 
+ 		dm_tm_dec(info->tm, dm_block_location(child));
++		dm_tm_unlock(info->tm, child);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/mxl111sf.c b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+index 7865fa0a82957..cd5861a30b6f8 100644
+--- a/drivers/media/usb/dvb-usb-v2/mxl111sf.c
++++ b/drivers/media/usb/dvb-usb-v2/mxl111sf.c
+@@ -931,8 +931,6 @@ static int mxl111sf_init(struct dvb_usb_device *d)
+ 		  .len = sizeof(eeprom), .buf = eeprom },
+ 	};
+ 
+-	mutex_init(&state->msg_lock);
+-
+ 	ret = get_chip_info(state);
+ 	if (mxl_fail(ret))
+ 		pr_err("failed to get chip info during probe");
+@@ -1074,6 +1072,14 @@ static int mxl111sf_get_stream_config_dvbt(struct dvb_frontend *fe,
+ 	return 0;
+ }
+ 
++static int mxl111sf_probe(struct dvb_usb_device *dev)
++{
++	struct mxl111sf_state *state = d_to_priv(dev);
++
++	mutex_init(&state->msg_lock);
++	return 0;
++}
++
+ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
+ 	.driver_name = KBUILD_MODNAME,
+ 	.owner = THIS_MODULE,
+@@ -1083,6 +1089,7 @@ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_dvbt,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1124,6 +1131,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_atsc,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1165,6 +1173,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mh = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_mh,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1233,6 +1242,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc_mh = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_atsc_mh,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1311,6 +1321,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_mercury,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+@@ -1381,6 +1392,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury_mh = {
+ 	.generic_bulk_ctrl_endpoint = 0x02,
+ 	.generic_bulk_ctrl_endpoint_response = 0x81,
+ 
++	.probe             = mxl111sf_probe,
+ 	.i2c_algo          = &mxl111sf_i2c_algo,
+ 	.frontend_attach   = mxl111sf_frontend_attach_mercury_mh,
+ 	.tuner_attach      = mxl111sf_attach_tuner,
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 0404aafd5ce56..1a703b95208b0 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1304,11 +1304,11 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ 	struct bcm_sysport_priv *priv = netdev_priv(dev);
+ 	struct device *kdev = &priv->pdev->dev;
+ 	struct bcm_sysport_tx_ring *ring;
++	unsigned long flags, desc_flags;
+ 	struct bcm_sysport_cb *cb;
+ 	struct netdev_queue *txq;
+ 	u32 len_status, addr_lo;
+ 	unsigned int skb_len;
+-	unsigned long flags;
+ 	dma_addr_t mapping;
+ 	u16 queue;
+ 	int ret;
+@@ -1368,8 +1368,10 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
+ 	ring->desc_count--;
+ 
+ 	/* Ports are latched, so write upper address first */
++	spin_lock_irqsave(&priv->desc_lock, desc_flags);
+ 	tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
+ 	tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
++	spin_unlock_irqrestore(&priv->desc_lock, desc_flags);
+ 
+ 	/* Check ring space and update SW control flow */
+ 	if (ring->desc_count == 0)
+@@ -2008,6 +2010,7 @@ static int bcm_sysport_open(struct net_device *dev)
+ 	}
+ 
+ 	/* Initialize both hardware and software ring */
++	spin_lock_init(&priv->desc_lock);
+ 	for (i = 0; i < dev->num_tx_queues; i++) {
+ 		ret = bcm_sysport_init_tx_ring(priv, i);
+ 		if (ret) {
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
+index 3a5cb6f128f57..1276e330e9d03 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.h
++++ b/drivers/net/ethernet/broadcom/bcmsysport.h
+@@ -742,6 +742,7 @@ struct bcm_sysport_priv {
+ 	int			wol_irq;
+ 
+ 	/* Transmit rings */
++	spinlock_t		desc_lock;
+ 	struct bcm_sysport_tx_ring *tx_rings;
+ 
+ 	/* Receive queue */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index 5b2dcd97c1078..b8e5ca6700ed5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -109,7 +109,8 @@ int hclgevf_send_mbx_msg(struct hclgevf_dev *hdev,
+ 
+ 	memcpy(&req->msg, send_msg, sizeof(struct hclge_vf_to_pf_msg));
+ 
+-	trace_hclge_vf_mbx_send(hdev, req);
++	if (test_bit(HCLGEVF_STATE_NIC_REGISTERED, &hdev->state))
++		trace_hclge_vf_mbx_send(hdev, req);
+ 
+ 	/* synchronous send */
+ 	if (need_resp) {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index d5432d1448c05..1662c0985eca4 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -7654,6 +7654,20 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
+ 	struct vf_mac_filter *entry = NULL;
+ 	int ret = 0;
+ 
++	if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
++	    !vf_data->trusted) {
++		dev_warn(&pdev->dev,
++			 "VF %d requested MAC filter but is administratively denied\n",
++			  vf);
++		return -EINVAL;
++	}
++	if (!is_valid_ether_addr(addr)) {
++		dev_warn(&pdev->dev,
++			 "VF %d attempted to set invalid MAC filter\n",
++			  vf);
++		return -EINVAL;
++	}
++
+ 	switch (info) {
+ 	case E1000_VF_MAC_FILTER_CLR:
+ 		/* remove all unicast MAC filters related to the current VF */
+@@ -7667,20 +7681,6 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
+ 		}
+ 		break;
+ 	case E1000_VF_MAC_FILTER_ADD:
+-		if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
+-		    !vf_data->trusted) {
+-			dev_warn(&pdev->dev,
+-				 "VF %d requested MAC filter but is administratively denied\n",
+-				 vf);
+-			return -EINVAL;
+-		}
+-		if (!is_valid_ether_addr(addr)) {
+-			dev_warn(&pdev->dev,
+-				 "VF %d attempted to set invalid MAC filter\n",
+-				 vf);
+-			return -EINVAL;
+-		}
+-
+ 		/* try to find empty slot in the list */
+ 		list_for_each(pos, &adapter->vf_macs.l) {
+ 			entry = list_entry(pos, struct vf_mac_filter, l);
+diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
+index 07c9e9e0546f5..fe8c0a26b7201 100644
+--- a/drivers/net/ethernet/intel/igbvf/netdev.c
++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
+@@ -2873,6 +2873,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	return 0;
+ 
+ err_hw_init:
++	netif_napi_del(&adapter->rx_ring->napi);
+ 	kfree(adapter->tx_ring);
+ 	kfree(adapter->rx_ring);
+ err_sw_init:
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 7ec04e48860c6..553d6bc78e6bd 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -636,7 +636,7 @@ s32 igc_set_ltr_i225(struct igc_hw *hw, bool link)
+ 		ltrv = rd32(IGC_LTRMAXV);
+ 		if (ltr_max != (ltrv & IGC_LTRMAXV_LTRV_MASK)) {
+ 			ltrv = IGC_LTRMAXV_LSNP_REQ | ltr_max |
+-			       (scale_min << IGC_LTRMAXV_SCALE_SHIFT);
++			       (scale_max << IGC_LTRMAXV_SCALE_SHIFT);
+ 			wr32(IGC_LTRMAXV, ltrv);
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index ffe322136c584..a3a02e2f92f64 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -5532,6 +5532,10 @@ static int ixgbe_non_sfp_link_config(struct ixgbe_hw *hw)
+ 	if (!speed && hw->mac.ops.get_link_capabilities) {
+ 		ret = hw->mac.ops.get_link_capabilities(hw, &speed,
+ 							&autoneg);
++		/* remove NBASE-T speeds from default autonegotiation
++		 * to accommodate broken network switches in the field
++		 * which cannot cope with advertised NBASE-T speeds
++		 */
+ 		speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
+ 			   IXGBE_LINK_SPEED_2_5GB_FULL);
+ 	}
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+index 5e339afa682a6..37f2bc6de4b65 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+@@ -3405,6 +3405,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
+ 	/* flush pending Tx transactions */
+ 	ixgbe_clear_tx_pending(hw);
+ 
++	/* set MDIO speed before talking to the PHY in case it's the 1st time */
++	ixgbe_set_mdio_speed(hw);
++
+ 	/* PHY ops must be identified and initialized prior to reset */
+ 	status = hw->phy.ops.init(hw);
+ 	if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
+diff --git a/drivers/net/ethernet/sfc/ef100_nic.c b/drivers/net/ethernet/sfc/ef100_nic.c
+index 3148fe7703564..cb6897c2193c2 100644
+--- a/drivers/net/ethernet/sfc/ef100_nic.c
++++ b/drivers/net/ethernet/sfc/ef100_nic.c
+@@ -597,6 +597,9 @@ static size_t ef100_update_stats(struct efx_nic *efx,
+ 	ef100_common_stat_mask(mask);
+ 	ef100_ethtool_stat_mask(mask);
+ 
++	if (!mc_stats)
++		return 0;
++
+ 	efx_nic_copy_stats(efx, mc_stats);
+ 	efx_nic_update_stats(ef100_stat_desc, EF100_STAT_COUNT, mask,
+ 			     stats, mc_stats, false);
+diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c
+index 90aafb56f1409..a438202129323 100644
+--- a/drivers/net/netdevsim/bpf.c
++++ b/drivers/net/netdevsim/bpf.c
+@@ -514,6 +514,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
+ 				goto err_free;
+ 			key = nmap->entry[i].key;
+ 			*key = i;
++			memset(nmap->entry[i].value, 0, offmap->map.value_size);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
+index 8ee24e351bdc2..6a9178896c909 100644
+--- a/drivers/net/xen-netback/common.h
++++ b/drivers/net/xen-netback/common.h
+@@ -203,6 +203,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
+ 	unsigned int rx_queue_max;
+ 	unsigned int rx_queue_len;
+ 	unsigned long last_rx_time;
++	unsigned int rx_slots_needed;
+ 	bool stalled;
+ 
+ 	struct xenvif_copy_state rx_copy;
+diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
+index accc991d153f7..dbac4c03d21a1 100644
+--- a/drivers/net/xen-netback/rx.c
++++ b/drivers/net/xen-netback/rx.c
+@@ -33,28 +33,36 @@
+ #include <xen/xen.h>
+ #include <xen/events.h>
+ 
+-static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
++/*
++ * Update the needed ring page slots for the first SKB queued.
++ * Note that any call sequence outside the RX thread calling this function
++ * needs to wake up the RX thread via a call of xenvif_kick_thread()
++ * afterwards in order to avoid a race with putting the thread to sleep.
++ */
++static void xenvif_update_needed_slots(struct xenvif_queue *queue,
++				       const struct sk_buff *skb)
+ {
+-	RING_IDX prod, cons;
+-	struct sk_buff *skb;
+-	int needed;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&queue->rx_queue.lock, flags);
++	unsigned int needed = 0;
+ 
+-	skb = skb_peek(&queue->rx_queue);
+-	if (!skb) {
+-		spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+-		return false;
++	if (skb) {
++		needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
++		if (skb_is_gso(skb))
++			needed++;
++		if (skb->sw_hash)
++			needed++;
+ 	}
+ 
+-	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
+-	if (skb_is_gso(skb))
+-		needed++;
+-	if (skb->sw_hash)
+-		needed++;
++	WRITE_ONCE(queue->rx_slots_needed, needed);
++}
+ 
+-	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
++static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
++{
++	RING_IDX prod, cons;
++	unsigned int needed;
++
++	needed = READ_ONCE(queue->rx_slots_needed);
++	if (!needed)
++		return false;
+ 
+ 	do {
+ 		prod = queue->rx.sring->req_prod;
+@@ -80,13 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
+ 
+ 	spin_lock_irqsave(&queue->rx_queue.lock, flags);
+ 
+-	__skb_queue_tail(&queue->rx_queue, skb);
+-
+-	queue->rx_queue_len += skb->len;
+-	if (queue->rx_queue_len > queue->rx_queue_max) {
++	if (queue->rx_queue_len >= queue->rx_queue_max) {
+ 		struct net_device *dev = queue->vif->dev;
+ 
+ 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
++		kfree_skb(skb);
++		queue->vif->dev->stats.rx_dropped++;
++	} else {
++		if (skb_queue_empty(&queue->rx_queue))
++			xenvif_update_needed_slots(queue, skb);
++
++		__skb_queue_tail(&queue->rx_queue, skb);
++
++		queue->rx_queue_len += skb->len;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+@@ -100,6 +114,8 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
+ 
+ 	skb = __skb_dequeue(&queue->rx_queue);
+ 	if (skb) {
++		xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));
++
+ 		queue->rx_queue_len -= skb->len;
+ 		if (queue->rx_queue_len < queue->rx_queue_max) {
+ 			struct netdev_queue *txq;
+@@ -134,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
+ 			break;
+ 		xenvif_rx_dequeue(queue);
+ 		kfree_skb(skb);
++		queue->vif->dev->stats.rx_dropped++;
+ 	}
+ }
+ 
+@@ -487,27 +504,31 @@ void xenvif_rx_action(struct xenvif_queue *queue)
+ 	xenvif_rx_copy_flush(queue);
+ }
+ 
+-static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
++static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)
+ {
+ 	RING_IDX prod, cons;
+ 
+ 	prod = queue->rx.sring->req_prod;
+ 	cons = queue->rx.req_cons;
+ 
++	return prod - cons;
++}
++
++static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)
++{
++	unsigned int needed = READ_ONCE(queue->rx_slots_needed);
++
+ 	return !queue->stalled &&
+-		prod - cons < 1 &&
++		xenvif_rx_queue_slots(queue) < needed &&
+ 		time_after(jiffies,
+ 			   queue->last_rx_time + queue->vif->stall_timeout);
+ }
+ 
+ static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
+ {
+-	RING_IDX prod, cons;
+-
+-	prod = queue->rx.sring->req_prod;
+-	cons = queue->rx.req_cons;
++	unsigned int needed = READ_ONCE(queue->rx_slots_needed);
+ 
+-	return queue->stalled && prod - cons >= 1;
++	return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;
+ }
+ 
+ bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 8505024b89e9e..fce3a90a335cb 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -148,6 +148,9 @@ struct netfront_queue {
+ 	grant_ref_t gref_rx_head;
+ 	grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
+ 
++	unsigned int rx_rsp_unconsumed;
++	spinlock_t rx_cons_lock;
++
+ 	struct page_pool *page_pool;
+ 	struct xdp_rxq_info xdp_rxq;
+ };
+@@ -376,12 +379,13 @@ static int xennet_open(struct net_device *dev)
+ 	return 0;
+ }
+ 
+-static void xennet_tx_buf_gc(struct netfront_queue *queue)
++static bool xennet_tx_buf_gc(struct netfront_queue *queue)
+ {
+ 	RING_IDX cons, prod;
+ 	unsigned short id;
+ 	struct sk_buff *skb;
+ 	bool more_to_do;
++	bool work_done = false;
+ 	const struct device *dev = &queue->info->netdev->dev;
+ 
+ 	BUG_ON(!netif_carrier_ok(queue->info->netdev));
+@@ -398,6 +402,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
+ 			struct xen_netif_tx_response txrsp;
+ 
++			work_done = true;
++
+ 			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
+ 			if (txrsp.status == XEN_NETIF_RSP_NULL)
+ 				continue;
+@@ -441,11 +447,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
+ 
+ 	xennet_maybe_wake_tx(queue);
+ 
+-	return;
++	return work_done;
+ 
+  err:
+ 	queue->info->broken = true;
+ 	dev_alert(dev, "Disabled for further use\n");
++
++	return work_done;
+ }
+ 
+ struct xennet_gnttab_make_txreq {
+@@ -836,6 +844,16 @@ static int xennet_close(struct net_device *dev)
+ 	return 0;
+ }
+ 
++static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&queue->rx_cons_lock, flags);
++	queue->rx.rsp_cons = val;
++	queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
++	spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
++}
++
+ static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
+ 				grant_ref_t ref)
+ {
+@@ -887,7 +905,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
+ 		xennet_move_rx_slot(queue, skb, ref);
+ 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
+ 
+-	queue->rx.rsp_cons = cons;
++	xennet_set_rx_rsp_cons(queue, cons);
+ 	return err;
+ }
+ 
+@@ -1041,7 +1059,7 @@ next:
+ 	}
+ 
+ 	if (unlikely(err))
+-		queue->rx.rsp_cons = cons + slots;
++		xennet_set_rx_rsp_cons(queue, cons + slots);
+ 
+ 	return err;
+ }
+@@ -1095,7 +1113,8 @@ static int xennet_fill_frags(struct netfront_queue *queue,
+ 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ 		}
+ 		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
+-			queue->rx.rsp_cons = ++cons + skb_queue_len(list);
++			xennet_set_rx_rsp_cons(queue,
++					       ++cons + skb_queue_len(list));
+ 			kfree_skb(nskb);
+ 			return -ENOENT;
+ 		}
+@@ -1108,7 +1127,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
+ 		kfree_skb(nskb);
+ 	}
+ 
+-	queue->rx.rsp_cons = cons;
++	xennet_set_rx_rsp_cons(queue, cons);
+ 
+ 	return 0;
+ }
+@@ -1231,7 +1250,9 @@ err:
+ 
+ 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
+ 				__skb_queue_head(&tmpq, skb);
+-				queue->rx.rsp_cons += skb_queue_len(&tmpq);
++				xennet_set_rx_rsp_cons(queue,
++						       queue->rx.rsp_cons +
++						       skb_queue_len(&tmpq));
+ 				goto err;
+ 			}
+ 		}
+@@ -1255,7 +1276,8 @@ err:
+ 
+ 		__skb_queue_tail(&rxq, skb);
+ 
+-		i = ++queue->rx.rsp_cons;
++		i = queue->rx.rsp_cons + 1;
++		xennet_set_rx_rsp_cons(queue, i);
+ 		work_done++;
+ 	}
+ 	if (need_xdp_flush)
+@@ -1419,40 +1441,79 @@ static int xennet_set_features(struct net_device *dev,
+ 	return 0;
+ }
+ 
+-static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
++static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi)
+ {
+-	struct netfront_queue *queue = dev_id;
+ 	unsigned long flags;
+ 
+-	if (queue->info->broken)
+-		return IRQ_HANDLED;
++	if (unlikely(queue->info->broken))
++		return false;
+ 
+ 	spin_lock_irqsave(&queue->tx_lock, flags);
+-	xennet_tx_buf_gc(queue);
++	if (xennet_tx_buf_gc(queue))
++		*eoi = 0;
+ 	spin_unlock_irqrestore(&queue->tx_lock, flags);
+ 
++	return true;
++}
++
++static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
++{
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
++
++	if (likely(xennet_handle_tx(dev_id, &eoiflag)))
++		xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+-static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
++static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
+ {
+-	struct netfront_queue *queue = dev_id;
+-	struct net_device *dev = queue->info->netdev;
++	unsigned int work_queued;
++	unsigned long flags;
+ 
+-	if (queue->info->broken)
+-		return IRQ_HANDLED;
++	if (unlikely(queue->info->broken))
++		return false;
++
++	spin_lock_irqsave(&queue->rx_cons_lock, flags);
++	work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
++	if (work_queued > queue->rx_rsp_unconsumed) {
++		queue->rx_rsp_unconsumed = work_queued;
++		*eoi = 0;
++	} else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) {
++		const struct device *dev = &queue->info->netdev->dev;
++
++		spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
++		dev_alert(dev, "RX producer index going backwards\n");
++		dev_alert(dev, "Disabled for further use\n");
++		queue->info->broken = true;
++		return false;
++	}
++	spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
+ 
+-	if (likely(netif_carrier_ok(dev) &&
+-		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
++	if (likely(netif_carrier_ok(queue->info->netdev) && work_queued))
+ 		napi_schedule(&queue->napi);
+ 
++	return true;
++}
++
++static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
++{
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
++
++	if (likely(xennet_handle_rx(dev_id, &eoiflag)))
++		xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
+ {
+-	xennet_tx_interrupt(irq, dev_id);
+-	xennet_rx_interrupt(irq, dev_id);
++	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
++
++	if (xennet_handle_tx(dev_id, &eoiflag) &&
++	    xennet_handle_rx(dev_id, &eoiflag))
++		xen_irq_lateeoi(irq, eoiflag);
++
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -1770,9 +1831,10 @@ static int setup_netfront_single(struct netfront_queue *queue)
+ 	if (err < 0)
+ 		goto fail;
+ 
+-	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
+-					xennet_interrupt,
+-					0, queue->info->netdev->name, queue);
++	err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
++						xennet_interrupt, 0,
++						queue->info->netdev->name,
++						queue);
+ 	if (err < 0)
+ 		goto bind_fail;
+ 	queue->rx_evtchn = queue->tx_evtchn;
+@@ -1800,18 +1862,18 @@ static int setup_netfront_split(struct netfront_queue *queue)
+ 
+ 	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+ 		 "%s-tx", queue->name);
+-	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
+-					xennet_tx_interrupt,
+-					0, queue->tx_irq_name, queue);
++	err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
++						xennet_tx_interrupt, 0,
++						queue->tx_irq_name, queue);
+ 	if (err < 0)
+ 		goto bind_tx_fail;
+ 	queue->tx_irq = err;
+ 
+ 	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+ 		 "%s-rx", queue->name);
+-	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
+-					xennet_rx_interrupt,
+-					0, queue->rx_irq_name, queue);
++	err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn,
++						xennet_rx_interrupt, 0,
++						queue->rx_irq_name, queue);
+ 	if (err < 0)
+ 		goto bind_rx_fail;
+ 	queue->rx_irq = err;
+@@ -1913,6 +1975,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
+ 
+ 	spin_lock_init(&queue->tx_lock);
+ 	spin_lock_init(&queue->rx_lock);
++	spin_lock_init(&queue->rx_cons_lock);
+ 
+ 	timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
+ 
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index db7475dc601f5..57314fec2261b 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -828,9 +828,6 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 		goto out_disable;
+ 	}
+ 
+-	/* Ensure that all table entries are masked. */
+-	msix_mask_all(base, tsize);
+-
+ 	ret = msix_setup_entries(dev, base, entries, nvec, affd);
+ 	if (ret)
+ 		goto out_disable;
+@@ -853,6 +850,16 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 	/* Set MSI-X enabled bits and unmask the function */
+ 	pci_intx_for_msi(dev, 0);
+ 	dev->msix_enabled = 1;
++
++	/*
++	 * Ensure that all table entries are masked to prevent
++	 * stale entries from firing in a crash kernel.
++	 *
++	 * Done late to deal with a broken Marvell NVME device
++	 * which takes the MSI-X mask bits into account even
++	 * when MSI-X is disabled, which prevents MSI delivery.
++	 */
++	msix_mask_all(base, tsize);
+ 	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
+ 
+ 	pcibios_free_irq(dev);
+@@ -879,7 +886,7 @@ out_free:
+ 	free_msi_irqs(dev);
+ 
+ out_disable:
+-	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
++	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 9188191433439..6b00de6b6f0ef 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -1188,7 +1188,7 @@ static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr,
+ 		 __func__, off_dst, scsi_bufflen(scp), act_len,
+ 		 scsi_get_resid(scp));
+ 	n = scsi_bufflen(scp) - (off_dst + act_len);
+-	scsi_set_resid(scp, min_t(int, scsi_get_resid(scp), n));
++	scsi_set_resid(scp, min_t(u32, scsi_get_resid(scp), n));
+ 	return 0;
+ }
+ 
+@@ -1561,7 +1561,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 	unsigned char pq_pdt;
+ 	unsigned char *arr;
+ 	unsigned char *cmd = scp->cmnd;
+-	int alloc_len, n, ret;
++	u32 alloc_len, n;
++	int ret;
+ 	bool have_wlun, is_disk, is_zbc, is_disk_zbc;
+ 
+ 	alloc_len = get_unaligned_be16(cmd + 3);
+@@ -1584,7 +1585,8 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 		kfree(arr);
+ 		return check_condition_result;
+ 	} else if (0x1 & cmd[1]) {  /* EVPD bit set */
+-		int lu_id_num, port_group_id, target_dev_id, len;
++		int lu_id_num, port_group_id, target_dev_id;
++		u32 len;
+ 		char lu_id_str[6];
+ 		int host_no = devip->sdbg_host->shost->host_no;
+ 		
+@@ -1675,9 +1677,9 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 			kfree(arr);
+ 			return check_condition_result;
+ 		}
+-		len = min(get_unaligned_be16(arr + 2) + 4, alloc_len);
++		len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
+ 		ret = fill_from_dev_buffer(scp, arr,
+-			    min(len, SDEBUG_MAX_INQ_ARR_SZ));
++			    min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
+ 		kfree(arr);
+ 		return ret;
+ 	}
+@@ -1713,7 +1715,7 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 	}
+ 	put_unaligned_be16(0x2100, arr + n);	/* SPL-4 no version claimed */
+ 	ret = fill_from_dev_buffer(scp, arr,
+-			    min_t(int, alloc_len, SDEBUG_LONG_INQ_SZ));
++			    min_t(u32, alloc_len, SDEBUG_LONG_INQ_SZ));
+ 	kfree(arr);
+ 	return ret;
+ }
+@@ -1728,8 +1730,8 @@ static int resp_requests(struct scsi_cmnd *scp,
+ 	unsigned char *cmd = scp->cmnd;
+ 	unsigned char arr[SCSI_SENSE_BUFFERSIZE];	/* assume >= 18 bytes */
+ 	bool dsense = !!(cmd[1] & 1);
+-	int alloc_len = cmd[4];
+-	int len = 18;
++	u32 alloc_len = cmd[4];
++	u32 len = 18;
+ 	int stopped_state = atomic_read(&devip->stopped);
+ 
+ 	memset(arr, 0, sizeof(arr));
+@@ -1773,7 +1775,7 @@ static int resp_requests(struct scsi_cmnd *scp,
+ 			arr[7] = 0xa;
+ 		}
+ 	}
+-	return fill_from_dev_buffer(scp, arr, min_t(int, len, alloc_len));
++	return fill_from_dev_buffer(scp, arr, min_t(u32, len, alloc_len));
+ }
+ 
+ static int resp_start_stop(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+@@ -2311,7 +2313,8 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ {
+ 	int pcontrol, pcode, subpcode, bd_len;
+ 	unsigned char dev_spec;
+-	int alloc_len, offset, len, target_dev_id;
++	u32 alloc_len, offset, len;
++	int target_dev_id;
+ 	int target = scp->device->id;
+ 	unsigned char *ap;
+ 	unsigned char arr[SDEBUG_MAX_MSENSE_SZ];
+@@ -2467,7 +2470,7 @@ static int resp_mode_sense(struct scsi_cmnd *scp,
+ 		arr[0] = offset - 1;
+ 	else
+ 		put_unaligned_be16((offset - 2), arr + 0);
+-	return fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, offset));
++	return fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, offset));
+ }
+ 
+ #define SDEBUG_MAX_MSELECT_SZ 512
+@@ -2498,11 +2501,11 @@ static int resp_mode_select(struct scsi_cmnd *scp,
+ 			    __func__, param_len, res);
+ 	md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
+ 	bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
+-	if (md_len > 2) {
++	off = bd_len + (mselect6 ? 4 : 8);
++	if (md_len > 2 || off >= res) {
+ 		mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1);
+ 		return check_condition_result;
+ 	}
+-	off = bd_len + (mselect6 ? 4 : 8);
+ 	mpage = arr[off] & 0x3f;
+ 	ps = !!(arr[off] & 0x80);
+ 	if (ps) {
+@@ -2582,7 +2585,8 @@ static int resp_ie_l_pg(unsigned char *arr)
+ static int resp_log_sense(struct scsi_cmnd *scp,
+ 			  struct sdebug_dev_info *devip)
+ {
+-	int ppc, sp, pcode, subpcode, alloc_len, len, n;
++	int ppc, sp, pcode, subpcode;
++	u32 alloc_len, len, n;
+ 	unsigned char arr[SDEBUG_MAX_LSENSE_SZ];
+ 	unsigned char *cmd = scp->cmnd;
+ 
+@@ -2652,9 +2656,9 @@ static int resp_log_sense(struct scsi_cmnd *scp,
+ 		mk_sense_invalid_fld(scp, SDEB_IN_CDB, 3, -1);
+ 		return check_condition_result;
+ 	}
+-	len = min_t(int, get_unaligned_be16(arr + 2) + 4, alloc_len);
++	len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len);
+ 	return fill_from_dev_buffer(scp, arr,
+-		    min_t(int, len, SDEBUG_MAX_INQ_ARR_SZ));
++		    min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ));
+ }
+ 
+ static inline bool sdebug_dev_is_zoned(struct sdebug_dev_info *devip)
+@@ -4238,6 +4242,8 @@ static int resp_verify(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
+ 		mk_sense_invalid_opcode(scp);
+ 		return check_condition_result;
+ 	}
++	if (vnum == 0)
++		return 0;	/* not an error */
+ 	a_num = is_bytchk3 ? 1 : vnum;
+ 	/* Treat following check like one for read (i.e. no write) access */
+ 	ret = check_device_access_params(scp, lba, a_num, false);
+@@ -4301,6 +4307,8 @@ static int resp_report_zones(struct scsi_cmnd *scp,
+ 	}
+ 	zs_lba = get_unaligned_be64(cmd + 2);
+ 	alloc_len = get_unaligned_be32(cmd + 10);
++	if (alloc_len == 0)
++		return 0;	/* not an error */
+ 	rep_opts = cmd[14] & 0x3f;
+ 	partial = cmd[14] & 0x80;
+ 
+@@ -4405,7 +4413,7 @@ static int resp_report_zones(struct scsi_cmnd *scp,
+ 	put_unaligned_be64(sdebug_capacity - 1, arr + 8);
+ 
+ 	rep_len = (unsigned long)desc - (unsigned long)arr;
+-	ret = fill_from_dev_buffer(scp, arr, min_t(int, alloc_len, rep_len));
++	ret = fill_from_dev_buffer(scp, arr, min_t(u32, alloc_len, rep_len));
+ 
+ fini:
+ 	read_unlock(macc_lckp);
+diff --git a/drivers/soc/imx/soc-imx.c b/drivers/soc/imx/soc-imx.c
+index 01bfea1cb64a8..1e8780299d5c4 100644
+--- a/drivers/soc/imx/soc-imx.c
++++ b/drivers/soc/imx/soc-imx.c
+@@ -33,6 +33,10 @@ static int __init imx_soc_device_init(void)
+ 	u32 val;
+ 	int ret;
+ 
++	/* Return early if this is running on devices with different SoCs */
++	if (!__mxc_cpu_type)
++		return 0;
++
+ 	if (of_machine_is_compatible("fsl,ls1021a"))
+ 		return 0;
+ 
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
+index 94b60a692b515..4388a4a5e0919 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra.c
+@@ -260,7 +260,7 @@ static struct platform_driver tegra_fuse_driver = {
+ };
+ builtin_platform_driver(tegra_fuse_driver);
+ 
+-bool __init tegra_fuse_read_spare(unsigned int spare)
++u32 __init tegra_fuse_read_spare(unsigned int spare)
+ {
+ 	unsigned int offset = fuse->soc->info->spare + spare * 4;
+ 
+diff --git a/drivers/soc/tegra/fuse/fuse.h b/drivers/soc/tegra/fuse/fuse.h
+index e057a58e20603..21887a57cf2c2 100644
+--- a/drivers/soc/tegra/fuse/fuse.h
++++ b/drivers/soc/tegra/fuse/fuse.h
+@@ -63,7 +63,7 @@ struct tegra_fuse {
+ void tegra_init_revision(void);
+ void tegra_init_apbmisc(void);
+ 
+-bool __init tegra_fuse_read_spare(unsigned int spare);
++u32 __init tegra_fuse_read_spare(unsigned int spare);
+ u32 __init tegra_fuse_read_early(unsigned int offset);
+ 
+ u8 tegra_get_major_rev(void);
+diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
+index da6b88e80dc07..297dc62bca298 100644
+--- a/drivers/tee/amdtee/core.c
++++ b/drivers/tee/amdtee/core.c
+@@ -203,9 +203,8 @@ static int copy_ta_binary(struct tee_context *ctx, void *ptr, void **ta,
+ 
+ 	*ta_size = roundup(fw->size, PAGE_SIZE);
+ 	*ta = (void *)__get_free_pages(GFP_KERNEL, get_order(*ta_size));
+-	if (IS_ERR(*ta)) {
+-		pr_err("%s: get_free_pages failed 0x%llx\n", __func__,
+-		       (u64)*ta);
++	if (!*ta) {
++		pr_err("%s: get_free_pages failed\n", __func__);
+ 		rc = -ENOMEM;
+ 		goto rel_fw;
+ 	}
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 8f143c09a1696..7948660e042fd 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -37,6 +37,8 @@ struct xencons_info {
+ 	struct xenbus_device *xbdev;
+ 	struct xencons_interface *intf;
+ 	unsigned int evtchn;
++	XENCONS_RING_IDX out_cons;
++	unsigned int out_cons_same;
+ 	struct hvc_struct *hvc;
+ 	int irq;
+ 	int vtermno;
+@@ -138,6 +140,8 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 	XENCONS_RING_IDX cons, prod;
+ 	int recv = 0;
+ 	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
++	unsigned int eoiflag = 0;
++
+ 	if (xencons == NULL)
+ 		return -EINVAL;
+ 	intf = xencons->intf;
+@@ -157,7 +161,27 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
+ 	mb();			/* read ring before consuming */
+ 	intf->in_cons = cons;
+ 
+-	notify_daemon(xencons);
++	/*
++	 * When to mark interrupt having been spurious:
++	 * - there was no new data to be read, and
++	 * - the backend did not consume some output bytes, and
++	 * - the previous round with no read data didn't see consumed bytes
++	 *   (we might have a race with an interrupt being in flight while
++	 *   updating xencons->out_cons, so account for that by allowing one
++	 *   round without any visible reason)
++	 */
++	if (intf->out_cons != xencons->out_cons) {
++		xencons->out_cons = intf->out_cons;
++		xencons->out_cons_same = 0;
++	}
++	if (recv) {
++		notify_daemon(xencons);
++	} else if (xencons->out_cons_same++ > 1) {
++		eoiflag = XEN_EOI_FLAG_SPURIOUS;
++	}
++
++	xen_irq_lateeoi(xencons->irq, eoiflag);
++
+ 	return recv;
+ }
+ 
+@@ -386,7 +410,7 @@ static int xencons_connect_backend(struct xenbus_device *dev,
+ 	if (ret)
+ 		return ret;
+ 	info->evtchn = evtchn;
+-	irq = bind_evtchn_to_irq(evtchn);
++	irq = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
+ 	if (irq < 0)
+ 		return irq;
+ 	info->irq = irq;
+@@ -550,7 +574,7 @@ static int __init xen_hvc_init(void)
+ 			return r;
+ 
+ 		info = vtermno_to_xencons(HVC_COOKIE);
+-		info->irq = bind_evtchn_to_irq(info->evtchn);
++		info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
+ 	}
+ 	if (info->irq < 0)
+ 		info->irq = 0; /* NO_IRQ */
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index 1363e659dc1db..48c64e68017cd 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -139,6 +139,8 @@ struct n_hdlc {
+ 	struct n_hdlc_buf_list	rx_buf_list;
+ 	struct n_hdlc_buf_list	tx_free_buf_list;
+ 	struct n_hdlc_buf_list	rx_free_buf_list;
++	struct work_struct	write_work;
++	struct tty_struct	*tty_for_write_work;
+ };
+ 
+ /*
+@@ -153,6 +155,7 @@ static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
+ /* Local functions */
+ 
+ static struct n_hdlc *n_hdlc_alloc(void);
++static void n_hdlc_tty_write_work(struct work_struct *work);
+ 
+ /* max frame size for memory allocations */
+ static int maxframe = 4096;
+@@ -209,6 +212,8 @@ static void n_hdlc_tty_close(struct tty_struct *tty)
+ 	wake_up_interruptible(&tty->read_wait);
+ 	wake_up_interruptible(&tty->write_wait);
+ 
++	cancel_work_sync(&n_hdlc->write_work);
++
+ 	n_hdlc_free_buf_list(&n_hdlc->rx_free_buf_list);
+ 	n_hdlc_free_buf_list(&n_hdlc->tx_free_buf_list);
+ 	n_hdlc_free_buf_list(&n_hdlc->rx_buf_list);
+@@ -240,6 +245,8 @@ static int n_hdlc_tty_open(struct tty_struct *tty)
+ 		return -ENFILE;
+ 	}
+ 
++	INIT_WORK(&n_hdlc->write_work, n_hdlc_tty_write_work);
++	n_hdlc->tty_for_write_work = tty;
+ 	tty->disc_data = n_hdlc;
+ 	tty->receive_room = 65536;
+ 
+@@ -333,6 +340,20 @@ check_again:
+ 		goto check_again;
+ }	/* end of n_hdlc_send_frames() */
+ 
++/**
++ * n_hdlc_tty_write_work - Asynchronous callback for transmit wakeup
++ * @work: pointer to work_struct
++ *
++ * Called when low level device driver can accept more send data.
++ */
++static void n_hdlc_tty_write_work(struct work_struct *work)
++{
++	struct n_hdlc *n_hdlc = container_of(work, struct n_hdlc, write_work);
++	struct tty_struct *tty = n_hdlc->tty_for_write_work;
++
++	n_hdlc_send_frames(n_hdlc, tty);
++}	/* end of n_hdlc_tty_write_work() */
++
+ /**
+  * n_hdlc_tty_wakeup - Callback for transmit wakeup
+  * @tty: pointer to associated tty instance data
+@@ -343,7 +364,7 @@ static void n_hdlc_tty_wakeup(struct tty_struct *tty)
+ {
+ 	struct n_hdlc *n_hdlc = tty->disc_data;
+ 
+-	n_hdlc_send_frames(n_hdlc, tty);
++	schedule_work(&n_hdlc->write_work);
+ }	/* end of n_hdlc_tty_wakeup() */
+ 
+ /**
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index 31c9e83ea3cb2..251f0018ae8ca 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -290,25 +290,6 @@ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
+ 	}
+ }
+ 
+-static void fintek_8250_goto_highspeed(struct uart_8250_port *uart,
+-			      struct fintek_8250 *pdata)
+-{
+-	sio_write_reg(pdata, LDN, pdata->index);
+-
+-	switch (pdata->pid) {
+-	case CHIP_ID_F81966:
+-	case CHIP_ID_F81866: /* set uart clock for high speed serial mode */
+-		sio_write_mask_reg(pdata, F81866_UART_CLK,
+-			F81866_UART_CLK_MASK,
+-			F81866_UART_CLK_14_769MHZ);
+-
+-		uart->port.uartclk = 921600 * 16;
+-		break;
+-	default: /* leave clock speed untouched */
+-		break;
+-	}
+-}
+-
+ static void fintek_8250_set_termios(struct uart_port *port,
+ 				    struct ktermios *termios,
+ 				    struct ktermios *old)
+@@ -430,7 +411,6 @@ static int probe_setup_port(struct fintek_8250 *pdata,
+ 
+ 				fintek_8250_set_irq_mode(pdata, level_mode);
+ 				fintek_8250_set_max_fifo(pdata);
+-				fintek_8250_goto_highspeed(uart, pdata);
+ 
+ 				fintek_8250_exit_key(addr[i]);
+ 
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 61f686c5bd9c6..baf80e2ac7d8e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -435,6 +435,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x1532, 0x0116), .driver_info =
+ 			USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+ 
++	/* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
++	{ USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
+ 	{ USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },
+ 
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 5f18acac74068..49d333f02af4e 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -542,6 +542,9 @@ static int dwc2_driver_probe(struct platform_device *dev)
+ 		ggpio |= GGPIO_STM32_OTG_GCCFG_IDEN;
+ 		ggpio |= GGPIO_STM32_OTG_GCCFG_VBDEN;
+ 		dwc2_writel(hsotg, ggpio, GGPIO);
++
++		/* ID/VBUS detection startup time */
++		usleep_range(5000, 7000);
+ 	}
+ 
+ 	retval = dwc2_drd_init(hsotg);
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index be4ecbabdd586..6c0434100e38c 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -14,7 +14,6 @@
+ #include <linux/pci_ids.h>
+ #include <linux/memblock.h>
+ #include <linux/io.h>
+-#include <linux/iopoll.h>
+ #include <asm/pci-direct.h>
+ #include <asm/fixmap.h>
+ #include <linux/bcd.h>
+@@ -136,9 +135,17 @@ static int handshake(void __iomem *ptr, u32 mask, u32 done, int wait, int delay)
+ {
+ 	u32 result;
+ 
+-	return readl_poll_timeout_atomic(ptr, result,
+-					 ((result & mask) == done),
+-					 delay, wait);
++	/* Can not use readl_poll_timeout_atomic() for early boot things */
++	do {
++		result = readl(ptr);
++		result &= mask;
++		if (result == done)
++			return 0;
++		udelay(delay);
++		wait -= delay;
++	} while (wait > 0);
++
++	return -ETIMEDOUT;
+ }
+ 
+ static void __init xdbc_bios_handoff(void)
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 426132988512d..8bec0cbf844ed 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1649,14 +1649,14 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 	u8				endp;
+ 
+ 	if (w_length > USB_COMP_EP0_BUFSIZ) {
+-		if (ctrl->bRequestType == USB_DIR_OUT) {
+-			goto done;
+-		} else {
++		if (ctrl->bRequestType & USB_DIR_IN) {
+ 			/* Cast away the const, we are going to overwrite on purpose. */
+ 			__le16 *temp = (__le16 *)&ctrl->wLength;
+ 
+ 			*temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
+ 			w_length = USB_COMP_EP0_BUFSIZ;
++		} else {
++			goto done;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/gadget/legacy/dbgp.c b/drivers/usb/gadget/legacy/dbgp.c
+index 355bc7dab9d5f..6bcbad3825802 100644
+--- a/drivers/usb/gadget/legacy/dbgp.c
++++ b/drivers/usb/gadget/legacy/dbgp.c
+@@ -346,14 +346,14 @@ static int dbgp_setup(struct usb_gadget *gadget,
+ 	u16 len = 0;
+ 
+ 	if (length > DBGP_REQ_LEN) {
+-		if (ctrl->bRequestType == USB_DIR_OUT) {
+-			return err;
+-		} else {
++		if (ctrl->bRequestType & USB_DIR_IN) {
+ 			/* Cast away the const, we are going to overwrite on purpose. */
+ 			__le16 *temp = (__le16 *)&ctrl->wLength;
+ 
+ 			*temp = cpu_to_le16(DBGP_REQ_LEN);
+ 			length = DBGP_REQ_LEN;
++		} else {
++			return err;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 04b9c4f5f129d..217d2b66fa514 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -1336,14 +1336,14 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 	u16				w_length = le16_to_cpu(ctrl->wLength);
+ 
+ 	if (w_length > RBUF_SIZE) {
+-		if (ctrl->bRequestType == USB_DIR_OUT) {
+-			return value;
+-		} else {
++		if (ctrl->bRequestType & USB_DIR_IN) {
+ 			/* Cast away the const, we are going to overwrite on purpose. */
+ 			__le16 *temp = (__le16 *)&ctrl->wLength;
+ 
+ 			*temp = cpu_to_le16(RBUF_SIZE);
+ 			w_length = RBUF_SIZE;
++		} else {
++			return value;
+ 		}
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 80251a2579fda..c9133df71e52b 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -70,6 +70,8 @@
+ #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4		0x161e
+ #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5		0x15d6
+ #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6		0x15d7
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7		0x161c
++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8		0x161f
+ 
+ #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI			0x1042
+ #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI		0x1142
+@@ -325,7 +327,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
+ 	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
+ 	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
+-	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
++	    pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+ 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 6d858bdaf33ce..f906c1308f9f9 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -1750,6 +1750,8 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
+ 
+ 	/*  2 banks of GPIO - One for the pins taken from each serial port */
+ 	if (intf_num == 0) {
++		priv->gc.ngpio = 2;
++
+ 		if (mode.eci == CP210X_PIN_MODE_MODEM) {
+ 			/* mark all GPIOs of this interface as reserved */
+ 			priv->gpio_altfunc = 0xff;
+@@ -1760,8 +1762,9 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
+ 		priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
+ 						CP210X_ECI_GPIO_MODE_MASK) >>
+ 						CP210X_ECI_GPIO_MODE_OFFSET);
+-		priv->gc.ngpio = 2;
+ 	} else if (intf_num == 1) {
++		priv->gc.ngpio = 3;
++
+ 		if (mode.sci == CP210X_PIN_MODE_MODEM) {
+ 			/* mark all GPIOs of this interface as reserved */
+ 			priv->gpio_altfunc = 0xff;
+@@ -1772,7 +1775,6 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
+ 		priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
+ 						CP210X_SCI_GPIO_MODE_MASK) >>
+ 						CP210X_SCI_GPIO_MODE_OFFSET);
+-		priv->gc.ngpio = 3;
+ 	} else {
+ 		return -ENODEV;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 28ffe4e358b77..21b1488fe4461 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1219,6 +1219,14 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff),	/* Telit LN920 (ECM) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff),	/* Telit FN990 (rmnet) */
++	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff),	/* Telit FN990 (MBIM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff),	/* Telit FN990 (RNDIS) */
++	  .driver_info = NCTRL(2) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff),	/* Telit FN990 (ECM) */
++	  .driver_info = NCTRL(0) | RSVD(1) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index fdeb20f2f174c..e4d60009d9083 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -196,7 +196,7 @@ static int vhost_vdpa_config_validate(struct vhost_vdpa *v,
+ 		break;
+ 	}
+ 
+-	if (c->len == 0)
++	if (c->len == 0 || c->off > size)
+ 		return -EINVAL;
+ 
+ 	if (c->len > size - c->off)
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index e9432dbbec0a7..cce75d3b3ba05 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -263,7 +263,7 @@ size_t virtio_max_dma_size(struct virtio_device *vdev)
+ 	size_t max_segment_size = SIZE_MAX;
+ 
+ 	if (vring_use_dma_api(vdev))
+-		max_segment_size = dma_max_mapping_size(&vdev->dev);
++		max_segment_size = dma_max_mapping_size(vdev->dev.parent);
+ 
+ 	return max_segment_size;
+ }
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index bab2091c81683..a5bcad0278835 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1603,6 +1603,14 @@ again:
+ 	}
+ 	return root;
+ fail:
++	/*
++	 * If our caller provided us an anonymous device, then it's his
++	 * responsibility to free it in case we fail. So we have to set our
++	 * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()
++	 * and once again by our caller.
++	 */
++	if (anon_dev)
++		root->anon_dev = 0;
+ 	btrfs_put_root(root);
+ 	return ERR_PTR(ret);
+ }
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 4a5a3ae0acaae..09ef6419e890a 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1109,6 +1109,7 @@ again:
+ 					     parent_objectid, victim_name,
+ 					     victim_name_len);
+ 			if (ret < 0) {
++				kfree(victim_name);
+ 				return ret;
+ 			} else if (!ret) {
+ 				ret = -ENOENT;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 676f551953060..d3f67271d3c72 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -4359,7 +4359,7 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
+ {
+ 	struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);
+ 	int bits = (fmode << 1) | 1;
+-	bool is_opened = false;
++	bool already_opened = false;
+ 	int i;
+ 
+ 	if (count == 1)
+@@ -4367,19 +4367,19 @@ void ceph_get_fmode(struct ceph_inode_info *ci, int fmode, int count)
+ 
+ 	spin_lock(&ci->i_ceph_lock);
+ 	for (i = 0; i < CEPH_FILE_MODE_BITS; i++) {
+-		if (bits & (1 << i))
+-			ci->i_nr_by_mode[i] += count;
+-
+ 		/*
+-		 * If any of the mode ref is larger than 1,
++		 * If any of the mode ref is larger than 0,
+ 		 * that means it has been already opened by
+ 		 * others. Just skip checking the PIN ref.
+ 		 */
+-		if (i && ci->i_nr_by_mode[i] > 1)
+-			is_opened = true;
++		if (i && ci->i_nr_by_mode[i])
++			already_opened = true;
++
++		if (bits & (1 << i))
++			ci->i_nr_by_mode[i] += count;
+ 	}
+ 
+-	if (!is_opened)
++	if (!already_opened)
+ 		percpu_counter_inc(&mdsc->metric.opened_inodes);
+ 	spin_unlock(&ci->i_ceph_lock);
+ }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 76e347a8cf088..981a915906314 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3696,7 +3696,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 	struct ceph_pagelist *pagelist = recon_state->pagelist;
+ 	struct dentry *dentry;
+ 	char *path;
+-	int pathlen, err;
++	int pathlen = 0, err;
+ 	u64 pathbase;
+ 	u64 snap_follows;
+ 
+@@ -3716,7 +3716,6 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		}
+ 	} else {
+ 		path = NULL;
+-		pathlen = 0;
+ 		pathbase = 0;
+ 	}
+ 
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index e7667497b6b77..8e95a75a4559c 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1132,7 +1132,7 @@ int fuse_reverse_inval_entry(struct fuse_conn *fc, u64 parent_nodeid,
+ 	if (!parent)
+ 		return -ENOENT;
+ 
+-	inode_lock(parent);
++	inode_lock_nested(parent, I_MUTEX_PARENT);
+ 	if (!S_ISDIR(parent->i_mode))
+ 		goto unlock;
+ 
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 16955a307dcd9..d0e5cde277022 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -137,8 +137,7 @@ kill_whiteout:
+ 	goto out;
+ }
+ 
+-static int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry,
+-			  umode_t mode)
++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode)
+ {
+ 	int err;
+ 	struct dentry *d, *dentry = *newdentry;
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index e43dc68bd1b54..898de3bf884e4 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -519,6 +519,7 @@ struct ovl_cattr {
+ 
+ #define OVL_CATTR(m) (&(struct ovl_cattr) { .mode = (m) })
+ 
++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode);
+ struct dentry *ovl_create_real(struct inode *dir, struct dentry *newdentry,
+ 			       struct ovl_cattr *attr);
+ int ovl_cleanup(struct inode *dir, struct dentry *dentry);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 77f08ac04d1f3..45c596dfe3a36 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -743,10 +743,14 @@ retry:
+ 			goto retry;
+ 		}
+ 
+-		work = ovl_create_real(dir, work, OVL_CATTR(attr.ia_mode));
+-		err = PTR_ERR(work);
+-		if (IS_ERR(work))
+-			goto out_err;
++		err = ovl_mkdir_real(dir, &work, attr.ia_mode);
++		if (err)
++			goto out_dput;
++
++		/* Weird filesystem returning with hashed negative (kernfs)? */
++		err = -EINVAL;
++		if (d_really_is_negative(work))
++			goto out_dput;
+ 
+ 		/*
+ 		 * Try to remove POSIX ACL xattrs from workdir.  We are good if:
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 2243dc1fb48fe..e60759d8bb5fb 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -1799,5 +1799,6 @@ static void __exit zonefs_exit(void)
+ MODULE_AUTHOR("Damien Le Moal");
+ MODULE_DESCRIPTION("Zone file system for zoned block devices");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS_FS("zonefs");
+ module_init(zonefs_init);
+ module_exit(zonefs_exit);
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 68cee3bc8cfe6..d784000921da3 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -718,7 +718,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ {
+ 	int rc = 0;
+ 	struct sk_buff *skb;
+-	static unsigned int failed = 0;
++	unsigned int failed = 0;
+ 
+ 	/* NOTE: kauditd_thread takes care of all our locking, we just use
+ 	 *       the netlink info passed to us (e.g. sk and portid) */
+@@ -735,32 +735,30 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
+ 			continue;
+ 		}
+ 
++retry:
+ 		/* grab an extra skb reference in case of error */
+ 		skb_get(skb);
+ 		rc = netlink_unicast(sk, skb, portid, 0);
+ 		if (rc < 0) {
+-			/* fatal failure for our queue flush attempt? */
++			/* send failed - try a few times unless fatal error */
+ 			if (++failed >= retry_limit ||
+ 			    rc == -ECONNREFUSED || rc == -EPERM) {
+-				/* yes - error processing for the queue */
+ 				sk = NULL;
+ 				if (err_hook)
+ 					(*err_hook)(skb);
+-				if (!skb_hook)
+-					goto out;
+-				/* keep processing with the skb_hook */
++				if (rc == -EAGAIN)
++					rc = 0;
++				/* continue to drain the queue */
+ 				continue;
+ 			} else
+-				/* no - requeue to preserve ordering */
+-				skb_queue_head(queue, skb);
++				goto retry;
+ 		} else {
+-			/* it worked - drop the extra reference and continue */
++			/* skb sent - drop the extra reference and continue */
+ 			consume_skb(skb);
+ 			failed = 0;
+ 		}
+ 	}
+ 
+-out:
+ 	return (rc >= 0 ? 0 : rc);
+ }
+ 
+@@ -1609,7 +1607,8 @@ static int __net_init audit_net_init(struct net *net)
+ 		audit_panic("cannot initialize netlink socket in namespace");
+ 		return -ENOMEM;
+ 	}
+-	aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;
++	/* limit the timeout in case auditd is blocked/stopped */
++	aunet->sk->sk_sndtimeo = HZ / 10;
+ 
+ 	return 0;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 95ab3f243acde..4e28961cfa53e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1249,22 +1249,28 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
+ 	reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
+ }
+ 
++static bool __reg32_bound_s64(s32 a)
++{
++	return a >= 0 && a <= S32_MAX;
++}
++
+ static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
+ {
+ 	reg->umin_value = reg->u32_min_value;
+ 	reg->umax_value = reg->u32_max_value;
+-	/* Attempt to pull 32-bit signed bounds into 64-bit bounds
+-	 * but must be positive otherwise set to worse case bounds
+-	 * and refine later from tnum.
++
++	/* Attempt to pull 32-bit signed bounds into 64-bit bounds but must
++	 * be positive otherwise set to worse case bounds and refine later
++	 * from tnum.
+ 	 */
+-	if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0)
+-		reg->smax_value = reg->s32_max_value;
+-	else
+-		reg->smax_value = U32_MAX;
+-	if (reg->s32_min_value >= 0)
++	if (__reg32_bound_s64(reg->s32_min_value) &&
++	    __reg32_bound_s64(reg->s32_max_value)) {
+ 		reg->smin_value = reg->s32_min_value;
+-	else
++		reg->smax_value = reg->s32_max_value;
++	} else {
+ 		reg->smin_value = 0;
++		reg->smax_value = U32_MAX;
++	}
+ }
+ 
+ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
+@@ -7125,6 +7131,10 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 							 insn->dst_reg);
+ 				}
+ 				zext_32_to_64(dst_reg);
++
++				__update_reg_bounds(dst_reg);
++				__reg_deduce_bounds(dst_reg);
++				__reg_bound_offset(dst_reg);
+ 			}
+ 		} else {
+ 			/* case: R = imm
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 8c81c05c4236a..b74e7ace4376b 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1888,7 +1888,7 @@ static void rcu_gp_fqs(bool first_time)
+ 	struct rcu_node *rnp = rcu_get_root();
+ 
+ 	WRITE_ONCE(rcu_state.gp_activity, jiffies);
+-	rcu_state.n_force_qs++;
++	WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1);
+ 	if (first_time) {
+ 		/* Collect dyntick-idle snapshots. */
+ 		force_qs_rnp(dyntick_save_progress_counter);
+@@ -2530,7 +2530,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
+ 	/* Reset ->qlen_last_fqs_check trigger if enough CBs have drained. */
+ 	if (count == 0 && rdp->qlen_last_fqs_check != 0) {
+ 		rdp->qlen_last_fqs_check = 0;
+-		rdp->n_force_qs_snap = rcu_state.n_force_qs;
++		rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
+ 	} else if (count < rdp->qlen_last_fqs_check - qhimark)
+ 		rdp->qlen_last_fqs_check = count;
+ 
+@@ -2876,10 +2876,10 @@ static void __call_rcu_core(struct rcu_data *rdp, struct rcu_head *head,
+ 		} else {
+ 			/* Give the grace period a kick. */
+ 			rdp->blimit = DEFAULT_MAX_RCU_BLIMIT;
+-			if (rcu_state.n_force_qs == rdp->n_force_qs_snap &&
++			if (READ_ONCE(rcu_state.n_force_qs) == rdp->n_force_qs_snap &&
+ 			    rcu_segcblist_first_pend_cb(&rdp->cblist) != head)
+ 				rcu_force_quiescent_state();
+-			rdp->n_force_qs_snap = rcu_state.n_force_qs;
++			rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
+ 			rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
+ 		}
+ 	}
+@@ -3986,7 +3986,7 @@ int rcutree_prepare_cpu(unsigned int cpu)
+ 	/* Set up local state, ensuring consistent view of global state. */
+ 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
+ 	rdp->qlen_last_fqs_check = 0;
+-	rdp->n_force_qs_snap = rcu_state.n_force_qs;
++	rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
+ 	rdp->blimit = blimit;
+ 	if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */
+ 	    !rcu_segcblist_is_offloaded(&rdp->cblist))
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 6858a31364b64..cc4dc2857a870 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1310,8 +1310,7 @@ int do_settimeofday64(const struct timespec64 *ts)
+ 	timekeeping_forward_now(tk);
+ 
+ 	xt = tk_xtime(tk);
+-	ts_delta.tv_sec = ts->tv_sec - xt.tv_sec;
+-	ts_delta.tv_nsec = ts->tv_nsec - xt.tv_nsec;
++	ts_delta = timespec64_sub(*ts, xt);
+ 
+ 	if (timespec64_compare(&tk->wall_to_monotonic, &ts_delta) > 0) {
+ 		ret = -EINVAL;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 825e6b9880030..0215ae898e836 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -769,7 +769,7 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt)
+ 	       ntohs(skb->protocol), skb->pkt_type, skb->skb_iif);
+ 
+ 	if (dev)
+-		printk("%sdev name=%s feat=0x%pNF\n",
++		printk("%sdev name=%s feat=%pNF\n",
+ 		       level, dev->name, &dev->features);
+ 	if (sk)
+ 		printk("%ssk family=%hu type=%u proto=%u\n",
+diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
+index 93474b1bea4e0..fa9f1de58df46 100644
+--- a/net/ipv4/inet_diag.c
++++ b/net/ipv4/inet_diag.c
+@@ -261,6 +261,7 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
+ 	r->idiag_state = sk->sk_state;
+ 	r->idiag_timer = 0;
+ 	r->idiag_retrans = 0;
++	r->idiag_expires = 0;
+ 
+ 	if (inet_diag_msg_attrs_fill(sk, skb, r, ext,
+ 				     sk_user_ns(NETLINK_CB(cb->skb).sk),
+@@ -314,9 +315,6 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
+ 		r->idiag_retrans = icsk->icsk_probes_out;
+ 		r->idiag_expires =
+ 			jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies);
+-	} else {
+-		r->idiag_timer = 0;
+-		r->idiag_expires = 0;
+ 	}
+ 
+ 	if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index a6a3d759246ec..bab0e99f6e356 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1924,7 +1924,6 @@ static int __net_init sit_init_net(struct net *net)
+ 	return 0;
+ 
+ err_reg_dev:
+-	ipip6_dev_free(sitn->fb_tunnel_dev);
+ 	free_netdev(sitn->fb_tunnel_dev);
+ err_alloc_dev:
+ 	return err;
+diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
+index cd4cf84a7f99f..6ef8ded4ec764 100644
+--- a/net/mac80211/agg-rx.c
++++ b/net/mac80211/agg-rx.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ 
+ /**
+@@ -191,7 +191,8 @@ static void ieee80211_add_addbaext(struct ieee80211_sub_if_data *sdata,
+ 	sband = ieee80211_get_sband(sdata);
+ 	if (!sband)
+ 		return;
+-	he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type);
++	he_cap = ieee80211_get_he_iftype_cap(sband,
++					     ieee80211_vif_type_p2p(&sdata->vif));
+ 	if (!he_cap)
+ 		return;
+ 
+diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
+index b37c8a983d88d..190f300d8923c 100644
+--- a/net/mac80211/agg-tx.c
++++ b/net/mac80211/agg-tx.c
+@@ -9,7 +9,7 @@
+  * Copyright 2007, Michael Wu <flamingice@sourmilk.net>
+  * Copyright 2007-2010, Intel Corporation
+  * Copyright(c) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2020 Intel Corporation
++ * Copyright (C) 2018 - 2021 Intel Corporation
+  */
+ 
+ #include <linux/ieee80211.h>
+@@ -106,7 +106,7 @@ static void ieee80211_send_addba_request(struct ieee80211_sub_if_data *sdata,
+ 	mgmt->u.action.u.addba_req.start_seq_num =
+ 					cpu_to_le16(start_seq_num << 4);
+ 
+-	ieee80211_tx_skb(sdata, skb);
++	ieee80211_tx_skb_tid(sdata, skb, tid);
+ }
+ 
+ void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn)
+@@ -213,6 +213,8 @@ ieee80211_agg_start_txq(struct sta_info *sta, int tid, bool enable)
+ 	struct ieee80211_txq *txq = sta->sta.txq[tid];
+ 	struct txq_info *txqi;
+ 
++	lockdep_assert_held(&sta->ampdu_mlme.mtx);
++
+ 	if (!txq)
+ 		return;
+ 
+@@ -290,7 +292,6 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int tid)
+ 	ieee80211_assign_tid_tx(sta, tid, NULL);
+ 
+ 	ieee80211_agg_splice_finish(sta->sdata, tid);
+-	ieee80211_agg_start_txq(sta, tid, false);
+ 
+ 	kfree_rcu(tid_tx, rcu_head);
+ }
+@@ -480,8 +481,7 @@ static void ieee80211_send_addba_with_timeout(struct sta_info *sta,
+ 
+ 	/* send AddBA request */
+ 	ieee80211_send_addba_request(sdata, sta->sta.addr, tid,
+-				     tid_tx->dialog_token,
+-				     sta->tid_seq[tid] >> 4,
++				     tid_tx->dialog_token, tid_tx->ssn,
+ 				     buf_size, tid_tx->timeout);
+ 
+ 	WARN_ON(test_and_set_bit(HT_AGG_STATE_SENT_ADDBA, &tid_tx->state));
+@@ -523,6 +523,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
+ 
+ 	params.ssn = sta->tid_seq[tid] >> 4;
+ 	ret = drv_ampdu_action(local, sdata, &params);
++	tid_tx->ssn = params.ssn;
+ 	if (ret == IEEE80211_AMPDU_TX_START_DELAY_ADDBA) {
+ 		return;
+ 	} else if (ret == IEEE80211_AMPDU_TX_START_IMMEDIATE) {
+@@ -889,6 +890,7 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
+ {
+ 	struct ieee80211_sub_if_data *sdata = sta->sdata;
+ 	bool send_delba = false;
++	bool start_txq = false;
+ 
+ 	ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n",
+ 	       sta->sta.addr, tid);
+@@ -906,10 +908,14 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
+ 		send_delba = true;
+ 
+ 	ieee80211_remove_tid_tx(sta, tid);
++	start_txq = true;
+ 
+  unlock_sta:
+ 	spin_unlock_bh(&sta->lock);
+ 
++	if (start_txq)
++		ieee80211_agg_start_txq(sta, tid, false);
++
+ 	if (send_delba)
+ 		ieee80211_send_delba(sdata, sta->sta.addr, tid,
+ 			WLAN_BACK_INITIATOR, WLAN_REASON_QSTA_NOT_USE);
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index bcdfd19a596be..a172f69c71123 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -1201,8 +1201,11 @@ static inline void drv_wake_tx_queue(struct ieee80211_local *local,
+ {
+ 	struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);
+ 
+-	if (local->in_reconfig)
++	/* In reconfig don't transmit now, but mark for waking later */
++	if (local->in_reconfig) {
++		set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags);
+ 		return;
++	}
+ 
+ 	if (!check_sdata_in_driver(sdata))
+ 		return;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 32bc30ec50ec9..7bd42827540ae 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2493,11 +2493,18 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
+ 					   u16 tx_time)
+ {
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+-	u16 tid = ieee80211_get_tid(hdr);
+-	int ac = ieee80211_ac_from_tid(tid);
+-	struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];
++	u16 tid;
++	int ac;
++	struct ieee80211_sta_tx_tspec *tx_tspec;
+ 	unsigned long now = jiffies;
+ 
++	if (!ieee80211_is_data_qos(hdr->frame_control))
++		return;
++
++	tid = ieee80211_get_tid(hdr);
++	ac = ieee80211_ac_from_tid(tid);
++	tx_tspec = &ifmgd->tx_tspec[ac];
++
+ 	if (likely(!tx_tspec->admitted_time))
+ 		return;
+ 
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index 355e006432ccc..b9e5f8e8f29cc 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -190,6 +190,7 @@ struct tid_ampdu_tx {
+ 	u8 stop_initiator;
+ 	bool tx_stop;
+ 	u16 buf_size;
++	u16 ssn;
+ 
+ 	u16 failed_bar_ssn;
+ 	bool bar_pending;
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index fbf56a203c0e8..a1f129292ad88 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -950,7 +950,12 @@ static void ieee80211_parse_extension_element(u32 *crc,
+ 					      struct ieee802_11_elems *elems)
+ {
+ 	const void *data = elem->data + 1;
+-	u8 len = elem->datalen - 1;
++	u8 len;
++
++	if (!elem->datalen)
++		return;
++
++	len = elem->datalen - 1;
+ 
+ 	switch (elem->data[0]) {
+ 	case WLAN_EID_EXT_HE_MU_EDCA:
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 3ca8b359e399a..8123c79e27913 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2149,7 +2149,7 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 		 */
+ 		if (WARN_ON_ONCE(!new_mptcp_sock)) {
+ 			tcp_sk(newsk)->is_mptcp = 0;
+-			return newsk;
++			goto out;
+ 		}
+ 
+ 		/* acquire the 2nd reference for the owning socket */
+@@ -2174,6 +2174,8 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
+ 				MPTCP_MIB_MPCAPABLEPASSIVEFALLBACK);
+ 	}
+ 
++out:
++	newsk->sk_kern_sock = kern;
+ 	return newsk;
+ }
+ 
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 08144559eed56..f78097aa403a8 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4461,9 +4461,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ 	}
+ 
+ out_free_pg_vec:
+-	bitmap_free(rx_owner_map);
+-	if (pg_vec)
++	if (pg_vec) {
++		bitmap_free(rx_owner_map);
+ 		free_pg_vec(pg_vec, order, req->tp_block_nr);
++	}
+ out:
+ 	return err;
+ }
+diff --git a/net/rds/connection.c b/net/rds/connection.c
+index a3bc4b54d4910..b4cc699c5fad3 100644
+--- a/net/rds/connection.c
++++ b/net/rds/connection.c
+@@ -253,6 +253,7 @@ static struct rds_connection *__rds_conn_create(struct net *net,
+ 				 * should end up here, but if it
+ 				 * does, reset/destroy the connection.
+ 				 */
++				kfree(conn->c_path);
+ 				kmem_cache_free(rds_conn_slab, conn);
+ 				conn = ERR_PTR(-EOPNOTSUPP);
+ 				goto out;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 8073657a0fd25..cb1331b357451 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -3703,6 +3703,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
+ 				entry->mpls_mangle.ttl = tcf_mpls_ttl(act);
+ 				break;
+ 			default:
++				err = -EOPNOTSUPP;
+ 				goto err_out_locked;
+ 			}
+ 		} else if (is_tcf_skbedit_ptype(act)) {
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index c2c37ffd94f22..c580139fcedec 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -2736,7 +2736,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
+ 	q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),
+ 			   GFP_KERNEL);
+ 	if (!q->tins)
+-		goto nomem;
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < CAKE_MAX_TINS; i++) {
+ 		struct cake_tin_data *b = q->tins + i;
+@@ -2766,10 +2766,6 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
+ 	q->min_netlen = ~0;
+ 	q->min_adjlen = ~0;
+ 	return 0;
+-
+-nomem:
+-	cake_destroy(sch);
+-	return -ENOMEM;
+ }
+ 
+ static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index c34cb6e81d855..9c224872ef035 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -668,9 +668,9 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 		}
+ 	}
+ 	for (i = q->nbands; i < oldbands; i++) {
+-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+-		if (i >= q->nstrict)
++		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
+ 			list_del(&q->classes[i].alist);
++		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+ 	}
+ 	q->nstrict = nstrict;
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index d324a12c26cd9..99b902e410c49 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -191,7 +191,9 @@ static int smc_release(struct socket *sock)
+ 	/* cleanup for a dangling non-blocking connect */
+ 	if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
+ 		tcp_abort(smc->clcsock->sk, ECONNABORTED);
+-	flush_work(&smc->connect_work);
++
++	if (cancel_work_sync(&smc->connect_work))
++		sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */
+ 
+ 	if (sk->sk_state == SMC_LISTEN)
+ 		/* smc_close_non_accepted() is called and acquires
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 902cb6dd710bd..d6d3a05c008a4 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1153,7 +1153,8 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+ 	space_available = virtio_transport_space_update(sk, pkt);
+ 
+ 	/* Update CID in case it has changed after a transport reset event */
+-	vsk->local_addr.svm_cid = dst.svm_cid;
++	if (vsk->local_addr.svm_cid != VMADDR_CID_ANY)
++		vsk->local_addr.svm_cid = dst.svm_cid;
+ 
+ 	if (space_available)
+ 		sk->sk_write_space(sk);
+diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
+index f459ae883a0a6..a4ca050815aba 100755
+--- a/scripts/recordmcount.pl
++++ b/scripts/recordmcount.pl
+@@ -252,7 +252,7 @@ if ($arch eq "x86_64") {
+ 
+ } elsif ($arch eq "s390" && $bits == 64) {
+     if ($cc =~ /-DCC_USING_HOTPATCH/) {
+-	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$";
++	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$";
+ 	$mcount_adjust = 0;
+     } else {
+ 	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$";
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c b/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
+index 86ccf37e26b3f..d16fd888230a5 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c
+@@ -90,7 +90,7 @@ static void print_err_line(void)
+ 
+ static void test_conn(void)
+ {
+-	int listen_fd = -1, cli_fd = -1, err;
++	int listen_fd = -1, cli_fd = -1, srv_fd = -1, err;
+ 	socklen_t addrlen = sizeof(srv_sa6);
+ 	int srv_port;
+ 
+@@ -112,6 +112,10 @@ static void test_conn(void)
+ 	if (CHECK_FAIL(cli_fd == -1))
+ 		goto done;
+ 
++	srv_fd = accept(listen_fd, NULL, NULL);
++	if (CHECK_FAIL(srv_fd == -1))
++		goto done;
++
+ 	if (CHECK(skel->bss->listen_tp_sport != srv_port ||
+ 		  skel->bss->req_sk_sport != srv_port,
+ 		  "Unexpected sk src port",
+@@ -134,11 +138,13 @@ done:
+ 		close(listen_fd);
+ 	if (cli_fd != -1)
+ 		close(cli_fd);
++	if (srv_fd != -1)
++		close(srv_fd);
+ }
+ 
+ static void test_syncookie(void)
+ {
+-	int listen_fd = -1, cli_fd = -1, err;
++	int listen_fd = -1, cli_fd = -1, srv_fd = -1, err;
+ 	socklen_t addrlen = sizeof(srv_sa6);
+ 	int srv_port;
+ 
+@@ -161,6 +167,10 @@ static void test_syncookie(void)
+ 	if (CHECK_FAIL(cli_fd == -1))
+ 		goto done;
+ 
++	srv_fd = accept(listen_fd, NULL, NULL);
++	if (CHECK_FAIL(srv_fd == -1))
++		goto done;
++
+ 	if (CHECK(skel->bss->listen_tp_sport != srv_port,
+ 		  "Unexpected tp src port",
+ 		  "listen_tp_sport:%u expected:%u\n",
+@@ -188,6 +198,8 @@ done:
+ 		close(listen_fd);
+ 	if (cli_fd != -1)
+ 		close(cli_fd);
++	if (srv_fd != -1)
++		close(srv_fd);
+ }
+ 
+ struct test {
+diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+index a3e593ddfafc9..d8765a4d5bc6b 100644
+--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
++++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+@@ -848,6 +848,29 @@
+ 	.errstr = "R0 invalid mem access 'inv'",
+ 	.errstr_unpriv = "R0 pointer -= pointer prohibited",
+ },
++{
++	"map access: trying to leak tained dst reg",
++	.insns = {
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
++	BPF_LD_MAP_FD(BPF_REG_1, 0),
++	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
++	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
++	BPF_EXIT_INSN(),
++	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
++	BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF),
++	BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
++	BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1),
++	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
++	BPF_MOV64_IMM(BPF_REG_0, 0),
++	BPF_EXIT_INSN(),
++	},
++	.fixup_map_array_48b = { 4 },
++	.result = REJECT,
++	.errstr = "math between map_value pointer and 4294967295 is not allowed",
++},
+ {
+ 	"32bit pkt_ptr -= scalar",
+ 	.insns = {
+diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+index 0299cd81b8ba2..aa3795cd7bd3d 100644
+--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
++++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
+@@ -12,6 +12,7 @@
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
++#include <sys/resource.h>
+ 
+ #include "test_util.h"
+ 
+@@ -40,10 +41,39 @@ int main(int argc, char *argv[])
+ {
+ 	int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID);
+ 	int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
++	/*
++	 * Number of file descriptors reqired, KVM_CAP_MAX_VCPUS for vCPU fds +
++	 * an arbitrary number for everything else.
++	 */
++	int nr_fds_wanted = kvm_max_vcpus + 100;
++	struct rlimit rl;
+ 
+ 	pr_info("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id);
+ 	pr_info("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus);
+ 
++	/*
++	 * Check that we're allowed to open nr_fds_wanted file descriptors and
++	 * try raising the limits if needed.
++	 */
++	TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!");
++
++	if (rl.rlim_cur < nr_fds_wanted) {
++		rl.rlim_cur = nr_fds_wanted;
++		if (rl.rlim_max < nr_fds_wanted) {
++			int old_rlim_max = rl.rlim_max;
++			rl.rlim_max = nr_fds_wanted;
++
++			int r = setrlimit(RLIMIT_NOFILE, &rl);
++			if (r < 0) {
++				printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n",
++				       old_rlim_max, nr_fds_wanted);
++				exit(KSFT_SKIP);
++			}
++		} else {
++			TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!");
++		}
++	}
++
+ 	/*
+ 	 * Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID.
+ 	 * Userspace is supposed to use KVM_CAP_MAX_VCPUS as the maximum ID
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 7c9ace9d15991..ace976d891252 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -446,6 +446,22 @@ cleanup()
+ 	ip netns del ${NSC} >/dev/null 2>&1
+ }
+ 
++cleanup_vrf_dup()
++{
++	ip link del ${NSA_DEV2} >/dev/null 2>&1
++	ip netns pids ${NSC} | xargs kill 2>/dev/null
++	ip netns del ${NSC} >/dev/null 2>&1
++}
++
++setup_vrf_dup()
++{
++	# some VRF tests use ns-C which has the same config as
++	# ns-B but for a device NOT in the VRF
++	create_ns ${NSC} "-" "-"
++	connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
++		   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
++}
++
+ setup()
+ {
+ 	local with_vrf=${1}
+@@ -475,12 +491,6 @@ setup()
+ 
+ 		ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV}
+ 		ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV}
+-
+-		# some VRF tests use ns-C which has the same config as
+-		# ns-B but for a device NOT in the VRF
+-		create_ns ${NSC} "-" "-"
+-		connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \
+-			   ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64
+ 	else
+ 		ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV}
+ 		ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV}
+@@ -1177,7 +1187,9 @@ ipv4_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
++	setup_vrf_dup
+ 	ipv4_tcp_md5
++	cleanup_vrf_dup
+ 
+ 	#
+ 	# enable VRF global server
+@@ -1735,8 +1747,9 @@ ipv4_addr_bind_vrf()
+ 	for a in ${NSA_IP} ${VRF_IP}
+ 	do
+ 		log_start
++		show_hint "Socket not bound to VRF, but address is in VRF"
+ 		run_cmd nettest -s -R -P icmp -l ${a} -b
+-		log_test_addr ${a} $? 0 "Raw socket bind to local address"
++		log_test_addr ${a} $? 1 "Raw socket bind to local address"
+ 
+ 		log_start
+ 		run_cmd nettest -s -R -P icmp -l ${a} -d ${NSA_DEV} -b
+@@ -2128,7 +2141,7 @@ ipv6_ping_vrf()
+ 		log_start
+ 		show_hint "Fails since VRF device does not support linklocal or multicast"
+ 		run_cmd ${ping6} -c1 -w1 ${a}
+-		log_test_addr ${a} $? 2 "ping out, VRF bind"
++		log_test_addr ${a} $? 1 "ping out, VRF bind"
+ 	done
+ 
+ 	for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}
+@@ -2656,7 +2669,9 @@ ipv6_tcp_vrf()
+ 	log_test_addr ${a} $? 1 "Global server, local connection"
+ 
+ 	# run MD5 tests
++	setup_vrf_dup
+ 	ipv6_tcp_md5
++	cleanup_vrf_dup
+ 
+ 	#
+ 	# enable VRF global server
+@@ -3351,11 +3366,14 @@ ipv6_addr_bind_novrf()
+ 	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+ 	log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"
+ 
++	# Sadly, the kernel allows binding a socket to a device and then
++	# binding to an address not on the device. So this test passes
++	# when it really should not
+ 	a=${NSA_LO_IP6}
+ 	log_start
+-	show_hint "Should fail with 'Cannot assign requested address'"
+-	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+-	log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"
++	show_hint "Tecnically should fail since address is not on device but kernel allows"
++	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
++	log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"
+ }
+ 
+ ipv6_addr_bind_vrf()
+@@ -3396,10 +3414,15 @@ ipv6_addr_bind_vrf()
+ 	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+ 	log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"
+ 
++	# Sadly, the kernel allows binding a socket to a device and then
++	# binding to an address not on the device. The only restriction
++	# is that the address is valid in the L3 domain. So this test
++	# passes when it really should not
+ 	a=${VRF_IP6}
+ 	log_start
+-	run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
+-	log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"
++	show_hint "Tecnically should fail since address is not on device but kernel allows"
++	run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
++	log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"
+ 
+ 	a=${NSA_LO_IP6}
+ 	log_start
+diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+index e5e2fbeca22ec..e51def39fd801 100644
+--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
++++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
+@@ -13,6 +13,8 @@ NETIFS[p5]=veth4
+ NETIFS[p6]=veth5
+ NETIFS[p7]=veth6
+ NETIFS[p8]=veth7
++NETIFS[p9]=veth8
++NETIFS[p10]=veth9
+ 
+ # Port that does not have a cable connected.
+ NETIF_NO_CABLE=eth8
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 97ac3c6fd4441..4a7d377b3a500 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2590,7 +2590,8 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	int r;
+ 	gpa_t gpa = ghc->gpa + offset;
+ 
+-	BUG_ON(len + offset > ghc->len);
++	if (WARN_ON_ONCE(len + offset > ghc->len))
++		return -EINVAL;
+ 
+ 	if (slots->generation != ghc->generation) {
+ 		if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))
+@@ -2627,7 +2628,8 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+ 	int r;
+ 	gpa_t gpa = ghc->gpa + offset;
+ 
+-	BUG_ON(len + offset > ghc->len);
++	if (WARN_ON_ONCE(len + offset > ghc->len))
++		return -EINVAL;
+ 
+ 	if (slots->generation != ghc->generation) {
+ 		if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))


2021-12-22 14:05 Mike Pagano [this message]
