From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: /
Date: Wed, 18 Jan 2023 11:11:12 +0000 (UTC)
Message-ID: <1674040258.56b72a550443e7e9eefd7614b6850bd17794ddef.mpagano@gentoo>

commit:     56b72a550443e7e9eefd7614b6850bd17794ddef
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jan 18 11:10:58 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jan 18 11:10:58 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=56b72a55

Linux patch 4.19.270

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README               |     4 +
 1269_linux-4.19.270.patch | 16094 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 16098 insertions(+)

diff --git a/0000_README b/0000_README
index fa94191a..851ba936 100644
--- a/0000_README
+++ b/0000_README
@@ -1119,6 +1119,10 @@ Patch:  1268_linux-4.19.269.patch
 From:   https://www.kernel.org
 Desc:   Linux 4.19.269
 
+Patch:  1269_linux-4.19.270.patch
+From:   https://www.kernel.org
+Desc:   Linux 4.19.270
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1269_linux-4.19.270.patch b/1269_linux-4.19.270.patch
new file mode 100644
index 00000000..49a7e801
--- /dev/null
+++ b/1269_linux-4.19.270.patch
@@ -0,0 +1,16094 @@
+diff --git a/Documentation/driver-api/spi.rst b/Documentation/driver-api/spi.rst
+index f64cb666498aa..f28887045049d 100644
+--- a/Documentation/driver-api/spi.rst
++++ b/Documentation/driver-api/spi.rst
+@@ -25,8 +25,8 @@ hardware, which may be as simple as a set of GPIO pins or as complex as
+ a pair of FIFOs connected to dual DMA engines on the other side of the
+ SPI shift register (maximizing throughput). Such drivers bridge between
+ whatever bus they sit on (often the platform bus) and SPI, and expose
+-the SPI side of their device as a :c:type:`struct spi_master
+-<spi_master>`. SPI devices are children of that master,
++the SPI side of their device as a :c:type:`struct spi_controller
++<spi_controller>`. SPI devices are children of that master,
+ represented as a :c:type:`struct spi_device <spi_device>` and
+ manufactured from :c:type:`struct spi_board_info
+ <spi_board_info>` descriptors which are usually provided by
+diff --git a/Documentation/fault-injection/fault-injection.txt b/Documentation/fault-injection/fault-injection.txt
+index 4d1b7b4ccfaf0..b2b86147bafe4 100644
+--- a/Documentation/fault-injection/fault-injection.txt
++++ b/Documentation/fault-injection/fault-injection.txt
+@@ -71,8 +71,8 @@ configuration of fault-injection capabilities.
+ 
+ - /sys/kernel/debug/fail*/times:
+ 
+-	specifies how many times failures may happen at most.
+-	A value of -1 means "no limit".
++	specifies how many times failures may happen at most. A value of -1
++	means "no limit".
+ 
+ - /sys/kernel/debug/fail*/space:
+ 
+diff --git a/Documentation/sphinx/load_config.py b/Documentation/sphinx/load_config.py
+index 301a21aa4f631..4c9cdcb71c2c6 100644
+--- a/Documentation/sphinx/load_config.py
++++ b/Documentation/sphinx/load_config.py
+@@ -3,7 +3,7 @@
+ 
+ import os
+ import sys
+-from sphinx.util.pycompat import execfile_
++from sphinx.util.osutil import fs_encoding
+ 
+ # ------------------------------------------------------------------------------
+ def loadConfig(namespace):
+@@ -25,7 +25,9 @@ def loadConfig(namespace):
+             sys.stdout.write("load additional sphinx-config: %s\n" % config_file)
+             config = namespace.copy()
+             config['__file__'] = config_file
+-            execfile_(config_file, config)
++            with open(config_file, 'rb') as f:
++                code = compile(f.read(), fs_encoding, 'exec')
++                exec(code, config)
+             del config['__file__']
+             namespace.update(config)
+         else:
+diff --git a/Makefile b/Makefile
+index bde1d2d704e72..11c68e88776a5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 19
+-SUBLEVEL = 269
++SUBLEVEL = 270
+ EXTRAVERSION =
+ NAME = "People's Front"
+ 
+diff --git a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
+index c64806a2daf54..ebe52200a9dca 100644
+--- a/arch/alpha/kernel/entry.S
++++ b/arch/alpha/kernel/entry.S
+@@ -469,8 +469,10 @@ entSys:
+ #ifdef CONFIG_AUDITSYSCALL
+ 	lda     $6, _TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT
+ 	and     $3, $6, $3
+-#endif
+ 	bne     $3, strace
++#else
++	blbs    $3, strace		/* check for SYSCALL_TRACE in disguise */
++#endif
+ 	beq	$4, 1f
+ 	ldq	$27, 0($5)
+ 1:	jsr	$26, ($27), alpha_ni_syscall
+diff --git a/arch/arm/boot/dts/armada-370.dtsi b/arch/arm/boot/dts/armada-370.dtsi
+index 46e6d3ed8f35a..c042c416a94a3 100644
+--- a/arch/arm/boot/dts/armada-370.dtsi
++++ b/arch/arm/boot/dts/armada-370.dtsi
+@@ -74,7 +74,7 @@
+ 
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x80000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-375.dtsi b/arch/arm/boot/dts/armada-375.dtsi
+index 2932a29ae272e..230f6dd876a2a 100644
+--- a/arch/arm/boot/dts/armada-375.dtsi
++++ b/arch/arm/boot/dts/armada-375.dtsi
+@@ -584,7 +584,7 @@
+ 
+ 			pcie1: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-380.dtsi b/arch/arm/boot/dts/armada-380.dtsi
+index cff1269f3fbfd..7146cc8f082af 100644
+--- a/arch/arm/boot/dts/armada-380.dtsi
++++ b/arch/arm/boot/dts/armada-380.dtsi
+@@ -79,7 +79,7 @@
+ 			/* x1 port */
+ 			pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -98,7 +98,7 @@
+ 			/* x1 port */
+ 			pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-385-turris-omnia.dts b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+index 92e08486ec81f..320c759b4090f 100644
+--- a/arch/arm/boot/dts/armada-385-turris-omnia.dts
++++ b/arch/arm/boot/dts/armada-385-turris-omnia.dts
+@@ -22,6 +22,12 @@
+ 		stdout-path = &uart0;
+ 	};
+ 
++	aliases {
++		ethernet0 = &eth0;
++		ethernet1 = &eth1;
++		ethernet2 = &eth2;
++	};
++
+ 	memory {
+ 		device_type = "memory";
+ 		reg = <0x00000000 0x40000000>; /* 1024 MB */
+@@ -291,7 +297,17 @@
+ 				};
+ 			};
+ 
+-			/* port 6 is connected to eth0 */
++			ports@6 {
++				reg = <6>;
++				label = "cpu";
++				ethernet = <&eth0>;
++				phy-mode = "rgmii-id";
++
++				fixed-link {
++					speed = <1000>;
++					full-duplex;
++				};
++			};
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/armada-385.dtsi b/arch/arm/boot/dts/armada-385.dtsi
+index f0022d10c7159..f081f7cb66e5f 100644
+--- a/arch/arm/boot/dts/armada-385.dtsi
++++ b/arch/arm/boot/dts/armada-385.dtsi
+@@ -84,7 +84,7 @@
+ 			/* x1 port */
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -103,7 +103,7 @@
+ 			/* x1 port */
+ 			pcie3: pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -125,7 +125,7 @@
+ 			 */
+ 			pcie4: pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x48000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-39x.dtsi b/arch/arm/boot/dts/armada-39x.dtsi
+index f0c949831efb7..f589112e76975 100644
+--- a/arch/arm/boot/dts/armada-39x.dtsi
++++ b/arch/arm/boot/dts/armada-39x.dtsi
+@@ -456,7 +456,7 @@
+ 			/* x1 port */
+ 			pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x40000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -475,7 +475,7 @@
+ 			/* x1 port */
+ 			pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x44000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -497,7 +497,7 @@
+ 			 */
+ 			pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x48000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78230.dtsi b/arch/arm/boot/dts/armada-xp-mv78230.dtsi
+index 8558bf6bb54c6..d55fe162fc7f0 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78230.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78230.dtsi
+@@ -97,7 +97,7 @@
+ 
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -115,7 +115,7 @@
+ 
+ 			pcie3: pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -133,7 +133,7 @@
+ 
+ 			pcie4: pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x4c000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -151,7 +151,7 @@
+ 
+ 			pcie5: pcie@5,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+ 				reg = <0x2800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/armada-xp-mv78260.dtsi b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+index 2d85fe8ac3272..fdcc818199401 100644
+--- a/arch/arm/boot/dts/armada-xp-mv78260.dtsi
++++ b/arch/arm/boot/dts/armada-xp-mv78260.dtsi
+@@ -112,7 +112,7 @@
+ 
+ 			pcie2: pcie@2,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x44000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -130,7 +130,7 @@
+ 
+ 			pcie3: pcie@3,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x48000 0 0x2000>;
++				assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+ 				reg = <0x1800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -148,7 +148,7 @@
+ 
+ 			pcie4: pcie@4,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x4c000 0 0x2000>;
++				assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+ 				reg = <0x2000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -166,7 +166,7 @@
+ 
+ 			pcie5: pcie@5,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+ 				reg = <0x2800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -184,7 +184,7 @@
+ 
+ 			pcie6: pcie@6,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x84000 0 0x2000>;
++				assigned-addresses = <0x82003000 0 0x84000 0 0x2000>;
+ 				reg = <0x3000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -202,7 +202,7 @@
+ 
+ 			pcie7: pcie@7,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x88000 0 0x2000>;
++				assigned-addresses = <0x82003800 0 0x88000 0 0x2000>;
+ 				reg = <0x3800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -220,7 +220,7 @@
+ 
+ 			pcie8: pcie@8,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x8c000 0 0x2000>;
++				assigned-addresses = <0x82004000 0 0x8c000 0 0x2000>;
+ 				reg = <0x4000 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+@@ -238,7 +238,7 @@
+ 
+ 			pcie9: pcie@9,0 {
+ 				device_type = "pci";
+-				assigned-addresses = <0x82000800 0 0x42000 0 0x2000>;
++				assigned-addresses = <0x82004800 0 0x42000 0 0x2000>;
+ 				reg = <0x4800 0 0 0 0>;
+ 				#address-cells = <3>;
+ 				#size-cells = <2>;
+diff --git a/arch/arm/boot/dts/dove.dtsi b/arch/arm/boot/dts/dove.dtsi
+index 250ad0535e8cc..c677d1a74a905 100644
+--- a/arch/arm/boot/dts/dove.dtsi
++++ b/arch/arm/boot/dts/dove.dtsi
+@@ -129,7 +129,7 @@
+ 			pcie1: pcie@2 {
+ 				device_type = "pci";
+ 				status = "disabled";
+-				assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
++				assigned-addresses = <0x82001000 0 0x80000 0 0x2000>;
+ 				reg = <0x1000 0 0 0 0>;
+ 				clocks = <&gate_clk 5>;
+ 				marvell,pcie-port = <1>;
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 00daa844bf8c6..3b9d70eadeb95 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -1604,7 +1604,7 @@
+ 		};
+ 
+ 		etb@1a01000 {
+-			compatible = "coresight-etb10", "arm,primecell";
++			compatible = "arm,coresight-etb10", "arm,primecell";
+ 			reg = <0x1a01000 0x1000>;
+ 
+ 			clocks = <&rpmcc RPM_QDSS_CLK>;
+diff --git a/arch/arm/boot/dts/spear600.dtsi b/arch/arm/boot/dts/spear600.dtsi
+index 00166eb9be86f..ca07e73e1f27d 100644
+--- a/arch/arm/boot/dts/spear600.dtsi
++++ b/arch/arm/boot/dts/spear600.dtsi
+@@ -53,7 +53,7 @@
+ 			compatible = "arm,pl110", "arm,primecell";
+ 			reg = <0xfc200000 0x1000>;
+ 			interrupt-parent = <&vic1>;
+-			interrupts = <12>;
++			interrupts = <13>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm/mach-mmp/time.c b/arch/arm/mach-mmp/time.c
+index 96ad1db0b04be..edd280e755462 100644
+--- a/arch/arm/mach-mmp/time.c
++++ b/arch/arm/mach-mmp/time.c
+@@ -52,18 +52,21 @@
+ static void __iomem *mmp_timer_base = TIMERS_VIRT_BASE;
+ 
+ /*
+- * FIXME: the timer needs some delay to stablize the counter capture
++ * Read the timer through the CVWR register. Delay is required after requesting
++ * a read. The CR register cannot be directly read due to metastability issues
++ * documented in the PXA168 software manual.
+  */
+ static inline uint32_t timer_read(void)
+ {
+-	int delay = 100;
++	uint32_t val;
++	int delay = 3;
+ 
+ 	__raw_writel(1, mmp_timer_base + TMR_CVWR(1));
+ 
+ 	while (delay--)
+-		cpu_relax();
++		val = __raw_readl(mmp_timer_base + TMR_CVWR(1));
+ 
+-	return __raw_readl(mmp_timer_base + TMR_CVWR(1));
++	return val;
+ }
+ 
+ static u64 notrace mmp_read_sched_clock(void)
+diff --git a/arch/arm/nwfpe/Makefile b/arch/arm/nwfpe/Makefile
+index 303400fa2cdf7..2aec85ab1e8b9 100644
+--- a/arch/arm/nwfpe/Makefile
++++ b/arch/arm/nwfpe/Makefile
+@@ -11,3 +11,9 @@ nwfpe-y				+= fpa11.o fpa11_cpdo.o fpa11_cpdt.o \
+ 				   entry.o
+ 
+ nwfpe-$(CONFIG_FPE_NWFPE_XP)	+= extended_cpdo.o
++
++# Try really hard to avoid generating calls to __aeabi_uldivmod() from
++# float64_rem() due to loop elision.
++ifdef CONFIG_CC_IS_CLANG
++CFLAGS_softfloat.o	+= -mllvm -replexitval=never
++endif
+diff --git a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
+index 4ce9d6ca0bf78..424c22a266c4d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt2712-evb.dts
+@@ -25,14 +25,14 @@
+ 		stdout-path = "serial0:921600n8";
+ 	};
+ 
+-	cpus_fixed_vproc0: fixedregulator@0 {
++	cpus_fixed_vproc0: regulator-vproc-buck0 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vproc_buck0";
+ 		regulator-min-microvolt = <1000000>;
+ 		regulator-max-microvolt = <1000000>;
+ 	};
+ 
+-	cpus_fixed_vproc1: fixedregulator@1 {
++	cpus_fixed_vproc1: regulator-vproc-buck1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vproc_buck1";
+ 		regulator-min-microvolt = <1000000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+index 75cc0f7cc0888..9acbd82fb44b4 100644
+--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+@@ -158,70 +158,70 @@
+ 		#clock-cells = <0>;
+ 	};
+ 
+-	clk26m: oscillator@0 {
++	clk26m: oscillator-26m {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <26000000>;
+ 		clock-output-names = "clk26m";
+ 	};
+ 
+-	clk32k: oscillator@1 {
++	clk32k: oscillator-32k {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <32768>;
+ 		clock-output-names = "clk32k";
+ 	};
+ 
+-	clkfpc: oscillator@2 {
++	clkfpc: oscillator-50m {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <50000000>;
+ 		clock-output-names = "clkfpc";
+ 	};
+ 
+-	clkaud_ext_i_0: oscillator@3 {
++	clkaud_ext_i_0: oscillator-aud0 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <6500000>;
+ 		clock-output-names = "clkaud_ext_i_0";
+ 	};
+ 
+-	clkaud_ext_i_1: oscillator@4 {
++	clkaud_ext_i_1: oscillator-aud1 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <196608000>;
+ 		clock-output-names = "clkaud_ext_i_1";
+ 	};
+ 
+-	clkaud_ext_i_2: oscillator@5 {
++	clkaud_ext_i_2: oscillator-aud2 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <180633600>;
+ 		clock-output-names = "clkaud_ext_i_2";
+ 	};
+ 
+-	clki2si0_mck_i: oscillator@6 {
++	clki2si0_mck_i: oscillator-i2s0 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+ 		clock-output-names = "clki2si0_mck_i";
+ 	};
+ 
+-	clki2si1_mck_i: oscillator@7 {
++	clki2si1_mck_i: oscillator-i2s1 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+ 		clock-output-names = "clki2si1_mck_i";
+ 	};
+ 
+-	clki2si2_mck_i: oscillator@8 {
++	clki2si2_mck_i: oscillator-i2s2 {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+ 		clock-output-names = "clki2si2_mck_i";
+ 	};
+ 
+-	clktdmin_mclk_i: oscillator@9 {
++	clktdmin_mclk_i: oscillator-mclk {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <30000000>;
+@@ -264,7 +264,7 @@
+ 		reg = <0 0x10005000 0 0x1000>;
+ 	};
+ 
+-	pio: pinctrl@10005000 {
++	pio: pinctrl@1000b000 {
+ 		compatible = "mediatek,mt2712-pinctrl";
+ 		reg = <0 0x1000b000 0 0x1000>;
+ 		mediatek,pctl-regmap = <&syscfg_pctl_a>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt6797.dtsi b/arch/arm64/boot/dts/mediatek/mt6797.dtsi
+index 4beaa71107d7d..ebe1b5343966d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6797.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6797.dtsi
+@@ -101,7 +101,7 @@
+ 		};
+ 	};
+ 
+-	clk26m: oscillator@0 {
++	clk26m: oscillator-26m {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <26000000>;
+diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
+index 1cc42441bc672..a9b315dc7e24e 100644
+--- a/arch/arm64/include/asm/atomic_ll_sc.h
++++ b/arch/arm64/include/asm/atomic_ll_sc.h
+@@ -320,7 +320,7 @@ __LL_SC_PREFIX(__cmpxchg_double##name(unsigned long old1,		\
+ 	"	cbnz	%w0, 1b\n"					\
+ 	"	" #mb "\n"						\
+ 	"2:"								\
+-	: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)	\
++	: "=&r" (tmp), "=&r" (ret), "+Q" (*(__uint128_t *)ptr)		\
+ 	: "r" (old1), "r" (old2), "r" (new1), "r" (new2)		\
+ 	: cl);								\
+ 									\
+diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
+index 80cadc789f1a7..393d73797e733 100644
+--- a/arch/arm64/include/asm/atomic_lse.h
++++ b/arch/arm64/include/asm/atomic_lse.h
+@@ -555,7 +555,7 @@ static inline long __cmpxchg_double##name(unsigned long old1,		\
+ 	"	eor	%[old2], %[old2], %[oldval2]\n"			\
+ 	"	orr	%[old1], %[old1], %[old2]")			\
+ 	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
+-	  [v] "+Q" (*(unsigned long *)ptr)				\
++	  [v] "+Q" (*(__uint128_t *)ptr)				\
+ 	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
+ 	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)		\
+ 	: __LL_SC_CLOBBERS, ##cl);					\
+diff --git a/arch/mips/bcm63xx/clk.c b/arch/mips/bcm63xx/clk.c
+index dcfa0ea912fe1..f183c45503ce1 100644
+--- a/arch/mips/bcm63xx/clk.c
++++ b/arch/mips/bcm63xx/clk.c
+@@ -361,6 +361,8 @@ static struct clk clk_periph = {
+  */
+ int clk_enable(struct clk *clk)
+ {
++	if (!clk)
++		return 0;
+ 	mutex_lock(&clocks_mutex);
+ 	clk_enable_unlocked(clk);
+ 	mutex_unlock(&clocks_mutex);
+diff --git a/arch/mips/kernel/vpe-cmp.c b/arch/mips/kernel/vpe-cmp.c
+index 9268ebc0f61e6..903c07bdc92d9 100644
+--- a/arch/mips/kernel/vpe-cmp.c
++++ b/arch/mips/kernel/vpe-cmp.c
+@@ -75,7 +75,6 @@ ATTRIBUTE_GROUPS(vpe);
+ 
+ static void vpe_device_release(struct device *cd)
+ {
+-	kfree(cd);
+ }
+ 
+ static struct class vpe_class = {
+@@ -157,6 +156,7 @@ out_dev:
+ 	device_del(&vpe_device);
+ 
+ out_class:
++	put_device(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 
+ out_chrdev:
+@@ -169,7 +169,7 @@ void __exit vpe_module_exit(void)
+ {
+ 	struct vpe *v, *n;
+ 
+-	device_del(&vpe_device);
++	device_unregister(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 	unregister_chrdev(major, VPE_MODULE_NAME);
+ 
+diff --git a/arch/mips/kernel/vpe-mt.c b/arch/mips/kernel/vpe-mt.c
+index 2e003b11a098f..9fd7cd48ea1d2 100644
+--- a/arch/mips/kernel/vpe-mt.c
++++ b/arch/mips/kernel/vpe-mt.c
+@@ -313,7 +313,6 @@ ATTRIBUTE_GROUPS(vpe);
+ 
+ static void vpe_device_release(struct device *cd)
+ {
+-	kfree(cd);
+ }
+ 
+ static struct class vpe_class = {
+@@ -497,6 +496,7 @@ out_dev:
+ 	device_del(&vpe_device);
+ 
+ out_class:
++	put_device(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 
+ out_chrdev:
+@@ -509,7 +509,7 @@ void __exit vpe_module_exit(void)
+ {
+ 	struct vpe *v, *n;
+ 
+-	device_del(&vpe_device);
++	device_unregister(&vpe_device);
+ 	class_unregister(&vpe_class);
+ 	unregister_chrdev(major, VPE_MODULE_NAME);
+ 
+diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
+index 870fbf8c70881..13f61fdf38239 100644
+--- a/arch/parisc/include/uapi/asm/mman.h
++++ b/arch/parisc/include/uapi/asm/mman.h
+@@ -50,25 +50,24 @@
+ #define MADV_DONTFORK	10		/* don't inherit across fork */
+ #define MADV_DOFORK	11		/* do inherit across fork */
+ 
+-#define MADV_MERGEABLE   65		/* KSM may merge identical pages */
+-#define MADV_UNMERGEABLE 66		/* KSM may not merge identical pages */
++#define MADV_MERGEABLE   12		/* KSM may merge identical pages */
++#define MADV_UNMERGEABLE 13		/* KSM may not merge identical pages */
+ 
+-#define MADV_HUGEPAGE	67		/* Worth backing with hugepages */
+-#define MADV_NOHUGEPAGE	68		/* Not worth backing with hugepages */
++#define MADV_HUGEPAGE	14		/* Worth backing with hugepages */
++#define MADV_NOHUGEPAGE 15		/* Not worth backing with hugepages */
+ 
+-#define MADV_DONTDUMP   69		/* Explicity exclude from the core dump,
++#define MADV_DONTDUMP   16		/* Explicity exclude from the core dump,
+ 					   overrides the coredump filter bits */
+-#define MADV_DODUMP	70		/* Clear the MADV_NODUMP flag */
++#define MADV_DODUMP	17		/* Clear the MADV_NODUMP flag */
+ 
+-#define MADV_WIPEONFORK 71		/* Zero memory on fork, child only */
+-#define MADV_KEEPONFORK 72		/* Undo MADV_WIPEONFORK */
++#define MADV_WIPEONFORK 18		/* Zero memory on fork, child only */
++#define MADV_KEEPONFORK 19		/* Undo MADV_WIPEONFORK */
+ 
+ #define MADV_HWPOISON     100		/* poison a page for testing */
+ #define MADV_SOFT_OFFLINE 101		/* soft offline page for testing */
+ 
+ /* compatibility flags */
+ #define MAP_FILE	0
+-#define MAP_VARIABLE	0
+ 
+ #define PKEY_DISABLE_ACCESS	0x1
+ #define PKEY_DISABLE_WRITE	0x2
+diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
+index 376ea0d1b2759..4306d1bea98bf 100644
+--- a/arch/parisc/kernel/sys_parisc.c
++++ b/arch/parisc/kernel/sys_parisc.c
+@@ -386,3 +386,30 @@ long parisc_personality(unsigned long personality)
+ 
+ 	return err;
+ }
++
++/*
++ * madvise() wrapper
++ *
++ * Up to kernel v6.1 parisc has different values than all other
++ * platforms for the MADV_xxx flags listed below.
++ * To keep binary compatibility with existing userspace programs
++ * translate the former values to the new values.
++ *
++ * XXX: Remove this wrapper in year 2025 (or later)
++ */
++
++asmlinkage notrace long parisc_madvise(unsigned long start, size_t len_in, int behavior)
++{
++	switch (behavior) {
++	case 65: behavior = MADV_MERGEABLE;	break;
++	case 66: behavior = MADV_UNMERGEABLE;	break;
++	case 67: behavior = MADV_HUGEPAGE;	break;
++	case 68: behavior = MADV_NOHUGEPAGE;	break;
++	case 69: behavior = MADV_DONTDUMP;	break;
++	case 70: behavior = MADV_DODUMP;	break;
++	case 71: behavior = MADV_WIPEONFORK;	break;
++	case 72: behavior = MADV_KEEPONFORK;	break;
++	}
++
++	return sys_madvise(start, len_in, behavior);
++}
+diff --git a/arch/parisc/kernel/syscall_table.S b/arch/parisc/kernel/syscall_table.S
+index fe3f2a49d2b10..64a72f161eebb 100644
+--- a/arch/parisc/kernel/syscall_table.S
++++ b/arch/parisc/kernel/syscall_table.S
+@@ -195,7 +195,7 @@
+ 	ENTRY_COMP(sysinfo)
+ 	ENTRY_SAME(shutdown)
+ 	ENTRY_SAME(fsync)
+-	ENTRY_SAME(madvise)
++	ENTRY_OURS(madvise)
+ 	ENTRY_SAME(clone_wrapper)	/* 120 */
+ 	ENTRY_SAME(setdomainname)
+ 	ENTRY_COMP(sendfile)
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index b492cb1c36fd9..a3f08a380c992 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -717,6 +717,7 @@ void __noreturn rtas_halt(void)
+ 
+ /* Must be in the RMO region, so we place it here */
+ static char rtas_os_term_buf[2048];
++static s32 ibm_os_term_token = RTAS_UNKNOWN_SERVICE;
+ 
+ void rtas_os_term(char *str)
+ {
+@@ -728,16 +729,20 @@ void rtas_os_term(char *str)
+ 	 * this property may terminate the partition which we want to avoid
+ 	 * since it interferes with panic_timeout.
+ 	 */
+-	if (RTAS_UNKNOWN_SERVICE == rtas_token("ibm,os-term") ||
+-	    RTAS_UNKNOWN_SERVICE == rtas_token("ibm,extended-os-term"))
++	if (ibm_os_term_token == RTAS_UNKNOWN_SERVICE)
+ 		return;
+ 
+ 	snprintf(rtas_os_term_buf, 2048, "OS panic: %s", str);
+ 
++	/*
++	 * Keep calling as long as RTAS returns a "try again" status,
++	 * but don't use rtas_busy_delay(), which potentially
++	 * schedules.
++	 */
+ 	do {
+-		status = rtas_call(rtas_token("ibm,os-term"), 1, 1, NULL,
++		status = rtas_call(ibm_os_term_token, 1, 1, NULL,
+ 				   __pa(rtas_os_term_buf));
+-	} while (rtas_busy_delay(status));
++	} while (rtas_busy_delay_time(status));
+ 
+ 	if (status != 0)
+ 		printk(KERN_EMERG "ibm,os-term call failed %d\n", status);
+@@ -1332,6 +1337,13 @@ void __init rtas_initialize(void)
+ 	no_entry = of_property_read_u32(rtas.dev, "linux,rtas-entry", &entry);
+ 	rtas.entry = no_entry ? rtas.base : entry;
+ 
++	/*
++	 * Discover these now to avoid device tree lookups in the
++	 * panic path.
++	 */
++	if (of_property_read_bool(rtas.dev, "ibm,extended-os-term"))
++		ibm_os_term_token = rtas_token("ibm,os-term");
++
+ 	/* If RTAS was found, allocate the RMO buffer for it and look for
+ 	 * the stop-self token if any
+ 	 */
+diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
+index 0af051a1974e0..26a31a3b661e2 100644
+--- a/arch/powerpc/perf/callchain.c
++++ b/arch/powerpc/perf/callchain.c
+@@ -68,6 +68,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
+ 		next_sp = fp[0];
+ 
+ 		if (next_sp == sp + STACK_INT_FRAME_SIZE &&
++		    validate_sp(sp, current, STACK_INT_FRAME_SIZE) &&
+ 		    fp[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+ 			/*
+ 			 * This looks like an interrupt frame for an
+diff --git a/arch/powerpc/perf/hv-gpci-requests.h b/arch/powerpc/perf/hv-gpci-requests.h
+index 8965b4463d433..5e86371a20c78 100644
+--- a/arch/powerpc/perf/hv-gpci-requests.h
++++ b/arch/powerpc/perf/hv-gpci-requests.h
+@@ -79,6 +79,7 @@ REQUEST(__field(0,	8,	partition_id)
+ )
+ #include I(REQUEST_END)
+ 
++#ifdef ENABLE_EVENTS_COUNTERINFO_V6
+ /*
+  * Not available for counter_info_version >= 0x8, use
+  * run_instruction_cycles_by_partition(0x100) instead.
+@@ -92,6 +93,7 @@ REQUEST(__field(0,	8,	partition_id)
+ 	__count(0x10,	8,	cycles)
+ )
+ #include I(REQUEST_END)
++#endif
+ 
+ #define REQUEST_NAME system_performance_capabilities
+ #define REQUEST_NUM 0x40
+@@ -103,6 +105,7 @@ REQUEST(__field(0,	1,	perf_collect_privileged)
+ )
+ #include I(REQUEST_END)
+ 
++#ifdef ENABLE_EVENTS_COUNTERINFO_V6
+ #define REQUEST_NAME processor_bus_utilization_abc_links
+ #define REQUEST_NUM 0x50
+ #define REQUEST_IDX_KIND "hw_chip_id=?"
+@@ -194,6 +197,7 @@ REQUEST(__field(0,	4,	phys_processor_idx)
+ 	__count(0x28,	8,	instructions_completed)
+ )
+ #include I(REQUEST_END)
++#endif
+ 
+ /* Processor_core_power_mode (0x95) skipped, no counters */
+ /* Affinity_domain_information_by_virtual_processor (0xA0) skipped,
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index 160b86d9d8199..126409bb5626e 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -74,7 +74,7 @@ static struct attribute_group format_group = {
+ 
+ static struct attribute_group event_group = {
+ 	.name  = "events",
+-	.attrs = hv_gpci_event_attrs,
++	/* .attrs is set in init */
+ };
+ 
+ #define HV_CAPS_ATTR(_name, _format)				\
+@@ -292,6 +292,7 @@ static int hv_gpci_init(void)
+ 	int r;
+ 	unsigned long hret;
+ 	struct hv_perf_caps caps;
++	struct hv_gpci_request_buffer *arg;
+ 
+ 	hv_gpci_assert_offsets_correct();
+ 
+@@ -310,6 +311,36 @@ static int hv_gpci_init(void)
+ 	/* sampling not supported */
+ 	h_gpci_pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
+ 
++	arg = (void *)get_cpu_var(hv_gpci_reqb);
++	memset(arg, 0, HGPCI_REQ_BUFFER_SIZE);
++
++	/*
++	 * hcall H_GET_PERF_COUNTER_INFO populates the output
++	 * counter_info_version value based on the system hypervisor.
++	 * Pass the counter request 0x10 corresponds to request type
++	 * 'Dispatch_timebase_by_processor', to get the supported
++	 * counter_info_version.
++	 */
++	arg->params.counter_request = cpu_to_be32(0x10);
++
++	r = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO,
++			virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE);
++	if (r) {
++		pr_devel("hcall failed, can't get supported counter_info_version: 0x%x\n", r);
++		arg->params.counter_info_version_out = 0x8;
++	}
++
++	/*
++	 * Use counter_info_version_out value to assign
++	 * required hv-gpci event list.
++	 */
++	if (arg->params.counter_info_version_out >= 0x8)
++		event_group.attrs = hv_gpci_event_attrs;
++	else
++		event_group.attrs = hv_gpci_event_attrs_v6;
++
++	put_cpu_var(hv_gpci_reqb);
++
+ 	r = perf_pmu_register(&h_gpci_pmu, h_gpci_pmu.name, -1);
+ 	if (r)
+ 		return r;
+diff --git a/arch/powerpc/perf/hv-gpci.h b/arch/powerpc/perf/hv-gpci.h
+index a3053eda5dcc3..060e464d35c6e 100644
+--- a/arch/powerpc/perf/hv-gpci.h
++++ b/arch/powerpc/perf/hv-gpci.h
+@@ -53,6 +53,7 @@ enum {
+ #define REQUEST_FILE "../hv-gpci-requests.h"
+ #define NAME_LOWER hv_gpci
+ #define NAME_UPPER HV_GPCI
++#define ENABLE_EVENTS_COUNTERINFO_V6
+ #include "req-gen/perf.h"
+ #undef REQUEST_FILE
+ #undef NAME_LOWER
+diff --git a/arch/powerpc/perf/req-gen/perf.h b/arch/powerpc/perf/req-gen/perf.h
+index fa9bc804e67af..6b2a59fefffa7 100644
+--- a/arch/powerpc/perf/req-gen/perf.h
++++ b/arch/powerpc/perf/req-gen/perf.h
+@@ -139,6 +139,26 @@ PMU_EVENT_ATTR_STRING(							\
+ #define REQUEST_(r_name, r_value, r_idx_1, r_fields)			\
+ 	r_fields
+ 
++/* Generate event list for platforms with counter_info_version 0x6 or below */
++static __maybe_unused struct attribute *hv_gpci_event_attrs_v6[] = {
++#include REQUEST_FILE
++	NULL
++};
++
++/*
++ * Based on getPerfCountInfo v1.018 documentation, some of the hv-gpci
++ * events were deprecated for platform firmware that supports
++ * counter_info_version 0x8 or above.
++ * Those deprecated events are still part of platform firmware that
++ * support counter_info_version 0x6 and below. As per the getPerfCountInfo
++ * v1.018 documentation there is no counter_info_version 0x7.
++ * Undefining macro ENABLE_EVENTS_COUNTERINFO_V6, to disable the addition of
++ * deprecated events in "hv_gpci_event_attrs" attribute group, for platforms
++ * that supports counter_info_version 0x8 or above.
++ */
++#undef ENABLE_EVENTS_COUNTERINFO_V6
++
++/* Generate event list for platforms with counter_info_version 0x8 or above*/
+ static __maybe_unused struct attribute *hv_gpci_event_attrs[] = {
+ #include REQUEST_FILE
+ 	NULL
+diff --git a/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c b/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
+index 7bb42a0100de6..caaaaf2bea522 100644
+--- a/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
++++ b/arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
+@@ -531,6 +531,7 @@ static int mpc52xx_lpbfifo_probe(struct platform_device *op)
+  err_bcom_rx_irq:
+ 	bcom_gen_bd_rx_release(lpbfifo.bcom_rx_task);
+  err_bcom_rx:
++	free_irq(lpbfifo.irq, &lpbfifo);
+  err_irq:
+ 	iounmap(lpbfifo.regs);
+ 	lpbfifo.regs = NULL;
+diff --git a/arch/powerpc/platforms/83xx/mpc832x_rdb.c b/arch/powerpc/platforms/83xx/mpc832x_rdb.c
+index 4389865938737..bfebfc67b23f5 100644
+--- a/arch/powerpc/platforms/83xx/mpc832x_rdb.c
++++ b/arch/powerpc/platforms/83xx/mpc832x_rdb.c
+@@ -111,7 +111,7 @@ static int __init of_fsl_spi_probe(char *type, char *compatible, u32 sysclk,
+ 
+ 		goto next;
+ unreg:
+-		platform_device_del(pdev);
++		platform_device_put(pdev);
+ err:
+ 		pr_err("%pOF: registration failed\n", np);
+ next:
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index aa705732150cc..ac1eb674f9d67 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -389,6 +389,7 @@ static int xive_spapr_populate_irq_data(u32 hw_irq, struct xive_irq_data *data)
+ 
+ 	data->trig_mmio = ioremap(data->trig_page, 1u << data->esb_shift);
+ 	if (!data->trig_mmio) {
++		iounmap(data->eoi_mmio);
+ 		pr_err("Failed to map trigger page for irq 0x%x\n", hw_irq);
+ 		return -ENOMEM;
+ 	}
+diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
+index f0ea3156192d0..30dd2bf865f4e 100644
+--- a/arch/riscv/include/asm/uaccess.h
++++ b/arch/riscv/include/asm/uaccess.h
+@@ -260,7 +260,7 @@ do {								\
+ 	might_fault();						\
+ 	access_ok(VERIFY_READ, __p, sizeof(*__p)) ?		\
+ 		__get_user((x), __p) :				\
+-		((x) = 0, -EFAULT);				\
++		((x) = (__force __typeof__(x))0, -EFAULT);	\
+ })
+ 
+ #define __put_user_asm(insn, x, ptr, err)			\
+diff --git a/arch/s390/include/asm/percpu.h b/arch/s390/include/asm/percpu.h
+index 50f6661ba5664..026a94b03a3c5 100644
+--- a/arch/s390/include/asm/percpu.h
++++ b/arch/s390/include/asm/percpu.h
+@@ -31,7 +31,7 @@
+ 	pcp_op_T__ *ptr__;						\
+ 	preempt_disable_notrace();					\
+ 	ptr__ = raw_cpu_ptr(&(pcp));					\
+-	prev__ = *ptr__;						\
++	prev__ = READ_ONCE(*ptr__);					\
+ 	do {								\
+ 		old__ = prev__;						\
+ 		new__ = old__ op (val);					\
+diff --git a/arch/x86/boot/bioscall.S b/arch/x86/boot/bioscall.S
+index d401b4a262b09..a860390852ffc 100644
+--- a/arch/x86/boot/bioscall.S
++++ b/arch/x86/boot/bioscall.S
+@@ -35,7 +35,7 @@ intcall:
+ 	movw	%dx, %si
+ 	movw	%sp, %di
+ 	movw	$11, %cx
+-	rep; movsd
++	rep; movsl
+ 
+ 	/* Pop full state from the stack */
+ 	popal
+@@ -70,7 +70,7 @@ intcall:
+ 	jz	4f
+ 	movw	%sp, %si
+ 	movw	$11, %cx
+-	rep; movsd
++	rep; movsl
+ 4:	addw	$44, %sp
+ 
+ 	/* Restore state and return */
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 2bf1170f7afdb..34da6d27d839a 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -2699,6 +2699,7 @@ static bool hswep_has_limit_sbox(unsigned int device)
+ 		return false;
+ 
+ 	pci_read_config_dword(dev, HSWEP_PCU_CAPID4_OFFET, &capid4);
++	pci_dev_put(dev);
+ 	if (!hswep_get_chop(capid4))
+ 		return true;
+ 
+diff --git a/arch/x86/ia32/ia32_aout.c b/arch/x86/ia32/ia32_aout.c
+index 3ebd77770f98b..4cfda3cfae7ff 100644
+--- a/arch/x86/ia32/ia32_aout.c
++++ b/arch/x86/ia32/ia32_aout.c
+@@ -298,6 +298,7 @@ static int load_aout_binary(struct linux_binprm *bprm)
+ 	set_personality_ia32(false);
+ 
+ 	setup_new_exec(bprm);
++	install_exec_creds(bprm);
+ 
+ 	regs->cs = __USER32_CS;
+ 	regs->r8 = regs->r9 = regs->r10 = regs->r11 = regs->r12 =
+@@ -314,8 +315,6 @@ static int load_aout_binary(struct linux_binprm *bprm)
+ 	if (retval < 0)
+ 		return retval;
+ 
+-	install_exec_creds(bprm);
+-
+ 	if (N_MAGIC(ex) == OMAGIC) {
+ 		unsigned long text_addr, map_size;
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index f1f57acca1396..e298ec9d5d536 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1778,6 +1778,8 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+ 		if (ctrl == PR_SPEC_FORCE_DISABLE)
+ 			task_set_spec_ib_force_disable(task);
+ 		task_update_spec_tif(task);
++		if (task == current)
++			indirect_branch_prediction_barrier();
+ 		break;
+ 	default:
+ 		return -ERANGE;
+diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+index f406e3b85bdb2..1125f752f1269 100644
+--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
++++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+@@ -585,8 +585,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
+ 	/*
+ 	 * Ensure the task's closid and rmid are written before determining if
+ 	 * the task is current that will decide if it will be interrupted.
++	 * This pairs with the full barrier between the rq->curr update and
++	 * resctrl_sched_in() during context switch.
+ 	 */
+-	barrier();
++	smp_mb();
+ 
+ 	/*
+ 	 * By now, the task's closid and rmid are set. If the task is current
+@@ -2140,19 +2142,23 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
+ 			t->closid = to->closid;
+ 			t->rmid = to->mon.rmid;
+ 
+-#ifdef CONFIG_SMP
+ 			/*
+-			 * This is safe on x86 w/o barriers as the ordering
+-			 * of writing to task_cpu() and t->on_cpu is
+-			 * reverse to the reading here. The detection is
+-			 * inaccurate as tasks might move or schedule
+-			 * before the smp function call takes place. In
+-			 * such a case the function call is pointless, but
++			 * Order the closid/rmid stores above before the loads
++			 * in task_curr(). This pairs with the full barrier
++			 * between the rq->curr update and resctrl_sched_in()
++			 * during context switch.
++			 */
++			smp_mb();
++
++			/*
++			 * If the task is on a CPU, set the CPU in the mask.
++			 * The detection is inaccurate as tasks might move or
++			 * schedule before the smp function call takes place.
++			 * In such a case the function call is pointless, but
+ 			 * there is no other side effect.
+ 			 */
+-			if (mask && t->on_cpu)
++			if (IS_ENABLED(CONFIG_SMP) && mask && task_curr(t))
+ 				cpumask_set_cpu(task_cpu(t), mask);
+-#endif
+ 		}
+ 	}
+ 	read_unlock(&tasklist_lock);
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 3aa0e5a453030..31ad79a0c6f32 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -662,7 +662,6 @@ void load_ucode_intel_ap(void)
+ 	else
+ 		iup = &intel_ucode_patch;
+ 
+-reget:
+ 	if (!*iup) {
+ 		patch = __load_ucode_intel(&uci);
+ 		if (!patch)
+@@ -673,12 +672,7 @@ reget:
+ 
+ 	uci.mc = *iup;
+ 
+-	if (apply_microcode_early(&uci, true)) {
+-		/* Mixed-silicon system? Try to refetch the proper patch: */
+-		*iup = NULL;
+-
+-		goto reget;
+-	}
++	apply_microcode_early(&uci, true);
+ }
+ 
+ static struct microcode_intel *find_patch(struct ucode_cpu_info *uci)
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index ae9e806a11def..3b87bca027b41 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -735,8 +735,9 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+ 	switch (opc1) {
+ 	case 0xeb:	/* jmp 8 */
+ 	case 0xe9:	/* jmp 32 */
+-	case 0x90:	/* prefix* + nop; same as jmp with .offs = 0 */
+ 		break;
++	case 0x90:	/* prefix* + nop; same as jmp with .offs = 0 */
++		goto setup;
+ 
+ 	case 0xe8:	/* call relative */
+ 		branch_clear_offset(auprobe, insn);
+@@ -765,6 +766,7 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+ 			return -ENOTSUPP;
+ 	}
+ 
++setup:
+ 	auprobe->branch.opc1 = opc1;
+ 	auprobe->branch.ilen = insn->length;
+ 	auprobe->branch.offs = insn->immediate.value;
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index 7a43b2ae19f12..a1cc855c539c1 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -32,30 +32,30 @@ static irqreturn_t xen_reschedule_interrupt(int irq, void *dev_id)
+ 
+ void xen_smp_intr_free(unsigned int cpu)
+ {
++	kfree(per_cpu(xen_resched_irq, cpu).name);
++	per_cpu(xen_resched_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_resched_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_resched_irq, cpu).irq, NULL);
+ 		per_cpu(xen_resched_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_resched_irq, cpu).name);
+-		per_cpu(xen_resched_irq, cpu).name = NULL;
+ 	}
++	kfree(per_cpu(xen_callfunc_irq, cpu).name);
++	per_cpu(xen_callfunc_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_callfunc_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_callfunc_irq, cpu).irq, NULL);
+ 		per_cpu(xen_callfunc_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_callfunc_irq, cpu).name);
+-		per_cpu(xen_callfunc_irq, cpu).name = NULL;
+ 	}
++	kfree(per_cpu(xen_debug_irq, cpu).name);
++	per_cpu(xen_debug_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_debug_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_debug_irq, cpu).irq, NULL);
+ 		per_cpu(xen_debug_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_debug_irq, cpu).name);
+-		per_cpu(xen_debug_irq, cpu).name = NULL;
+ 	}
++	kfree(per_cpu(xen_callfuncsingle_irq, cpu).name);
++	per_cpu(xen_callfuncsingle_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_callfuncsingle_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_callfuncsingle_irq, cpu).irq,
+ 				       NULL);
+ 		per_cpu(xen_callfuncsingle_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_callfuncsingle_irq, cpu).name);
+-		per_cpu(xen_callfuncsingle_irq, cpu).name = NULL;
+ 	}
+ }
+ 
+@@ -65,6 +65,7 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	char *resched_name, *callfunc_name, *debug_name;
+ 
+ 	resched_name = kasprintf(GFP_KERNEL, "resched%d", cpu);
++	per_cpu(xen_resched_irq, cpu).name = resched_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_RESCHEDULE_VECTOR,
+ 				    cpu,
+ 				    xen_reschedule_interrupt,
+@@ -74,9 +75,9 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_resched_irq, cpu).irq = rc;
+-	per_cpu(xen_resched_irq, cpu).name = resched_name;
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfunc%d", cpu);
++	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR,
+ 				    cpu,
+ 				    xen_call_function_interrupt,
+@@ -86,18 +87,21 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_callfunc_irq, cpu).irq = rc;
+-	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
+ 
+-	debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
+-	rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu, xen_debug_interrupt,
+-				     IRQF_PERCPU | IRQF_NOBALANCING,
+-				     debug_name, NULL);
+-	if (rc < 0)
+-		goto fail;
+-	per_cpu(xen_debug_irq, cpu).irq = rc;
+-	per_cpu(xen_debug_irq, cpu).name = debug_name;
++	if (!xen_fifo_events) {
++		debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
++		per_cpu(xen_debug_irq, cpu).name = debug_name;
++		rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
++					     xen_debug_interrupt,
++					     IRQF_PERCPU | IRQF_NOBALANCING,
++					     debug_name, NULL);
++		if (rc < 0)
++			goto fail;
++		per_cpu(xen_debug_irq, cpu).irq = rc;
++	}
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
++	per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
+ 				    cpu,
+ 				    xen_call_function_single_interrupt,
+@@ -107,7 +111,6 @@ int xen_smp_intr_init(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_callfuncsingle_irq, cpu).irq = rc;
+-	per_cpu(xen_callfuncsingle_irq, cpu).name = callfunc_name;
+ 
+ 	return 0;
+ 
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index e8248e8a04ca7..75807c2a1e175 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -94,18 +94,18 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
+ 
+ void xen_smp_intr_free_pv(unsigned int cpu)
+ {
++	kfree(per_cpu(xen_irq_work, cpu).name);
++	per_cpu(xen_irq_work, cpu).name = NULL;
+ 	if (per_cpu(xen_irq_work, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_irq_work, cpu).irq, NULL);
+ 		per_cpu(xen_irq_work, cpu).irq = -1;
+-		kfree(per_cpu(xen_irq_work, cpu).name);
+-		per_cpu(xen_irq_work, cpu).name = NULL;
+ 	}
+ 
++	kfree(per_cpu(xen_pmu_irq, cpu).name);
++	per_cpu(xen_pmu_irq, cpu).name = NULL;
+ 	if (per_cpu(xen_pmu_irq, cpu).irq >= 0) {
+ 		unbind_from_irqhandler(per_cpu(xen_pmu_irq, cpu).irq, NULL);
+ 		per_cpu(xen_pmu_irq, cpu).irq = -1;
+-		kfree(per_cpu(xen_pmu_irq, cpu).name);
+-		per_cpu(xen_pmu_irq, cpu).name = NULL;
+ 	}
+ }
+ 
+@@ -115,6 +115,7 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	char *callfunc_name, *pmu_name;
+ 
+ 	callfunc_name = kasprintf(GFP_KERNEL, "irqwork%d", cpu);
++	per_cpu(xen_irq_work, cpu).name = callfunc_name;
+ 	rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,
+ 				    cpu,
+ 				    xen_irq_work_interrupt,
+@@ -124,10 +125,10 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 	if (rc < 0)
+ 		goto fail;
+ 	per_cpu(xen_irq_work, cpu).irq = rc;
+-	per_cpu(xen_irq_work, cpu).name = callfunc_name;
+ 
+ 	if (is_xen_pmu) {
+ 		pmu_name = kasprintf(GFP_KERNEL, "pmu%d", cpu);
++		per_cpu(xen_pmu_irq, cpu).name = pmu_name;
+ 		rc = bind_virq_to_irqhandler(VIRQ_XENPMU, cpu,
+ 					     xen_pmu_irq_handler,
+ 					     IRQF_PERCPU|IRQF_NOBALANCING,
+@@ -135,7 +136,6 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ 		if (rc < 0)
+ 			goto fail;
+ 		per_cpu(xen_pmu_irq, cpu).irq = rc;
+-		per_cpu(xen_pmu_irq, cpu).name = pmu_name;
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
+index 6fffb86a32add..e6cf1b430fd03 100644
+--- a/arch/x86/xen/spinlock.c
++++ b/arch/x86/xen/spinlock.c
+@@ -83,6 +83,7 @@ void xen_init_lock_cpu(int cpu)
+ 	     cpu, per_cpu(lock_kicker_irq, cpu));
+ 
+ 	name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
++	per_cpu(irq_name, cpu) = name;
+ 	irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
+ 				     cpu,
+ 				     dummy_handler,
+@@ -93,7 +94,6 @@ void xen_init_lock_cpu(int cpu)
+ 	if (irq >= 0) {
+ 		disable_irq(irq); /* make sure it's never delivered */
+ 		per_cpu(lock_kicker_irq, cpu) = irq;
+-		per_cpu(irq_name, cpu) = name;
+ 	}
+ 
+ 	printk("cpu %d spinlock event irq %d\n", cpu, irq);
+@@ -106,6 +106,8 @@ void xen_uninit_lock_cpu(int cpu)
+ 	if (!xen_pvspin)
+ 		return;
+ 
++	kfree(per_cpu(irq_name, cpu));
++	per_cpu(irq_name, cpu) = NULL;
+ 	/*
+ 	 * When booting the kernel with 'mitigations=auto,nosmt', the secondary
+ 	 * CPUs are not activated, and lock_kicker_irq is not initialized.
+@@ -116,8 +118,6 @@ void xen_uninit_lock_cpu(int cpu)
+ 
+ 	unbind_from_irqhandler(irq, NULL);
+ 	per_cpu(lock_kicker_irq, cpu) = -1;
+-	kfree(per_cpu(irq_name, cpu));
+-	per_cpu(irq_name, cpu) = NULL;
+ }
+ 
+ PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen);
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 2f111f47ba98c..9faec8543237a 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -30,6 +30,8 @@ extern struct start_info *xen_start_info;
+ extern struct shared_info xen_dummy_shared_info;
+ extern struct shared_info *HYPERVISOR_shared_info;
+ 
++extern bool xen_fifo_events;
++
+ void xen_setup_mfn_list_list(void);
+ void xen_build_mfn_list_list(void);
+ void xen_setup_machphys_mapping(void);
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 5e4b7ed1e897a..ae03e63ffbfdd 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -241,7 +241,7 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct request_queue *q = hctx->queue;
+ 	struct blk_mq_ctx *ctx;
+-	int i, ret;
++	int i, j, ret;
+ 
+ 	if (!hctx->nr_ctx)
+ 		return 0;
+@@ -253,9 +253,16 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
+ 	hctx_for_each_ctx(hctx, ctx, i) {
+ 		ret = kobject_add(&ctx->kobj, &hctx->kobj, "cpu%u", ctx->cpu);
+ 		if (ret)
+-			break;
++			goto out;
+ 	}
+ 
++	return 0;
++out:
++	hctx_for_each_ctx(hctx, ctx, j) {
++		if (j < i)
++			kobject_del(&ctx->kobj);
++	}
++	kobject_del(&hctx->kobj);
+ 	return ret;
+ }
+ 
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index 98d60a59b843c..871f528dcb131 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -276,6 +276,7 @@ void delete_partition(struct gendisk *disk, int partno)
+ 	struct disk_part_tbl *ptbl =
+ 		rcu_dereference_protected(disk->part_tbl, 1);
+ 	struct hd_struct *part;
++	struct block_device *bdev;
+ 
+ 	if (partno >= ptbl->len)
+ 		return;
+@@ -296,6 +297,12 @@ void delete_partition(struct gendisk *disk, int partno)
+ 	 * "in-use" until we really free the gendisk.
+ 	 */
+ 	blk_invalidate_devt(part_devt(part));
++
++	bdev = bdget(part_devt(part));
++	if (bdev) {
++		remove_inode_hash(bdev->bd_inode);
++		bdput(bdev);
++	}
+ 	hd_struct_kill(part);
+ }
+ 
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index bf797c613ba2d..366f4510acbe4 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1285,15 +1285,6 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
+ 			goto out_free_tfm;
+ 		}
+ 
+-
+-	for (i = 0; i < num_mb; ++i)
+-		if (testmgr_alloc_buf(data[i].xbuf)) {
+-			while (i--)
+-				testmgr_free_buf(data[i].xbuf);
+-			goto out_free_tfm;
+-		}
+-
+-
+ 	for (i = 0; i < num_mb; ++i) {
+ 		data[i].req = skcipher_request_alloc(tfm, GFP_KERNEL);
+ 		if (!data[i].req) {
+diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
+index dd4deb678d13e..a00516d9538c2 100644
+--- a/drivers/acpi/acpica/dsmethod.c
++++ b/drivers/acpi/acpica/dsmethod.c
+@@ -517,7 +517,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 	info = ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_evaluate_info));
+ 	if (!info) {
+ 		status = AE_NO_MEMORY;
+-		goto cleanup;
++		goto pop_walk_state;
+ 	}
+ 
+ 	info->parameters = &this_walk_state->operands[0];
+@@ -529,7 +529,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 
+ 	ACPI_FREE(info);
+ 	if (ACPI_FAILURE(status)) {
+-		goto cleanup;
++		goto pop_walk_state;
+ 	}
+ 
+ 	/*
+@@ -561,6 +561,12 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ 
+ 	return_ACPI_STATUS(status);
+ 
++pop_walk_state:
++
++	/* On error, pop the walk state to be deleted from thread */
++
++	acpi_ds_pop_walk_state(thread);
++
+ cleanup:
+ 
+ 	/* On error, we must terminate the method properly */
+diff --git a/drivers/acpi/acpica/utcopy.c b/drivers/acpi/acpica/utcopy.c
+index a872ed7879ca7..056c1741c1e39 100644
+--- a/drivers/acpi/acpica/utcopy.c
++++ b/drivers/acpi/acpica/utcopy.c
+@@ -916,13 +916,6 @@ acpi_ut_copy_ipackage_to_ipackage(union acpi_operand_object *source_obj,
+ 	status = acpi_ut_walk_package_tree(source_obj, dest_obj,
+ 					   acpi_ut_copy_ielement_to_ielement,
+ 					   walk_state);
+-	if (ACPI_FAILURE(status)) {
+-
+-		/* On failure, delete the destination package object */
+-
+-		acpi_ut_remove_reference(dest_obj);
+-	}
+-
+ 	return_ACPI_STATUS(status);
+ }
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 505920d4530f8..13fb983b3413e 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -97,6 +97,7 @@ enum board_ids {
+ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
+ static void ahci_remove_one(struct pci_dev *dev);
+ static void ahci_shutdown_one(struct pci_dev *dev);
++static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hpriv);
+ static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
+ 				 unsigned long deadline);
+ static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
+@@ -655,6 +656,25 @@ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
+ 	ahci_save_initial_config(&pdev->dev, hpriv);
+ }
+ 
++static int ahci_pci_reset_controller(struct ata_host *host)
++{
++	struct pci_dev *pdev = to_pci_dev(host->dev);
++	struct ahci_host_priv *hpriv = host->private_data;
++	int rc;
++
++	rc = ahci_reset_controller(host);
++	if (rc)
++		return rc;
++
++	/*
++	 * If platform firmware failed to enable ports, try to enable
++	 * them here.
++	 */
++	ahci_intel_pcs_quirk(pdev, hpriv);
++
++	return 0;
++}
++
+ static void ahci_pci_init_controller(struct ata_host *host)
+ {
+ 	struct ahci_host_priv *hpriv = host->private_data;
+@@ -857,7 +877,7 @@ static int ahci_pci_device_runtime_resume(struct device *dev)
+ 	struct ata_host *host = pci_get_drvdata(pdev);
+ 	int rc;
+ 
+-	rc = ahci_reset_controller(host);
++	rc = ahci_pci_reset_controller(host);
+ 	if (rc)
+ 		return rc;
+ 	ahci_pci_init_controller(host);
+@@ -892,7 +912,7 @@ static int ahci_pci_device_resume(struct device *dev)
+ 		ahci_mcp89_apple_enable(pdev);
+ 
+ 	if (pdev->dev.power.power_state.event == PM_EVENT_SUSPEND) {
+-		rc = ahci_reset_controller(host);
++		rc = ahci_pci_reset_controller(host);
+ 		if (rc)
+ 			return rc;
+ 
+@@ -1769,12 +1789,6 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* save initial config */
+ 	ahci_pci_save_initial_config(pdev, hpriv);
+ 
+-	/*
+-	 * If platform firmware failed to enable ports, try to enable
+-	 * them here.
+-	 */
+-	ahci_intel_pcs_quirk(pdev, hpriv);
+-
+ 	/* prepare host */
+ 	if (hpriv->cap & HOST_CAP_NCQ) {
+ 		pi.flags |= ATA_FLAG_NCQ;
+@@ -1884,7 +1898,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = ahci_reset_controller(host);
++	rc = ahci_pci_reset_controller(host);
+ 	if (rc)
+ 		return rc;
+ 
+diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
+index 867621f8c3873..8c34c64e4d518 100644
+--- a/drivers/ata/pata_ixp4xx_cf.c
++++ b/drivers/ata/pata_ixp4xx_cf.c
+@@ -139,12 +139,12 @@ static void ixp4xx_setup_port(struct ata_port *ap,
+ 
+ static int ixp4xx_pata_probe(struct platform_device *pdev)
+ {
+-	unsigned int irq;
+ 	struct resource *cs0, *cs1;
+ 	struct ata_host *host;
+ 	struct ata_port *ap;
+ 	struct ixp4xx_pata_data *data = dev_get_platdata(&pdev->dev);
+ 	int ret;
++	int irq;
+ 
+ 	cs0 = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	cs1 = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+diff --git a/drivers/base/class.c b/drivers/base/class.c
+index 54def4e02f005..d36cdbeaa2c83 100644
+--- a/drivers/base/class.c
++++ b/drivers/base/class.c
+@@ -185,6 +185,11 @@ int __class_register(struct class *cls, struct lock_class_key *key)
+ 	}
+ 	error = class_add_groups(class_get(cls), cls->class_groups);
+ 	class_put(cls);
++	if (error) {
++		kobject_del(&cp->subsys.kobj);
++		kfree_const(cp->subsys.kobj.name);
++		kfree(cp);
++	}
+ 	return error;
+ }
+ EXPORT_SYMBOL_GPL(__class_register);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 63390a416b445..6a2aa85624715 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -220,7 +220,16 @@ static int deferred_devs_show(struct seq_file *s, void *data)
+ }
+ DEFINE_SHOW_ATTRIBUTE(deferred_devs);
+ 
++#ifdef CONFIG_MODULES
++/*
++ * In the case of modules, set the default probe timeout to
++ * 30 seconds to give userland some time to load needed modules
++ */
++static int deferred_probe_timeout = 30;
++#else
++/* In the case of !modules, no probe timeout needed */
+ static int deferred_probe_timeout = -1;
++#endif
+ static int __init deferred_probe_timeout_setup(char *str)
+ {
+ 	deferred_probe_timeout = simple_strtol(str, NULL, 10);
+@@ -902,8 +911,12 @@ static int __driver_attach(struct device *dev, void *data)
+ 		 */
+ 		return 0;
+ 	} else if (ret < 0) {
+-		dev_dbg(dev, "Bus failed to match device: %d", ret);
+-		return ret;
++		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
++		/*
++		 * Driver could not match with device, but may match with
++		 * another device on the bus.
++		 */
++		return 0;
+ 	} /* ret > 0 means positive match */
+ 
+ 	if (dev->parent && dev->bus->need_parent_lock)
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index eaae4adf9ce4b..911bb8a4bf6df 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -403,7 +403,10 @@ static int rpm_idle(struct device *dev, int rpmflags)
+ 	/* Pending requests need to be canceled. */
+ 	dev->power.request = RPM_REQ_NONE;
+ 
+-	if (dev->power.no_callbacks)
++	callback = RPM_GET_CALLBACK(dev, runtime_idle);
++
++	/* If no callback assume success. */
++	if (!callback || dev->power.no_callbacks)
+ 		goto out;
+ 
+ 	/* Carry out an asynchronous or a synchronous idle notification. */
+@@ -419,10 +422,17 @@ static int rpm_idle(struct device *dev, int rpmflags)
+ 
+ 	dev->power.idle_notification = true;
+ 
+-	callback = RPM_GET_CALLBACK(dev, runtime_idle);
++	if (dev->power.irq_safe)
++		spin_unlock(&dev->power.lock);
++	else
++		spin_unlock_irq(&dev->power.lock);
++
++	retval = callback(dev);
+ 
+-	if (callback)
+-		retval = __rpm_callback(callback, dev);
++	if (dev->power.irq_safe)
++		spin_lock(&dev->power.lock);
++	else
++		spin_lock_irq(&dev->power.lock);
+ 
+ 	dev->power.idle_notification = false;
+ 	wake_up_all(&dev->power.wait_queue);
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 3ae718aa6b39f..1ff5af6c4f3fd 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -2258,7 +2258,6 @@ void drbd_destroy_device(struct kref *kref)
+ 		kref_put(&peer_device->connection->kref, drbd_destroy_connection);
+ 		kfree(peer_device);
+ 	}
+-	memset(device, 0xfd, sizeof(*device));
+ 	kfree(device);
+ 	kref_put(&resource->kref, drbd_destroy_resource);
+ }
+@@ -2351,7 +2350,6 @@ void drbd_destroy_resource(struct kref *kref)
+ 	idr_destroy(&resource->devices);
+ 	free_cpumask_var(resource->cpu_mask);
+ 	kfree(resource->name);
+-	memset(resource, 0xf2, sizeof(*resource));
+ 	kfree(resource);
+ }
+ 
+@@ -2748,7 +2746,6 @@ void drbd_destroy_connection(struct kref *kref)
+ 	drbd_free_socket(&connection->data);
+ 	kfree(connection->int_dig_in);
+ 	kfree(connection->int_dig_vv);
+-	memset(connection, 0xfc, sizeof(*connection));
+ 	kfree(connection);
+ 	kref_put(&resource->kref, drbd_destroy_resource);
+ }
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 7188f0fb2e050..b6eb48e44e6b1 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -501,13 +501,13 @@ static inline void btusb_free_frags(struct btusb_data *data)
+ 
+ 	spin_lock_irqsave(&data->rxlock, flags);
+ 
+-	kfree_skb(data->evt_skb);
++	dev_kfree_skb_irq(data->evt_skb);
+ 	data->evt_skb = NULL;
+ 
+-	kfree_skb(data->acl_skb);
++	dev_kfree_skb_irq(data->acl_skb);
+ 	data->acl_skb = NULL;
+ 
+-	kfree_skb(data->sco_skb);
++	dev_kfree_skb_irq(data->sco_skb);
+ 	data->sco_skb = NULL;
+ 
+ 	spin_unlock_irqrestore(&data->rxlock, flags);
+diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
+index 27829273f3c91..f6fa0a40dc373 100644
+--- a/drivers/bluetooth/hci_bcsp.c
++++ b/drivers/bluetooth/hci_bcsp.c
+@@ -393,7 +393,7 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
+ 		i++;
+ 
+ 		__skb_unlink(skb, &bcsp->unack);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 	}
+ 
+ 	if (skb_queue_empty(&bcsp->unack))
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 79b96251de806..29c80e2acb05d 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -311,7 +311,7 @@ static void h5_pkt_cull(struct h5 *h5)
+ 			break;
+ 
+ 		__skb_unlink(skb, &h5->unack);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 	}
+ 
+ 	if (skb_queue_empty(&h5->unack))
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index f96e58de049b3..f8b6823db04ac 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -810,7 +810,7 @@ static int qca_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ 	default:
+ 		BT_ERR("Illegal tx state: %d (losing packet)",
+ 		       qca->tx_ibs_state);
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
+index 9959c762da2f8..db3dd467194c2 100644
+--- a/drivers/char/hw_random/amd-rng.c
++++ b/drivers/char/hw_random/amd-rng.c
+@@ -143,15 +143,19 @@ static int __init mod_init(void)
+ found:
+ 	err = pci_read_config_dword(pdev, 0x58, &pmbase);
+ 	if (err)
+-		return err;
++		goto put_dev;
+ 
+ 	pmbase &= 0x0000FF00;
+-	if (pmbase == 0)
+-		return -EIO;
++	if (pmbase == 0) {
++		err = -EIO;
++		goto put_dev;
++	}
+ 
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
++	if (!priv) {
++		err = -ENOMEM;
++		goto put_dev;
++	}
+ 
+ 	if (!request_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE, DRV_NAME)) {
+ 		dev_err(&pdev->dev, DRV_NAME " region 0x%x already in use!\n",
+@@ -185,6 +189,8 @@ err_iomap:
+ 	release_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE);
+ out:
+ 	kfree(priv);
++put_dev:
++	pci_dev_put(pdev);
+ 	return err;
+ }
+ 
+@@ -200,6 +206,8 @@ static void __exit mod_exit(void)
+ 
+ 	release_region(priv->pmbase + PMBASE_OFFSET, PMBASE_SIZE);
+ 
++	pci_dev_put(priv->pcidev);
++
+ 	kfree(priv);
+ }
+ 
+diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
+index e1d421a36a138..207272979f233 100644
+--- a/drivers/char/hw_random/geode-rng.c
++++ b/drivers/char/hw_random/geode-rng.c
+@@ -51,6 +51,10 @@ static const struct pci_device_id pci_tbl[] = {
+ };
+ MODULE_DEVICE_TABLE(pci, pci_tbl);
+ 
++struct amd_geode_priv {
++	struct pci_dev *pcidev;
++	void __iomem *membase;
++};
+ 
+ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
+ {
+@@ -90,6 +94,7 @@ static int __init mod_init(void)
+ 	const struct pci_device_id *ent;
+ 	void __iomem *mem;
+ 	unsigned long rng_base;
++	struct amd_geode_priv *priv;
+ 
+ 	for_each_pci_dev(pdev) {
+ 		ent = pci_match_id(pci_tbl, pdev);
+@@ -97,17 +102,26 @@ static int __init mod_init(void)
+ 			goto found;
+ 	}
+ 	/* Device not found. */
+-	goto out;
++	return err;
+ 
+ found:
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		err = -ENOMEM;
++		goto put_dev;
++	}
++
+ 	rng_base = pci_resource_start(pdev, 0);
+ 	if (rng_base == 0)
+-		goto out;
++		goto free_priv;
+ 	err = -ENOMEM;
+ 	mem = ioremap(rng_base, 0x58);
+ 	if (!mem)
+-		goto out;
+-	geode_rng.priv = (unsigned long)mem;
++		goto free_priv;
++
++	geode_rng.priv = (unsigned long)priv;
++	priv->membase = mem;
++	priv->pcidev = pdev;
+ 
+ 	pr_info("AMD Geode RNG detected\n");
+ 	err = hwrng_register(&geode_rng);
+@@ -116,20 +130,26 @@ found:
+ 		       err);
+ 		goto err_unmap;
+ 	}
+-out:
+ 	return err;
+ 
+ err_unmap:
+ 	iounmap(mem);
+-	goto out;
++free_priv:
++	kfree(priv);
++put_dev:
++	pci_dev_put(pdev);
++	return err;
+ }
+ 
+ static void __exit mod_exit(void)
+ {
+-	void __iomem *mem = (void __iomem *)geode_rng.priv;
++	struct amd_geode_priv *priv;
+ 
++	priv = (struct amd_geode_priv *)geode_rng.priv;
+ 	hwrng_unregister(&geode_rng);
+-	iounmap(mem);
++	iounmap(priv->membase);
++	pci_dev_put(priv->pcidev);
++	kfree(priv);
+ }
+ 
+ module_init(mod_init);
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 4cf3ef4ddec35..4265e8d3e71c5 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -1219,6 +1219,7 @@ static void _ipmi_destroy_user(struct ipmi_user *user)
+ 	unsigned long    flags;
+ 	struct cmd_rcvr  *rcvr;
+ 	struct cmd_rcvr  *rcvrs = NULL;
++	struct module    *owner;
+ 
+ 	if (!acquire_ipmi_user(user, &i)) {
+ 		/*
+@@ -1278,8 +1279,9 @@ static void _ipmi_destroy_user(struct ipmi_user *user)
+ 		kfree(rcvr);
+ 	}
+ 
++	owner = intf->owner;
+ 	kref_put(&intf->refcount, intf_free);
+-	module_put(intf->owner);
++	module_put(owner);
+ }
+ 
+ int ipmi_destroy_user(struct ipmi_user *user)
+@@ -3461,12 +3463,16 @@ static void deliver_smi_err_response(struct ipmi_smi *intf,
+ 				     struct ipmi_smi_msg *msg,
+ 				     unsigned char err)
+ {
++	int rv;
+ 	msg->rsp[0] = msg->data[0] | 4;
+ 	msg->rsp[1] = msg->data[1];
+ 	msg->rsp[2] = err;
+ 	msg->rsp_size = 3;
+-	/* It's an error, so it will never requeue, no need to check return. */
+-	handle_one_recv_msg(intf, msg);
++
++	/* This will never requeue, but it may ask us to free the message. */
++	rv = handle_one_recv_msg(intf, msg);
++	if (rv == 0)
++		ipmi_free_smi_msg(msg);
+ }
+ 
+ static void cleanup_smi_msgs(struct ipmi_smi *intf)
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 006d765256782..a5e1dce042e8e 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2187,6 +2187,20 @@ skip_fallback_noirq:
+ }
+ module_init(init_ipmi_si);
+ 
++static void wait_msg_processed(struct smi_info *smi_info)
++{
++	unsigned long jiffies_now;
++	long time_diff;
++
++	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
++		jiffies_now = jiffies;
++		time_diff = (((long)jiffies_now - (long)smi_info->last_timeout_jiffies)
++		     * SI_USEC_PER_JIFFY);
++		smi_event_handler(smi_info, time_diff);
++		schedule_timeout_uninterruptible(1);
++	}
++}
++
+ static void shutdown_smi(void *send_info)
+ {
+ 	struct smi_info *smi_info = send_info;
+@@ -2221,16 +2235,13 @@ static void shutdown_smi(void *send_info)
+ 	 * in the BMC.  Note that timers and CPU interrupts are off,
+ 	 * so no need for locks.
+ 	 */
+-	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
+-		poll(smi_info);
+-		schedule_timeout_uninterruptible(1);
+-	}
++	wait_msg_processed(smi_info);
++
+ 	if (smi_info->handlers)
+ 		disable_si_irq(smi_info);
+-	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
+-		poll(smi_info);
+-		schedule_timeout_uninterruptible(1);
+-	}
++
++	wait_msg_processed(smi_info);
++
+ 	if (smi_info->handlers)
+ 		smi_info->handlers->cleanup(smi_info->si_sm);
+ 
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 20f27100708bd..59cd7009277bf 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -256,7 +256,7 @@ static int __crb_relinquish_locality(struct device *dev,
+ 	iowrite32(CRB_LOC_CTRL_RELINQUISH, &priv->regs_h->loc_ctrl);
+ 	if (!crb_wait_for_reg_32(&priv->regs_h->loc_state, mask, value,
+ 				 TPM2_TIMEOUT_C)) {
+-		dev_warn(dev, "TPM_LOC_STATE_x.requestAccess timed out\n");
++		dev_warn(dev, "TPM_LOC_STATE_x.Relinquish timed out\n");
+ 		return -ETIME;
+ 	}
+ 
+@@ -680,12 +680,16 @@ static int crb_acpi_add(struct acpi_device *device)
+ 
+ 	/* Should the FIFO driver handle this? */
+ 	sm = buf->start_method;
+-	if (sm == ACPI_TPM2_MEMORY_MAPPED)
+-		return -ENODEV;
++	if (sm == ACPI_TPM2_MEMORY_MAPPED) {
++		rc = -ENODEV;
++		goto out;
++	}
+ 
+ 	priv = devm_kzalloc(dev, sizeof(struct crb_priv), GFP_KERNEL);
+-	if (!priv)
+-		return -ENOMEM;
++	if (!priv) {
++		rc = -ENOMEM;
++		goto out;
++	}
+ 
+ 	if (sm == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC) {
+ 		if (buf->header.length < (sizeof(*buf) + sizeof(*crb_smc))) {
+@@ -693,7 +697,8 @@ static int crb_acpi_add(struct acpi_device *device)
+ 				FW_BUG "TPM2 ACPI table has wrong size %u for start method type %d\n",
+ 				buf->header.length,
+ 				ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC);
+-			return -EINVAL;
++			rc = -EINVAL;
++			goto out;
+ 		}
+ 		crb_smc = ACPI_ADD_PTR(struct tpm2_crb_smc, buf, sizeof(*buf));
+ 		priv->smc_func_id = crb_smc->smc_func_id;
+@@ -704,17 +709,23 @@ static int crb_acpi_add(struct acpi_device *device)
+ 
+ 	rc = crb_map_io(device, priv, buf);
+ 	if (rc)
+-		return rc;
++		goto out;
+ 
+ 	chip = tpmm_chip_alloc(dev, &tpm_crb);
+-	if (IS_ERR(chip))
+-		return PTR_ERR(chip);
++	if (IS_ERR(chip)) {
++		rc = PTR_ERR(chip);
++		goto out;
++	}
+ 
+ 	dev_set_drvdata(&chip->dev, priv);
+ 	chip->acpi_dev_handle = device->handle;
+ 	chip->flags = TPM_CHIP_FLAG_TPM2;
+ 
+-	return tpm_chip_register(chip);
++	rc = tpm_chip_register(chip);
++
++out:
++	acpi_put_table((struct acpi_table_header *)buf);
++	return rc;
+ }
+ 
+ static int crb_acpi_remove(struct acpi_device *device)
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index 5a3a4f0953910..939dc25a78336 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -129,6 +129,7 @@ static int check_acpi_tpm2(struct device *dev)
+ 	const struct acpi_device_id *aid = acpi_match_device(tpm_acpi_tbl, dev);
+ 	struct acpi_table_tpm2 *tbl;
+ 	acpi_status st;
++	int ret = 0;
+ 
+ 	if (!aid || aid->driver_data != DEVICE_IS_TPM2)
+ 		return 0;
+@@ -136,8 +137,7 @@ static int check_acpi_tpm2(struct device *dev)
+ 	/* If the ACPI TPM2 signature is matched then a global ACPI_SIG_TPM2
+ 	 * table is mandatory
+ 	 */
+-	st =
+-	    acpi_get_table(ACPI_SIG_TPM2, 1, (struct acpi_table_header **)&tbl);
++	st = acpi_get_table(ACPI_SIG_TPM2, 1, (struct acpi_table_header **)&tbl);
+ 	if (ACPI_FAILURE(st) || tbl->header.length < sizeof(*tbl)) {
+ 		dev_err(dev, FW_BUG "failed to get TPM2 ACPI table\n");
+ 		return -EINVAL;
+@@ -145,9 +145,10 @@ static int check_acpi_tpm2(struct device *dev)
+ 
+ 	/* The tpm2_crb driver handles this device */
+ 	if (tbl->start_method != ACPI_TPM2_MEMORY_MAPPED)
+-		return -ENODEV;
++		ret = -ENODEV;
+ 
+-	return 0;
++	acpi_put_table((struct acpi_table_header *)tbl);
++	return ret;
+ }
+ #else
+ static int check_acpi_tpm2(struct device *dev)
+diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
+index dd0433d4753e8..77aff5defac62 100644
+--- a/drivers/clk/rockchip/clk-pll.c
++++ b/drivers/clk/rockchip/clk-pll.c
+@@ -972,6 +972,7 @@ struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ 	return mux_clk;
+ 
+ err_pll:
++	kfree(pll->rate_table);
+ 	clk_unregister(mux_clk);
+ 	mux_clk = pll_clk;
+ err_mux:
+diff --git a/drivers/clk/samsung/clk-pll.c b/drivers/clk/samsung/clk-pll.c
+index 1c4c7a3039f1b..8cbf0f2cf196b 100644
+--- a/drivers/clk/samsung/clk-pll.c
++++ b/drivers/clk/samsung/clk-pll.c
+@@ -1392,6 +1392,7 @@ static void __init _samsung_clk_register_pll(struct samsung_clk_provider *ctx,
+ 	if (ret) {
+ 		pr_err("%s: failed to register pll clock %s : %d\n",
+ 			__func__, pll_clk->name, ret);
++		kfree(pll->rate_table);
+ 		kfree(pll);
+ 		return;
+ 	}
+diff --git a/drivers/clk/socfpga/clk-gate.c b/drivers/clk/socfpga/clk-gate.c
+index 14918896811d6..139b009eda82b 100644
+--- a/drivers/clk/socfpga/clk-gate.c
++++ b/drivers/clk/socfpga/clk-gate.c
+@@ -183,12 +183,13 @@ static void __init __socfpga_gate_init(struct device_node *node,
+ 	u32 div_reg[3];
+ 	u32 clk_phase[2];
+ 	u32 fixed_div;
+-	struct clk *clk;
++	struct clk_hw *hw_clk;
+ 	struct socfpga_gate_clk *socfpga_clk;
+ 	const char *clk_name = node->name;
+ 	const char *parent_name[SOCFPGA_MAX_PARENTS];
+ 	struct clk_init_data init;
+ 	int rc;
++	int err;
+ 
+ 	socfpga_clk = kzalloc(sizeof(*socfpga_clk), GFP_KERNEL);
+ 	if (WARN_ON(!socfpga_clk))
+@@ -237,12 +238,14 @@ static void __init __socfpga_gate_init(struct device_node *node,
+ 	init.parent_names = parent_name;
+ 	socfpga_clk->hw.hw.init = &init;
+ 
+-	clk = clk_register(NULL, &socfpga_clk->hw.hw);
+-	if (WARN_ON(IS_ERR(clk))) {
++	hw_clk = &socfpga_clk->hw.hw;
++
++	err = clk_hw_register(NULL, hw_clk);
++	if (err) {
+ 		kfree(socfpga_clk);
+ 		return;
+ 	}
+-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
++	rc = of_clk_add_provider(node, of_clk_src_simple_get, hw_clk);
+ 	if (WARN_ON(rc))
+ 		return;
+ }
+diff --git a/drivers/clk/socfpga/clk-periph.c b/drivers/clk/socfpga/clk-periph.c
+index 52c883ea77067..912c3059cbe47 100644
+--- a/drivers/clk/socfpga/clk-periph.c
++++ b/drivers/clk/socfpga/clk-periph.c
+@@ -61,7 +61,7 @@ static __init void __socfpga_periph_init(struct device_node *node,
+ 	const struct clk_ops *ops)
+ {
+ 	u32 reg;
+-	struct clk *clk;
++	struct clk_hw *hw_clk;
+ 	struct socfpga_periph_clk *periph_clk;
+ 	const char *clk_name = node->name;
+ 	const char *parent_name[SOCFPGA_MAX_PARENTS];
+@@ -104,13 +104,13 @@ static __init void __socfpga_periph_init(struct device_node *node,
+ 	init.parent_names = parent_name;
+ 
+ 	periph_clk->hw.hw.init = &init;
++	hw_clk = &periph_clk->hw.hw;
+ 
+-	clk = clk_register(NULL, &periph_clk->hw.hw);
+-	if (WARN_ON(IS_ERR(clk))) {
++	if (clk_hw_register(NULL, hw_clk)) {
+ 		kfree(periph_clk);
+ 		return;
+ 	}
+-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
++	rc = of_clk_add_provider(node, of_clk_src_simple_get, hw_clk);
+ }
+ 
+ void __init socfpga_periph_init(struct device_node *node)
+diff --git a/drivers/clk/socfpga/clk-pll.c b/drivers/clk/socfpga/clk-pll.c
+index b4b44e9b59011..7bde8ca45d4b1 100644
+--- a/drivers/clk/socfpga/clk-pll.c
++++ b/drivers/clk/socfpga/clk-pll.c
+@@ -80,17 +80,18 @@ static struct clk_ops clk_pll_ops = {
+ 	.get_parent = clk_pll_get_parent,
+ };
+ 
+-static __init struct clk *__socfpga_pll_init(struct device_node *node,
++static __init struct clk_hw *__socfpga_pll_init(struct device_node *node,
+ 	const struct clk_ops *ops)
+ {
+ 	u32 reg;
+-	struct clk *clk;
++	struct clk_hw *hw_clk;
+ 	struct socfpga_pll *pll_clk;
+ 	const char *clk_name = node->name;
+ 	const char *parent_name[SOCFPGA_MAX_PARENTS];
+ 	struct clk_init_data init;
+ 	struct device_node *clkmgr_np;
+ 	int rc;
++	int err;
+ 
+ 	of_property_read_u32(node, "reg", &reg);
+ 
+@@ -118,13 +119,15 @@ static __init struct clk *__socfpga_pll_init(struct device_node *node,
+ 	clk_pll_ops.enable = clk_gate_ops.enable;
+ 	clk_pll_ops.disable = clk_gate_ops.disable;
+ 
+-	clk = clk_register(NULL, &pll_clk->hw.hw);
+-	if (WARN_ON(IS_ERR(clk))) {
++	hw_clk = &pll_clk->hw.hw;
++
++	err = clk_hw_register(NULL, hw_clk);
++	if (err) {
+ 		kfree(pll_clk);
+-		return NULL;
++		return ERR_PTR(err);
+ 	}
+-	rc = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+-	return clk;
++	rc = of_clk_add_provider(node, of_clk_src_simple_get, hw_clk);
++	return hw_clk;
+ }
+ 
+ void __init socfpga_pll_init(struct device_node *node)
+diff --git a/drivers/clk/st/clkgen-fsyn.c b/drivers/clk/st/clkgen-fsyn.c
+index a79d81985c4e0..bbe113159bc64 100644
+--- a/drivers/clk/st/clkgen-fsyn.c
++++ b/drivers/clk/st/clkgen-fsyn.c
+@@ -948,9 +948,10 @@ static void __init st_of_quadfs_setup(struct device_node *np,
+ 
+ 	clk = st_clk_register_quadfs_pll(pll_name, clk_parent_name, data,
+ 			reg, lock);
+-	if (IS_ERR(clk))
++	if (IS_ERR(clk)) {
++		kfree(lock);
+ 		goto err_exit;
+-	else
++	} else
+ 		pr_debug("%s: parent %s rate %u\n",
+ 			__clk_get_name(clk),
+ 			__clk_get_name(clk_get_parent(clk)),
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index 7a6d4c4c0feba..0ca8819acc4d4 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -239,6 +239,8 @@ static const struct sh_cmt_info sh_cmt_info[] = {
+ #define CMCNT 1 /* channel register */
+ #define CMCOR 2 /* channel register */
+ 
++#define CMCLKE	0x1000	/* CLK Enable Register (R-Car Gen2) */
++
+ static inline u32 sh_cmt_read_cmstr(struct sh_cmt_channel *ch)
+ {
+ 	if (ch->iostart)
+@@ -856,6 +858,7 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index,
+ 				unsigned int hwidx, bool clockevent,
+ 				bool clocksource, struct sh_cmt_device *cmt)
+ {
++	u32 value;
+ 	int ret;
+ 
+ 	/* Skip unused channels. */
+@@ -885,6 +888,11 @@ static int sh_cmt_setup_channel(struct sh_cmt_channel *ch, unsigned int index,
+ 		ch->iostart = cmt->mapbase + ch->hwidx * 0x100;
+ 		ch->ioctrl = ch->iostart + 0x10;
+ 		ch->timer_bit = 0;
++
++		/* Enable the clock supply to the channel */
++		value = ioread32(cmt->mapbase + CMCLKE);
++		value |= BIT(hwidx);
++		iowrite32(value, cmt->mapbase + CMCLKE);
+ 		break;
+ 	}
+ 
+@@ -991,12 +999,10 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ 	else
+ 		cmt->rate = clk_get_rate(cmt->clk) / 8;
+ 
+-	clk_disable(cmt->clk);
+-
+ 	/* Map the memory resource(s). */
+ 	ret = sh_cmt_map_memory(cmt);
+ 	if (ret < 0)
+-		goto err_clk_unprepare;
++		goto err_clk_disable;
+ 
+ 	/* Allocate and setup the channels. */
+ 	cmt->num_channels = hweight8(cmt->hw_channels);
+@@ -1024,6 +1030,8 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ 		mask &= ~(1 << hwidx);
+ 	}
+ 
++	clk_disable(cmt->clk);
++
+ 	platform_set_drvdata(pdev, cmt);
+ 
+ 	return 0;
+@@ -1031,6 +1039,8 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev)
+ err_unmap:
+ 	kfree(cmt->channels);
+ 	iounmap(cmt->mapbase);
++err_clk_disable:
++	clk_disable(cmt->clk);
+ err_clk_unprepare:
+ 	clk_unprepare(cmt->clk);
+ err_clk_put:
+diff --git a/drivers/cpufreq/amd_freq_sensitivity.c b/drivers/cpufreq/amd_freq_sensitivity.c
+index 4b4f128c3488f..cacd4c67d9bed 100644
+--- a/drivers/cpufreq/amd_freq_sensitivity.c
++++ b/drivers/cpufreq/amd_freq_sensitivity.c
+@@ -122,6 +122,8 @@ static int __init amd_freq_sensitivity_init(void)
+ 	if (!pcidev) {
+ 		if (!static_cpu_has(X86_FEATURE_PROC_FEEDBACK))
+ 			return -ENODEV;
++	} else {
++		pci_dev_put(pcidev);
+ 	}
+ 
+ 	if (rdmsrl_safe(MSR_AMD64_FREQ_SENSITIVITY_ACTUAL, &val))
+diff --git a/drivers/cpuidle/dt_idle_states.c b/drivers/cpuidle/dt_idle_states.c
+index 53342b7f10105..ea3c59d3fdadd 100644
+--- a/drivers/cpuidle/dt_idle_states.c
++++ b/drivers/cpuidle/dt_idle_states.c
+@@ -224,6 +224,6 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
+ 	 * also be 0 on platforms with missing DT idle states or legacy DT
+ 	 * configuration predating the DT idle states bindings.
+ 	 */
+-	return i;
++	return state_idx - start_idx;
+ }
+ EXPORT_SYMBOL_GPL(dt_init_idle_driver);
+diff --git a/drivers/crypto/ccree/cc_debugfs.c b/drivers/crypto/ccree/cc_debugfs.c
+index 5ca184e424835..3879f33fb59ea 100644
+--- a/drivers/crypto/ccree/cc_debugfs.c
++++ b/drivers/crypto/ccree/cc_debugfs.c
+@@ -46,7 +46,7 @@ int __init cc_debugfs_global_init(void)
+ 	return !cc_debugfs_dir;
+ }
+ 
+-void __exit cc_debugfs_global_fini(void)
++void cc_debugfs_global_fini(void)
+ {
+ 	debugfs_remove(cc_debugfs_dir);
+ }
+diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c
+index b87000a0a01cb..f70923643a97e 100644
+--- a/drivers/crypto/img-hash.c
++++ b/drivers/crypto/img-hash.c
+@@ -359,12 +359,16 @@ static int img_hash_dma_init(struct img_hash_dev *hdev)
+ static void img_hash_dma_task(unsigned long d)
+ {
+ 	struct img_hash_dev *hdev = (struct img_hash_dev *)d;
+-	struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
++	struct img_hash_request_ctx *ctx;
+ 	u8 *addr;
+ 	size_t nbytes, bleft, wsend, len, tbc;
+ 	struct scatterlist tsg;
+ 
+-	if (!hdev->req || !ctx->sg)
++	if (!hdev->req)
++		return;
++
++	ctx = ahash_request_ctx(hdev->req);
++	if (!ctx->sg)
+ 		return;
+ 
+ 	addr = sg_virt(ctx->sg);
+diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
+index 55f34cfc43ff6..11b37ce46e569 100644
+--- a/drivers/crypto/n2_core.c
++++ b/drivers/crypto/n2_core.c
+@@ -1286,6 +1286,7 @@ struct n2_hash_tmpl {
+ 	const u32	*hash_init;
+ 	u8		hw_op_hashsz;
+ 	u8		digest_size;
++	u8		statesize;
+ 	u8		block_size;
+ 	u8		auth_type;
+ 	u8		hmac_type;
+@@ -1317,6 +1318,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_HMAC_MD5,
+ 	  .hw_op_hashsz	= MD5_DIGEST_SIZE,
+ 	  .digest_size	= MD5_DIGEST_SIZE,
++	  .statesize	= sizeof(struct md5_state),
+ 	  .block_size	= MD5_HMAC_BLOCK_SIZE },
+ 	{ .name		= "sha1",
+ 	  .hash_zero	= sha1_zero_message_hash,
+@@ -1325,6 +1327,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_HMAC_SHA1,
+ 	  .hw_op_hashsz	= SHA1_DIGEST_SIZE,
+ 	  .digest_size	= SHA1_DIGEST_SIZE,
++	  .statesize	= sizeof(struct sha1_state),
+ 	  .block_size	= SHA1_BLOCK_SIZE },
+ 	{ .name		= "sha256",
+ 	  .hash_zero	= sha256_zero_message_hash,
+@@ -1333,6 +1336,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_HMAC_SHA256,
+ 	  .hw_op_hashsz	= SHA256_DIGEST_SIZE,
+ 	  .digest_size	= SHA256_DIGEST_SIZE,
++	  .statesize	= sizeof(struct sha256_state),
+ 	  .block_size	= SHA256_BLOCK_SIZE },
+ 	{ .name		= "sha224",
+ 	  .hash_zero	= sha224_zero_message_hash,
+@@ -1341,6 +1345,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
+ 	  .hmac_type	= AUTH_TYPE_RESERVED,
+ 	  .hw_op_hashsz	= SHA256_DIGEST_SIZE,
+ 	  .digest_size	= SHA224_DIGEST_SIZE,
++	  .statesize	= sizeof(struct sha256_state),
+ 	  .block_size	= SHA224_BLOCK_SIZE },
+ };
+ #define NUM_HASH_TMPLS ARRAY_SIZE(hash_tmpls)
+@@ -1482,6 +1487,7 @@ static int __n2_register_one_ahash(const struct n2_hash_tmpl *tmpl)
+ 
+ 	halg = &ahash->halg;
+ 	halg->digestsize = tmpl->digest_size;
++	halg->statesize = tmpl->statesize;
+ 
+ 	base = &halg->base;
+ 	snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", tmpl->name);
+diff --git a/drivers/dio/dio.c b/drivers/dio/dio.c
+index 92e78d16b4765..fcde602f49020 100644
+--- a/drivers/dio/dio.c
++++ b/drivers/dio/dio.c
+@@ -110,6 +110,12 @@ static char dio_no_name[] = { 0 };
+ 
+ #endif /* CONFIG_DIO_CONSTANTS */
+ 
++static void dio_dev_release(struct device *dev)
++{
++	struct dio_dev *ddev = container_of(dev, typeof(struct dio_dev), dev);
++	kfree(ddev);
++}
++
+ int __init dio_find(int deviceid)
+ {
+ 	/* Called to find a DIO device before the full bus scan has run.
+@@ -222,6 +228,7 @@ static int __init dio_init(void)
+ 		dev->bus = &dio_bus;
+ 		dev->dev.parent = &dio_bus.dev;
+ 		dev->dev.bus = &dio_bus_type;
++		dev->dev.release = dio_dev_release;
+ 		dev->scode = scode;
+ 		dev->resource.start = pa;
+ 		dev->resource.end = pa + DIO_SIZE(scode, va);
+@@ -249,6 +256,7 @@ static int __init dio_init(void)
+ 		if (error) {
+ 			pr_err("DIO: Error registering device %s\n",
+ 			       dev->name);
++			put_device(&dev->dev);
+ 			continue;
+ 		}
+ 		error = dio_create_sysfs_dev_files(dev);
+diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
+index 65cf2b9355c47..93d6e6319b3cc 100644
+--- a/drivers/edac/edac_device.c
++++ b/drivers/edac/edac_device.c
+@@ -424,17 +424,16 @@ static void edac_device_workq_teardown(struct edac_device_ctl_info *edac_dev)
+  *	Then restart the workq on the new delay
+  */
+ void edac_device_reset_delay_period(struct edac_device_ctl_info *edac_dev,
+-					unsigned long value)
++				    unsigned long msec)
+ {
+-	unsigned long jiffs = msecs_to_jiffies(value);
+-
+-	if (value == 1000)
+-		jiffs = round_jiffies_relative(value);
+-
+-	edac_dev->poll_msec = value;
+-	edac_dev->delay	    = jiffs;
++	edac_dev->poll_msec = msec;
++	edac_dev->delay	    = msecs_to_jiffies(msec);
+ 
+-	edac_mod_work(&edac_dev->work, jiffs);
++	/* See comment in edac_device_workq_setup() above */
++	if (edac_dev->poll_msec == 1000)
++		edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
++	else
++		edac_mod_work(&edac_dev->work, edac_dev->delay);
+ }
+ 
+ int edac_device_alloc_index(void)
+diff --git a/drivers/edac/edac_module.h b/drivers/edac/edac_module.h
+index c9f0e73872a64..a74c441759a80 100644
+--- a/drivers/edac/edac_module.h
++++ b/drivers/edac/edac_module.h
+@@ -57,7 +57,7 @@ bool edac_stop_work(struct delayed_work *work);
+ bool edac_mod_work(struct delayed_work *work, unsigned long delay);
+ 
+ extern void edac_device_reset_delay_period(struct edac_device_ctl_info
+-					   *edac_dev, unsigned long value);
++					   *edac_dev, unsigned long msec);
+ extern void edac_mc_reset_delay_period(unsigned long value);
+ 
+ extern void *edac_align_ptr(void **p, unsigned size, int n_elems);
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index f0ef2643b70ed..c966a23de85d5 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -359,8 +359,8 @@ static int __init efisubsys_init(void)
+ 	efi_kobj = kobject_create_and_add("efi", firmware_kobj);
+ 	if (!efi_kobj) {
+ 		pr_err("efi: Firmware registration failed.\n");
+-		destroy_workqueue(efi_rts_wq);
+-		return -ENOMEM;
++		error = -ENOMEM;
++		goto err_destroy_wq;
+ 	}
+ 
+ 	error = generic_ops_register();
+@@ -396,7 +396,10 @@ err_unregister:
+ 	generic_ops_unregister();
+ err_put:
+ 	kobject_put(efi_kobj);
+-	destroy_workqueue(efi_rts_wq);
++err_destroy_wq:
++	if (efi_rts_wq)
++		destroy_workqueue(efi_rts_wq);
++
+ 	return error;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+index 6cf3dd5edffda..c1e2e9912bf05 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+@@ -312,6 +312,7 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
+ 
+ 	if (!found)
+ 		return false;
++	pci_dev_put(pdev);
+ 
+ 	adev->bios = kmalloc(size, GFP_KERNEL);
+ 	if (!adev->bios) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+index 880ac113a3a9f..d26b6e36be023 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+@@ -48,6 +48,8 @@ struct amdgpu_vf_error_buffer {
+ 	uint64_t data[AMDGPU_VF_ERROR_ENTRY_SIZE];
+ };
+ 
++enum idh_request;
++
+ /**
+  * struct amdgpu_virt_ops - amdgpu device virt operations
+  */
+@@ -56,7 +58,8 @@ struct amdgpu_virt_ops {
+ 	int (*rel_full_gpu)(struct amdgpu_device *adev, bool init);
+ 	int (*reset_gpu)(struct amdgpu_device *adev);
+ 	int (*wait_reset)(struct amdgpu_device *adev);
+-	void (*trans_msg)(struct amdgpu_device *adev, u32 req, u32 data1, u32 data2, u32 data3);
++	void (*trans_msg)(struct amdgpu_device *adev, enum idh_request req,
++			  u32 data1, u32 data2, u32 data3);
+ };
+ 
+ /*
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 7bb68ca4aa0ba..71136cbd183c3 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -413,6 +413,9 @@ void drm_connector_cleanup(struct drm_connector *connector)
+ 	mutex_destroy(&connector->mutex);
+ 
+ 	memset(connector, 0, sizeof(*connector));
++
++	if (dev->registered)
++		drm_sysfs_hotplug_event(dev);
+ }
+ EXPORT_SYMBOL(drm_connector_cleanup);
+ 
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
+index 2298ed2a9e1c0..3c8c4a820e951 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
+@@ -83,8 +83,9 @@ static int fsl_dcu_drm_connector_get_modes(struct drm_connector *connector)
+ 	return num_modes;
+ }
+ 
+-static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
+-					    struct drm_display_mode *mode)
++static enum drm_mode_status
++fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,
++				 struct drm_display_mode *mode)
+ {
+ 	if (mode->hdisplay & 0xf)
+ 		return MODE_ERROR;
+diff --git a/drivers/gpu/drm/radeon/radeon_bios.c b/drivers/gpu/drm/radeon/radeon_bios.c
+index dd0528cf98183..0246edecc61f1 100644
+--- a/drivers/gpu/drm/radeon/radeon_bios.c
++++ b/drivers/gpu/drm/radeon/radeon_bios.c
+@@ -223,6 +223,7 @@ static bool radeon_atrm_get_bios(struct radeon_device *rdev)
+ 
+ 	if (!found)
+ 		return false;
++	pci_dev_put(pdev);
+ 
+ 	rdev->bios = kmalloc(size, GFP_KERNEL);
+ 	if (!rdev->bios) {
+@@ -608,13 +609,14 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ 	acpi_size tbl_size;
+ 	UEFI_ACPI_VFCT *vfct;
+ 	unsigned offset;
++	bool r = false;
+ 
+ 	if (!ACPI_SUCCESS(acpi_get_table("VFCT", 1, &hdr)))
+ 		return false;
+ 	tbl_size = hdr->length;
+ 	if (tbl_size < sizeof(UEFI_ACPI_VFCT)) {
+ 		DRM_ERROR("ACPI VFCT table present but broken (too short #1)\n");
+-		return false;
++		goto out;
+ 	}
+ 
+ 	vfct = (UEFI_ACPI_VFCT *)hdr;
+@@ -627,13 +629,13 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ 		offset += sizeof(VFCT_IMAGE_HEADER);
+ 		if (offset > tbl_size) {
+ 			DRM_ERROR("ACPI VFCT image header truncated\n");
+-			return false;
++			goto out;
+ 		}
+ 
+ 		offset += vhdr->ImageLength;
+ 		if (offset > tbl_size) {
+ 			DRM_ERROR("ACPI VFCT image truncated\n");
+-			return false;
++			goto out;
+ 		}
+ 
+ 		if (vhdr->ImageLength &&
+@@ -645,15 +647,18 @@ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ 			rdev->bios = kmemdup(&vbios->VbiosContent,
+ 					     vhdr->ImageLength,
+ 					     GFP_KERNEL);
++			if (rdev->bios)
++				r = true;
+ 
+-			if (!rdev->bios)
+-				return false;
+-			return true;
++			goto out;
+ 		}
+ 	}
+ 
+ 	DRM_ERROR("ACPI VFCT table present but broken (too short #2)\n");
+-	return false;
++
++out:
++	acpi_put_table(hdr);
++	return r;
+ }
+ #else
+ static inline bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+diff --git a/drivers/gpu/drm/sti/sti_dvo.c b/drivers/gpu/drm/sti/sti_dvo.c
+index b08376b7611b8..794ac057a55c5 100644
+--- a/drivers/gpu/drm/sti/sti_dvo.c
++++ b/drivers/gpu/drm/sti/sti_dvo.c
+@@ -288,7 +288,7 @@ static void sti_dvo_set_mode(struct drm_bridge *bridge,
+ 
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+-	memcpy(&dvo->mode, mode, sizeof(struct drm_display_mode));
++	drm_mode_copy(&dvo->mode, mode);
+ 
+ 	/* According to the path used (main or aux), the dvo clocks should
+ 	 * have a different parent clock. */
+@@ -346,8 +346,9 @@ static int sti_dvo_connector_get_modes(struct drm_connector *connector)
+ 
+ #define CLK_TOLERANCE_HZ 50
+ 
+-static int sti_dvo_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++sti_dvo_connector_mode_valid(struct drm_connector *connector,
++			     struct drm_display_mode *mode)
+ {
+ 	int target = mode->clock * 1000;
+ 	int target_min = target - CLK_TOLERANCE_HZ;
+diff --git a/drivers/gpu/drm/sti/sti_hda.c b/drivers/gpu/drm/sti/sti_hda.c
+index 19b9b5ed12970..cf5b6dbe86b09 100644
+--- a/drivers/gpu/drm/sti/sti_hda.c
++++ b/drivers/gpu/drm/sti/sti_hda.c
+@@ -518,7 +518,7 @@ static void sti_hda_set_mode(struct drm_bridge *bridge,
+ 
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+-	memcpy(&hda->mode, mode, sizeof(struct drm_display_mode));
++	drm_mode_copy(&hda->mode, mode);
+ 
+ 	if (!hda_get_mode_idx(hda->mode, &mode_idx)) {
+ 		DRM_ERROR("Undefined mode\n");
+@@ -596,8 +596,9 @@ static int sti_hda_connector_get_modes(struct drm_connector *connector)
+ 
+ #define CLK_TOLERANCE_HZ 50
+ 
+-static int sti_hda_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++sti_hda_connector_mode_valid(struct drm_connector *connector,
++			     struct drm_display_mode *mode)
+ {
+ 	int target = mode->clock * 1000;
+ 	int target_min = target - CLK_TOLERANCE_HZ;
+diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
+index ccf718404a1c2..c4bc09c9d0b5a 100644
+--- a/drivers/gpu/drm/sti/sti_hdmi.c
++++ b/drivers/gpu/drm/sti/sti_hdmi.c
+@@ -926,7 +926,7 @@ static void sti_hdmi_set_mode(struct drm_bridge *bridge,
+ 	DRM_DEBUG_DRIVER("\n");
+ 
+ 	/* Copy the drm display mode in the connector local structure */
+-	memcpy(&hdmi->mode, mode, sizeof(struct drm_display_mode));
++	drm_mode_copy(&hdmi->mode, mode);
+ 
+ 	/* Update clock framerate according to the selected mode */
+ 	ret = clk_set_rate(hdmi->clk_pix, mode->clock * 1000);
+@@ -989,8 +989,9 @@ fail:
+ 
+ #define CLK_TOLERANCE_HZ 50
+ 
+-static int sti_hdmi_connector_mode_valid(struct drm_connector *connector,
+-					struct drm_display_mode *mode)
++static enum drm_mode_status
++sti_hdmi_connector_mode_valid(struct drm_connector *connector,
++			      struct drm_display_mode *mode)
+ {
+ 	int target = mode->clock * 1000;
+ 	int target_min = target - CLK_TOLERANCE_HZ;
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 03adb4cf325b9..6c8b59a266756 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -2376,8 +2376,10 @@ static int tegra_dc_probe(struct platform_device *pdev)
+ 	usleep_range(2000, 4000);
+ 
+ 	err = reset_control_assert(dc->rst);
+-	if (err < 0)
++	if (err < 0) {
++		clk_disable_unprepare(dc->clk);
+ 		return err;
++	}
+ 
+ 	usleep_range(2000, 4000);
+ 
+diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+index 8d2f5ded86d66..a539843a03ba3 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+@@ -309,7 +309,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
+ 		}
+ 		return ret;
+ 	}
+-	drm_gem_object_put_unlocked(obj);
+ 
+ 	rc->res_handle = res_id; /* similiar to a VM address */
+ 	rc->bo_handle = handle;
+@@ -318,6 +317,15 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
+ 		virtio_gpu_unref_list(&validate_list);
+ 		dma_fence_put(&fence->f);
+ 	}
++
++	/*
++	 * The handle owns the reference now.  But we must drop our
++	 * remaining reference *after* we no longer need to dereference
++	 * the obj.  Otherwise userspace could guess the handle and
++	 * race closing it from another thread.
++	 */
++	drm_gem_object_put_unlocked(obj);
++
+ 	return 0;
+ fail_unref:
+ 	if (vgdev->has_virgl_3d) {
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index d87bd2a8c75fb..bf8c721ebfe9c 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -179,7 +179,8 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf,
+ 	if (cmd->dma.guest.ptr.offset % PAGE_SIZE ||
+ 	    box->x != 0    || box->y != 0    || box->z != 0    ||
+ 	    box->srcx != 0 || box->srcy != 0 || box->srcz != 0 ||
+-	    box->d != 1    || box_count != 1) {
++	    box->d != 1    || box_count != 1 ||
++	    box->w > 64 || box->h > 64) {
+ 		/* TODO handle none page aligned offsets */
+ 		/* TODO handle more dst & src != 0 */
+ 		/* TODO handle more then one copy */
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 00943ddbe4172..2c9597c8ac92d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -908,7 +908,10 @@
+ #define USB_DEVICE_ID_ORTEK_IHOME_IMAC_A210S	0x8003
+ 
+ #define USB_VENDOR_ID_PLANTRONICS	0x047f
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES	0xc055
+ #define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES	0xc056
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES	0xc057
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES	0xc058
+ 
+ #define USB_VENDOR_ID_PANASONIC		0x04da
+ #define USB_DEVICE_ID_PANABOARD_UBT780	0x1044
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index 460711c1124ac..3b75cadd543fa 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -201,9 +201,18 @@ err:
+ }
+ 
+ static const struct hid_device_id plantronics_devices[] = {
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+ 					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
+ 		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ 	{ }
+ };
+diff --git a/drivers/hid/hid-sensor-custom.c b/drivers/hid/hid-sensor-custom.c
+index bb012bc032e02..a0c30863e1e0c 100644
+--- a/drivers/hid/hid-sensor-custom.c
++++ b/drivers/hid/hid-sensor-custom.c
+@@ -67,7 +67,7 @@ struct hid_sensor_sample {
+ 	u32 raw_len;
+ } __packed;
+ 
+-static struct attribute hid_custom_attrs[] = {
++static struct attribute hid_custom_attrs[HID_CUSTOM_TOTAL_ATTRS] = {
+ 	{.name = "name", .mode = S_IRUGO},
+ 	{.name = "units", .mode = S_IRUGO},
+ 	{.name = "unit-expo", .mode = S_IRUGO},
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index fb63845e9920d..8e32415e0a909 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -163,6 +163,9 @@ static int wacom_raw_event(struct hid_device *hdev, struct hid_report *report,
+ {
+ 	struct wacom *wacom = hid_get_drvdata(hdev);
+ 
++	if (wacom->wacom_wac.features.type == BOOTLOADER)
++		return 0;
++
+ 	if (size > WACOM_PKGLEN_MAX)
+ 		return 1;
+ 
+@@ -2759,6 +2762,11 @@ static int wacom_probe(struct hid_device *hdev,
+ 		goto fail;
+ 	}
+ 
++	if (features->type == BOOTLOADER) {
++		hid_warn(hdev, "Using device in hidraw-only mode");
++		return hid_hw_start(hdev, HID_CONNECT_HIDRAW);
++	}
++
+ 	error = wacom_parse_and_register(wacom, false);
+ 	if (error)
+ 		goto fail;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 826c02a11133b..bc4d6474d8f54 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -4680,6 +4680,9 @@ static const struct wacom_features wacom_features_0x37B =
+ static const struct wacom_features wacom_features_HID_ANY_ID =
+ 	{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
+ 
++static const struct wacom_features wacom_features_0x94 =
++	{ "Wacom Bootloader", .type = BOOTLOADER };
++
+ #define USB_DEVICE_WACOM(prod)						\
+ 	HID_DEVICE(BUS_USB, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
+ 	.driver_data = (kernel_ulong_t)&wacom_features_##prod
+@@ -4753,6 +4756,7 @@ const struct hid_device_id wacom_ids[] = {
+ 	{ USB_DEVICE_WACOM(0x84) },
+ 	{ USB_DEVICE_WACOM(0x90) },
+ 	{ USB_DEVICE_WACOM(0x93) },
++	{ USB_DEVICE_WACOM(0x94) },
+ 	{ USB_DEVICE_WACOM(0x97) },
+ 	{ USB_DEVICE_WACOM(0x9A) },
+ 	{ USB_DEVICE_WACOM(0x9F) },
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 48ce2b0a4549e..fbdbb74f9f1fd 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -244,6 +244,7 @@ enum {
+ 	MTTPC,
+ 	MTTPC_B,
+ 	HID_GENERIC,
++	BOOTLOADER,
+ 	MAX_TYPE
+ };
+ 
+diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c
+index 15ecc4bc8de60..6595f34e51aad 100644
+--- a/drivers/hsi/controllers/omap_ssi_core.c
++++ b/drivers/hsi/controllers/omap_ssi_core.c
+@@ -538,8 +538,10 @@ static int ssi_probe(struct platform_device *pd)
+ 	platform_set_drvdata(pd, ssi);
+ 
+ 	err = ssi_add_controller(ssi, pd);
+-	if (err < 0)
++	if (err < 0) {
++		hsi_put_controller(ssi);
+ 		goto out1;
++	}
+ 
+ 	pm_runtime_enable(&pd->dev);
+ 
+@@ -572,9 +574,9 @@ out3:
+ 	device_for_each_child(&pd->dev, NULL, ssi_remove_ports);
+ out2:
+ 	ssi_remove_controller(ssi);
++	pm_runtime_disable(&pd->dev);
+ out1:
+ 	platform_set_drvdata(pd, NULL);
+-	pm_runtime_disable(&pd->dev);
+ 
+ 	return err;
+ }
+@@ -665,7 +667,13 @@ static int __init ssi_init(void) {
+ 	if (ret)
+ 		return ret;
+ 
+-	return platform_driver_register(&ssi_port_pdriver);
++	ret = platform_driver_register(&ssi_port_pdriver);
++	if (ret) {
++		platform_driver_unregister(&ssi_pdriver);
++		return ret;
++	}
++
++	return 0;
+ }
+ module_init(ssi_init);
+ 
+diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c
+index 80796061102f1..9f5915d5f7e75 100644
+--- a/drivers/i2c/busses/i2c-ismt.c
++++ b/drivers/i2c/busses/i2c-ismt.c
+@@ -504,6 +504,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
+ 		if (read_write == I2C_SMBUS_WRITE) {
+ 			/* Block Write */
+ 			dev_dbg(dev, "I2C_SMBUS_BLOCK_DATA:  WRITE\n");
++			if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
++				return -EINVAL;
++
+ 			dma_size = data->block[0] + 1;
+ 			dma_direction = DMA_TO_DEVICE;
+ 			desc->wr_len_cmd = dma_size;
+diff --git a/drivers/i2c/busses/i2c-pxa-pci.c b/drivers/i2c/busses/i2c-pxa-pci.c
+index 72ea8f4c61aa4..883937c8408b7 100644
+--- a/drivers/i2c/busses/i2c-pxa-pci.c
++++ b/drivers/i2c/busses/i2c-pxa-pci.c
+@@ -105,7 +105,7 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
+ 	int i;
+ 	struct ce4100_devices *sds;
+ 
+-	ret = pci_enable_device_mem(dev);
++	ret = pcim_enable_device(dev);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -114,10 +114,8 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
+ 		return -EINVAL;
+ 	}
+ 	sds = kzalloc(sizeof(*sds), GFP_KERNEL);
+-	if (!sds) {
+-		ret = -ENOMEM;
+-		goto err_mem;
+-	}
++	if (!sds)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(sds->pdev); i++) {
+ 		sds->pdev[i] = add_i2c_device(dev, i);
+@@ -133,8 +131,6 @@ static int ce4100_i2c_probe(struct pci_dev *dev,
+ 
+ err_dev_add:
+ 	kfree(sds);
+-err_mem:
+-	pci_disable_device(dev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 25af4c76b57fe..a96943bbd83db 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -283,10 +283,10 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
+ 	unsigned int sample, raw_sample;
+ 	int ret = 0;
+ 
+-	if (iio_buffer_enabled(indio_dev))
+-		return -EBUSY;
++	ret = iio_device_claim_direct_mode(indio_dev);
++	if (ret)
++		return ret;
+ 
+-	mutex_lock(&indio_dev->mlock);
+ 	ad_sigma_delta_set_channel(sigma_delta, chan->address);
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+@@ -320,7 +320,7 @@ out:
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 	sigma_delta->bus_locked = false;
+ 	spi_bus_unlock(sigma_delta->spi->master);
+-	mutex_unlock(&indio_dev->mlock);
++	iio_device_release_direct_mode(indio_dev);
+ 
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index f6fa9b115fdad..798bc810b2344 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -433,7 +433,7 @@ static int fill_res_cm_id_entry(struct sk_buff *msg,
+ 	struct nlattr *entry_attr;
+ 
+ 	if (port && port != cm_id->port_num)
+-		return 0;
++		return -EAGAIN;
+ 
+ 	entry_attr = nla_nest_start(msg, RDMA_NLDEV_ATTR_RES_CM_ID_ENTRY);
+ 	if (!entry_attr)
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index 01ed0a667928d..bb670249bebfe 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -217,6 +217,8 @@ out:
+ 	for (node = 0; node < node_affinity.num_possible_nodes; node++)
+ 		hfi1_per_node_cntr[node] = 1;
+ 
++	pci_dev_put(dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
+index c090807124858..747ec08dec0d9 100644
+--- a/drivers/infiniband/hw/hfi1/firmware.c
++++ b/drivers/infiniband/hw/hfi1/firmware.c
+@@ -1786,6 +1786,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 
+ 	if (!dd->platform_config.data) {
+ 		dd_dev_err(dd, "%s: Missing config file\n", __func__);
++		ret = -EINVAL;
+ 		goto bail;
+ 	}
+ 	ptr = (u32 *)dd->platform_config.data;
+@@ -1794,6 +1795,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 	ptr++;
+ 	if (magic_num != PLATFORM_CONFIG_MAGIC_NUM) {
+ 		dd_dev_err(dd, "%s: Bad config file\n", __func__);
++		ret = -EINVAL;
+ 		goto bail;
+ 	}
+ 
+@@ -1817,6 +1819,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 	if (file_length > dd->platform_config.size) {
+ 		dd_dev_info(dd, "%s:File claims to be larger than read size\n",
+ 			    __func__);
++		ret = -EINVAL;
+ 		goto bail;
+ 	} else if (file_length < dd->platform_config.size) {
+ 		dd_dev_info(dd,
+@@ -1837,6 +1840,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 			dd_dev_err(dd, "%s: Failed validation at offset %ld\n",
+ 				   __func__, (ptr - (u32 *)
+ 					      dd->platform_config.data));
++			ret = -EINVAL;
+ 			goto bail;
+ 		}
+ 
+@@ -1883,6 +1887,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 					   __func__, table_type,
+ 					   (ptr - (u32 *)
+ 					    dd->platform_config.data));
++				ret = -EINVAL;
+ 				goto bail; /* We don't trust this file now */
+ 			}
+ 			pcfgcache->config_tables[table_type].table = ptr;
+@@ -1907,6 +1912,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 					   __func__, table_type,
+ 					   (ptr -
+ 					    (u32 *)dd->platform_config.data));
++				ret = -EINVAL;
+ 				goto bail; /* We don't trust this file now */
+ 			}
+ 			pcfgcache->config_tables[table_type].table_metadata =
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 361b1b8597820..1520a3098f7dc 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3411,6 +3411,40 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 	return err;
+ }
+ 
++static int validate_rd_atomic(struct mlx5_ib_dev *dev, struct ib_qp_attr *attr,
++			      int attr_mask, enum ib_qp_type qp_type)
++{
++	int log_max_ra_res;
++	int log_max_ra_req;
++
++	if (qp_type == MLX5_IB_QPT_DCI) {
++		log_max_ra_res = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_res_dc);
++		log_max_ra_req = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_req_dc);
++	} else {
++		log_max_ra_res = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_res_qp);
++		log_max_ra_req = 1 << MLX5_CAP_GEN(dev->mdev,
++						   log_max_ra_req_qp);
++	}
++
++	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
++	    attr->max_rd_atomic > log_max_ra_res) {
++		mlx5_ib_dbg(dev, "invalid max_rd_atomic value %d\n",
++			    attr->max_rd_atomic);
++		return false;
++	}
++
++	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
++	    attr->max_dest_rd_atomic > log_max_ra_req) {
++		mlx5_ib_dbg(dev, "invalid max_dest_rd_atomic value %d\n",
++			    attr->max_dest_rd_atomic);
++		return false;
++	}
++	return true;
++}
++
+ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		      int attr_mask, struct ib_udata *udata)
+ {
+@@ -3508,21 +3542,8 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+ 		}
+ 	}
+ 
+-	if (attr_mask & IB_QP_MAX_QP_RD_ATOMIC &&
+-	    attr->max_rd_atomic >
+-	    (1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_res_qp))) {
+-		mlx5_ib_dbg(dev, "invalid max_rd_atomic value %d\n",
+-			    attr->max_rd_atomic);
+-		goto out;
+-	}
+-
+-	if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC &&
+-	    attr->max_dest_rd_atomic >
+-	    (1 << MLX5_CAP_GEN(dev->mdev, log_max_ra_req_qp))) {
+-		mlx5_ib_dbg(dev, "invalid max_dest_rd_atomic value %d\n",
+-			    attr->max_dest_rd_atomic);
++	if (!validate_rd_atomic(dev, attr, attr_mask, qp_type))
+ 		goto out;
+-	}
+ 
+ 	if (cur_state == new_state && cur_state == IB_QPS_RESET) {
+ 		err = 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 6320390f531ce..2cae62ae6c64a 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -836,12 +836,12 @@ static void rxe_qp_do_cleanup(struct work_struct *work)
+ 		qp->resp.mr = NULL;
+ 	}
+ 
+-	if (qp_type(qp) == IB_QPT_RC)
+-		sk_dst_reset(qp->sk->sk);
+-
+ 	free_rd_atomic_resources(qp);
+ 
+ 	if (qp->sk) {
++		if (qp_type(qp) == IB_QPT_RC)
++			sk_dst_reset(qp->sk->sk);
++
+ 		kernel_sock_shutdown(qp->sk, SHUT_RDWR);
+ 		sock_release(qp->sk);
+ 	}
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+index d4d553a51fa98..285cb28bf14a5 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_netlink.c
+@@ -42,6 +42,11 @@ static const struct nla_policy ipoib_policy[IFLA_IPOIB_MAX + 1] = {
+ 	[IFLA_IPOIB_UMCAST]	= { .type = NLA_U16 },
+ };
+ 
++static unsigned int ipoib_get_max_num_queues(void)
++{
++	return min_t(unsigned int, num_possible_cpus(), 128);
++}
++
+ static int ipoib_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ {
+ 	struct ipoib_dev_priv *priv = ipoib_priv(dev);
+@@ -147,6 +152,8 @@ static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
+ 	.changelink	= ipoib_changelink,
+ 	.get_size	= ipoib_get_size,
+ 	.fill_info	= ipoib_fill_info,
++	.get_num_rx_queues = ipoib_get_max_num_queues,
++	.get_num_tx_queues = ipoib_get_max_num_queues,
+ };
+ 
+ int __init ipoib_netlink_init(void)
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index adfae2d88707b..83e9bded87d9a 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -1082,14 +1082,12 @@ static int elants_i2c_power_on(struct elants_data *ts)
+ 	if (IS_ERR_OR_NULL(ts->reset_gpio))
+ 		return 0;
+ 
+-	gpiod_set_value_cansleep(ts->reset_gpio, 1);
+-
+ 	error = regulator_enable(ts->vcc33);
+ 	if (error) {
+ 		dev_err(&ts->client->dev,
+ 			"failed to enable vcc33 regulator: %d\n",
+ 			error);
+-		goto release_reset_gpio;
++		return error;
+ 	}
+ 
+ 	error = regulator_enable(ts->vccio);
+@@ -1098,7 +1096,7 @@ static int elants_i2c_power_on(struct elants_data *ts)
+ 			"failed to enable vccio regulator: %d\n",
+ 			error);
+ 		regulator_disable(ts->vcc33);
+-		goto release_reset_gpio;
++		return error;
+ 	}
+ 
+ 	/*
+@@ -1107,7 +1105,6 @@ static int elants_i2c_power_on(struct elants_data *ts)
+ 	 */
+ 	udelay(ELAN_POWERON_DELAY_USEC);
+ 
+-release_reset_gpio:
+ 	gpiod_set_value_cansleep(ts->reset_gpio, 0);
+ 	if (error)
+ 		return error;
+@@ -1215,7 +1212,7 @@ static int elants_i2c_probe(struct i2c_client *client,
+ 		return error;
+ 	}
+ 
+-	ts->reset_gpio = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_LOW);
++	ts->reset_gpio = devm_gpiod_get(&client->dev, "reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(ts->reset_gpio)) {
+ 		error = PTR_ERR(ts->reset_gpio);
+ 
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 1c61cd0b1d55d..12e7254b39489 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -2946,6 +2946,13 @@ static int __init parse_ivrs_acpihid(char *str)
+ 		return 1;
+ 	}
+ 
++	/*
++	 * Ignore leading zeroes after ':', so e.g., AMDI0095:00
++	 * will match AMDI0095:0 in the second strcmp in acpi_dev_hid_uid_match
++	 */
++	while (*uid == '0' && *(uid + 1))
++		uid++;
++
+ 	i = early_acpihid_map_size++;
+ 	memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
+ 	memcpy(early_acpihid_map[i].uid, uid, strlen(uid));
+diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
+index 7a59a8ebac108..387b72b7e7b36 100644
+--- a/drivers/iommu/amd_iommu_v2.c
++++ b/drivers/iommu/amd_iommu_v2.c
+@@ -626,6 +626,7 @@ out_drop_state:
+ 	put_device_state(dev_state);
+ 
+ out:
++	pci_dev_put(pdev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/iommu/fsl_pamu.c b/drivers/iommu/fsl_pamu.c
+index 8540625796a16..b6a0c6d3b204e 100644
+--- a/drivers/iommu/fsl_pamu.c
++++ b/drivers/iommu/fsl_pamu.c
+@@ -1134,7 +1134,7 @@ static int fsl_pamu_probe(struct platform_device *pdev)
+ 		ret = create_csd(ppaact_phys, mem_size, csd_port_id);
+ 		if (ret) {
+ 			dev_err(dev, "could not create coherence subdomain\n");
+-			return ret;
++			goto error;
+ 		}
+ 	}
+ 
+diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
+index 676c029494e44..94b16cacb80fc 100644
+--- a/drivers/iommu/mtk_iommu_v1.c
++++ b/drivers/iommu/mtk_iommu_v1.c
+@@ -626,18 +626,34 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ 	ret = iommu_device_sysfs_add(&data->iommu, &pdev->dev, NULL,
+ 				     dev_name(&pdev->dev));
+ 	if (ret)
+-		return ret;
++		goto out_clk_unprepare;
+ 
+ 	iommu_device_set_ops(&data->iommu, &mtk_iommu_ops);
+ 
+ 	ret = iommu_device_register(&data->iommu);
+ 	if (ret)
+-		return ret;
++		goto out_sysfs_remove;
+ 
+-	if (!iommu_present(&platform_bus_type))
+-		bus_set_iommu(&platform_bus_type,  &mtk_iommu_ops);
++	if (!iommu_present(&platform_bus_type)) {
++		ret = bus_set_iommu(&platform_bus_type,  &mtk_iommu_ops);
++		if (ret)
++			goto out_dev_unreg;
++	}
+ 
+-	return component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
++	ret = component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
++	if (ret)
++		goto out_bus_set_null;
++	return ret;
++
++out_bus_set_null:
++	bus_set_iommu(&platform_bus_type, NULL);
++out_dev_unreg:
++	iommu_device_unregister(&data->iommu);
++out_sysfs_remove:
++	iommu_device_sysfs_remove(&data->iommu);
++out_clk_unprepare:
++	clk_disable_unprepare(data->bclk);
++	return ret;
+ }
+ 
+ static int mtk_iommu_remove(struct platform_device *pdev)
+diff --git a/drivers/irqchip/irq-gic-pm.c b/drivers/irqchip/irq-gic-pm.c
+index ecafd295c31ce..21c5decfc55b4 100644
+--- a/drivers/irqchip/irq-gic-pm.c
++++ b/drivers/irqchip/irq-gic-pm.c
+@@ -112,7 +112,7 @@ static int gic_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		goto rpm_disable;
+ 
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 0928fd1f0e0c1..60b3a4aabe6b8 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -3233,6 +3233,7 @@ static int
+ hfcm_l1callback(struct dchannel *dch, u_int cmd)
+ {
+ 	struct hfc_multi	*hc = dch->hw;
++	struct sk_buff_head	free_queue;
+ 	u_long	flags;
+ 
+ 	switch (cmd) {
+@@ -3261,6 +3262,7 @@ hfcm_l1callback(struct dchannel *dch, u_int cmd)
+ 		l1_event(dch->l1, HW_POWERUP_IND);
+ 		break;
+ 	case HW_DEACT_REQ:
++		__skb_queue_head_init(&free_queue);
+ 		/* start deactivation */
+ 		spin_lock_irqsave(&hc->lock, flags);
+ 		if (hc->ctype == HFC_TYPE_E1) {
+@@ -3280,20 +3282,21 @@ hfcm_l1callback(struct dchannel *dch, u_int cmd)
+ 				plxsd_checksync(hc, 0);
+ 			}
+ 		}
+-		skb_queue_purge(&dch->squeue);
++		skb_queue_splice_init(&dch->squeue, &free_queue);
+ 		if (dch->tx_skb) {
+-			dev_kfree_skb(dch->tx_skb);
++			__skb_queue_tail(&free_queue, dch->tx_skb);
+ 			dch->tx_skb = NULL;
+ 		}
+ 		dch->tx_idx = 0;
+ 		if (dch->rx_skb) {
+-			dev_kfree_skb(dch->rx_skb);
++			__skb_queue_tail(&free_queue, dch->rx_skb);
+ 			dch->rx_skb = NULL;
+ 		}
+ 		test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+ 		if (test_and_clear_bit(FLG_BUSY_TIMER, &dch->Flags))
+ 			del_timer(&dch->timer);
+ 		spin_unlock_irqrestore(&hc->lock, flags);
++		__skb_queue_purge(&free_queue);
+ 		break;
+ 	case HW_POWERUP_REQ:
+ 		spin_lock_irqsave(&hc->lock, flags);
+@@ -3400,6 +3403,9 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
+ 	case PH_DEACTIVATE_REQ:
+ 		test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
+ 		if (dch->dev.D.protocol != ISDN_P_TE_S0) {
++			struct sk_buff_head free_queue;
++
++			__skb_queue_head_init(&free_queue);
+ 			spin_lock_irqsave(&hc->lock, flags);
+ 			if (debug & DEBUG_HFCMULTI_MSG)
+ 				printk(KERN_DEBUG
+@@ -3421,14 +3427,14 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
+ 				/* deactivate */
+ 				dch->state = 1;
+ 			}
+-			skb_queue_purge(&dch->squeue);
++			skb_queue_splice_init(&dch->squeue, &free_queue);
+ 			if (dch->tx_skb) {
+-				dev_kfree_skb(dch->tx_skb);
++				__skb_queue_tail(&free_queue, dch->tx_skb);
+ 				dch->tx_skb = NULL;
+ 			}
+ 			dch->tx_idx = 0;
+ 			if (dch->rx_skb) {
+-				dev_kfree_skb(dch->rx_skb);
++				__skb_queue_tail(&free_queue, dch->rx_skb);
+ 				dch->rx_skb = NULL;
+ 			}
+ 			test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+@@ -3440,6 +3446,7 @@ handle_dmsg(struct mISDNchannel *ch, struct sk_buff *skb)
+ #endif
+ 			ret = 0;
+ 			spin_unlock_irqrestore(&hc->lock, flags);
++			__skb_queue_purge(&free_queue);
+ 		} else
+ 			ret = l1_event(dch->l1, hh->prim);
+ 		break;
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index 53349850f8669..de98e94711d22 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -1633,16 +1633,19 @@ hfcpci_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 		test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
+ 		spin_lock_irqsave(&hc->lock, flags);
+ 		if (hc->hw.protocol == ISDN_P_NT_S0) {
++			struct sk_buff_head free_queue;
++
++			__skb_queue_head_init(&free_queue);
+ 			/* prepare deactivation */
+ 			Write_hfc(hc, HFCPCI_STATES, 0x40);
+-			skb_queue_purge(&dch->squeue);
++			skb_queue_splice_init(&dch->squeue, &free_queue);
+ 			if (dch->tx_skb) {
+-				dev_kfree_skb(dch->tx_skb);
++				__skb_queue_tail(&free_queue, dch->tx_skb);
+ 				dch->tx_skb = NULL;
+ 			}
+ 			dch->tx_idx = 0;
+ 			if (dch->rx_skb) {
+-				dev_kfree_skb(dch->rx_skb);
++				__skb_queue_tail(&free_queue, dch->rx_skb);
+ 				dch->rx_skb = NULL;
+ 			}
+ 			test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+@@ -1655,10 +1658,12 @@ hfcpci_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 			hc->hw.mst_m &= ~HFCPCI_MASTER;
+ 			Write_hfc(hc, HFCPCI_MST_MODE, hc->hw.mst_m);
+ 			ret = 0;
++			spin_unlock_irqrestore(&hc->lock, flags);
++			__skb_queue_purge(&free_queue);
+ 		} else {
+ 			ret = l1_event(dch->l1, hh->prim);
++			spin_unlock_irqrestore(&hc->lock, flags);
+ 		}
+-		spin_unlock_irqrestore(&hc->lock, flags);
+ 		break;
+ 	}
+ 	if (!ret)
+diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c
+index c952002c6301d..c49081ed5734f 100644
+--- a/drivers/isdn/hardware/mISDN/hfcsusb.c
++++ b/drivers/isdn/hardware/mISDN/hfcsusb.c
+@@ -337,20 +337,24 @@ hfcusb_l2l1D(struct mISDNchannel *ch, struct sk_buff *skb)
+ 		test_and_clear_bit(FLG_L2_ACTIVATED, &dch->Flags);
+ 
+ 		if (hw->protocol == ISDN_P_NT_S0) {
++			struct sk_buff_head free_queue;
++
++			__skb_queue_head_init(&free_queue);
+ 			hfcsusb_ph_command(hw, HFC_L1_DEACTIVATE_NT);
+ 			spin_lock_irqsave(&hw->lock, flags);
+-			skb_queue_purge(&dch->squeue);
++			skb_queue_splice_init(&dch->squeue, &free_queue);
+ 			if (dch->tx_skb) {
+-				dev_kfree_skb(dch->tx_skb);
++				__skb_queue_tail(&free_queue, dch->tx_skb);
+ 				dch->tx_skb = NULL;
+ 			}
+ 			dch->tx_idx = 0;
+ 			if (dch->rx_skb) {
+-				dev_kfree_skb(dch->rx_skb);
++				__skb_queue_tail(&free_queue, dch->rx_skb);
+ 				dch->rx_skb = NULL;
+ 			}
+ 			test_and_clear_bit(FLG_TX_BUSY, &dch->Flags);
+ 			spin_unlock_irqrestore(&hw->lock, flags);
++			__skb_queue_purge(&free_queue);
+ #ifdef FIXME
+ 			if (test_and_clear_bit(FLG_L1_BUSY, &dch->Flags))
+ 				dchannel_sched_event(&hc->dch, D_CLEARBUSY);
+@@ -1344,7 +1348,7 @@ tx_iso_complete(struct urb *urb)
+ 					printk("\n");
+ 				}
+ 
+-				dev_kfree_skb(tx_skb);
++				dev_consume_skb_irq(tx_skb);
+ 				tx_skb = NULL;
+ 				if (fifo->dch && get_next_dframe(fifo->dch))
+ 					tx_skb = fifo->dch->tx_skb;
+diff --git a/drivers/macintosh/macio-adb.c b/drivers/macintosh/macio-adb.c
+index eb3adfb7f88d3..172a8b18c5799 100644
+--- a/drivers/macintosh/macio-adb.c
++++ b/drivers/macintosh/macio-adb.c
+@@ -106,6 +106,10 @@ int macio_init(void)
+ 		return -ENXIO;
+ 	}
+ 	adb = ioremap(r.start, sizeof(struct adb_regs));
++	if (!adb) {
++		of_node_put(adbs);
++		return -ENOMEM;
++	}
+ 
+ 	out_8(&adb->ctrl.r, 0);
+ 	out_8(&adb->intr.r, 0);
+diff --git a/drivers/macintosh/macio_asic.c b/drivers/macintosh/macio_asic.c
+index 07074820a1674..12c360ebd7e95 100644
+--- a/drivers/macintosh/macio_asic.c
++++ b/drivers/macintosh/macio_asic.c
+@@ -427,7 +427,7 @@ static struct macio_dev * macio_add_one_device(struct macio_chip *chip,
+ 	if (of_device_register(&dev->ofdev) != 0) {
+ 		printk(KERN_DEBUG"macio: device registration error for %s!\n",
+ 		       dev_name(&dev->ofdev.dev));
+-		kfree(dev);
++		put_device(&dev->ofdev.dev);
+ 		return NULL;
+ 	}
+ 
+diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
+index 118d27ee31c2f..7fd32b0183dc1 100644
+--- a/drivers/mcb/mcb-core.c
++++ b/drivers/mcb/mcb-core.c
+@@ -74,8 +74,10 @@ static int mcb_probe(struct device *dev)
+ 
+ 	get_device(dev);
+ 	ret = mdrv->probe(mdev, found_id);
+-	if (ret)
++	if (ret) {
+ 		module_put(carrier_mod);
++		put_device(dev);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
+index 7369bda3442fb..3636349648b4f 100644
+--- a/drivers/mcb/mcb-parse.c
++++ b/drivers/mcb/mcb-parse.c
+@@ -107,7 +107,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
+ 	return 0;
+ 
+ err:
+-	mcb_free_dev(mdev);
++	put_device(&mdev->dev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index af6d4f898e4c1..2ecd0db0f2945 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -551,11 +551,13 @@ static int __create_persistent_data_objects(struct dm_cache_metadata *cmd,
+ 	return r;
+ }
+ 
+-static void __destroy_persistent_data_objects(struct dm_cache_metadata *cmd)
++static void __destroy_persistent_data_objects(struct dm_cache_metadata *cmd,
++					      bool destroy_bm)
+ {
+ 	dm_sm_destroy(cmd->metadata_sm);
+ 	dm_tm_destroy(cmd->tm);
+-	dm_block_manager_destroy(cmd->bm);
++	if (destroy_bm)
++		dm_block_manager_destroy(cmd->bm);
+ }
+ 
+ typedef unsigned long (*flags_mutator)(unsigned long);
+@@ -826,7 +828,7 @@ static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev,
+ 		cmd2 = lookup(bdev);
+ 		if (cmd2) {
+ 			mutex_unlock(&table_lock);
+-			__destroy_persistent_data_objects(cmd);
++			__destroy_persistent_data_objects(cmd, true);
+ 			kfree(cmd);
+ 			return cmd2;
+ 		}
+@@ -874,7 +876,7 @@ void dm_cache_metadata_close(struct dm_cache_metadata *cmd)
+ 		mutex_unlock(&table_lock);
+ 
+ 		if (!cmd->fail_io)
+-			__destroy_persistent_data_objects(cmd);
++			__destroy_persistent_data_objects(cmd, true);
+ 		kfree(cmd);
+ 	}
+ }
+@@ -1808,14 +1810,52 @@ int dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd, bool *result)
+ 
+ int dm_cache_metadata_abort(struct dm_cache_metadata *cmd)
+ {
+-	int r;
++	int r = -EINVAL;
++	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
++
++	/* fail_io is double-checked with cmd->root_lock held below */
++	if (unlikely(cmd->fail_io))
++		return r;
++
++	/*
++	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
++	 * cmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
++	 * shrinker associated with the block manager's bufio client vs cmd root_lock).
++	 * - must take shrinker_rwsem without holding cmd->root_lock
++	 */
++	new_bm = dm_block_manager_create(cmd->bdev, DM_CACHE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
++					 CACHE_MAX_CONCURRENT_LOCKS);
+ 
+ 	WRITE_LOCK(cmd);
+-	__destroy_persistent_data_objects(cmd);
+-	r = __create_persistent_data_objects(cmd, false);
++	if (cmd->fail_io) {
++		WRITE_UNLOCK(cmd);
++		goto out;
++	}
++
++	__destroy_persistent_data_objects(cmd, false);
++	old_bm = cmd->bm;
++	if (IS_ERR(new_bm)) {
++		DMERR("could not create block manager during abort");
++		cmd->bm = NULL;
++		r = PTR_ERR(new_bm);
++		goto out_unlock;
++	}
++
++	cmd->bm = new_bm;
++	r = __open_or_format_metadata(cmd, false);
++	if (r) {
++		cmd->bm = NULL;
++		goto out_unlock;
++	}
++	new_bm = NULL;
++out_unlock:
+ 	if (r)
+ 		cmd->fail_io = true;
+ 	WRITE_UNLOCK(cmd);
++	dm_block_manager_destroy(old_bm);
++out:
++	if (new_bm && !IS_ERR(new_bm))
++		dm_block_manager_destroy(new_bm);
+ 
+ 	return r;
+ }
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 2ddd575e97f73..df7bc45bc0ced 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -1010,16 +1010,16 @@ static void abort_transaction(struct cache *cache)
+ 	if (get_cache_mode(cache) >= CM_READ_ONLY)
+ 		return;
+ 
+-	if (dm_cache_metadata_set_needs_check(cache->cmd)) {
+-		DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
+-		set_cache_mode(cache, CM_FAIL);
+-	}
+-
+ 	DMERR_LIMIT("%s: aborting current metadata transaction", dev_name);
+ 	if (dm_cache_metadata_abort(cache->cmd)) {
+ 		DMERR("%s: failed to abort metadata transaction", dev_name);
+ 		set_cache_mode(cache, CM_FAIL);
+ 	}
++
++	if (dm_cache_metadata_set_needs_check(cache->cmd)) {
++		DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
++		set_cache_mode(cache, CM_FAIL);
++	}
+ }
+ 
+ static void metadata_operation_failed(struct cache *cache, const char *op, int r)
+@@ -1987,6 +1987,7 @@ static void destroy(struct cache *cache)
+ 	if (cache->prison)
+ 		dm_bio_prison_destroy_v2(cache->prison);
+ 
++	cancel_delayed_work_sync(&cache->waker);
+ 	if (cache->wq)
+ 		destroy_workqueue(cache->wq);
+ 
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index a6a5cee6b9430..f374f593fe555 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -660,6 +660,15 @@ static int __open_metadata(struct dm_pool_metadata *pmd)
+ 		goto bad_cleanup_data_sm;
+ 	}
+ 
++	/*
++	 * For pool metadata opening process, root setting is redundant
++	 * because it will be set again in __begin_transaction(). But dm
++	 * pool aborting process really needs to get last transaction's
++	 * root to avoid accessing broken btree.
++	 */
++	pmd->root = le64_to_cpu(disk_super->data_mapping_root);
++	pmd->details_root = le64_to_cpu(disk_super->device_details_root);
++
+ 	__setup_btree_details(pmd);
+ 	dm_bm_unlock(sblock);
+ 
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 435a2ee4a3927..386cb33953783 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2921,6 +2921,8 @@ static void __pool_destroy(struct pool *pool)
+ 	dm_bio_prison_destroy(pool->prison);
+ 	dm_kcopyd_client_destroy(pool->copier);
+ 
++	cancel_delayed_work_sync(&pool->waker);
++	cancel_delayed_work_sync(&pool->no_space_timeout);
+ 	if (pool->wq)
+ 		destroy_workqueue(pool->wq);
+ 
+@@ -3547,20 +3549,28 @@ static int pool_preresume(struct dm_target *ti)
+ 	 */
+ 	r = bind_control_target(pool, ti);
+ 	if (r)
+-		return r;
++		goto out;
+ 
+ 	r = maybe_resize_data_dev(ti, &need_commit1);
+ 	if (r)
+-		return r;
++		goto out;
+ 
+ 	r = maybe_resize_metadata_dev(ti, &need_commit2);
+ 	if (r)
+-		return r;
++		goto out;
+ 
+ 	if (need_commit1 || need_commit2)
+ 		(void) commit(pool);
++out:
++	/*
++	 * When a thin-pool is PM_FAIL, it cannot be rebuilt if
++	 * bio is in deferred list. Therefore need to return 0
++	 * to allow pool_resume() to flush IO.
++	 */
++	if (r && get_pool_mode(pool) == PM_FAIL)
++		r = 0;
+ 
+-	return 0;
++	return r;
+ }
+ 
+ static void pool_suspend_active_thins(struct pool *pool)
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 7cf9d34ce20eb..1c4c462787198 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -488,7 +488,7 @@ void md_bitmap_print_sb(struct bitmap *bitmap)
+ 	sb = kmap_atomic(bitmap->storage.sb_page);
+ 	pr_debug("%s: bitmap file superblock:\n", bmname(bitmap));
+ 	pr_debug("         magic: %08x\n", le32_to_cpu(sb->magic));
+-	pr_debug("       version: %d\n", le32_to_cpu(sb->version));
++	pr_debug("       version: %u\n", le32_to_cpu(sb->version));
+ 	pr_debug("          uuid: %08x.%08x.%08x.%08x\n",
+ 		 le32_to_cpu(*(__u32 *)(sb->uuid+0)),
+ 		 le32_to_cpu(*(__u32 *)(sb->uuid+4)),
+@@ -499,11 +499,11 @@ void md_bitmap_print_sb(struct bitmap *bitmap)
+ 	pr_debug("events cleared: %llu\n",
+ 		 (unsigned long long) le64_to_cpu(sb->events_cleared));
+ 	pr_debug("         state: %08x\n", le32_to_cpu(sb->state));
+-	pr_debug("     chunksize: %d B\n", le32_to_cpu(sb->chunksize));
+-	pr_debug("  daemon sleep: %ds\n", le32_to_cpu(sb->daemon_sleep));
++	pr_debug("     chunksize: %u B\n", le32_to_cpu(sb->chunksize));
++	pr_debug("  daemon sleep: %us\n", le32_to_cpu(sb->daemon_sleep));
+ 	pr_debug("     sync size: %llu KB\n",
+ 		 (unsigned long long)le64_to_cpu(sb->sync_size)/2);
+-	pr_debug("max write behind: %d\n", le32_to_cpu(sb->write_behind));
++	pr_debug("max write behind: %u\n", le32_to_cpu(sb->write_behind));
+ 	kunmap_atomic(sb);
+ }
+ 
+@@ -2101,7 +2101,8 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 			bytes = DIV_ROUND_UP(chunks, 8);
+ 			if (!bitmap->mddev->bitmap_info.external)
+ 				bytes += sizeof(bitmap_super_t);
+-		} while (bytes > (space << 9));
++		} while (bytes > (space << 9) && (chunkshift + BITMAP_BLOCK_SHIFT) <
++			(BITS_PER_BYTE * sizeof(((bitmap_super_t *)0)->chunksize) - 1));
+ 	} else
+ 		chunkshift = ffz(~chunksize) - BITMAP_BLOCK_SHIFT;
+ 
+@@ -2146,7 +2147,7 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 	bitmap->counts.missing_pages = pages;
+ 	bitmap->counts.chunkshift = chunkshift;
+ 	bitmap->counts.chunks = chunks;
+-	bitmap->mddev->bitmap_info.chunksize = 1 << (chunkshift +
++	bitmap->mddev->bitmap_info.chunksize = 1UL << (chunkshift +
+ 						     BITMAP_BLOCK_SHIFT);
+ 
+ 	blocks = min(old_counts.chunks << old_counts.chunkshift,
+@@ -2172,8 +2173,8 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 				bitmap->counts.missing_pages = old_counts.pages;
+ 				bitmap->counts.chunkshift = old_counts.chunkshift;
+ 				bitmap->counts.chunks = old_counts.chunks;
+-				bitmap->mddev->bitmap_info.chunksize = 1 << (old_counts.chunkshift +
+-									     BITMAP_BLOCK_SHIFT);
++				bitmap->mddev->bitmap_info.chunksize =
++					1UL << (old_counts.chunkshift + BITMAP_BLOCK_SHIFT);
+ 				blocks = old_counts.chunks << old_counts.chunkshift;
+ 				pr_warn("Could not pre-allocate in-memory bitmap for cluster raid\n");
+ 				break;
+@@ -2191,20 +2192,23 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
+ 
+ 		if (set) {
+ 			bmc_new = md_bitmap_get_counter(&bitmap->counts, block, &new_blocks, 1);
+-			if (*bmc_new == 0) {
+-				/* need to set on-disk bits too. */
+-				sector_t end = block + new_blocks;
+-				sector_t start = block >> chunkshift;
+-				start <<= chunkshift;
+-				while (start < end) {
+-					md_bitmap_file_set_bit(bitmap, block);
+-					start += 1 << chunkshift;
++			if (bmc_new) {
++				if (*bmc_new == 0) {
++					/* need to set on-disk bits too. */
++					sector_t end = block + new_blocks;
++					sector_t start = block >> chunkshift;
++
++					start <<= chunkshift;
++					while (start < end) {
++						md_bitmap_file_set_bit(bitmap, block);
++						start += 1 << chunkshift;
++					}
++					*bmc_new = 2;
++					md_bitmap_count_page(&bitmap->counts, block, 1);
++					md_bitmap_set_pending(&bitmap->counts, block);
+ 				}
+-				*bmc_new = 2;
+-				md_bitmap_count_page(&bitmap->counts, block, 1);
+-				md_bitmap_set_pending(&bitmap->counts, block);
++				*bmc_new |= NEEDED_MASK;
+ 			}
+-			*bmc_new |= NEEDED_MASK;
+ 			if (new_blocks < old_blocks)
+ 				old_blocks = new_blocks;
+ 		}
+@@ -2496,6 +2500,9 @@ chunksize_store(struct mddev *mddev, const char *buf, size_t len)
+ 	if (csize < 512 ||
+ 	    !is_power_of_2(csize))
+ 		return -EINVAL;
++	if (BITS_PER_LONG > 32 && csize >= (1ULL << (BITS_PER_BYTE *
++		sizeof(((bitmap_super_t *)0)->chunksize))))
++		return -EOVERFLOW;
+ 	mddev->bitmap_info.chunksize = csize;
+ 	return len;
+ }
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 38cbde9061339..89d4dcc5253e5 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -417,13 +417,14 @@ static void md_end_flush(struct bio *bio)
+ 	struct md_rdev *rdev = bio->bi_private;
+ 	struct mddev *mddev = rdev->mddev;
+ 
++	bio_put(bio);
++
+ 	rdev_dec_pending(rdev, mddev);
+ 
+ 	if (atomic_dec_and_test(&mddev->flush_pending)) {
+ 		/* The pre-request flush has finished */
+ 		queue_work(md_wq, &mddev->flush_work);
+ 	}
+-	bio_put(bio);
+ }
+ 
+ static void md_submit_flush_data(struct work_struct *ws);
+@@ -821,10 +822,12 @@ static void super_written(struct bio *bio)
+ 	} else
+ 		clear_bit(LastDev, &rdev->flags);
+ 
++	bio_put(bio);
++
++	rdev_dec_pending(rdev, mddev);
++
+ 	if (atomic_dec_and_test(&mddev->pending_writes))
+ 		wake_up(&mddev->sb_wait);
+-	rdev_dec_pending(rdev, mddev);
+-	bio_put(bio);
+ }
+ 
+ void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 876d3e1339d17..0f8b1fb3d0517 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -3110,6 +3110,7 @@ static int raid1_run(struct mddev *mddev)
+ 	 * RAID1 needs at least one disk in active
+ 	 */
+ 	if (conf->raid_disks - mddev->degraded < 1) {
++		md_unregister_thread(&conf->thread);
+ 		ret = -EINVAL;
+ 		goto abort;
+ 	}
+diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
+index 9e0ef3934fa3d..ab0ce852c5b2a 100644
+--- a/drivers/media/dvb-core/dmxdev.c
++++ b/drivers/media/dvb-core/dmxdev.c
+@@ -800,6 +800,11 @@ static int dvb_demux_open(struct inode *inode, struct file *file)
+ 	if (mutex_lock_interruptible(&dmxdev->mutex))
+ 		return -ERESTARTSYS;
+ 
++	if (dmxdev->exit) {
++		mutex_unlock(&dmxdev->mutex);
++		return -ENODEV;
++	}
++
+ 	for (i = 0; i < dmxdev->filternum; i++)
+ 		if (dmxdev->filter[i].state == DMXDEV_STATE_FREE)
+ 			break;
+@@ -1457,7 +1462,10 @@ EXPORT_SYMBOL(dvb_dmxdev_init);
+ 
+ void dvb_dmxdev_release(struct dmxdev *dmxdev)
+ {
++	mutex_lock(&dmxdev->mutex);
+ 	dmxdev->exit = 1;
++	mutex_unlock(&dmxdev->mutex);
++
+ 	if (dmxdev->dvbdev->users > 1) {
+ 		wait_event(dmxdev->dvbdev->wait_queue,
+ 				dmxdev->dvbdev->users == 1);
+diff --git a/drivers/media/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb-core/dvb_ca_en50221.c
+index 4d371cea0d5df..36afcea709a75 100644
+--- a/drivers/media/dvb-core/dvb_ca_en50221.c
++++ b/drivers/media/dvb-core/dvb_ca_en50221.c
+@@ -168,7 +168,7 @@ static void dvb_ca_private_free(struct dvb_ca_private *ca)
+ {
+ 	unsigned int i;
+ 
+-	dvb_free_device(ca->dvbdev);
++	dvb_device_put(ca->dvbdev);
+ 	for (i = 0; i < ca->slot_count; i++)
+ 		vfree(ca->slot_info[i].rx_buffer.data);
+ 
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index 8a61150ee249e..e0650bc2df613 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -147,7 +147,7 @@ static void __dvb_frontend_free(struct dvb_frontend *fe)
+ 	struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ 
+ 	if (fepriv)
+-		dvb_free_device(fepriv->dvbdev);
++		dvb_device_put(fepriv->dvbdev);
+ 
+ 	dvb_frontend_invoke_release(fe, fe->ops.release);
+ 
+@@ -2956,6 +2956,7 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
+ 		.name = fe->ops.info.name,
+ #endif
+ 	};
++	int ret;
+ 
+ 	dev_dbg(dvb->device, "%s:\n", __func__);
+ 
+@@ -2989,8 +2990,13 @@ int dvb_register_frontend(struct dvb_adapter *dvb,
+ 		 "DVB: registering adapter %i frontend %i (%s)...\n",
+ 		 fe->dvb->num, fe->id, fe->ops.info.name);
+ 
+-	dvb_register_device(fe->dvb, &fepriv->dvbdev, &dvbdev_template,
++	ret = dvb_register_device(fe->dvb, &fepriv->dvbdev, &dvbdev_template,
+ 			    fe, DVB_DEVICE_FRONTEND, 0);
++	if (ret) {
++		dvb_frontend_put(fe);
++		mutex_unlock(&frontend_mutex);
++		return ret;
++	}
+ 
+ 	/*
+ 	 * Initialize the cache to the proper values according with the
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index d8f19a4d214aa..e103999711fc3 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -107,7 +107,7 @@ static int dvb_device_open(struct inode *inode, struct file *file)
+ 		new_fops = fops_get(dvbdev->fops);
+ 		if (!new_fops)
+ 			goto fail;
+-		file->private_data = dvbdev;
++		file->private_data = dvb_device_get(dvbdev);
+ 		replace_fops(file, new_fops);
+ 		if (file->f_op->open)
+ 			err = file->f_op->open(inode, file);
+@@ -171,6 +171,9 @@ int dvb_generic_release(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	dvbdev->users++;
++
++	dvb_device_put(dvbdev);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(dvb_generic_release);
+@@ -342,6 +345,7 @@ static int dvb_create_media_entity(struct dvb_device *dvbdev,
+ 				       GFP_KERNEL);
+ 		if (!dvbdev->pads) {
+ 			kfree(dvbdev->entity);
++			dvbdev->entity = NULL;
+ 			return -ENOMEM;
+ 		}
+ 	}
+@@ -488,6 +492,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 	}
+ 
+ 	memcpy(dvbdev, template, sizeof(struct dvb_device));
++	kref_init(&dvbdev->ref);
+ 	dvbdev->type = type;
+ 	dvbdev->id = id;
+ 	dvbdev->adapter = adap;
+@@ -518,7 +523,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ #endif
+ 
+ 	dvbdev->minor = minor;
+-	dvb_minors[minor] = dvbdev;
++	dvb_minors[minor] = dvb_device_get(dvbdev);
+ 	up_write(&minor_rwsem);
+ 
+ 	ret = dvb_register_media_device(dvbdev, type, minor, demux_sink_pads);
+@@ -559,6 +564,7 @@ void dvb_remove_device(struct dvb_device *dvbdev)
+ 
+ 	down_write(&minor_rwsem);
+ 	dvb_minors[dvbdev->minor] = NULL;
++	dvb_device_put(dvbdev);
+ 	up_write(&minor_rwsem);
+ 
+ 	dvb_media_device_free(dvbdev);
+@@ -570,21 +576,34 @@ void dvb_remove_device(struct dvb_device *dvbdev)
+ EXPORT_SYMBOL(dvb_remove_device);
+ 
+ 
+-void dvb_free_device(struct dvb_device *dvbdev)
++static void dvb_free_device(struct kref *ref)
+ {
+-	if (!dvbdev)
+-		return;
++	struct dvb_device *dvbdev = container_of(ref, struct dvb_device, ref);
+ 
+ 	kfree (dvbdev->fops);
+ 	kfree (dvbdev);
+ }
+-EXPORT_SYMBOL(dvb_free_device);
++
++
++struct dvb_device *dvb_device_get(struct dvb_device *dvbdev)
++{
++	kref_get(&dvbdev->ref);
++	return dvbdev;
++}
++EXPORT_SYMBOL(dvb_device_get);
++
++
++void dvb_device_put(struct dvb_device *dvbdev)
++{
++	if (dvbdev)
++		kref_put(&dvbdev->ref, dvb_free_device);
++}
+ 
+ 
+ void dvb_unregister_device(struct dvb_device *dvbdev)
+ {
+ 	dvb_remove_device(dvbdev);
+-	dvb_free_device(dvbdev);
++	dvb_device_put(dvbdev);
+ }
+ EXPORT_SYMBOL(dvb_unregister_device);
+ 
+diff --git a/drivers/media/dvb-frontends/bcm3510.c b/drivers/media/dvb-frontends/bcm3510.c
+index e92542b92d349..6457b0912d14d 100644
+--- a/drivers/media/dvb-frontends/bcm3510.c
++++ b/drivers/media/dvb-frontends/bcm3510.c
+@@ -649,6 +649,7 @@ static int bcm3510_download_firmware(struct dvb_frontend* fe)
+ 		deb_info("firmware chunk, addr: 0x%04x, len: 0x%04x, total length: 0x%04zx\n",addr,len,fw->size);
+ 		if ((ret = bcm3510_write_ram(st,addr,&b[i+4],len)) < 0) {
+ 			err("firmware download failed: %d\n",ret);
++			release_firmware(fw);
+ 			return ret;
+ 		}
+ 		i += 4 + len;
+diff --git a/drivers/media/dvb-frontends/stv0288.c b/drivers/media/dvb-frontends/stv0288.c
+index c9a9fa4e2c1b2..ca5d8e41c114f 100644
+--- a/drivers/media/dvb-frontends/stv0288.c
++++ b/drivers/media/dvb-frontends/stv0288.c
+@@ -452,9 +452,8 @@ static int stv0288_set_frontend(struct dvb_frontend *fe)
+ 	struct stv0288_state *state = fe->demodulator_priv;
+ 	struct dtv_frontend_properties *c = &fe->dtv_property_cache;
+ 
+-	char tm;
+-	unsigned char tda[3];
+-	u8 reg, time_out = 0;
++	u8 tda[3], reg, time_out = 0;
++	s8 tm;
+ 
+ 	dprintk("%s : FE_SET_FRONTEND\n", __func__);
+ 
+diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c
+index 034ebf7540070..c2a6d1d5217a0 100644
+--- a/drivers/media/i2c/ad5820.c
++++ b/drivers/media/i2c/ad5820.c
+@@ -321,18 +321,18 @@ static int ad5820_probe(struct i2c_client *client,
+ 
+ 	ret = media_entity_pads_init(&coil->subdev.entity, 0, NULL);
+ 	if (ret < 0)
+-		goto cleanup2;
++		goto clean_mutex;
+ 
+ 	ret = v4l2_async_register_subdev(&coil->subdev);
+ 	if (ret < 0)
+-		goto cleanup;
++		goto clean_entity;
+ 
+ 	return ret;
+ 
+-cleanup2:
+-	mutex_destroy(&coil->power_lock);
+-cleanup:
++clean_entity:
+ 	media_entity_cleanup(&coil->subdev.entity);
++clean_mutex:
++	mutex_destroy(&coil->power_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/pci/saa7164/saa7164-core.c b/drivers/media/pci/saa7164/saa7164-core.c
+index 5102519df1083..dba9071e8507f 100644
+--- a/drivers/media/pci/saa7164/saa7164-core.c
++++ b/drivers/media/pci/saa7164/saa7164-core.c
+@@ -1237,7 +1237,7 @@ static int saa7164_initdev(struct pci_dev *pci_dev,
+ 
+ 	if (saa7164_dev_setup(dev) < 0) {
+ 		err = -EINVAL;
+-		goto fail_free;
++		goto fail_dev;
+ 	}
+ 
+ 	/* print pci info */
+@@ -1405,6 +1405,8 @@ fail_fw:
+ 
+ fail_irq:
+ 	saa7164_dev_unregister(dev);
++fail_dev:
++	pci_disable_device(pci_dev);
+ fail_free:
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+ 	kfree(dev);
+diff --git a/drivers/media/pci/solo6x10/solo6x10-core.c b/drivers/media/pci/solo6x10/solo6x10-core.c
+index 19ffd2ed3cc7b..d3bca099b78bb 100644
+--- a/drivers/media/pci/solo6x10/solo6x10-core.c
++++ b/drivers/media/pci/solo6x10/solo6x10-core.c
+@@ -429,6 +429,7 @@ static int solo_sysfs_init(struct solo_dev *solo_dev)
+ 		     solo_dev->nr_chans);
+ 
+ 	if (device_register(dev)) {
++		put_device(dev);
+ 		dev->parent = NULL;
+ 		return -ENOMEM;
+ 	}
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index c3eaddced7214..e73bf8e14f331 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -670,7 +670,7 @@ static void coda_setup_iram(struct coda_ctx *ctx)
+ 		/* Only H.264BP and H.263P3 are considered */
+ 		iram_info->buf_dbk_y_use = coda_iram_alloc(iram_info, w64);
+ 		iram_info->buf_dbk_c_use = coda_iram_alloc(iram_info, w64);
+-		if (!iram_info->buf_dbk_c_use)
++		if (!iram_info->buf_dbk_y_use || !iram_info->buf_dbk_c_use)
+ 			goto out;
+ 		iram_info->axi_sram_use |= dbk_bits;
+ 
+@@ -694,7 +694,7 @@ static void coda_setup_iram(struct coda_ctx *ctx)
+ 
+ 		iram_info->buf_dbk_y_use = coda_iram_alloc(iram_info, w128);
+ 		iram_info->buf_dbk_c_use = coda_iram_alloc(iram_info, w128);
+-		if (!iram_info->buf_dbk_c_use)
++		if (!iram_info->buf_dbk_y_use || !iram_info->buf_dbk_c_use)
+ 			goto out;
+ 		iram_info->axi_sram_use |= dbk_bits;
+ 
+@@ -901,10 +901,16 @@ static int coda_start_encoding(struct coda_ctx *ctx)
+ 	}
+ 
+ 	if (dst_fourcc == V4L2_PIX_FMT_JPEG) {
+-		if (!ctx->params.jpeg_qmat_tab[0])
++		if (!ctx->params.jpeg_qmat_tab[0]) {
+ 			ctx->params.jpeg_qmat_tab[0] = kmalloc(64, GFP_KERNEL);
+-		if (!ctx->params.jpeg_qmat_tab[1])
++			if (!ctx->params.jpeg_qmat_tab[0])
++				return -ENOMEM;
++		}
++		if (!ctx->params.jpeg_qmat_tab[1]) {
+ 			ctx->params.jpeg_qmat_tab[1] = kmalloc(64, GFP_KERNEL);
++			if (!ctx->params.jpeg_qmat_tab[1])
++				return -ENOMEM;
++		}
+ 		coda_set_jpeg_compression_quality(ctx, ctx->params.jpeg_quality);
+ 	}
+ 
+diff --git a/drivers/media/platform/exynos4-is/fimc-core.c b/drivers/media/platform/exynos4-is/fimc-core.c
+index d8d8c9902b19f..cc8bc41b7db74 100644
+--- a/drivers/media/platform/exynos4-is/fimc-core.c
++++ b/drivers/media/platform/exynos4-is/fimc-core.c
+@@ -1255,7 +1255,7 @@ int __init fimc_register_driver(void)
+ 	return platform_driver_register(&fimc_driver);
+ }
+ 
+-void __exit fimc_unregister_driver(void)
++void fimc_unregister_driver(void)
+ {
+ 	platform_driver_unregister(&fimc_driver);
+ }
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index 3261dc72cc614..03171f2cf2968 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -1566,7 +1566,11 @@ static int __init fimc_md_init(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	return platform_driver_register(&fimc_md_driver);
++	ret = platform_driver_register(&fimc_md_driver);
++	if (ret)
++		fimc_unregister_driver();
++
++	return ret;
+ }
+ 
+ static void __exit fimc_md_exit(void)
+diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c
+index e81ebeb0506e4..23701ec8d7de7 100644
+--- a/drivers/media/platform/qcom/camss/camss-video.c
++++ b/drivers/media/platform/qcom/camss/camss-video.c
+@@ -438,7 +438,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
+ 
+ 	ret = media_pipeline_start(&vdev->entity, &video->pipe);
+ 	if (ret < 0)
+-		return ret;
++		goto flush_buffers;
+ 
+ 	ret = video_check_format(video);
+ 	if (ret < 0)
+@@ -467,6 +467,7 @@ static int video_start_streaming(struct vb2_queue *q, unsigned int count)
+ error:
+ 	media_pipeline_stop(&vdev->entity);
+ 
++flush_buffers:
+ 	video->ops->flush_buffers(video, VB2_BUF_STATE_QUEUED);
+ 
+ 	return ret;
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index 0fc101bc58d67..684aa8491a389 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1574,8 +1574,18 @@ static struct s5p_mfc_variant mfc_drvdata_v7 = {
+ 	.port_num	= MFC_NUM_PORTS_V7,
+ 	.buf_size	= &buf_size_v7,
+ 	.fw_name[0]     = "s5p-mfc-v7.fw",
+-	.clk_names	= {"mfc", "sclk_mfc"},
+-	.num_clocks	= 2,
++	.clk_names	= {"mfc"},
++	.num_clocks	= 1,
++};
++
++static struct s5p_mfc_variant mfc_drvdata_v7_3250 = {
++	.version        = MFC_VERSION_V7,
++	.version_bit    = MFC_V7_BIT,
++	.port_num       = MFC_NUM_PORTS_V7,
++	.buf_size       = &buf_size_v7,
++	.fw_name[0]     = "s5p-mfc-v7.fw",
++	.clk_names      = {"mfc", "sclk_mfc"},
++	.num_clocks     = 2,
+ };
+ 
+ static struct s5p_mfc_buf_size_v6 mfc_buf_size_v8 = {
+@@ -1645,6 +1655,9 @@ static const struct of_device_id exynos_mfc_match[] = {
+ 	}, {
+ 		.compatible = "samsung,mfc-v7",
+ 		.data = &mfc_drvdata_v7,
++	}, {
++		.compatible = "samsung,exynos3250-mfc",
++		.data = &mfc_drvdata_v7_3250,
+ 	}, {
+ 		.compatible = "samsung,mfc-v8",
+ 		.data = &mfc_drvdata_v8,
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c b/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
+index ee7b15b335e08..31bbf1cc53886 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
+@@ -472,8 +472,10 @@ void s5p_mfc_close_mfc_inst(struct s5p_mfc_dev *dev, struct s5p_mfc_ctx *ctx)
+ 	s5p_mfc_hw_call(dev->mfc_ops, try_run, dev);
+ 	/* Wait until instance is returned or timeout occurred */
+ 	if (s5p_mfc_wait_for_done_ctx(ctx,
+-				S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0))
++				S5P_MFC_R2H_CMD_CLOSE_INSTANCE_RET, 0)){
++		clear_work_bit_irqsave(ctx);
+ 		mfc_err("Err returning instance\n");
++	}
+ 
+ 	/* Free resources */
+ 	s5p_mfc_hw_call(dev->mfc_ops, release_codec_buffers, ctx);
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c b/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
+index 3ad4f5073002c..cc8d101eb2b06 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
+@@ -1224,6 +1224,7 @@ static int enc_post_frame_start(struct s5p_mfc_ctx *ctx)
+ 	unsigned long mb_y_addr, mb_c_addr;
+ 	int slice_type;
+ 	unsigned int strm_size;
++	bool src_ready;
+ 
+ 	slice_type = s5p_mfc_hw_call(dev->mfc_ops, get_enc_slice_type, dev);
+ 	strm_size = s5p_mfc_hw_call(dev->mfc_ops, get_enc_strm_size, dev);
+@@ -1263,7 +1264,8 @@ static int enc_post_frame_start(struct s5p_mfc_ctx *ctx)
+ 			}
+ 		}
+ 	}
+-	if ((ctx->src_queue_cnt > 0) && (ctx->state == MFCINST_RUNNING)) {
++	if (ctx->src_queue_cnt > 0 && (ctx->state == MFCINST_RUNNING ||
++				       ctx->state == MFCINST_FINISHING)) {
+ 		mb_entry = list_entry(ctx->src_queue.next, struct s5p_mfc_buf,
+ 									list);
+ 		if (mb_entry->flags & MFC_BUF_FLAG_USED) {
+@@ -1294,7 +1296,13 @@ static int enc_post_frame_start(struct s5p_mfc_ctx *ctx)
+ 		vb2_set_plane_payload(&mb_entry->b->vb2_buf, 0, strm_size);
+ 		vb2_buffer_done(&mb_entry->b->vb2_buf, VB2_BUF_STATE_DONE);
+ 	}
+-	if ((ctx->src_queue_cnt == 0) || (ctx->dst_queue_cnt == 0))
++
++	src_ready = true;
++	if (ctx->state == MFCINST_RUNNING && ctx->src_queue_cnt == 0)
++		src_ready = false;
++	if (ctx->state == MFCINST_FINISHING && ctx->ref_queue_cnt == 0)
++		src_ready = false;
++	if (!src_ready || ctx->dst_queue_cnt == 0)
+ 		clear_work_bit(ctx);
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c b/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
+index 7c629be432052..1171b76df036d 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
+@@ -1063,7 +1063,7 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 	}
+ 
+ 	/* aspect ratio VUI */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 5);
+ 	reg |= ((p_h264->vui_sar & 0x1) << 5);
+ 	writel(reg, mfc_regs->e_h264_options);
+@@ -1086,7 +1086,7 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 
+ 	/* intra picture period for H.264 open GOP */
+ 	/* control */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 4);
+ 	reg |= ((p_h264->open_gop & 0x1) << 4);
+ 	writel(reg, mfc_regs->e_h264_options);
+@@ -1100,23 +1100,23 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 	}
+ 
+ 	/* 'WEIGHTED_BI_PREDICTION' for B is disable */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x3 << 9);
+ 	writel(reg, mfc_regs->e_h264_options);
+ 
+ 	/* 'CONSTRAINED_INTRA_PRED_ENABLE' is disable */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 14);
+ 	writel(reg, mfc_regs->e_h264_options);
+ 
+ 	/* ASO */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 6);
+ 	reg |= ((p_h264->aso & 0x1) << 6);
+ 	writel(reg, mfc_regs->e_h264_options);
+ 
+ 	/* hier qp enable */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 8);
+ 	reg |= ((p_h264->open_gop & 0x1) << 8);
+ 	writel(reg, mfc_regs->e_h264_options);
+@@ -1137,7 +1137,7 @@ static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+ 	writel(reg, mfc_regs->e_h264_num_t_layer);
+ 
+ 	/* frame packing SEI generation */
+-	readl(mfc_regs->e_h264_options);
++	reg = readl(mfc_regs->e_h264_options);
+ 	reg &= ~(0x1 << 25);
+ 	reg |= ((p_h264->sei_frame_packing & 0x1) << 25);
+ 	writel(reg, mfc_regs->e_h264_options);
+diff --git a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+index 3c05b3dc49ec0..98c14565bbfd1 100644
+--- a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
++++ b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+@@ -943,6 +943,7 @@ static int configure_channels(struct c8sectpfei *fei)
+ 		if (ret) {
+ 			dev_err(fei->dev,
+ 				"configure_memdma_and_inputblock failed\n");
++			of_node_put(child);
+ 			goto err_unmap;
+ 		}
+ 		index++;
+diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
+index 48f2c9c96fc92..ded48b7ad7744 100644
+--- a/drivers/media/platform/vivid/vivid-vid-cap.c
++++ b/drivers/media/platform/vivid/vivid-vid-cap.c
+@@ -923,6 +923,7 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 			if (dev->has_compose_cap) {
+ 				v4l2_rect_set_min_size(compose, &min_rect);
+ 				v4l2_rect_set_max_size(compose, &max_rect);
++				v4l2_rect_map_inside(compose, &fmt);
+ 			}
+ 			dev->fmt_cap_rect = fmt;
+ 			tpg_s_buf_height(&dev->tpg, fmt.height);
+diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c
+index ba43a727c0b95..51b961cfba7cb 100644
+--- a/drivers/media/radio/si470x/radio-si470x-usb.c
++++ b/drivers/media/radio/si470x/radio-si470x-usb.c
+@@ -742,8 +742,10 @@ static int si470x_usb_driver_probe(struct usb_interface *intf,
+ 
+ 	/* start radio */
+ 	retval = si470x_start_usb(radio);
+-	if (retval < 0)
++	if (retval < 0 && !radio->int_in_running)
+ 		goto err_buf;
++	else if (retval < 0)	/* in case of radio->int_in_running == 1 */
++		goto err_all;
+ 
+ 	/* set initial frequency */
+ 	si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 6b10363fb6f09..99bb7380ee0e5 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -613,15 +613,14 @@ static int send_packet(struct imon_context *ictx)
+ 		pr_err_ratelimited("error submitting urb(%d)\n", retval);
+ 	} else {
+ 		/* Wait for transmission to complete (or abort) */
+-		mutex_unlock(&ictx->lock);
+ 		retval = wait_for_completion_interruptible(
+ 				&ictx->tx.finished);
+ 		if (retval) {
+ 			usb_kill_urb(ictx->tx_urb);
+ 			pr_err_ratelimited("task interrupted\n");
+ 		}
+-		mutex_lock(&ictx->lock);
+ 
++		ictx->tx.busy = false;
+ 		retval = ictx->tx.status;
+ 		if (retval)
+ 			pr_err_ratelimited("packet tx failed (%d)\n", retval);
+@@ -928,7 +927,8 @@ static ssize_t vfd_write(struct file *file, const char __user *buf,
+ 		return -ENODEV;
+ 	}
+ 
+-	mutex_lock(&ictx->lock);
++	if (mutex_lock_interruptible(&ictx->lock))
++		return -ERESTARTSYS;
+ 
+ 	if (!ictx->dev_present_intf0) {
+ 		pr_err_ratelimited("no iMON device present\n");
+diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
+index 990719727dc37..7d71ac7811ebd 100644
+--- a/drivers/media/usb/dvb-usb/az6027.c
++++ b/drivers/media/usb/dvb-usb/az6027.c
+@@ -978,6 +978,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
+ 		if (msg[i].addr == 0x99) {
+ 			req = 0xBE;
+ 			index = 0;
++			if (msg[i].len < 1) {
++				i = -EOPNOTSUPP;
++				break;
++			}
+ 			value = msg[i].buf[0] & 0x00ff;
+ 			length = 1;
+ 			az6027_usb_out_op(d, req, value, index, data, length);
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 4b1445d806e5b..16be32b19ca17 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -84,7 +84,7 @@ static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
+ 
+ 		ret = dvb_usb_adapter_stream_init(adap);
+ 		if (ret)
+-			return ret;
++			goto stream_init_err;
+ 
+ 		ret = dvb_usb_adapter_dvb_init(adap, adapter_nrs);
+ 		if (ret)
+@@ -117,6 +117,8 @@ frontend_init_err:
+ 	dvb_usb_adapter_dvb_exit(adap);
+ dvb_init_err:
+ 	dvb_usb_adapter_stream_exit(adap);
++stream_init_err:
++	kfree(adap->priv);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 08f4a512afad2..3153cf1651a22 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -963,10 +963,10 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
+ 	 * if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_afu(afu)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_afu_add(afu)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	/*
+ 	 * pHyp doesn't expose the programming models supported by the
+@@ -982,7 +982,7 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
+ 		afu->modes_supported = CXL_MODE_DIRECTED;
+ 
+ 	if ((rc = cxl_afu_select_best_mode(afu)))
+-		goto err_put2;
++		goto err_remove_sysfs;
+ 
+ 	adapter->afu[afu->slice] = afu;
+ 
+@@ -1002,10 +1002,12 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
+ 
+ 	return 0;
+ 
+-err_put2:
++err_remove_sysfs:
+ 	cxl_sysfs_afu_remove(afu);
+-err_put1:
+-	device_unregister(&afu->dev);
++err_del_dev:
++	device_del(&afu->dev);
++err_put_dev:
++	put_device(&afu->dev);
+ 	free = false;
+ 	guest_release_serr_irq(afu);
+ err2:
+@@ -1139,18 +1141,20 @@ struct cxl *cxl_guest_init_adapter(struct device_node *np, struct platform_devic
+ 	 * even if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_adapter(adapter)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_adapter_add(adapter)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	/* release the context lock as the adapter is configured */
+ 	cxl_adapter_context_unlock(adapter);
+ 
+ 	return adapter;
+ 
+-err_put1:
+-	device_unregister(&adapter->dev);
++err_del_dev:
++	device_del(&adapter->dev);
++err_put_dev:
++	put_device(&adapter->dev);
+ 	free = false;
+ 	cxl_guest_remove_chardev(adapter);
+ err1:
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index b9d3c87e9318d..8dbb1998c3124 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -391,6 +391,7 @@ int cxl_calc_capp_routing(struct pci_dev *dev, u64 *chipid,
+ 	rc = get_phb_index(np, phb_index);
+ 	if (rc) {
+ 		pr_err("cxl: invalid phb index\n");
++		of_node_put(np);
+ 		return rc;
+ 	}
+ 
+@@ -1168,10 +1169,10 @@ static int pci_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev)
+ 	 * if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_afu(afu)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_afu_add(afu)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	adapter->afu[afu->slice] = afu;
+ 
+@@ -1180,10 +1181,12 @@ static int pci_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev)
+ 
+ 	return 0;
+ 
+-err_put1:
++err_del_dev:
++	device_del(&afu->dev);
++err_put_dev:
+ 	pci_deconfigure_afu(afu);
+ 	cxl_debugfs_afu_remove(afu);
+-	device_unregister(&afu->dev);
++	put_device(&afu->dev);
+ 	return rc;
+ 
+ err_free_native:
+@@ -1671,23 +1674,25 @@ static struct cxl *cxl_pci_init_adapter(struct pci_dev *dev)
+ 	 * even if it returns an error!
+ 	 */
+ 	if ((rc = cxl_register_adapter(adapter)))
+-		goto err_put1;
++		goto err_put_dev;
+ 
+ 	if ((rc = cxl_sysfs_adapter_add(adapter)))
+-		goto err_put1;
++		goto err_del_dev;
+ 
+ 	/* Release the context lock as adapter is configured */
+ 	cxl_adapter_context_unlock(adapter);
+ 
+ 	return adapter;
+ 
+-err_put1:
++err_del_dev:
++	device_del(&adapter->dev);
++err_put_dev:
+ 	/* This should mirror cxl_remove_adapter, except without the
+ 	 * sysfs parts
+ 	 */
+ 	cxl_debugfs_adapter_remove(adapter);
+ 	cxl_deconfigure_adapter(adapter);
+-	device_unregister(&adapter->dev);
++	put_device(&adapter->dev);
+ 	return ERR_PTR(rc);
+ 
+ err_release:
+diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
+index 93be82fc338ad..16df731e63c56 100644
+--- a/drivers/misc/sgi-gru/grufault.c
++++ b/drivers/misc/sgi-gru/grufault.c
+@@ -661,6 +661,7 @@ int gru_handle_user_call_os(unsigned long cb)
+ 	if ((cb & (GRU_HANDLE_STRIDE - 1)) || ucbnum >= GRU_NUM_CB)
+ 		return -EINVAL;
+ 
++again:
+ 	gts = gru_find_lock_gts(cb);
+ 	if (!gts)
+ 		return -EINVAL;
+@@ -669,7 +670,11 @@ int gru_handle_user_call_os(unsigned long cb)
+ 	if (ucbnum >= gts->ts_cbr_au_count * GRU_CBR_AU_SIZE)
+ 		goto exit;
+ 
+-	gru_check_context_placement(gts);
++	if (gru_check_context_placement(gts)) {
++		gru_unlock_gts(gts);
++		gru_unload_context(gts, 1);
++		goto again;
++	}
+ 
+ 	/*
+ 	 * CCH may contain stale data if ts_force_cch_reload is set.
+@@ -887,7 +892,11 @@ int gru_set_context_option(unsigned long arg)
+ 		} else {
+ 			gts->ts_user_blade_id = req.val1;
+ 			gts->ts_user_chiplet_id = req.val0;
+-			gru_check_context_placement(gts);
++			if (gru_check_context_placement(gts)) {
++				gru_unlock_gts(gts);
++				gru_unload_context(gts, 1);
++				return ret;
++			}
+ 		}
+ 		break;
+ 	case sco_gseg_owner:
+diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
+index ab174f28e3bef..8c3e0317c115b 100644
+--- a/drivers/misc/sgi-gru/grumain.c
++++ b/drivers/misc/sgi-gru/grumain.c
+@@ -729,9 +729,10 @@ static int gru_check_chiplet_assignment(struct gru_state *gru,
+  * chiplet. Misassignment can occur if the process migrates to a different
+  * blade or if the user changes the selected blade/chiplet.
+  */
+-void gru_check_context_placement(struct gru_thread_state *gts)
++int gru_check_context_placement(struct gru_thread_state *gts)
+ {
+ 	struct gru_state *gru;
++	int ret = 0;
+ 
+ 	/*
+ 	 * If the current task is the context owner, verify that the
+@@ -739,15 +740,23 @@ void gru_check_context_placement(struct gru_thread_state *gts)
+ 	 * references. Pthread apps use non-owner references to the CBRs.
+ 	 */
+ 	gru = gts->ts_gru;
++	/*
++	 * If gru or gts->ts_tgid_owner isn't initialized properly, return
++	 * success to indicate that the caller does not need to unload the
++	 * gru context.The caller is responsible for their inspection and
++	 * reinitialization if needed.
++	 */
+ 	if (!gru || gts->ts_tgid_owner != current->tgid)
+-		return;
++		return ret;
+ 
+ 	if (!gru_check_chiplet_assignment(gru, gts)) {
+ 		STAT(check_context_unload);
+-		gru_unload_context(gts, 1);
++		ret = -EINVAL;
+ 	} else if (gru_retarget_intr(gts)) {
+ 		STAT(check_context_retarget_intr);
+ 	}
++
++	return ret;
+ }
+ 
+ 
+@@ -947,7 +956,12 @@ again:
+ 	mutex_lock(&gts->ts_ctxlock);
+ 	preempt_disable();
+ 
+-	gru_check_context_placement(gts);
++	if (gru_check_context_placement(gts)) {
++		preempt_enable();
++		mutex_unlock(&gts->ts_ctxlock);
++		gru_unload_context(gts, 1);
++		return VM_FAULT_NOPAGE;
++	}
+ 
+ 	if (!gts->ts_gru) {
+ 		STAT(load_user_context);
+diff --git a/drivers/misc/sgi-gru/grutables.h b/drivers/misc/sgi-gru/grutables.h
+index 3e041b6f7a683..2becf4c3f7cab 100644
+--- a/drivers/misc/sgi-gru/grutables.h
++++ b/drivers/misc/sgi-gru/grutables.h
+@@ -652,7 +652,7 @@ extern int gru_user_flush_tlb(unsigned long arg);
+ extern int gru_user_unload_context(unsigned long arg);
+ extern int gru_get_exception_detail(unsigned long arg);
+ extern int gru_set_context_option(unsigned long address);
+-extern void gru_check_context_placement(struct gru_thread_state *gts);
++extern int gru_check_context_placement(struct gru_thread_state *gts);
+ extern int gru_cpu_fault_map_id(void);
+ extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
+ extern void gru_flush_all_tlb(struct gru_state *gru);
+diff --git a/drivers/misc/tifm_7xx1.c b/drivers/misc/tifm_7xx1.c
+index 9ac95b48ef924..1d33c4facd445 100644
+--- a/drivers/misc/tifm_7xx1.c
++++ b/drivers/misc/tifm_7xx1.c
+@@ -194,7 +194,7 @@ static void tifm_7xx1_switch_media(struct work_struct *work)
+ 				spin_unlock_irqrestore(&fm->lock, flags);
+ 			}
+ 			if (sock)
+-				tifm_free_device(&sock->dev);
++				put_device(&sock->dev);
+ 		}
+ 		spin_lock_irqsave(&fm->lock, flags);
+ 	}
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index fbc56ee996827..d40bab3d9f4af 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -2262,6 +2262,7 @@ static int atmci_init_slot(struct atmel_mci *host,
+ {
+ 	struct mmc_host			*mmc;
+ 	struct atmel_mci_slot		*slot;
++	int ret;
+ 
+ 	mmc = mmc_alloc_host(sizeof(struct atmel_mci_slot), &host->pdev->dev);
+ 	if (!mmc)
+@@ -2342,11 +2343,13 @@ static int atmci_init_slot(struct atmel_mci *host,
+ 
+ 	host->slot[id] = slot;
+ 	mmc_regulator_get_supply(mmc);
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++		mmc_free_host(mmc);
++		return ret;
++	}
+ 
+ 	if (gpio_is_valid(slot->detect_pin)) {
+-		int ret;
+-
+ 		timer_setup(&slot->detect_timer, atmci_detect_change, 0);
+ 
+ 		ret = request_irq(gpio_to_irq(slot->detect_pin),
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 72f34a58928ca..dba98c2886f26 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1370,7 +1370,9 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	mmc->ops = &meson_mmc_ops;
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto err_free_irq;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index fa6268c0f1232..434ba4a39ded7 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -1808,7 +1808,9 @@ static int mmci_probe(struct amba_device *dev,
+ 	pm_runtime_set_autosuspend_delay(&dev->dev, 50);
+ 	pm_runtime_use_autosuspend(&dev->dev);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto clk_disable;
+ 
+ 	pm_runtime_put(&dev->dev);
+ 	return 0;
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index 1552d1f09c5c4..52307dce08ba8 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -660,7 +660,9 @@ static int moxart_probe(struct platform_device *pdev)
+ 		goto out;
+ 
+ 	dev_set_drvdata(dev, mmc);
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto out;
+ 
+ 	dev_dbg(dev, "IRQ=%d, FIFO is %d bytes\n", irq, host->fifo_width);
+ 
+diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
+index 2f604b312767d..6215feb976e32 100644
+--- a/drivers/mmc/host/mxcmmc.c
++++ b/drivers/mmc/host/mxcmmc.c
+@@ -1167,7 +1167,9 @@ static int mxcmci_probe(struct platform_device *pdev)
+ 
+ 	timer_setup(&host->watchdog, mxcmci_watchdog, 0);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto out_free_dma;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/rtsx_usb_sdmmc.c b/drivers/mmc/host/rtsx_usb_sdmmc.c
+index 9a3ff22dd0fe2..1e5f54054a531 100644
+--- a/drivers/mmc/host/rtsx_usb_sdmmc.c
++++ b/drivers/mmc/host/rtsx_usb_sdmmc.c
+@@ -1344,6 +1344,7 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev)
+ #ifdef RTSX_USB_USE_LEDS_CLASS
+ 	int err;
+ #endif
++	int ret;
+ 
+ 	ucr = usb_get_intfdata(to_usb_interface(pdev->dev.parent));
+ 	if (!ucr)
+@@ -1382,7 +1383,15 @@ static int rtsx_usb_sdmmc_drv_probe(struct platform_device *pdev)
+ 	INIT_WORK(&host->led_work, rtsx_usb_update_led);
+ 
+ #endif
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++#ifdef RTSX_USB_USE_LEDS_CLASS
++		led_classdev_unregister(&host->led);
++#endif
++		mmc_free_host(mmc);
++		pm_runtime_disable(&pdev->dev);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mmc/host/sdhci_f_sdh30.c b/drivers/mmc/host/sdhci_f_sdh30.c
+index 485f7591fae4f..ca9e05440da1d 100644
+--- a/drivers/mmc/host/sdhci_f_sdh30.c
++++ b/drivers/mmc/host/sdhci_f_sdh30.c
+@@ -199,6 +199,9 @@ static int sdhci_f_sdh30_probe(struct platform_device *pdev)
+ 	if (reg & SDHCI_CAN_DO_8BIT)
+ 		priv->vendor_hs200 = F_SDH30_EMMC_HS200;
+ 
++	if (!(reg & SDHCI_TIMEOUT_CLK_MASK))
++		host->quirks |= SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK;
++
+ 	ret = sdhci_add_host(host);
+ 	if (ret)
+ 		goto err_add_host;
+diff --git a/drivers/mmc/host/toshsd.c b/drivers/mmc/host/toshsd.c
+index dd961c54a6a9a..9236965b00fd9 100644
+--- a/drivers/mmc/host/toshsd.c
++++ b/drivers/mmc/host/toshsd.c
+@@ -655,7 +655,9 @@ static int toshsd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto unmap;
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto free_irq;
+ 
+ 	base = pci_resource_start(pdev, 0);
+ 	dev_dbg(&pdev->dev, "MMIO %pa, IRQ %d\n", &base, pdev->irq);
+@@ -664,6 +666,8 @@ static int toshsd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	return 0;
+ 
++free_irq:
++	free_irq(pdev->irq, host);
+ unmap:
+ 	pci_iounmap(pdev, host->ioaddr);
+ release:
+diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
+index 1b66466d2ed45..b78e96b5289b2 100644
+--- a/drivers/mmc/host/via-sdmmc.c
++++ b/drivers/mmc/host/via-sdmmc.c
+@@ -1166,7 +1166,9 @@ static int via_sd_probe(struct pci_dev *pcidev,
+ 	    pcidev->subsystem_device == 0x3891)
+ 		sdhost->quirks = VIA_CRDR_QUIRK_300MS_PWRDELAY;
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto unmap;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index aa8905753be33..e74ab79c99cd9 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -2052,6 +2052,7 @@ static void vub300_enable_sdio_irq(struct mmc_host *mmc, int enable)
+ 		return;
+ 	kref_get(&vub300->kref);
+ 	if (enable) {
++		set_current_state(TASK_RUNNING);
+ 		mutex_lock(&vub300->irq_mutex);
+ 		if (vub300->irqs_queued) {
+ 			vub300->irqs_queued -= 1;
+@@ -2067,6 +2068,7 @@ static void vub300_enable_sdio_irq(struct mmc_host *mmc, int enable)
+ 			vub300_queue_poll_work(vub300, 0);
+ 		}
+ 		mutex_unlock(&vub300->irq_mutex);
++		set_current_state(TASK_INTERRUPTIBLE);
+ 	} else {
+ 		vub300->irq_enabled = 0;
+ 	}
+@@ -2309,14 +2311,14 @@ static int vub300_probe(struct usb_interface *interface,
+ 				0x0000, 0x0000, &vub300->system_port_status,
+ 				sizeof(vub300->system_port_status), 1000);
+ 	if (retval < 0) {
+-		goto error4;
++		goto error5;
+ 	} else if (sizeof(vub300->system_port_status) == retval) {
+ 		vub300->card_present =
+ 			(0x0001 & vub300->system_port_status.port_flags) ? 1 : 0;
+ 		vub300->read_only =
+ 			(0x0010 & vub300->system_port_status.port_flags) ? 1 : 0;
+ 	} else {
+-		goto error4;
++		goto error5;
+ 	}
+ 	usb_set_intfdata(interface, vub300);
+ 	INIT_DELAYED_WORK(&vub300->pollwork, vub300_pollwork_thread);
+@@ -2339,8 +2341,13 @@ static int vub300_probe(struct usb_interface *interface,
+ 			 "USB vub300 remote SDIO host controller[%d]"
+ 			 "connected with no SD/SDIO card inserted\n",
+ 			 interface_to_InterfaceNumber(interface));
+-	mmc_add_host(mmc);
++	retval = mmc_add_host(mmc);
++	if (retval)
++		goto error6;
++
+ 	return 0;
++error6:
++	del_timer_sync(&vub300->inactivity_timer);
+ error5:
+ 	mmc_free_host(mmc);
+ 	/*
+diff --git a/drivers/mmc/host/wbsd.c b/drivers/mmc/host/wbsd.c
+index 1e54bbf13d75b..9b15431d961ce 100644
+--- a/drivers/mmc/host/wbsd.c
++++ b/drivers/mmc/host/wbsd.c
+@@ -1706,7 +1706,17 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
+ 	 */
+ 	wbsd_init_device(host);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret) {
++		if (!pnp)
++			wbsd_chip_poweroff(host);
++
++		wbsd_release_resources(host);
++		wbsd_free_mmc(dev);
++
++		mmc_free_host(mmc);
++		return ret;
++	}
+ 
+ 	pr_info("%s: W83L51xD", mmc_hostname(mmc));
+ 	if (host->chip_id != 0)
+diff --git a/drivers/mmc/host/wmt-sdmmc.c b/drivers/mmc/host/wmt-sdmmc.c
+index f8b1696846938..a132bc822b5c6 100644
+--- a/drivers/mmc/host/wmt-sdmmc.c
++++ b/drivers/mmc/host/wmt-sdmmc.c
+@@ -863,11 +863,15 @@ static int wmt_mci_probe(struct platform_device *pdev)
+ 	/* configure the controller to a known 'ready' state */
+ 	wmt_reset_hardware(mmc);
+ 
+-	mmc_add_host(mmc);
++	ret = mmc_add_host(mmc);
++	if (ret)
++		goto fail7;
+ 
+ 	dev_info(&pdev->dev, "WMT SDHC Controller initialized\n");
+ 
+ 	return 0;
++fail7:
++	clk_disable_unprepare(priv->clk_sdmmc);
+ fail6:
+ 	clk_put(priv->clk_sdmmc);
+ fail5_and_a_half:
+diff --git a/drivers/mtd/lpddr/lpddr2_nvm.c b/drivers/mtd/lpddr/lpddr2_nvm.c
+index 90e6cb64db69c..b004d3213a667 100644
+--- a/drivers/mtd/lpddr/lpddr2_nvm.c
++++ b/drivers/mtd/lpddr/lpddr2_nvm.c
+@@ -442,6 +442,8 @@ static int lpddr2_nvm_probe(struct platform_device *pdev)
+ 
+ 	/* lpddr2_nvm address range */
+ 	add_range = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!add_range)
++		return -ENODEV;
+ 
+ 	/* Populate map_info data structure */
+ 	*map = (struct map_info) {
+diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c
+index 2cde28ed95c99..59d2fe1f46e1e 100644
+--- a/drivers/mtd/maps/pxa2xx-flash.c
++++ b/drivers/mtd/maps/pxa2xx-flash.c
+@@ -69,6 +69,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
+ 	if (!info->map.virt) {
+ 		printk(KERN_WARNING "Failed to ioremap %s\n",
+ 		       info->map.name);
++		kfree(info);
+ 		return -ENOMEM;
+ 	}
+ 	info->map.cached =
+@@ -91,6 +92,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
+ 		iounmap((void *)info->map.virt);
+ 		if (info->map.cached)
+ 			iounmap(info->map.cached);
++		kfree(info);
+ 		return -EIO;
+ 	}
+ 	info->mtd->dev.parent = &pdev->dev;
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index a0b1a7814e2e7..90d9c0e93157f 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -567,8 +567,10 @@ int add_mtd_device(struct mtd_info *mtd)
+ 	dev_set_drvdata(&mtd->dev, mtd);
+ 	of_node_get(mtd_get_of_node(mtd));
+ 	error = device_register(&mtd->dev);
+-	if (error)
++	if (error) {
++		put_device(&mtd->dev);
+ 		goto fail_added;
++	}
+ 
+ 	if (!IS_ERR_OR_NULL(dfs_dir_mtd)) {
+ 		mtd->dbg.dfs_dir = debugfs_create_dir(dev_name(&mtd->dev), dfs_dir_mtd);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index cab5c1cc9fe99..4e4adacb5c2c1 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2096,10 +2096,10 @@ static int bond_slave_info_query(struct net_device *bond_dev, struct ifslave *in
+ /* called with rcu_read_lock() */
+ static int bond_miimon_inspect(struct bonding *bond)
+ {
++	bool ignore_updelay = false;
+ 	int link_state, commit = 0;
+ 	struct list_head *iter;
+ 	struct slave *slave;
+-	bool ignore_updelay;
+ 
+ 	ignore_updelay = !rcu_dereference(bond->curr_active_slave);
+ 
+@@ -3993,6 +3993,29 @@ err:
+ 	bond_slave_arr_work_rearm(bond, 1);
+ }
+ 
++static void bond_skip_slave(struct bond_up_slave *slaves,
++			    struct slave *skipslave)
++{
++	int idx;
++
++	/* Rare situation where caller has asked to skip a specific
++	 * slave but allocation failed (most likely!). BTW this is
++	 * only possible when the call is initiated from
++	 * __bond_release_one(). In this situation; overwrite the
++	 * skipslave entry in the array with the last entry from the
++	 * array to avoid a situation where the xmit path may choose
++	 * this to-be-skipped slave to send a packet out.
++	 */
++	for (idx = 0; slaves && idx < slaves->count; idx++) {
++		if (skipslave == slaves->arr[idx]) {
++			slaves->arr[idx] =
++				slaves->arr[slaves->count - 1];
++			slaves->count--;
++			break;
++		}
++	}
++}
++
+ /* Build the usable slaves array in control path for modes that use xmit-hash
+  * to determine the slave interface -
+  * (a) BOND_MODE_8023AD
+@@ -4063,27 +4086,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
+ 	if (old_arr)
+ 		kfree_rcu(old_arr, rcu);
+ out:
+-	if (ret != 0 && skipslave) {
+-		int idx;
+-
+-		/* Rare situation where caller has asked to skip a specific
+-		 * slave but allocation failed (most likely!). BTW this is
+-		 * only possible when the call is initiated from
+-		 * __bond_release_one(). In this situation; overwrite the
+-		 * skipslave entry in the array with the last entry from the
+-		 * array to avoid a situation where the xmit path may choose
+-		 * this to-be-skipped slave to send a packet out.
+-		 */
+-		old_arr = rtnl_dereference(bond->slave_arr);
+-		for (idx = 0; old_arr != NULL && idx < old_arr->count; idx++) {
+-			if (skipslave == old_arr->arr[idx]) {
+-				old_arr->arr[idx] =
+-				    old_arr->arr[old_arr->count-1];
+-				old_arr->count--;
+-				break;
+-			}
+-		}
+-	}
++	if (ret != 0 && skipslave)
++		bond_skip_slave(rtnl_dereference(bond->slave_arr), skipslave);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+index 62958f04a2f20..5699531f87873 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h
+@@ -76,6 +76,14 @@ struct kvaser_usb_tx_urb_context {
+ 	int dlc;
+ };
+ 
++struct kvaser_usb_busparams {
++	__le32 bitrate;
++	u8 tseg1;
++	u8 tseg2;
++	u8 sjw;
++	u8 nsamples;
++} __packed;
++
+ struct kvaser_usb {
+ 	struct usb_device *udev;
+ 	struct usb_interface *intf;
+@@ -104,13 +112,19 @@ struct kvaser_usb_net_priv {
+ 	struct can_priv can;
+ 	struct can_berr_counter bec;
+ 
++	/* subdriver-specific data */
++	void *sub_priv;
++
+ 	struct kvaser_usb *dev;
+ 	struct net_device *netdev;
+ 	int channel;
+ 
+-	struct completion start_comp, stop_comp, flush_comp;
++	struct completion start_comp, stop_comp, flush_comp,
++			  get_busparams_comp;
+ 	struct usb_anchor tx_submitted;
+ 
++	struct kvaser_usb_busparams busparams_nominal, busparams_data;
++
+ 	spinlock_t tx_contexts_lock; /* lock for active_tx_contexts */
+ 	int active_tx_contexts;
+ 	struct kvaser_usb_tx_urb_context tx_contexts[];
+@@ -120,11 +134,15 @@ struct kvaser_usb_net_priv {
+  * struct kvaser_usb_dev_ops - Device specific functions
+  * @dev_set_mode:		used for can.do_set_mode
+  * @dev_set_bittiming:		used for can.do_set_bittiming
++ * @dev_get_busparams:		readback arbitration busparams
+  * @dev_set_data_bittiming:	used for can.do_set_data_bittiming
++ * @dev_get_data_busparams:	readback data busparams
+  * @dev_get_berr_counter:	used for can.do_get_berr_counter
+  *
+  * @dev_setup_endpoints:	setup USB in and out endpoints
+  * @dev_init_card:		initialize card
++ * @dev_init_channel:		initialize channel
++ * @dev_remove_channel:		uninitialize channel
+  * @dev_get_software_info:	get software info
+  * @dev_get_software_details:	get software details
+  * @dev_get_card_info:		get card info
+@@ -140,12 +158,18 @@ struct kvaser_usb_net_priv {
+  */
+ struct kvaser_usb_dev_ops {
+ 	int (*dev_set_mode)(struct net_device *netdev, enum can_mode mode);
+-	int (*dev_set_bittiming)(struct net_device *netdev);
+-	int (*dev_set_data_bittiming)(struct net_device *netdev);
++	int (*dev_set_bittiming)(const struct net_device *netdev,
++				 const struct kvaser_usb_busparams *busparams);
++	int (*dev_get_busparams)(struct kvaser_usb_net_priv *priv);
++	int (*dev_set_data_bittiming)(const struct net_device *netdev,
++				      const struct kvaser_usb_busparams *busparams);
++	int (*dev_get_data_busparams)(struct kvaser_usb_net_priv *priv);
+ 	int (*dev_get_berr_counter)(const struct net_device *netdev,
+ 				    struct can_berr_counter *bec);
+ 	int (*dev_setup_endpoints)(struct kvaser_usb *dev);
+ 	int (*dev_init_card)(struct kvaser_usb *dev);
++	int (*dev_init_channel)(struct kvaser_usb_net_priv *priv);
++	void (*dev_remove_channel)(struct kvaser_usb_net_priv *priv);
+ 	int (*dev_get_software_info)(struct kvaser_usb *dev);
+ 	int (*dev_get_software_details)(struct kvaser_usb *dev);
+ 	int (*dev_get_card_info)(struct kvaser_usb *dev);
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+index 379df22d51231..da449d046905d 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
+@@ -416,10 +416,6 @@ static int kvaser_usb_open(struct net_device *netdev)
+ 	if (err)
+ 		return err;
+ 
+-	err = kvaser_usb_setup_rx_urbs(dev);
+-	if (err)
+-		goto error;
+-
+ 	err = ops->dev_set_opt_mode(priv);
+ 	if (err)
+ 		goto error;
+@@ -510,6 +506,93 @@ static int kvaser_usb_close(struct net_device *netdev)
+ 	return 0;
+ }
+ 
++static int kvaser_usb_set_bittiming(struct net_device *netdev)
++{
++	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
++	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
++	struct can_bittiming *bt = &priv->can.bittiming;
++
++	struct kvaser_usb_busparams busparams;
++	int tseg1 = bt->prop_seg + bt->phase_seg1;
++	int tseg2 = bt->phase_seg2;
++	int sjw = bt->sjw;
++	int err = -EOPNOTSUPP;
++
++	busparams.bitrate = cpu_to_le32(bt->bitrate);
++	busparams.sjw = (u8)sjw;
++	busparams.tseg1 = (u8)tseg1;
++	busparams.tseg2 = (u8)tseg2;
++	if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
++		busparams.nsamples = 3;
++	else
++		busparams.nsamples = 1;
++
++	err = ops->dev_set_bittiming(netdev, &busparams);
++	if (err)
++		return err;
++
++	err = kvaser_usb_setup_rx_urbs(priv->dev);
++	if (err)
++		return err;
++
++	err = ops->dev_get_busparams(priv);
++	if (err) {
++		/* Treat EOPNOTSUPP as success */
++		if (err == -EOPNOTSUPP)
++			err = 0;
++		return err;
++	}
++
++	if (memcmp(&busparams, &priv->busparams_nominal,
++		   sizeof(priv->busparams_nominal)) != 0)
++		err = -EINVAL;
++
++	return err;
++}
++
++static int kvaser_usb_set_data_bittiming(struct net_device *netdev)
++{
++	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
++	struct kvaser_usb *dev = priv->dev;
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
++	struct can_bittiming *dbt = &priv->can.data_bittiming;
++
++	struct kvaser_usb_busparams busparams;
++	int tseg1 = dbt->prop_seg + dbt->phase_seg1;
++	int tseg2 = dbt->phase_seg2;
++	int sjw = dbt->sjw;
++	int err;
++
++	if (!ops->dev_set_data_bittiming ||
++	    !ops->dev_get_data_busparams)
++		return -EOPNOTSUPP;
++
++	busparams.bitrate = cpu_to_le32(dbt->bitrate);
++	busparams.sjw = (u8)sjw;
++	busparams.tseg1 = (u8)tseg1;
++	busparams.tseg2 = (u8)tseg2;
++	busparams.nsamples = 1;
++
++	err = ops->dev_set_data_bittiming(netdev, &busparams);
++	if (err)
++		return err;
++
++	err = kvaser_usb_setup_rx_urbs(priv->dev);
++	if (err)
++		return err;
++
++	err = ops->dev_get_data_busparams(priv);
++	if (err)
++		return err;
++
++	if (memcmp(&busparams, &priv->busparams_data,
++		   sizeof(priv->busparams_data)) != 0)
++		err = -EINVAL;
++
++	return err;
++}
++
+ static void kvaser_usb_write_bulk_callback(struct urb *urb)
+ {
+ 	struct kvaser_usb_tx_urb_context *context = urb->context;
+@@ -645,6 +728,7 @@ static const struct net_device_ops kvaser_usb_netdev_ops = {
+ 
+ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
+ {
++	const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops;
+ 	int i;
+ 
+ 	for (i = 0; i < dev->nchannels; i++) {
+@@ -660,6 +744,9 @@ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev)
+ 		if (!dev->nets[i])
+ 			continue;
+ 
++		if (ops->dev_remove_channel)
++			ops->dev_remove_channel(dev->nets[i]);
++
+ 		free_candev(dev->nets[i]->netdev);
+ 	}
+ }
+@@ -692,6 +779,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	init_completion(&priv->start_comp);
+ 	init_completion(&priv->stop_comp);
+ 	init_completion(&priv->flush_comp);
++	init_completion(&priv->get_busparams_comp);
+ 	priv->can.ctrlmode_supported = 0;
+ 
+ 	priv->dev = dev;
+@@ -704,7 +792,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 	priv->can.state = CAN_STATE_STOPPED;
+ 	priv->can.clock.freq = dev->cfg->clock.freq;
+ 	priv->can.bittiming_const = dev->cfg->bittiming_const;
+-	priv->can.do_set_bittiming = ops->dev_set_bittiming;
++	priv->can.do_set_bittiming = kvaser_usb_set_bittiming;
+ 	priv->can.do_set_mode = ops->dev_set_mode;
+ 	if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) ||
+ 	    (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP))
+@@ -716,7 +804,7 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 
+ 	if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) {
+ 		priv->can.data_bittiming_const = dev->cfg->data_bittiming_const;
+-		priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming;
++		priv->can.do_set_data_bittiming = kvaser_usb_set_data_bittiming;
+ 	}
+ 
+ 	netdev->flags |= IFF_ECHO;
+@@ -728,17 +816,26 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel)
+ 
+ 	dev->nets[channel] = priv;
+ 
++	if (ops->dev_init_channel) {
++		err = ops->dev_init_channel(priv);
++		if (err)
++			goto err;
++	}
++
+ 	err = register_candev(netdev);
+ 	if (err) {
+ 		dev_err(&dev->intf->dev, "Failed to register CAN device\n");
+-		free_candev(netdev);
+-		dev->nets[channel] = NULL;
+-		return err;
++		goto err;
+ 	}
+ 
+ 	netdev_dbg(netdev, "device registered\n");
+ 
+ 	return 0;
++
++err:
++	free_candev(netdev);
++	dev->nets[channel] = NULL;
++	return err;
+ }
+ 
+ static int kvaser_usb_probe(struct usb_interface *intf,
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+index 45d2787248839..2764fdd7e84b3 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+@@ -43,6 +43,8 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc;
+ 
+ /* Minihydra command IDs */
+ #define CMD_SET_BUSPARAMS_REQ			16
++#define CMD_GET_BUSPARAMS_REQ			17
++#define CMD_GET_BUSPARAMS_RESP			18
+ #define CMD_GET_CHIP_STATE_REQ			19
+ #define CMD_CHIP_STATE_EVENT			20
+ #define CMD_SET_DRIVERMODE_REQ			21
+@@ -193,21 +195,26 @@ struct kvaser_cmd_chip_state_event {
+ #define KVASER_USB_HYDRA_BUS_MODE_CANFD_ISO	0x01
+ #define KVASER_USB_HYDRA_BUS_MODE_NONISO	0x02
+ struct kvaser_cmd_set_busparams {
+-	__le32 bitrate;
+-	u8 tseg1;
+-	u8 tseg2;
+-	u8 sjw;
+-	u8 nsamples;
++	struct kvaser_usb_busparams busparams_nominal;
+ 	u8 reserved0[4];
+-	__le32 bitrate_d;
+-	u8 tseg1_d;
+-	u8 tseg2_d;
+-	u8 sjw_d;
+-	u8 nsamples_d;
++	struct kvaser_usb_busparams busparams_data;
+ 	u8 canfd_mode;
+ 	u8 reserved1[7];
+ } __packed;
+ 
++/* Busparam type */
++#define KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN	0x00
++#define KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD	0x01
++struct kvaser_cmd_get_busparams_req {
++	u8 type;
++	u8 reserved[27];
++} __packed;
++
++struct kvaser_cmd_get_busparams_res {
++	struct kvaser_usb_busparams busparams;
++	u8 reserved[20];
++} __packed;
++
+ /* Ctrl modes */
+ #define KVASER_USB_HYDRA_CTRLMODE_NORMAL	0x01
+ #define KVASER_USB_HYDRA_CTRLMODE_LISTEN	0x02
+@@ -278,6 +285,8 @@ struct kvaser_cmd {
+ 		struct kvaser_cmd_error_event error_event;
+ 
+ 		struct kvaser_cmd_set_busparams set_busparams_req;
++		struct kvaser_cmd_get_busparams_req get_busparams_req;
++		struct kvaser_cmd_get_busparams_res get_busparams_res;
+ 
+ 		struct kvaser_cmd_chip_state_event chip_state_event;
+ 
+@@ -293,6 +302,7 @@ struct kvaser_cmd {
+ #define KVASER_USB_HYDRA_CF_FLAG_OVERRUN	BIT(1)
+ #define KVASER_USB_HYDRA_CF_FLAG_REMOTE_FRAME	BIT(4)
+ #define KVASER_USB_HYDRA_CF_FLAG_EXTENDED_ID	BIT(5)
++#define KVASER_USB_HYDRA_CF_FLAG_TX_ACK		BIT(6)
+ /* CAN frame flags. Used in ext_rx_can and ext_tx_can */
+ #define KVASER_USB_HYDRA_CF_FLAG_OSM_NACK	BIT(12)
+ #define KVASER_USB_HYDRA_CF_FLAG_ABL		BIT(13)
+@@ -359,6 +369,10 @@ struct kvaser_cmd_ext {
+ 	} __packed;
+ } __packed;
+ 
++struct kvaser_usb_net_hydra_priv {
++	int pending_get_busparams_type;
++};
++
+ static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = {
+ 	.name = "kvaser_usb_kcan",
+ 	.tseg1_min = 1,
+@@ -812,6 +826,39 @@ static void kvaser_usb_hydra_flush_queue_reply(const struct kvaser_usb *dev,
+ 	complete(&priv->flush_comp);
+ }
+ 
++static void kvaser_usb_hydra_get_busparams_reply(const struct kvaser_usb *dev,
++						 const struct kvaser_cmd *cmd)
++{
++	struct kvaser_usb_net_priv *priv;
++	struct kvaser_usb_net_hydra_priv *hydra;
++
++	priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
++	if (!priv)
++		return;
++
++	hydra = priv->sub_priv;
++	if (!hydra)
++		return;
++
++	switch (hydra->pending_get_busparams_type) {
++	case KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN:
++		memcpy(&priv->busparams_nominal, &cmd->get_busparams_res.busparams,
++		       sizeof(priv->busparams_nominal));
++		break;
++	case KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD:
++		memcpy(&priv->busparams_data, &cmd->get_busparams_res.busparams,
++		       sizeof(priv->busparams_nominal));
++		break;
++	default:
++		dev_warn(&dev->intf->dev, "Unknown get_busparams_type %d\n",
++			 hydra->pending_get_busparams_type);
++		break;
++	}
++	hydra->pending_get_busparams_type = -1;
++
++	complete(&priv->get_busparams_comp);
++}
++
+ static void
+ kvaser_usb_hydra_bus_status_to_can_state(const struct kvaser_usb_net_priv *priv,
+ 					 u8 bus_status,
+@@ -1099,6 +1146,7 @@ static void kvaser_usb_hydra_tx_acknowledge(const struct kvaser_usb *dev,
+ 	struct kvaser_usb_net_priv *priv;
+ 	unsigned long irq_flags;
+ 	bool one_shot_fail = false;
++	bool is_err_frame = false;
+ 	u16 transid = kvaser_usb_hydra_get_cmd_transid(cmd);
+ 
+ 	priv = kvaser_usb_hydra_net_priv_from_cmd(dev, cmd);
+@@ -1117,10 +1165,13 @@ static void kvaser_usb_hydra_tx_acknowledge(const struct kvaser_usb *dev,
+ 			kvaser_usb_hydra_one_shot_fail(priv, cmd_ext);
+ 			one_shot_fail = true;
+ 		}
++
++		is_err_frame = flags & KVASER_USB_HYDRA_CF_FLAG_TX_ACK &&
++			       flags & KVASER_USB_HYDRA_CF_FLAG_ERROR_FRAME;
+ 	}
+ 
+ 	context = &priv->tx_contexts[transid % dev->max_tx_urbs];
+-	if (!one_shot_fail) {
++	if (!one_shot_fail && !is_err_frame) {
+ 		struct net_device_stats *stats = &priv->netdev->stats;
+ 
+ 		stats->tx_packets++;
+@@ -1294,6 +1345,10 @@ static void kvaser_usb_hydra_handle_cmd_std(const struct kvaser_usb *dev,
+ 		kvaser_usb_hydra_state_event(dev, cmd);
+ 		break;
+ 
++	case CMD_GET_BUSPARAMS_RESP:
++		kvaser_usb_hydra_get_busparams_reply(dev, cmd);
++		break;
++
+ 	case CMD_ERROR_EVENT:
+ 		kvaser_usb_hydra_error_event(dev, cmd);
+ 		break;
+@@ -1494,15 +1549,58 @@ static int kvaser_usb_hydra_set_mode(struct net_device *netdev,
+ 	return err;
+ }
+ 
+-static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
++static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
++					  int busparams_type)
++{
++	struct kvaser_usb *dev = priv->dev;
++	struct kvaser_usb_net_hydra_priv *hydra = priv->sub_priv;
++	struct kvaser_cmd *cmd;
++	int err;
++
++	if (!hydra)
++		return -EINVAL;
++
++	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
++
++	cmd->header.cmd_no = CMD_GET_BUSPARAMS_REQ;
++	kvaser_usb_hydra_set_cmd_dest_he
++		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
++	kvaser_usb_hydra_set_cmd_transid
++				(cmd, kvaser_usb_hydra_get_next_transid(dev));
++	cmd->get_busparams_req.type = busparams_type;
++	hydra->pending_get_busparams_type = busparams_type;
++
++	reinit_completion(&priv->get_busparams_comp);
++
++	err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
++	if (err)
++		return err;
++
++	if (!wait_for_completion_timeout(&priv->get_busparams_comp,
++					 msecs_to_jiffies(KVASER_USB_TIMEOUT)))
++		return -ETIMEDOUT;
++
++	return err;
++}
++
++static int kvaser_usb_hydra_get_nominal_busparams(struct kvaser_usb_net_priv *priv)
++{
++	return kvaser_usb_hydra_get_busparams(priv, KVASER_USB_HYDRA_BUSPARAM_TYPE_CAN);
++}
++
++static int kvaser_usb_hydra_get_data_busparams(struct kvaser_usb_net_priv *priv)
++{
++	return kvaser_usb_hydra_get_busparams(priv, KVASER_USB_HYDRA_BUSPARAM_TYPE_CANFD);
++}
++
++static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
++					  const struct kvaser_usb_busparams *busparams)
+ {
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+-	struct can_bittiming *bt = &priv->can.bittiming;
+ 	struct kvaser_usb *dev = priv->dev;
+-	int tseg1 = bt->prop_seg + bt->phase_seg1;
+-	int tseg2 = bt->phase_seg2;
+-	int sjw = bt->sjw;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -1510,11 +1608,8 @@ static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
+-	cmd->set_busparams_req.bitrate = cpu_to_le32(bt->bitrate);
+-	cmd->set_busparams_req.sjw = (u8)sjw;
+-	cmd->set_busparams_req.tseg1 = (u8)tseg1;
+-	cmd->set_busparams_req.tseg2 = (u8)tseg2;
+-	cmd->set_busparams_req.nsamples = 1;
++	memcpy(&cmd->set_busparams_req.busparams_nominal, busparams,
++	       sizeof(cmd->set_busparams_req.busparams_nominal));
+ 
+ 	kvaser_usb_hydra_set_cmd_dest_he
+ 		(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
+@@ -1528,15 +1623,12 @@ static int kvaser_usb_hydra_set_bittiming(struct net_device *netdev)
+ 	return err;
+ }
+ 
+-static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
++static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
++					       const struct kvaser_usb_busparams *busparams)
+ {
+ 	struct kvaser_cmd *cmd;
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+-	struct can_bittiming *dbt = &priv->can.data_bittiming;
+ 	struct kvaser_usb *dev = priv->dev;
+-	int tseg1 = dbt->prop_seg + dbt->phase_seg1;
+-	int tseg2 = dbt->phase_seg2;
+-	int sjw = dbt->sjw;
+ 	int err;
+ 
+ 	cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
+@@ -1544,11 +1636,8 @@ static int kvaser_usb_hydra_set_data_bittiming(struct net_device *netdev)
+ 		return -ENOMEM;
+ 
+ 	cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
+-	cmd->set_busparams_req.bitrate_d = cpu_to_le32(dbt->bitrate);
+-	cmd->set_busparams_req.sjw_d = (u8)sjw;
+-	cmd->set_busparams_req.tseg1_d = (u8)tseg1;
+-	cmd->set_busparams_req.tseg2_d = (u8)tseg2;
+-	cmd->set_busparams_req.nsamples_d = 1;
++	memcpy(&cmd->set_busparams_req.busparams_data, busparams,
++	       sizeof(cmd->set_busparams_req.busparams_data));
+ 
+ 	if (priv->can.ctrlmode & CAN_CTRLMODE_FD) {
+ 		if (priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)
+@@ -1655,6 +1744,19 @@ static int kvaser_usb_hydra_init_card(struct kvaser_usb *dev)
+ 	return 0;
+ }
+ 
++static int kvaser_usb_hydra_init_channel(struct kvaser_usb_net_priv *priv)
++{
++	struct kvaser_usb_net_hydra_priv *hydra;
++
++	hydra = devm_kzalloc(&priv->dev->intf->dev, sizeof(*hydra), GFP_KERNEL);
++	if (!hydra)
++		return -ENOMEM;
++
++	priv->sub_priv = hydra;
++
++	return 0;
++}
++
+ static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
+ {
+ 	struct kvaser_cmd cmd;
+@@ -1997,10 +2099,13 @@ kvaser_usb_hydra_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
+ const struct kvaser_usb_dev_ops kvaser_usb_hydra_dev_ops = {
+ 	.dev_set_mode = kvaser_usb_hydra_set_mode,
+ 	.dev_set_bittiming = kvaser_usb_hydra_set_bittiming,
++	.dev_get_busparams = kvaser_usb_hydra_get_nominal_busparams,
+ 	.dev_set_data_bittiming = kvaser_usb_hydra_set_data_bittiming,
++	.dev_get_data_busparams = kvaser_usb_hydra_get_data_busparams,
+ 	.dev_get_berr_counter = kvaser_usb_hydra_get_berr_counter,
+ 	.dev_setup_endpoints = kvaser_usb_hydra_setup_endpoints,
+ 	.dev_init_card = kvaser_usb_hydra_init_card,
++	.dev_init_channel = kvaser_usb_hydra_init_channel,
+ 	.dev_get_software_info = kvaser_usb_hydra_get_software_info,
+ 	.dev_get_software_details = kvaser_usb_hydra_get_software_details,
+ 	.dev_get_card_info = kvaser_usb_hydra_get_card_info,
+diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+index 15380cc08ee69..f06d63db9077b 100644
+--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
++++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
+@@ -20,6 +20,7 @@
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/usb.h>
++#include <linux/workqueue.h>
+ 
+ #include <linux/can.h>
+ #include <linux/can/dev.h>
+@@ -55,6 +56,9 @@
+ #define CMD_RX_EXT_MESSAGE		14
+ #define CMD_TX_EXT_MESSAGE		15
+ #define CMD_SET_BUS_PARAMS		16
++#define CMD_GET_BUS_PARAMS		17
++#define CMD_GET_BUS_PARAMS_REPLY	18
++#define CMD_GET_CHIP_STATE		19
+ #define CMD_CHIP_STATE_EVENT		20
+ #define CMD_SET_CTRL_MODE		21
+ #define CMD_RESET_CHIP			24
+@@ -69,10 +73,13 @@
+ #define CMD_GET_CARD_INFO_REPLY		35
+ #define CMD_GET_SOFTWARE_INFO		38
+ #define CMD_GET_SOFTWARE_INFO_REPLY	39
++#define CMD_ERROR_EVENT			45
+ #define CMD_FLUSH_QUEUE			48
+ #define CMD_TX_ACKNOWLEDGE		50
+ #define CMD_CAN_ERROR_EVENT		51
+ #define CMD_FLUSH_QUEUE_REPLY		68
++#define CMD_GET_CAPABILITIES_REQ	95
++#define CMD_GET_CAPABILITIES_RESP	96
+ 
+ #define CMD_LEAF_LOG_MESSAGE		106
+ 
+@@ -82,6 +89,8 @@
+ #define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
+ #define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)
+ 
++#define KVASER_USB_LEAF_SWOPTION_EXT_CAP BIT(12)
++
+ /* error factors */
+ #define M16C_EF_ACKE			BIT(0)
+ #define M16C_EF_CRCE			BIT(1)
+@@ -156,11 +165,7 @@ struct usbcan_cmd_softinfo {
+ struct kvaser_cmd_busparams {
+ 	u8 tid;
+ 	u8 channel;
+-	__le32 bitrate;
+-	u8 tseg1;
+-	u8 tseg2;
+-	u8 sjw;
+-	u8 no_samp;
++	struct kvaser_usb_busparams busparams;
+ } __packed;
+ 
+ struct kvaser_cmd_tx_can {
+@@ -229,7 +234,7 @@ struct kvaser_cmd_tx_acknowledge_header {
+ 	u8 tid;
+ } __packed;
+ 
+-struct leaf_cmd_error_event {
++struct leaf_cmd_can_error_event {
+ 	u8 tid;
+ 	u8 flags;
+ 	__le16 time[3];
+@@ -241,7 +246,7 @@ struct leaf_cmd_error_event {
+ 	u8 error_factor;
+ } __packed;
+ 
+-struct usbcan_cmd_error_event {
++struct usbcan_cmd_can_error_event {
+ 	u8 tid;
+ 	u8 padding;
+ 	u8 tx_errors_count_ch0;
+@@ -253,6 +258,28 @@ struct usbcan_cmd_error_event {
+ 	__le16 time;
+ } __packed;
+ 
++/* CMD_ERROR_EVENT error codes */
++#define KVASER_USB_LEAF_ERROR_EVENT_TX_QUEUE_FULL 0x8
++#define KVASER_USB_LEAF_ERROR_EVENT_PARAM 0x9
++
++struct leaf_cmd_error_event {
++	u8 tid;
++	u8 error_code;
++	__le16 timestamp[3];
++	__le16 padding;
++	__le16 info1;
++	__le16 info2;
++} __packed;
++
++struct usbcan_cmd_error_event {
++	u8 tid;
++	u8 error_code;
++	__le16 info1;
++	__le16 info2;
++	__le16 timestamp;
++	__le16 padding;
++} __packed;
++
+ struct kvaser_cmd_ctrl_mode {
+ 	u8 tid;
+ 	u8 channel;
+@@ -277,6 +304,28 @@ struct leaf_cmd_log_message {
+ 	u8 data[8];
+ } __packed;
+ 
++/* Sub commands for cap_req and cap_res */
++#define KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE 0x02
++#define KVASER_USB_LEAF_CAP_CMD_ERR_REPORT 0x05
++struct kvaser_cmd_cap_req {
++	__le16 padding0;
++	__le16 cap_cmd;
++	__le16 padding1;
++	__le16 channel;
++} __packed;
++
++/* Status codes for cap_res */
++#define KVASER_USB_LEAF_CAP_STAT_OK 0x00
++#define KVASER_USB_LEAF_CAP_STAT_NOT_IMPL 0x01
++#define KVASER_USB_LEAF_CAP_STAT_UNAVAIL 0x02
++struct kvaser_cmd_cap_res {
++	__le16 padding;
++	__le16 cap_cmd;
++	__le16 status;
++	__le32 mask;
++	__le32 value;
++} __packed;
++
+ struct kvaser_cmd {
+ 	u8 len;
+ 	u8 id;
+@@ -292,14 +341,18 @@ struct kvaser_cmd {
+ 			struct leaf_cmd_softinfo softinfo;
+ 			struct leaf_cmd_rx_can rx_can;
+ 			struct leaf_cmd_chip_state_event chip_state_event;
+-			struct leaf_cmd_error_event error_event;
++			struct leaf_cmd_can_error_event can_error_event;
+ 			struct leaf_cmd_log_message log_message;
++			struct leaf_cmd_error_event error_event;
++			struct kvaser_cmd_cap_req cap_req;
++			struct kvaser_cmd_cap_res cap_res;
+ 		} __packed leaf;
+ 
+ 		union {
+ 			struct usbcan_cmd_softinfo softinfo;
+ 			struct usbcan_cmd_rx_can rx_can;
+ 			struct usbcan_cmd_chip_state_event chip_state_event;
++			struct usbcan_cmd_can_error_event can_error_event;
+ 			struct usbcan_cmd_error_event error_event;
+ 		} __packed usbcan;
+ 
+@@ -322,7 +375,10 @@ static const u8 kvaser_usb_leaf_cmd_sizes_leaf[] = {
+ 	[CMD_RX_EXT_MESSAGE]		= kvaser_fsize(u.leaf.rx_can),
+ 	[CMD_LEAF_LOG_MESSAGE]		= kvaser_fsize(u.leaf.log_message),
+ 	[CMD_CHIP_STATE_EVENT]		= kvaser_fsize(u.leaf.chip_state_event),
+-	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.leaf.error_event),
++	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.leaf.can_error_event),
++	[CMD_GET_CAPABILITIES_RESP]	= kvaser_fsize(u.leaf.cap_res),
++	[CMD_GET_BUS_PARAMS_REPLY]	= kvaser_fsize(u.busparams),
++	[CMD_ERROR_EVENT]		= kvaser_fsize(u.leaf.error_event),
+ 	/* ignored events: */
+ 	[CMD_FLUSH_QUEUE_REPLY]		= CMD_SIZE_ANY,
+ };
+@@ -336,7 +392,8 @@ static const u8 kvaser_usb_leaf_cmd_sizes_usbcan[] = {
+ 	[CMD_RX_STD_MESSAGE]		= kvaser_fsize(u.usbcan.rx_can),
+ 	[CMD_RX_EXT_MESSAGE]		= kvaser_fsize(u.usbcan.rx_can),
+ 	[CMD_CHIP_STATE_EVENT]		= kvaser_fsize(u.usbcan.chip_state_event),
+-	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.usbcan.error_event),
++	[CMD_CAN_ERROR_EVENT]		= kvaser_fsize(u.usbcan.can_error_event),
++	[CMD_ERROR_EVENT]		= kvaser_fsize(u.usbcan.error_event),
+ 	/* ignored events: */
+ 	[CMD_USBCAN_CLOCK_OVERFLOW_EVENT] = CMD_SIZE_ANY,
+ };
+@@ -364,6 +421,12 @@ struct kvaser_usb_err_summary {
+ 	};
+ };
+ 
++struct kvaser_usb_net_leaf_priv {
++	struct kvaser_usb_net_priv *net;
++
++	struct delayed_work chip_state_req_work;
++};
++
+ static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = {
+ 	.name = "kvaser_usb_ucii",
+ 	.tseg1_min = 4,
+@@ -607,6 +670,9 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
+ 	dev->fw_version = le32_to_cpu(softinfo->fw_version);
+ 	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
+ 
++	if (sw_options & KVASER_USB_LEAF_SWOPTION_EXT_CAP)
++		dev->card_data.capabilities |= KVASER_USB_CAP_EXT_CAP;
++
+ 	if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) {
+ 		/* Firmware expects bittiming parameters calculated for 16MHz
+ 		 * clock, regardless of the actual clock
+@@ -694,6 +760,116 @@ static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev)
+ 	return 0;
+ }
+ 
++static int kvaser_usb_leaf_get_single_capability(struct kvaser_usb *dev,
++						 u16 cap_cmd_req, u16 *status)
++{
++	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
++	struct kvaser_cmd *cmd;
++	u32 value = 0;
++	u32 mask = 0;
++	u16 cap_cmd_res;
++	int err;
++	int i;
++
++	cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
++	if (!cmd)
++		return -ENOMEM;
++
++	cmd->id = CMD_GET_CAPABILITIES_REQ;
++	cmd->u.leaf.cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);
++	cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_cap_req);
++
++	err = kvaser_usb_send_cmd(dev, cmd, cmd->len);
++	if (err)
++		goto end;
++
++	err = kvaser_usb_leaf_wait_cmd(dev, CMD_GET_CAPABILITIES_RESP, cmd);
++	if (err)
++		goto end;
++
++	*status = le16_to_cpu(cmd->u.leaf.cap_res.status);
++
++	if (*status != KVASER_USB_LEAF_CAP_STAT_OK)
++		goto end;
++
++	cap_cmd_res = le16_to_cpu(cmd->u.leaf.cap_res.cap_cmd);
++	switch (cap_cmd_res) {
++	case KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE:
++	case KVASER_USB_LEAF_CAP_CMD_ERR_REPORT:
++		value = le32_to_cpu(cmd->u.leaf.cap_res.value);
++		mask = le32_to_cpu(cmd->u.leaf.cap_res.mask);
++		break;
++	default:
++		dev_warn(&dev->intf->dev, "Unknown capability command %u\n",
++			 cap_cmd_res);
++		break;
++	}
++
++	for (i = 0; i < dev->nchannels; i++) {
++		if (BIT(i) & (value & mask)) {
++			switch (cap_cmd_res) {
++			case KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE:
++				card_data->ctrlmode_supported |=
++						CAN_CTRLMODE_LISTENONLY;
++				break;
++			case KVASER_USB_LEAF_CAP_CMD_ERR_REPORT:
++				card_data->capabilities |=
++						KVASER_USB_CAP_BERR_CAP;
++				break;
++			}
++		}
++	}
++
++end:
++	kfree(cmd);
++
++	return err;
++}
++
++static int kvaser_usb_leaf_get_capabilities_leaf(struct kvaser_usb *dev)
++{
++	int err;
++	u16 status;
++
++	if (!(dev->card_data.capabilities & KVASER_USB_CAP_EXT_CAP)) {
++		dev_info(&dev->intf->dev,
++			 "No extended capability support. Upgrade device firmware.\n");
++		return 0;
++	}
++
++	err = kvaser_usb_leaf_get_single_capability(dev,
++						    KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE,
++						    &status);
++	if (err)
++		return err;
++	if (status)
++		dev_info(&dev->intf->dev,
++			 "KVASER_USB_LEAF_CAP_CMD_LISTEN_MODE failed %u\n",
++			 status);
++
++	err = kvaser_usb_leaf_get_single_capability(dev,
++						    KVASER_USB_LEAF_CAP_CMD_ERR_REPORT,
++						    &status);
++	if (err)
++		return err;
++	if (status)
++		dev_info(&dev->intf->dev,
++			 "KVASER_USB_LEAF_CAP_CMD_ERR_REPORT failed %u\n",
++			 status);
++
++	return 0;
++}
++
++static int kvaser_usb_leaf_get_capabilities(struct kvaser_usb *dev)
++{
++	int err = 0;
++
++	if (dev->driver_info->family == KVASER_LEAF)
++		err = kvaser_usb_leaf_get_capabilities_leaf(dev);
++
++	return err;
++}
++
+ static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
+ 					   const struct kvaser_cmd *cmd)
+ {
+@@ -722,7 +898,7 @@ static void kvaser_usb_leaf_tx_acknowledge(const struct kvaser_usb *dev,
+ 	context = &priv->tx_contexts[tid % dev->max_tx_urbs];
+ 
+ 	/* Sometimes the state change doesn't come after a bus-off event */
+-	if (priv->can.restart_ms && priv->can.state >= CAN_STATE_BUS_OFF) {
++	if (priv->can.restart_ms && priv->can.state == CAN_STATE_BUS_OFF) {
+ 		struct sk_buff *skb;
+ 		struct can_frame *cf;
+ 
+@@ -778,6 +954,16 @@ static int kvaser_usb_leaf_simple_cmd_async(struct kvaser_usb_net_priv *priv,
+ 	return err;
+ }
+ 
++static void kvaser_usb_leaf_chip_state_req_work(struct work_struct *work)
++{
++	struct kvaser_usb_net_leaf_priv *leaf =
++		container_of(work, struct kvaser_usb_net_leaf_priv,
++			     chip_state_req_work.work);
++	struct kvaser_usb_net_priv *priv = leaf->net;
++
++	kvaser_usb_leaf_simple_cmd_async(priv, CMD_GET_CHIP_STATE);
++}
++
+ static void
+ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 					const struct kvaser_usb_err_summary *es,
+@@ -796,20 +982,16 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 		new_state = CAN_STATE_BUS_OFF;
+ 	} else if (es->status & M16C_STATE_BUS_PASSIVE) {
+ 		new_state = CAN_STATE_ERROR_PASSIVE;
+-	} else if (es->status & M16C_STATE_BUS_ERROR) {
++	} else if ((es->status & M16C_STATE_BUS_ERROR) &&
++		   cur_state >= CAN_STATE_BUS_OFF) {
+ 		/* Guard against spurious error events after a busoff */
+-		if (cur_state < CAN_STATE_BUS_OFF) {
+-			if (es->txerr >= 128 || es->rxerr >= 128)
+-				new_state = CAN_STATE_ERROR_PASSIVE;
+-			else if (es->txerr >= 96 || es->rxerr >= 96)
+-				new_state = CAN_STATE_ERROR_WARNING;
+-			else if (cur_state > CAN_STATE_ERROR_ACTIVE)
+-				new_state = CAN_STATE_ERROR_ACTIVE;
+-		}
+-	}
+-
+-	if (!es->status)
++	} else if (es->txerr >= 128 || es->rxerr >= 128) {
++		new_state = CAN_STATE_ERROR_PASSIVE;
++	} else if (es->txerr >= 96 || es->rxerr >= 96) {
++		new_state = CAN_STATE_ERROR_WARNING;
++	} else {
+ 		new_state = CAN_STATE_ERROR_ACTIVE;
++	}
+ 
+ 	if (new_state != cur_state) {
+ 		tx_state = (es->txerr >= es->rxerr) ? new_state : 0;
+@@ -819,7 +1001,7 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv,
+ 	}
+ 
+ 	if (priv->can.restart_ms &&
+-	    cur_state >= CAN_STATE_BUS_OFF &&
++	    cur_state == CAN_STATE_BUS_OFF &&
+ 	    new_state < CAN_STATE_BUS_OFF)
+ 		priv->can.can_stats.restarts++;
+ 
+@@ -853,6 +1035,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	struct sk_buff *skb;
+ 	struct net_device_stats *stats;
+ 	struct kvaser_usb_net_priv *priv;
++	struct kvaser_usb_net_leaf_priv *leaf;
+ 	enum can_state old_state, new_state;
+ 
+ 	if (es->channel >= dev->nchannels) {
+@@ -862,8 +1045,13 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	}
+ 
+ 	priv = dev->nets[es->channel];
++	leaf = priv->sub_priv;
+ 	stats = &priv->netdev->stats;
+ 
++	/* Ignore e.g. state change to bus-off reported just after stopping */
++	if (!netif_running(priv->netdev))
++		return;
++
+ 	/* Update all of the CAN interface's state and error counters before
+ 	 * trying any memory allocation that can actually fail with -ENOMEM.
+ 	 *
+@@ -878,6 +1066,14 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 	kvaser_usb_leaf_rx_error_update_can_state(priv, es, &tmp_cf);
+ 	new_state = priv->can.state;
+ 
++	/* If there are errors, request status updates periodically as we do
++	 * not get automatic notifications of improved state.
++	 */
++	if (new_state < CAN_STATE_BUS_OFF &&
++	    (es->rxerr || es->txerr || new_state == CAN_STATE_ERROR_PASSIVE))
++		schedule_delayed_work(&leaf->chip_state_req_work,
++				      msecs_to_jiffies(500));
++
+ 	skb = alloc_can_err_skb(priv->netdev, &cf);
+ 	if (!skb) {
+ 		stats->rx_dropped++;
+@@ -895,7 +1091,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev,
+ 		}
+ 
+ 		if (priv->can.restart_ms &&
+-		    old_state >= CAN_STATE_BUS_OFF &&
++		    old_state == CAN_STATE_BUS_OFF &&
+ 		    new_state < CAN_STATE_BUS_OFF) {
+ 			cf->can_id |= CAN_ERR_RESTARTED;
+ 			netif_carrier_on(priv->netdev);
+@@ -995,11 +1191,11 @@ static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
+ 
+ 	case CMD_CAN_ERROR_EVENT:
+ 		es.channel = 0;
+-		es.status = cmd->u.usbcan.error_event.status_ch0;
+-		es.txerr = cmd->u.usbcan.error_event.tx_errors_count_ch0;
+-		es.rxerr = cmd->u.usbcan.error_event.rx_errors_count_ch0;
++		es.status = cmd->u.usbcan.can_error_event.status_ch0;
++		es.txerr = cmd->u.usbcan.can_error_event.tx_errors_count_ch0;
++		es.rxerr = cmd->u.usbcan.can_error_event.rx_errors_count_ch0;
+ 		es.usbcan.other_ch_status =
+-			cmd->u.usbcan.error_event.status_ch1;
++			cmd->u.usbcan.can_error_event.status_ch1;
+ 		kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
+ 
+ 		/* The USBCAN firmware supports up to 2 channels.
+@@ -1007,13 +1203,13 @@ static void kvaser_usb_leaf_usbcan_rx_error(const struct kvaser_usb *dev,
+ 		 */
+ 		if (dev->nchannels == MAX_USBCAN_NET_DEVICES) {
+ 			es.channel = 1;
+-			es.status = cmd->u.usbcan.error_event.status_ch1;
++			es.status = cmd->u.usbcan.can_error_event.status_ch1;
+ 			es.txerr =
+-				cmd->u.usbcan.error_event.tx_errors_count_ch1;
++				cmd->u.usbcan.can_error_event.tx_errors_count_ch1;
+ 			es.rxerr =
+-				cmd->u.usbcan.error_event.rx_errors_count_ch1;
++				cmd->u.usbcan.can_error_event.rx_errors_count_ch1;
+ 			es.usbcan.other_ch_status =
+-				cmd->u.usbcan.error_event.status_ch0;
++				cmd->u.usbcan.can_error_event.status_ch0;
+ 			kvaser_usb_leaf_usbcan_conditionally_rx_error(dev, &es);
+ 		}
+ 		break;
+@@ -1030,11 +1226,11 @@ static void kvaser_usb_leaf_leaf_rx_error(const struct kvaser_usb *dev,
+ 
+ 	switch (cmd->id) {
+ 	case CMD_CAN_ERROR_EVENT:
+-		es.channel = cmd->u.leaf.error_event.channel;
+-		es.status = cmd->u.leaf.error_event.status;
+-		es.txerr = cmd->u.leaf.error_event.tx_errors_count;
+-		es.rxerr = cmd->u.leaf.error_event.rx_errors_count;
+-		es.leaf.error_factor = cmd->u.leaf.error_event.error_factor;
++		es.channel = cmd->u.leaf.can_error_event.channel;
++		es.status = cmd->u.leaf.can_error_event.status;
++		es.txerr = cmd->u.leaf.can_error_event.tx_errors_count;
++		es.rxerr = cmd->u.leaf.can_error_event.rx_errors_count;
++		es.leaf.error_factor = cmd->u.leaf.can_error_event.error_factor;
+ 		break;
+ 	case CMD_LEAF_LOG_MESSAGE:
+ 		es.channel = cmd->u.leaf.log_message.channel;
+@@ -1166,6 +1362,74 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev,
+ 	netif_rx(skb);
+ }
+ 
++static void kvaser_usb_leaf_error_event_parameter(const struct kvaser_usb *dev,
++						  const struct kvaser_cmd *cmd)
++{
++	u16 info1 = 0;
++
++	switch (dev->driver_info->family) {
++	case KVASER_LEAF:
++		info1 = le16_to_cpu(cmd->u.leaf.error_event.info1);
++		break;
++	case KVASER_USBCAN:
++		info1 = le16_to_cpu(cmd->u.usbcan.error_event.info1);
++		break;
++	}
++
++	/* info1 will contain the offending cmd_no */
++	switch (info1) {
++	case CMD_SET_CTRL_MODE:
++		dev_warn(&dev->intf->dev,
++			 "CMD_SET_CTRL_MODE error in parameter\n");
++		break;
++
++	case CMD_SET_BUS_PARAMS:
++		dev_warn(&dev->intf->dev,
++			 "CMD_SET_BUS_PARAMS error in parameter\n");
++		break;
++
++	default:
++		dev_warn(&dev->intf->dev,
++			 "Unhandled parameter error event cmd_no (%u)\n",
++			 info1);
++		break;
++	}
++}
++
++static void kvaser_usb_leaf_error_event(const struct kvaser_usb *dev,
++					const struct kvaser_cmd *cmd)
++{
++	u8 error_code = 0;
++
++	switch (dev->driver_info->family) {
++	case KVASER_LEAF:
++		error_code = cmd->u.leaf.error_event.error_code;
++		break;
++	case KVASER_USBCAN:
++		error_code = cmd->u.usbcan.error_event.error_code;
++		break;
++	}
++
++	switch (error_code) {
++	case KVASER_USB_LEAF_ERROR_EVENT_TX_QUEUE_FULL:
++		/* Received additional CAN message, when firmware TX queue is
++		 * already full. Something is wrong with the driver.
++		 * This should never happen!
++		 */
++		dev_err(&dev->intf->dev,
++			"Received error event TX_QUEUE_FULL\n");
++		break;
++	case KVASER_USB_LEAF_ERROR_EVENT_PARAM:
++		kvaser_usb_leaf_error_event_parameter(dev, cmd);
++		break;
++
++	default:
++		dev_warn(&dev->intf->dev,
++			 "Unhandled error event (%d)\n", error_code);
++		break;
++	}
++}
++
+ static void kvaser_usb_leaf_start_chip_reply(const struct kvaser_usb *dev,
+ 					     const struct kvaser_cmd *cmd)
+ {
+@@ -1206,6 +1470,25 @@ static void kvaser_usb_leaf_stop_chip_reply(const struct kvaser_usb *dev,
+ 	complete(&priv->stop_comp);
+ }
+ 
++static void kvaser_usb_leaf_get_busparams_reply(const struct kvaser_usb *dev,
++						const struct kvaser_cmd *cmd)
++{
++	struct kvaser_usb_net_priv *priv;
++	u8 channel = cmd->u.busparams.channel;
++
++	if (channel >= dev->nchannels) {
++		dev_err(&dev->intf->dev,
++			"Invalid channel number (%d)\n", channel);
++		return;
++	}
++
++	priv = dev->nets[channel];
++	memcpy(&priv->busparams_nominal, &cmd->u.busparams.busparams,
++	       sizeof(priv->busparams_nominal));
++
++	complete(&priv->get_busparams_comp);
++}
++
+ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 					   const struct kvaser_cmd *cmd)
+ {
+@@ -1244,6 +1527,14 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev,
+ 		kvaser_usb_leaf_tx_acknowledge(dev, cmd);
+ 		break;
+ 
++	case CMD_ERROR_EVENT:
++		kvaser_usb_leaf_error_event(dev, cmd);
++		break;
++
++	case CMD_GET_BUS_PARAMS_REPLY:
++		kvaser_usb_leaf_get_busparams_reply(dev, cmd);
++		break;
++
+ 	/* Ignored commands */
+ 	case CMD_USBCAN_CLOCK_OVERFLOW_EVENT:
+ 		if (dev->driver_info->family != KVASER_USBCAN)
+@@ -1340,10 +1631,13 @@ static int kvaser_usb_leaf_start_chip(struct kvaser_usb_net_priv *priv)
+ 
+ static int kvaser_usb_leaf_stop_chip(struct kvaser_usb_net_priv *priv)
+ {
++	struct kvaser_usb_net_leaf_priv *leaf = priv->sub_priv;
+ 	int err;
+ 
+ 	reinit_completion(&priv->stop_comp);
+ 
++	cancel_delayed_work(&leaf->chip_state_req_work);
++
+ 	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_STOP_CHIP,
+ 					      priv->channel);
+ 	if (err)
+@@ -1390,10 +1684,35 @@ static int kvaser_usb_leaf_init_card(struct kvaser_usb *dev)
+ 	return 0;
+ }
+ 
+-static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
++static int kvaser_usb_leaf_init_channel(struct kvaser_usb_net_priv *priv)
++{
++	struct kvaser_usb_net_leaf_priv *leaf;
++
++	leaf = devm_kzalloc(&priv->dev->intf->dev, sizeof(*leaf), GFP_KERNEL);
++	if (!leaf)
++		return -ENOMEM;
++
++	leaf->net = priv;
++	INIT_DELAYED_WORK(&leaf->chip_state_req_work,
++			  kvaser_usb_leaf_chip_state_req_work);
++
++	priv->sub_priv = leaf;
++
++	return 0;
++}
++
++static void kvaser_usb_leaf_remove_channel(struct kvaser_usb_net_priv *priv)
++{
++	struct kvaser_usb_net_leaf_priv *leaf = priv->sub_priv;
++
++	if (leaf)
++		cancel_delayed_work_sync(&leaf->chip_state_req_work);
++}
++
++static int kvaser_usb_leaf_set_bittiming(const struct net_device *netdev,
++					 const struct kvaser_usb_busparams *busparams)
+ {
+ 	struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
+-	struct can_bittiming *bt = &priv->can.bittiming;
+ 	struct kvaser_usb *dev = priv->dev;
+ 	struct kvaser_cmd *cmd;
+ 	int rc;
+@@ -1406,15 +1725,8 @@ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
+ 	cmd->len = CMD_HEADER_LEN + sizeof(struct kvaser_cmd_busparams);
+ 	cmd->u.busparams.channel = priv->channel;
+ 	cmd->u.busparams.tid = 0xff;
+-	cmd->u.busparams.bitrate = cpu_to_le32(bt->bitrate);
+-	cmd->u.busparams.sjw = bt->sjw;
+-	cmd->u.busparams.tseg1 = bt->prop_seg + bt->phase_seg1;
+-	cmd->u.busparams.tseg2 = bt->phase_seg2;
+-
+-	if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
+-		cmd->u.busparams.no_samp = 3;
+-	else
+-		cmd->u.busparams.no_samp = 1;
++	memcpy(&cmd->u.busparams.busparams, busparams,
++	       sizeof(cmd->u.busparams.busparams));
+ 
+ 	rc = kvaser_usb_send_cmd(dev, cmd, cmd->len);
+ 
+@@ -1422,6 +1734,27 @@ static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
+ 	return rc;
+ }
+ 
++static int kvaser_usb_leaf_get_busparams(struct kvaser_usb_net_priv *priv)
++{
++	int err;
++
++	if (priv->dev->driver_info->family == KVASER_USBCAN)
++		return -EOPNOTSUPP;
++
++	reinit_completion(&priv->get_busparams_comp);
++
++	err = kvaser_usb_leaf_send_simple_cmd(priv->dev, CMD_GET_BUS_PARAMS,
++					      priv->channel);
++	if (err)
++		return err;
++
++	if (!wait_for_completion_timeout(&priv->get_busparams_comp,
++					 msecs_to_jiffies(KVASER_USB_TIMEOUT)))
++		return -ETIMEDOUT;
++
++	return 0;
++}
++
+ static int kvaser_usb_leaf_set_mode(struct net_device *netdev,
+ 				    enum can_mode mode)
+ {
+@@ -1483,14 +1816,18 @@ static int kvaser_usb_leaf_setup_endpoints(struct kvaser_usb *dev)
+ const struct kvaser_usb_dev_ops kvaser_usb_leaf_dev_ops = {
+ 	.dev_set_mode = kvaser_usb_leaf_set_mode,
+ 	.dev_set_bittiming = kvaser_usb_leaf_set_bittiming,
++	.dev_get_busparams = kvaser_usb_leaf_get_busparams,
+ 	.dev_set_data_bittiming = NULL,
++	.dev_get_data_busparams = NULL,
+ 	.dev_get_berr_counter = kvaser_usb_leaf_get_berr_counter,
+ 	.dev_setup_endpoints = kvaser_usb_leaf_setup_endpoints,
+ 	.dev_init_card = kvaser_usb_leaf_init_card,
++	.dev_init_channel = kvaser_usb_leaf_init_channel,
++	.dev_remove_channel = kvaser_usb_leaf_remove_channel,
+ 	.dev_get_software_info = kvaser_usb_leaf_get_software_info,
+ 	.dev_get_software_details = NULL,
+ 	.dev_get_card_info = kvaser_usb_leaf_get_card_info,
+-	.dev_get_capabilities = NULL,
++	.dev_get_capabilities = kvaser_usb_leaf_get_capabilities,
+ 	.dev_set_opt_mode = kvaser_usb_leaf_set_opt_mode,
+ 	.dev_start_chip = kvaser_usb_leaf_start_chip,
+ 	.dev_stop_chip = kvaser_usb_leaf_stop_chip,
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index ea1de06009d6d..093458fbde68b 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -58,6 +58,10 @@
+ #define MCBA_VER_REQ_USB 1
+ #define MCBA_VER_REQ_CAN 2
+ 
++/* Drive the CAN_RES signal LOW "0" to activate R24 and R25 */
++#define MCBA_VER_TERMINATION_ON 0
++#define MCBA_VER_TERMINATION_OFF 1
++
+ #define MCBA_SIDL_EXID_MASK 0x8
+ #define MCBA_DLC_MASK 0xf
+ #define MCBA_DLC_RTR_MASK 0x40
+@@ -480,7 +484,7 @@ static void mcba_usb_process_ka_usb(struct mcba_priv *priv,
+ 		priv->usb_ka_first_pass = false;
+ 	}
+ 
+-	if (msg->termination_state)
++	if (msg->termination_state == MCBA_VER_TERMINATION_ON)
+ 		priv->can.termination = MCBA_TERMINATION_ENABLED;
+ 	else
+ 		priv->can.termination = MCBA_TERMINATION_DISABLED;
+@@ -800,9 +804,9 @@ static int mcba_set_termination(struct net_device *netdev, u16 term)
+ 	};
+ 
+ 	if (term == MCBA_TERMINATION_ENABLED)
+-		usb_msg.termination = 1;
++		usb_msg.termination = MCBA_VER_TERMINATION_ON;
+ 	else
+-		usb_msg.termination = 0;
++		usb_msg.termination = MCBA_VER_TERMINATION_OFF;
+ 
+ 	mcba_usb_xmit_cmd(priv, (struct mcba_usb_msg *)&usb_msg);
+ 
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index 03dc075ff4e85..f976b3d64593a 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -1010,9 +1010,11 @@ static void lan9303_get_ethtool_stats(struct dsa_switch *ds, int port,
+ 		ret = lan9303_read_switch_port(
+ 			chip, port, lan9303_mib[u].offset, &reg);
+ 
+-		if (ret)
++		if (ret) {
+ 			dev_warn(chip->dev, "Reading status port %d reg %u failed\n",
+ 				 port, lan9303_mib[u].offset);
++			reg = 0;
++		}
+ 		data[u] = reg;
+ 	}
+ }
+diff --git a/drivers/net/ethernet/amd/atarilance.c b/drivers/net/ethernet/amd/atarilance.c
+index d3d44e07afbc0..414b990827e8e 100644
+--- a/drivers/net/ethernet/amd/atarilance.c
++++ b/drivers/net/ethernet/amd/atarilance.c
+@@ -825,7 +825,7 @@ lance_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	lp->memcpy_f( PKTBUF_ADDR(head), (void *)skb->data, skb->len );
+ 	head->flag = TMD1_OWN_CHIP | TMD1_ENP | TMD1_STP;
+ 	dev->stats.tx_bytes += skb->len;
+-	dev_kfree_skb( skb );
++	dev_consume_skb_irq(skb);
+ 	lp->cur_tx++;
+ 	while( lp->cur_tx >= TX_RING_SIZE && lp->dirty_tx >= TX_RING_SIZE ) {
+ 		lp->cur_tx -= TX_RING_SIZE;
+diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c
+index b56d84c7df46e..a21f4f82a9c2f 100644
+--- a/drivers/net/ethernet/amd/lance.c
++++ b/drivers/net/ethernet/amd/lance.c
+@@ -997,7 +997,7 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
+ 		skb_copy_from_linear_data(skb, &lp->tx_bounce_buffs[entry], skb->len);
+ 		lp->tx_ring[entry].base =
+ 			((u32)isa_virt_to_bus((lp->tx_bounce_buffs + entry)) & 0xffffff) | 0x83000000;
+-		dev_kfree_skb(skb);
++		dev_consume_skb_irq(skb);
+ 	} else {
+ 		lp->tx_skbuff[entry] = skb;
+ 		lp->tx_ring[entry].base = ((u32)isa_virt_to_bus(skb->data) & 0xffffff) | 0x83000000;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 35659f0dbe74e..c1fb1e62557c7 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -1140,6 +1140,9 @@ static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
+ 
+ 	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+ 
++	tasklet_kill(&pdata->tasklet_dev);
++	tasklet_kill(&pdata->tasklet_ecc);
++
+ 	if (pdata->vdata->ecc_support && (pdata->dev_irq != pdata->ecc_irq))
+ 		devm_free_irq(pdata->dev, pdata->ecc_irq, pdata);
+ 
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
+index 4d9062d35930f..530043742a07a 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-i2c.c
+@@ -447,8 +447,10 @@ static void xgbe_i2c_stop(struct xgbe_prv_data *pdata)
+ 	xgbe_i2c_disable(pdata);
+ 	xgbe_i2c_clear_all_interrupts(pdata);
+ 
+-	if (pdata->dev_irq != pdata->i2c_irq)
++	if (pdata->dev_irq != pdata->i2c_irq) {
+ 		devm_free_irq(pdata->dev, pdata->i2c_irq, pdata);
++		tasklet_kill(&pdata->tasklet_i2c);
++	}
+ }
+ 
+ static int xgbe_i2c_start(struct xgbe_prv_data *pdata)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 156a0bc8ab01d..97167fc9bebe7 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -1390,8 +1390,10 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata)
+ 	/* Disable auto-negotiation */
+ 	xgbe_an_disable_all(pdata);
+ 
+-	if (pdata->dev_irq != pdata->an_irq)
++	if (pdata->dev_irq != pdata->an_irq) {
+ 		devm_free_irq(pdata->dev, pdata->an_irq, pdata);
++		tasklet_kill(&pdata->tasklet_an);
++	}
+ 
+ 	pdata->phy_if.phy_impl.stop(pdata);
+ 
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index d54e6e138aafe..e32649c254cd3 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -188,6 +188,7 @@ enum xgbe_sfp_cable {
+ 	XGBE_SFP_CABLE_UNKNOWN = 0,
+ 	XGBE_SFP_CABLE_ACTIVE,
+ 	XGBE_SFP_CABLE_PASSIVE,
++	XGBE_SFP_CABLE_FIBER,
+ };
+ 
+ enum xgbe_sfp_base {
+@@ -235,10 +236,7 @@ enum xgbe_sfp_speed {
+ 
+ #define XGBE_SFP_BASE_BR			12
+ #define XGBE_SFP_BASE_BR_1GBE_MIN		0x0a
+-#define XGBE_SFP_BASE_BR_1GBE_MAX		0x0d
+ #define XGBE_SFP_BASE_BR_10GBE_MIN		0x64
+-#define XGBE_SFP_BASE_BR_10GBE_MAX		0x68
+-#define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX	0x78
+ 
+ #define XGBE_SFP_BASE_CU_CABLE_LEN		18
+ 
+@@ -825,29 +823,22 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
+ static bool xgbe_phy_sfp_bit_rate(struct xgbe_sfp_eeprom *sfp_eeprom,
+ 				  enum xgbe_sfp_speed sfp_speed)
+ {
+-	u8 *sfp_base, min, max;
++	u8 *sfp_base, min;
+ 
+ 	sfp_base = sfp_eeprom->base;
+ 
+ 	switch (sfp_speed) {
+ 	case XGBE_SFP_SPEED_1000:
+ 		min = XGBE_SFP_BASE_BR_1GBE_MIN;
+-		max = XGBE_SFP_BASE_BR_1GBE_MAX;
+ 		break;
+ 	case XGBE_SFP_SPEED_10000:
+ 		min = XGBE_SFP_BASE_BR_10GBE_MIN;
+-		if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME],
+-			   XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0)
+-			max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX;
+-		else
+-			max = XGBE_SFP_BASE_BR_10GBE_MAX;
+ 		break;
+ 	default:
+ 		return false;
+ 	}
+ 
+-	return ((sfp_base[XGBE_SFP_BASE_BR] >= min) &&
+-		(sfp_base[XGBE_SFP_BASE_BR] <= max));
++	return sfp_base[XGBE_SFP_BASE_BR] >= min;
+ }
+ 
+ static void xgbe_phy_free_phy_device(struct xgbe_prv_data *pdata)
+@@ -1136,16 +1127,18 @@ static void xgbe_phy_sfp_parse_eeprom(struct xgbe_prv_data *pdata)
+ 	phy_data->sfp_tx_fault = xgbe_phy_check_sfp_tx_fault(phy_data);
+ 	phy_data->sfp_rx_los = xgbe_phy_check_sfp_rx_los(phy_data);
+ 
+-	/* Assume ACTIVE cable unless told it is PASSIVE */
++	/* Assume FIBER cable unless told otherwise */
+ 	if (sfp_base[XGBE_SFP_BASE_CABLE] & XGBE_SFP_BASE_CABLE_PASSIVE) {
+ 		phy_data->sfp_cable = XGBE_SFP_CABLE_PASSIVE;
+ 		phy_data->sfp_cable_len = sfp_base[XGBE_SFP_BASE_CU_CABLE_LEN];
+-	} else {
++	} else if (sfp_base[XGBE_SFP_BASE_CABLE] & XGBE_SFP_BASE_CABLE_ACTIVE) {
+ 		phy_data->sfp_cable = XGBE_SFP_CABLE_ACTIVE;
++	} else {
++		phy_data->sfp_cable = XGBE_SFP_CABLE_FIBER;
+ 	}
+ 
+ 	/* Determine the type of SFP */
+-	if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE &&
++	if (phy_data->sfp_cable != XGBE_SFP_CABLE_FIBER &&
+ 	    xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000))
+ 		phy_data->sfp_base = XGBE_SFP_BASE_10000_CR;
+ 	else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR)
+diff --git a/drivers/net/ethernet/apple/bmac.c b/drivers/net/ethernet/apple/bmac.c
+index ab6ce85540b8a..a0ce699903cf9 100644
+--- a/drivers/net/ethernet/apple/bmac.c
++++ b/drivers/net/ethernet/apple/bmac.c
+@@ -1510,7 +1510,7 @@ static void bmac_tx_timeout(struct timer_list *t)
+ 	i = bp->tx_empty;
+ 	++dev->stats.tx_errors;
+ 	if (i != bp->tx_fill) {
+-		dev_kfree_skb(bp->tx_bufs[i]);
++		dev_kfree_skb_irq(bp->tx_bufs[i]);
+ 		bp->tx_bufs[i] = NULL;
+ 		if (++i >= N_TX_RING) i = 0;
+ 		bp->tx_empty = i;
+diff --git a/drivers/net/ethernet/apple/mace.c b/drivers/net/ethernet/apple/mace.c
+index 68b9ee4894892..8f5e0cc00dddd 100644
+--- a/drivers/net/ethernet/apple/mace.c
++++ b/drivers/net/ethernet/apple/mace.c
+@@ -840,7 +840,7 @@ static void mace_tx_timeout(struct timer_list *t)
+     if (mp->tx_bad_runt) {
+ 	mp->tx_bad_runt = 0;
+     } else if (i != mp->tx_fill) {
+-	dev_kfree_skb(mp->tx_bufs[i]);
++	dev_kfree_skb_irq(mp->tx_bufs[i]);
+ 	if (++i >= N_TX_RING)
+ 	    i = 0;
+ 	mp->tx_empty = i;
+diff --git a/drivers/net/ethernet/dnet.c b/drivers/net/ethernet/dnet.c
+index 5a847941c46b9..f7d126a2617e3 100644
+--- a/drivers/net/ethernet/dnet.c
++++ b/drivers/net/ethernet/dnet.c
+@@ -558,11 +558,11 @@ static netdev_tx_t dnet_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	skb_tx_timestamp(skb);
+ 
++	spin_unlock_irqrestore(&bp->lock, flags);
++
+ 	/* free the buffer */
+ 	dev_kfree_skb(skb);
+ 
+-	spin_unlock_irqrestore(&bp->lock, flags);
+-
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index aacfa5fcdc408..87f98170ac937 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -1211,8 +1211,12 @@ static int igb_alloc_q_vector(struct igb_adapter *adapter,
+ 	if (!q_vector) {
+ 		q_vector = kzalloc(size, GFP_KERNEL);
+ 	} else if (size > ksize(q_vector)) {
+-		kfree_rcu(q_vector, rcu);
+-		q_vector = kzalloc(size, GFP_KERNEL);
++		struct igb_q_vector *new_q_vector;
++
++		new_q_vector = kzalloc(size, GFP_KERNEL);
++		if (new_q_vector)
++			kfree_rcu(q_vector, rcu);
++		q_vector = new_q_vector;
+ 	} else {
+ 		memset(q_vector, 0, size);
+ 	}
+@@ -7165,7 +7169,7 @@ static void igb_vf_reset_msg(struct igb_adapter *adapter, u32 vf)
+ {
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	unsigned char *vf_mac = adapter->vf_data[vf].vf_mac_addresses;
+-	u32 reg, msgbuf[3];
++	u32 reg, msgbuf[3] = {};
+ 	u8 *addr = (u8 *)(&msgbuf[1]);
+ 
+ 	/* process all the same items cleared in a function level reset */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index 0fd62510fb277..dee65443d76b7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -395,8 +395,8 @@ static int mlx5_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
+ 
+ static const struct ptp_clock_info mlx5_ptp_clock_info = {
+ 	.owner		= THIS_MODULE,
+-	.name		= "mlx5_p2p",
+-	.max_adj	= 100000000,
++	.name		= "mlx5_ptp",
++	.max_adj	= 50000000,
+ 	.n_alarm	= 0,
+ 	.n_ext_ts	= 0,
+ 	.n_per_out	= 0,
+diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+index 3bc570c46f81c..8296b0aa42a9e 100644
+--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+@@ -3960,6 +3960,7 @@ abort_with_slices:
+ 	myri10ge_free_slices(mgp);
+ 
+ abort_with_firmware:
++	kfree(mgp->msix_vectors);
+ 	myri10ge_dummy_rdma(mgp, 0);
+ 
+ abort_with_ioremap:
+diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
+index 0a8483c615d4f..b42f81d0c6f05 100644
+--- a/drivers/net/ethernet/neterion/s2io.c
++++ b/drivers/net/ethernet/neterion/s2io.c
+@@ -2375,7 +2375,7 @@ static void free_tx_buffers(struct s2io_nic *nic)
+ 			skb = s2io_txdl_getskb(&mac_control->fifos[i], txdp, j);
+ 			if (skb) {
+ 				swstats->mem_freed += skb->truesize;
+-				dev_kfree_skb(skb);
++				dev_kfree_skb_irq(skb);
+ 				cnt++;
+ 			}
+ 		}
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
+index 4e417f839c3f0..f7b354e796cb6 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
+@@ -874,7 +874,6 @@ area_cache_get(struct nfp_cpp *cpp, u32 id,
+ 	}
+ 
+ 	/* Adjust the start address to be cache size aligned */
+-	cache->id = id;
+ 	cache->addr = addr & ~(u64)(cache->size - 1);
+ 
+ 	/* Re-init to the new ID and address */
+@@ -894,6 +893,8 @@ area_cache_get(struct nfp_cpp *cpp, u32 id,
+ 		return NULL;
+ 	}
+ 
++	cache->id = id;
++
+ exit:
+ 	/* Adjust offset */
+ 	*offset = addr - cache->addr;
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
+index 10286215092f6..85419b8258b59 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
+@@ -2525,7 +2525,13 @@ int qlcnic_83xx_init(struct qlcnic_adapter *adapter, int pci_using_dac)
+ 		goto disable_mbx_intr;
+ 
+ 	qlcnic_83xx_clear_function_resources(adapter);
+-	qlcnic_dcb_enable(adapter->dcb);
++
++	err = qlcnic_dcb_enable(adapter->dcb);
++	if (err) {
++		qlcnic_dcb_free(adapter->dcb);
++		goto disable_mbx_intr;
++	}
++
+ 	qlcnic_83xx_initialize_nic(adapter, 1);
+ 	qlcnic_dcb_get_info(adapter->dcb);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+index 0a9d24e86715d..eb8000d9b6d0e 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+@@ -42,11 +42,6 @@ struct qlcnic_dcb {
+ 	unsigned long			state;
+ };
+ 
+-static inline void qlcnic_clear_dcb_ops(struct qlcnic_dcb *dcb)
+-{
+-	kfree(dcb);
+-}
+-
+ static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)
+ {
+ 	if (dcb && dcb->ops->get_hw_capability)
+@@ -113,9 +108,8 @@ static inline void qlcnic_dcb_init_dcbnl_ops(struct qlcnic_dcb *dcb)
+ 		dcb->ops->init_dcbnl_ops(dcb);
+ }
+ 
+-static inline void qlcnic_dcb_enable(struct qlcnic_dcb *dcb)
++static inline int qlcnic_dcb_enable(struct qlcnic_dcb *dcb)
+ {
+-	if (dcb && qlcnic_dcb_attach(dcb))
+-		qlcnic_clear_dcb_ops(dcb);
++	return dcb ? qlcnic_dcb_attach(dcb) : 0;
+ }
+ #endif
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+index 43920374beae2..a0c427f3b1800 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+@@ -2638,7 +2638,13 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			 "Device does not support MSI interrupts\n");
+ 
+ 	if (qlcnic_82xx_check(adapter)) {
+-		qlcnic_dcb_enable(adapter->dcb);
++		err = qlcnic_dcb_enable(adapter->dcb);
++		if (err) {
++			qlcnic_dcb_free(adapter->dcb);
++			dev_err(&pdev->dev, "Failed to enable DCB\n");
++			goto err_out_free_hw;
++		}
++
+ 		qlcnic_dcb_get_info(adapter->dcb);
+ 		err = qlcnic_setup_intr(adapter);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+index 98275f18a87b0..d28c4436f965b 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+@@ -222,6 +222,8 @@ int qlcnic_sriov_init(struct qlcnic_adapter *adapter, int num_vfs)
+ 	return 0;
+ 
+ qlcnic_destroy_async_wq:
++	while (i--)
++		kfree(sriov->vf_info[i].vp);
+ 	destroy_workqueue(bc->bc_async_wq);
+ 
+ qlcnic_destroy_trans_wq:
+diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c
+index 2199bd08f4d6a..e377c1f68777b 100644
+--- a/drivers/net/ethernet/rdc/r6040.c
++++ b/drivers/net/ethernet/rdc/r6040.c
+@@ -1184,10 +1184,12 @@ static int r6040_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	err = register_netdev(dev);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to register net device\n");
+-		goto err_out_mdio_unregister;
++		goto err_out_phy_disconnect;
+ 	}
+ 	return 0;
+ 
++err_out_phy_disconnect:
++	phy_disconnect(dev->phydev);
+ err_out_mdio_unregister:
+ 	mdiobus_unregister(lp->mii_bus);
+ err_out_mdio:
+@@ -1211,6 +1213,7 @@ static void r6040_remove_one(struct pci_dev *pdev)
+ 	struct r6040_private *lp = netdev_priv(dev);
+ 
+ 	unregister_netdev(dev);
++	phy_disconnect(dev->phydev);
+ 	mdiobus_unregister(lp->mii_bus);
+ 	mdiobus_free(lp->mii_bus);
+ 	netif_napi_del(&lp->napi);
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 9077014f6f403..ff374d0d80a7d 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -2199,11 +2199,11 @@ static int ravb_remove(struct platform_device *pdev)
+ 			  priv->desc_bat_dma);
+ 	/* Set reset mode */
+ 	ravb_write(ndev, CCC_OPC_RESET, CCC);
+-	pm_runtime_put_sync(&pdev->dev);
+ 	unregister_netdev(ndev);
+ 	netif_napi_del(&priv->napi[RAVB_NC]);
+ 	netif_napi_del(&priv->napi[RAVB_BE]);
+ 	ravb_mdio_release(priv);
++	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	free_netdev(ndev);
+ 	platform_set_drvdata(pdev, NULL);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 08a058e1bc75c..541c0449f31c7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -53,7 +53,8 @@ static void config_sub_second_increment(void __iomem *ioaddr,
+ 	if (!(value & PTP_TCR_TSCTRLSSR))
+ 		data = (data * 1000) / 465;
+ 
+-	data &= PTP_SSIR_SSINC_MASK;
++	if (data > PTP_SSIR_SSINC_MAX)
++		data = PTP_SSIR_SSINC_MAX;
+ 
+ 	reg_value = data;
+ 	if (gmac4)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
+index ecccf895fd7e4..306ebf1a495e4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
+@@ -66,7 +66,7 @@
+ #define	PTP_TCR_TSENMACADDR	BIT(18)
+ 
+ /* SSIR defines */
+-#define	PTP_SSIR_SSINC_MASK		0xff
++#define	PTP_SSIR_SSINC_MAX		0xff
+ #define	GMAC4_PTP_SSIR_SSINC_SHIFT	16
+ 
+ #endif	/* __STMMAC_PTP_H__ */
+diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
+index 6099865217f29..7fdbd2ff55737 100644
+--- a/drivers/net/ethernet/ti/netcp_core.c
++++ b/drivers/net/ethernet/ti/netcp_core.c
+@@ -1276,7 +1276,7 @@ out:
+ }
+ 
+ /* Submit the packet */
+-static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
++static netdev_tx_t netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ {
+ 	struct netcp_intf *netcp = netdev_priv(ndev);
+ 	struct netcp_stats *tx_stats = &netcp->stats;
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 4e1504587895a..e3f0beaa7d550 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -543,7 +543,7 @@ static void xemaclite_tx_timeout(struct net_device *dev)
+ 	xemaclite_enable_interrupts(lp);
+ 
+ 	if (lp->deferred_skb) {
+-		dev_kfree_skb(lp->deferred_skb);
++		dev_kfree_skb_irq(lp->deferred_skb);
+ 		lp->deferred_skb = NULL;
+ 		dev->stats.tx_errors++;
+ 	}
+diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
+index 3b48c890540ac..7f14aad1c240c 100644
+--- a/drivers/net/fddi/defxx.c
++++ b/drivers/net/fddi/defxx.c
+@@ -3844,10 +3844,24 @@ static int dfx_init(void)
+ 	int status;
+ 
+ 	status = pci_register_driver(&dfx_pci_driver);
+-	if (!status)
+-		status = eisa_driver_register(&dfx_eisa_driver);
+-	if (!status)
+-		status = tc_register_driver(&dfx_tc_driver);
++	if (status)
++		goto err_pci_register;
++
++	status = eisa_driver_register(&dfx_eisa_driver);
++	if (status)
++		goto err_eisa_register;
++
++	status = tc_register_driver(&dfx_tc_driver);
++	if (status)
++		goto err_tc_register;
++
++	return 0;
++
++err_tc_register:
++	eisa_driver_unregister(&dfx_eisa_driver);
++err_eisa_register:
++	pci_unregister_driver(&dfx_pci_driver);
++err_pci_register:
+ 	return status;
+ }
+ 
+diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
+index 1e62d00732f25..787eaf3f4f131 100644
+--- a/drivers/net/hamradio/baycom_epp.c
++++ b/drivers/net/hamradio/baycom_epp.c
+@@ -772,7 +772,7 @@ static void epp_bh(struct work_struct *work)
+  * ===================== network driver interface =========================
+  */
+ 
+-static int baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct baycom_state *bc = netdev_priv(dev);
+ 
+diff --git a/drivers/net/hamradio/scc.c b/drivers/net/hamradio/scc.c
+index 6c03932d8a6bc..3dc4eb841aa1d 100644
+--- a/drivers/net/hamradio/scc.c
++++ b/drivers/net/hamradio/scc.c
+@@ -300,12 +300,12 @@ static inline void scc_discard_buffers(struct scc_channel *scc)
+ 	spin_lock_irqsave(&scc->lock, flags);	
+ 	if (scc->tx_buff != NULL)
+ 	{
+-		dev_kfree_skb(scc->tx_buff);
++		dev_kfree_skb_irq(scc->tx_buff);
+ 		scc->tx_buff = NULL;
+ 	}
+ 	
+ 	while (!skb_queue_empty(&scc->tx_queue))
+-		dev_kfree_skb(skb_dequeue(&scc->tx_queue));
++		dev_kfree_skb_irq(skb_dequeue(&scc->tx_queue));
+ 
+ 	spin_unlock_irqrestore(&scc->lock, flags);
+ }
+@@ -1667,7 +1667,7 @@ static netdev_tx_t scc_net_tx(struct sk_buff *skb, struct net_device *dev)
+ 	if (skb_queue_len(&scc->tx_queue) > scc->dev->tx_queue_len) {
+ 		struct sk_buff *skb_del;
+ 		skb_del = skb_dequeue(&scc->tx_queue);
+-		dev_kfree_skb(skb_del);
++		dev_kfree_skb_irq(skb_del);
+ 	}
+ 	skb_queue_tail(&scc->tx_queue, skb);
+ 	netif_trans_update(dev);
+diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
+index d192936b76cff..7863918592db9 100644
+--- a/drivers/net/loopback.c
++++ b/drivers/net/loopback.c
+@@ -210,7 +210,7 @@ static __net_init int loopback_net_init(struct net *net)
+ 	int err;
+ 
+ 	err = -ENOMEM;
+-	dev = alloc_netdev(0, "lo", NET_NAME_UNKNOWN, loopback_setup);
++	dev = alloc_netdev(0, "lo", NET_NAME_PREDICTABLE, loopback_setup);
+ 	if (!dev)
+ 		goto out;
+ 
+diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
+index 33974e7519cef..ca0053775d3f9 100644
+--- a/drivers/net/ntb_netdev.c
++++ b/drivers/net/ntb_netdev.c
+@@ -140,7 +140,7 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
+ enqueue_again:
+ 	rc = ntb_transport_rx_enqueue(qp, skb, skb->data, ndev->mtu + ETH_HLEN);
+ 	if (rc) {
+-		dev_kfree_skb(skb);
++		dev_kfree_skb_any(skb);
+ 		ndev->stats.rx_errors++;
+ 		ndev->stats.rx_fifo_errors++;
+ 	}
+@@ -195,7 +195,7 @@ static void ntb_netdev_tx_handler(struct ntb_transport_qp *qp, void *qp_data,
+ 		ndev->stats.tx_aborted_errors++;
+ 	}
+ 
+-	dev_kfree_skb(skb);
++	dev_kfree_skb_any(skb);
+ 
+ 	if (ntb_transport_tx_free_entry(dev->qp) >= tx_start) {
+ 		/* Make sure anybody stopping the queue after this sees the new
+diff --git a/drivers/net/phy/xilinx_gmii2rgmii.c b/drivers/net/phy/xilinx_gmii2rgmii.c
+index bd6084e315de2..f44dc80ed3e6a 100644
+--- a/drivers/net/phy/xilinx_gmii2rgmii.c
++++ b/drivers/net/phy/xilinx_gmii2rgmii.c
+@@ -91,6 +91,7 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)
+ 
+ 	if (!priv->phy_dev->drv) {
+ 		dev_info(dev, "Attached phy not ready\n");
++		put_device(&priv->phy_dev->mdio.dev);
+ 		return -EPROBE_DEFER;
+ 	}
+ 
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 3f335b57d5cde..220b28711f98e 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1528,6 +1528,8 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+ 	int len;
+ 	unsigned char *cp;
+ 
++	skb->dev = ppp->dev;
++
+ 	if (proto < 0x8000) {
+ #ifdef CONFIG_PPP_FILTER
+ 		/* check if we should pass this packet */
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index ab41a63aa4aaa..497d6bcdc2762 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -267,7 +267,8 @@ static int rndis_query(struct usbnet *dev, struct usb_interface *intf,
+ 
+ 	off = le32_to_cpu(u.get_c->offset);
+ 	len = le32_to_cpu(u.get_c->len);
+-	if (unlikely((8 + off + len) > CONTROL_BUFFER_SIZE))
++	if (unlikely((off > CONTROL_BUFFER_SIZE - 8) ||
++		     (len > CONTROL_BUFFER_SIZE - 8 - off)))
+ 		goto response_error;
+ 
+ 	if (*reply_len != -1 && len != *reply_len)
+diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
+index 2a3f0f1a2b0a2..bc249f81c80d9 100644
+--- a/drivers/net/wan/farsync.c
++++ b/drivers/net/wan/farsync.c
+@@ -2617,6 +2617,7 @@ fst_remove_one(struct pci_dev *pdev)
+ 	for (i = 0; i < card->nports; i++) {
+ 		struct net_device *dev = port_to_dev(&card->ports[i]);
+ 		unregister_hdlc_device(dev);
++		free_netdev(dev);
+ 	}
+ 
+ 	fst_disable_intr(card);
+@@ -2637,6 +2638,7 @@ fst_remove_one(struct pci_dev *pdev)
+ 				    card->tx_dma_handle_card);
+ 	}
+ 	fst_card_array[card->card_no] = NULL;
++	kfree(card);
+ }
+ 
+ static struct pci_driver fst_driver = {
+diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
+index 58e189ec672f9..5d3cf354f6cb5 100644
+--- a/drivers/net/wireless/ath/ar5523/ar5523.c
++++ b/drivers/net/wireless/ath/ar5523/ar5523.c
+@@ -241,6 +241,11 @@ static void ar5523_cmd_tx_cb(struct urb *urb)
+ 	}
+ }
+ 
++static void ar5523_cancel_tx_cmd(struct ar5523 *ar)
++{
++	usb_kill_urb(ar->tx_cmd.urb_tx);
++}
++
+ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
+ 		      int ilen, void *odata, int olen, int flags)
+ {
+@@ -280,6 +285,7 @@ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
+ 	}
+ 
+ 	if (!wait_for_completion_timeout(&cmd->done, 2 * HZ)) {
++		ar5523_cancel_tx_cmd(ar);
+ 		cmd->odata = NULL;
+ 		ar5523_err(ar, "timeout waiting for command %02x reply\n",
+ 			   code);
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index caece8339a50a..92757495c73b6 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -3739,18 +3739,22 @@ static struct pci_driver ath10k_pci_driver = {
+ 
+ static int __init ath10k_pci_init(void)
+ {
+-	int ret;
++	int ret1, ret2;
+ 
+-	ret = pci_register_driver(&ath10k_pci_driver);
+-	if (ret)
++	ret1 = pci_register_driver(&ath10k_pci_driver);
++	if (ret1)
+ 		printk(KERN_ERR "failed to register ath10k pci driver: %d\n",
+-		       ret);
++		       ret1);
+ 
+-	ret = ath10k_ahb_init();
+-	if (ret)
+-		printk(KERN_ERR "ahb init failed: %d\n", ret);
++	ret2 = ath10k_ahb_init();
++	if (ret2)
++		printk(KERN_ERR "ahb init failed: %d\n", ret2);
+ 
+-	return ret;
++	if (ret1 && ret2)
++		return ret1;
++
++	/* registered to at least one bus */
++	return 0;
+ }
+ module_init(ath10k_pci_init);
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index c8c7afe0e343e..8a18a33b5b59f 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -709,14 +709,13 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
+ 	struct hif_device_usb *hif_dev = rx_buf->hif_dev;
+ 	struct sk_buff *skb = rx_buf->skb;
+-	struct sk_buff *nskb;
+ 	int ret;
+ 
+ 	if (!skb)
+ 		return;
+ 
+ 	if (!hif_dev)
+-		goto free;
++		goto free_skb;
+ 
+ 	switch (urb->status) {
+ 	case 0:
+@@ -725,7 +724,7 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	case -ECONNRESET:
+ 	case -ENODEV:
+ 	case -ESHUTDOWN:
+-		goto free;
++		goto free_skb;
+ 	default:
+ 		skb_reset_tail_pointer(skb);
+ 		skb_trim(skb, 0);
+@@ -736,25 +735,27 @@ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ 	if (likely(urb->actual_length != 0)) {
+ 		skb_put(skb, urb->actual_length);
+ 
+-		/* Process the command first */
++		/*
++		 * Process the command first.
++		 * skb is either freed here or passed to be
++		 * managed to another callback function.
++		 */
+ 		ath9k_htc_rx_msg(hif_dev->htc_handle, skb,
+ 				 skb->len, USB_REG_IN_PIPE);
+ 
+-
+-		nskb = alloc_skb(MAX_REG_IN_BUF_SIZE, GFP_ATOMIC);
+-		if (!nskb) {
++		skb = alloc_skb(MAX_REG_IN_BUF_SIZE, GFP_ATOMIC);
++		if (!skb) {
+ 			dev_err(&hif_dev->udev->dev,
+ 				"ath9k_htc: REG_IN memory allocation failure\n");
+-			urb->context = NULL;
+-			return;
++			goto free_rx_buf;
+ 		}
+ 
+-		rx_buf->skb = nskb;
++		rx_buf->skb = skb;
+ 
+ 		usb_fill_int_urb(urb, hif_dev->udev,
+ 				 usb_rcvintpipe(hif_dev->udev,
+ 						 USB_REG_IN_PIPE),
+-				 nskb->data, MAX_REG_IN_BUF_SIZE,
++				 skb->data, MAX_REG_IN_BUF_SIZE,
+ 				 ath9k_hif_usb_reg_in_cb, rx_buf, 1);
+ 	}
+ 
+@@ -763,12 +764,13 @@ resubmit:
+ 	ret = usb_submit_urb(urb, GFP_ATOMIC);
+ 	if (ret) {
+ 		usb_unanchor_urb(urb);
+-		goto free;
++		goto free_skb;
+ 	}
+ 
+ 	return;
+-free:
++free_skb:
+ 	kfree_skb(skb);
++free_rx_buf:
+ 	kfree(rx_buf);
+ 	urb->context = NULL;
+ }
+@@ -781,14 +783,10 @@ static void ath9k_hif_usb_dealloc_tx_urbs(struct hif_device_usb *hif_dev)
+ 	spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ 	list_for_each_entry_safe(tx_buf, tx_buf_tmp,
+ 				 &hif_dev->tx.tx_buf, list) {
+-		usb_get_urb(tx_buf->urb);
+-		spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+-		usb_kill_urb(tx_buf->urb);
+ 		list_del(&tx_buf->list);
+ 		usb_free_urb(tx_buf->urb);
+ 		kfree(tx_buf->buf);
+ 		kfree(tx_buf);
+-		spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
+ 	}
+ 	spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+ 
+@@ -1330,10 +1328,24 @@ static int send_eject_command(struct usb_interface *interface)
+ static int ath9k_hif_usb_probe(struct usb_interface *interface,
+ 			       const struct usb_device_id *id)
+ {
++	struct usb_endpoint_descriptor *bulk_in, *bulk_out, *int_in, *int_out;
+ 	struct usb_device *udev = interface_to_usbdev(interface);
++	struct usb_host_interface *alt;
+ 	struct hif_device_usb *hif_dev;
+ 	int ret = 0;
+ 
++	/* Verify the expected endpoints are present */
++	alt = interface->cur_altsetting;
++	if (usb_find_common_endpoints(alt, &bulk_in, &bulk_out, &int_in, &int_out) < 0 ||
++	    usb_endpoint_num(bulk_in) != USB_WLAN_RX_PIPE ||
++	    usb_endpoint_num(bulk_out) != USB_WLAN_TX_PIPE ||
++	    usb_endpoint_num(int_in) != USB_REG_IN_PIPE ||
++	    usb_endpoint_num(int_out) != USB_REG_OUT_PIPE) {
++		dev_err(&udev->dev,
++			"ath9k_htc: Device endpoint numbers are not the expected ones\n");
++		return -ENODEV;
++	}
++
+ 	if (id->driver_info == STORAGE_DEVICE)
+ 		return send_eject_command(interface);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index 4e5a6c311d1ad..d8460835ff003 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -648,6 +648,11 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev,
+ 	char end = '\0';
+ 	size_t reqsz;
+ 
++	if (chiprev >= BITS_PER_TYPE(u32)) {
++		brcmf_err("Invalid chip revision %u\n", chiprev);
++		return NULL;
++	}
++
+ 	for (i = 0; i < table_size; i++) {
+ 		if (mapping_table[i].chipid == chip &&
+ 		    mapping_table[i].revmask & BIT(chiprev))
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index a21529d2ccabc..6ee04af85e9d5 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -612,7 +612,7 @@ static int brcmf_pcie_exit_download_state(struct brcmf_pciedev_info *devinfo,
+ 	}
+ 
+ 	if (!brcmf_chip_set_active(devinfo->ci, resetintr))
+-		return -EINVAL;
++		return -EIO;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 0a96c1071e5b8..6eb1daf513010 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -3337,6 +3337,7 @@ static int brcmf_sdio_download_firmware(struct brcmf_sdio *bus,
+ 	/* Take arm out of reset */
+ 	if (!brcmf_chip_set_active(bus->ci, rstvec)) {
+ 		brcmf_err("error getting out of ARM core reset\n");
++		bcmerror = -EIO;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index bd28deff9b8c6..921a226b18f85 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1136,6 +1136,15 @@ enum bt_mp_oper_opcode_8723b {
+ 	BT_MP_OP_ENABLE_CFO_TRACKING = 0x24,
+ };
+ 
++enum rtl8xxxu_bw_mode {
++	RTL8XXXU_CHANNEL_WIDTH_20 = 0,
++	RTL8XXXU_CHANNEL_WIDTH_40 = 1,
++	RTL8XXXU_CHANNEL_WIDTH_80 = 2,
++	RTL8XXXU_CHANNEL_WIDTH_160 = 3,
++	RTL8XXXU_CHANNEL_WIDTH_80_80 = 4,
++	RTL8XXXU_CHANNEL_WIDTH_MAX = 5,
++};
++
+ struct rtl8723bu_c2h {
+ 	u8 id;
+ 	u8 seq;
+@@ -1186,7 +1195,7 @@ struct rtl8723bu_c2h {
+ 			u8 dummy3_0;
+ 		} __packed ra_report;
+ 	};
+-};
++} __packed;
+ 
+ struct rtl8xxxu_fileops;
+ 
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 38f06ee98b356..c3c8382dd0ba2 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -1614,18 +1614,18 @@ static void rtl8xxxu_print_chipinfo(struct rtl8xxxu_priv *priv)
+ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ {
+ 	struct device *dev = &priv->udev->dev;
+-	u32 val32, bonding;
++	u32 val32, bonding, sys_cfg;
+ 	u16 val16;
+ 
+-	val32 = rtl8xxxu_read32(priv, REG_SYS_CFG);
+-	priv->chip_cut = (val32 & SYS_CFG_CHIP_VERSION_MASK) >>
++	sys_cfg = rtl8xxxu_read32(priv, REG_SYS_CFG);
++	priv->chip_cut = (sys_cfg & SYS_CFG_CHIP_VERSION_MASK) >>
+ 		SYS_CFG_CHIP_VERSION_SHIFT;
+-	if (val32 & SYS_CFG_TRP_VAUX_EN) {
++	if (sys_cfg & SYS_CFG_TRP_VAUX_EN) {
+ 		dev_info(dev, "Unsupported test chip\n");
+ 		return -ENOTSUPP;
+ 	}
+ 
+-	if (val32 & SYS_CFG_BT_FUNC) {
++	if (sys_cfg & SYS_CFG_BT_FUNC) {
+ 		if (priv->chip_cut >= 3) {
+ 			sprintf(priv->chip_name, "8723BU");
+ 			priv->rtl_chip = RTL8723B;
+@@ -1647,7 +1647,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ 		if (val32 & MULTI_GPS_FUNC_EN)
+ 			priv->has_gps = 1;
+ 		priv->is_multi_func = 1;
+-	} else if (val32 & SYS_CFG_TYPE_ID) {
++	} else if (sys_cfg & SYS_CFG_TYPE_ID) {
+ 		bonding = rtl8xxxu_read32(priv, REG_HPON_FSM);
+ 		bonding &= HPON_FSM_BONDING_MASK;
+ 		if (priv->fops->tx_desc_size ==
+@@ -1695,7 +1695,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ 	case RTL8188E:
+ 	case RTL8192E:
+ 	case RTL8723B:
+-		switch (val32 & SYS_CFG_VENDOR_EXT_MASK) {
++		switch (sys_cfg & SYS_CFG_VENDOR_EXT_MASK) {
+ 		case SYS_CFG_VENDOR_ID_TSMC:
+ 			sprintf(priv->chip_vendor, "TSMC");
+ 			break;
+@@ -1712,7 +1712,7 @@ static int rtl8xxxu_identify_chip(struct rtl8xxxu_priv *priv)
+ 		}
+ 		break;
+ 	default:
+-		if (val32 & SYS_CFG_VENDOR_ID) {
++		if (sys_cfg & SYS_CFG_VENDOR_ID) {
+ 			sprintf(priv->chip_vendor, "UMC");
+ 			priv->vendor_umc = 1;
+ 		} else {
+@@ -4333,7 +4333,7 @@ void rtl8xxxu_gen2_update_rate_mask(struct rtl8xxxu_priv *priv,
+ 				    u32 ramask, int sgi)
+ {
+ 	struct h2c_cmd h2c;
+-	u8 bw = 0;
++	u8 bw = RTL8XXXU_CHANNEL_WIDTH_20;
+ 
+ 	memset(&h2c, 0, sizeof(struct h2c_cmd));
+ 
+diff --git a/drivers/net/wireless/rsi/rsi_91x_core.c b/drivers/net/wireless/rsi/rsi_91x_core.c
+index c6c29034b2ead..a939b552a8e47 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_core.c
++++ b/drivers/net/wireless/rsi/rsi_91x_core.c
+@@ -466,7 +466,9 @@ void rsi_core_xmit(struct rsi_common *common, struct sk_buff *skb)
+ 							      tid, 0);
+ 			}
+ 		}
+-		if (skb->protocol == cpu_to_be16(ETH_P_PAE)) {
++
++		if (IEEE80211_SKB_CB(skb)->control.flags &
++		    IEEE80211_TX_CTRL_PORT_CTRL_PROTO) {
+ 			q_num = MGMT_SOFT_Q;
+ 			skb->priority = q_num;
+ 		}
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index 2cb7cca4ec2d9..b445a52fbfbde 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -152,12 +152,16 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 	u8 header_size;
+ 	u8 vap_id = 0;
+ 	u8 dword_align_bytes;
++	bool tx_eapol;
+ 	u16 seq_num;
+ 
+ 	info = IEEE80211_SKB_CB(skb);
+ 	vif = info->control.vif;
+ 	tx_params = (struct skb_info *)info->driver_data;
+ 
++	tx_eapol = IEEE80211_SKB_CB(skb)->control.flags &
++		   IEEE80211_TX_CTRL_PORT_CTRL_PROTO;
++
+ 	header_size = FRAME_DESC_SZ + sizeof(struct rsi_xtended_desc);
+ 	if (header_size > skb_headroom(skb)) {
+ 		rsi_dbg(ERR_ZONE, "%s: Unable to send pkt\n", __func__);
+@@ -221,7 +225,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 		}
+ 	}
+ 
+-	if (skb->protocol == cpu_to_be16(ETH_P_PAE)) {
++	if (tx_eapol) {
+ 		rsi_dbg(INFO_ZONE, "*** Tx EAPOL ***\n");
+ 
+ 		data_desc->frame_info = cpu_to_le16(RATE_INFO_ENABLE);
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index 79bf8e1bd39c2..b7f5db3a6aa02 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -1305,6 +1305,8 @@ static int pn533_poll_dep_complete(struct pn533 *dev, void *arg,
+ 	if (IS_ERR(resp))
+ 		return PTR_ERR(resp);
+ 
++	memset(&nfc_target, 0, sizeof(struct nfc_target));
++
+ 	rsp = (struct pn533_cmd_jump_dep_response *)resp->data;
+ 
+ 	rc = rsp->status & PN533_CMD_RET_MASK;
+@@ -1786,6 +1788,8 @@ static int pn533_in_dep_link_up_complete(struct pn533 *dev, void *arg,
+ 
+ 		dev_dbg(dev->dev, "Creating new target\n");
+ 
++		memset(&nfc_target, 0, sizeof(struct nfc_target));
++
+ 		nfc_target.supported_protocols = NFC_PROTO_NFC_DEP_MASK;
+ 		nfc_target.nfcid1_len = 10;
+ 		memcpy(nfc_target.nfcid1, rsp->nfcid3t, nfc_target.nfcid1_len);
+diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
+index a2c9b3f3bc232..c7da364b63584 100644
+--- a/drivers/nfc/pn533/usb.c
++++ b/drivers/nfc/pn533/usb.c
+@@ -165,10 +165,17 @@ static int pn533_usb_send_ack(struct pn533 *dev, gfp_t flags)
+ 	return usb_submit_urb(phy->ack_urb, flags);
+ }
+ 
++struct pn533_out_arg {
++	struct pn533_usb_phy *phy;
++	struct completion done;
++};
++
+ static int pn533_usb_send_frame(struct pn533 *dev,
+ 				struct sk_buff *out)
+ {
+ 	struct pn533_usb_phy *phy = dev->phy;
++	struct pn533_out_arg arg;
++	void *cntx;
+ 	int rc;
+ 
+ 	if (phy->priv == NULL)
+@@ -180,10 +187,17 @@ static int pn533_usb_send_frame(struct pn533 *dev,
+ 	print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1,
+ 			     out->data, out->len, false);
+ 
++	init_completion(&arg.done);
++	cntx = phy->out_urb->context;
++	phy->out_urb->context = &arg;
++
+ 	rc = usb_submit_urb(phy->out_urb, GFP_KERNEL);
+ 	if (rc)
+ 		return rc;
+ 
++	wait_for_completion(&arg.done);
++	phy->out_urb->context = cntx;
++
+ 	if (dev->protocol_type == PN533_PROTO_REQ_RESP) {
+ 		/* request for response for sent packet directly */
+ 		rc = pn533_submit_urb_for_response(phy, GFP_KERNEL);
+@@ -424,7 +438,31 @@ static int pn533_acr122_poweron_rdr(struct pn533_usb_phy *phy)
+ 	return arg.rc;
+ }
+ 
+-static void pn533_send_complete(struct urb *urb)
++static void pn533_out_complete(struct urb *urb)
++{
++	struct pn533_out_arg *arg = urb->context;
++	struct pn533_usb_phy *phy = arg->phy;
++
++	switch (urb->status) {
++	case 0:
++		break; /* success */
++	case -ECONNRESET:
++	case -ENOENT:
++		dev_dbg(&phy->udev->dev,
++			"The urb has been stopped (status %d)\n",
++			urb->status);
++		break;
++	case -ESHUTDOWN:
++	default:
++		nfc_err(&phy->udev->dev,
++			"Urb failure (status %d)\n",
++			urb->status);
++	}
++
++	complete(&arg->done);
++}
++
++static void pn533_ack_complete(struct urb *urb)
+ {
+ 	struct pn533_usb_phy *phy = urb->context;
+ 
+@@ -512,10 +550,10 @@ static int pn533_usb_probe(struct usb_interface *interface,
+ 
+ 	usb_fill_bulk_urb(phy->out_urb, phy->udev,
+ 			  usb_sndbulkpipe(phy->udev, out_endpoint),
+-			  NULL, 0, pn533_send_complete, phy);
++			  NULL, 0, pn533_out_complete, phy);
+ 	usb_fill_bulk_urb(phy->ack_urb, phy->udev,
+ 			  usb_sndbulkpipe(phy->udev, out_endpoint),
+-			  NULL, 0, pn533_send_complete, phy);
++			  NULL, 0, pn533_ack_complete, phy);
+ 
+ 	switch (id->driver_info) {
+ 	case PN533_DEVICE_STD:
+diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
+index c60b465f6fe45..6a0833f5923c1 100644
+--- a/drivers/parisc/led.c
++++ b/drivers/parisc/led.c
+@@ -141,6 +141,9 @@ static int start_task(void)
+ 
+ 	/* Create the work queue and queue the LED task */
+ 	led_wq = create_singlethread_workqueue("led_wq");	
++	if (!led_wq)
++		return -ENOMEM;
++
+ 	queue_delayed_work(led_wq, &led_task, 0);
+ 
+ 	return 0;
+diff --git a/drivers/pci/irq.c b/drivers/pci/irq.c
+index a1de501a27299..3f6a5d5202595 100644
+--- a/drivers/pci/irq.c
++++ b/drivers/pci/irq.c
+@@ -94,6 +94,8 @@ int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler,
+ 	va_start(ap, fmt);
+ 	devname = kvasprintf(GFP_KERNEL, fmt, ap);
+ 	va_end(ap);
++	if (!devname)
++		return -ENOMEM;
+ 
+ 	ret = request_threaded_irq(pci_irq_vector(dev, nr), handler, thread_fn,
+ 				   irqflags, devname, dev_id);
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 1edf5a1836ea9..34a7b3c137bb0 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -1289,11 +1289,9 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
+ 
+ 	sysfs_bin_attr_init(res_attr);
+ 	if (write_combine) {
+-		pdev->res_attr_wc[num] = res_attr;
+ 		sprintf(res_attr_name, "resource%d_wc", num);
+ 		res_attr->mmap = pci_mmap_resource_wc;
+ 	} else {
+-		pdev->res_attr[num] = res_attr;
+ 		sprintf(res_attr_name, "resource%d", num);
+ 		if (pci_resource_flags(pdev, num) & IORESOURCE_IO) {
+ 			res_attr->read = pci_read_resource_io;
+@@ -1309,10 +1307,17 @@ static int pci_create_attr(struct pci_dev *pdev, int num, int write_combine)
+ 	res_attr->size = pci_resource_len(pdev, num);
+ 	res_attr->private = (void *)(unsigned long)num;
+ 	retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr);
+-	if (retval)
++	if (retval) {
+ 		kfree(res_attr);
++		return retval;
++	}
+ 
+-	return retval;
++	if (write_combine)
++		pdev->res_attr_wc[num] = res_attr;
++	else
++		pdev->res_attr[num] = res_attr;
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 48c419b306f34..7ac6f4710908b 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5815,6 +5815,8 @@ bool pci_device_is_present(struct pci_dev *pdev)
+ {
+ 	u32 v;
+ 
++	/* Check PF if pdev is a VF, since VF Vendor/Device IDs are 0xffff */
++	pdev = pci_physfn(pdev);
+ 	if (pci_dev_is_disconnected(pdev))
+ 		return false;
+ 	return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0);
+diff --git a/drivers/perf/arm_dsu_pmu.c b/drivers/perf/arm_dsu_pmu.c
+index 660cb8ac886ac..961533ba33e75 100644
+--- a/drivers/perf/arm_dsu_pmu.c
++++ b/drivers/perf/arm_dsu_pmu.c
+@@ -823,7 +823,11 @@ static int __init dsu_pmu_init(void)
+ 	if (ret < 0)
+ 		return ret;
+ 	dsu_pmu_cpuhp_state = ret;
+-	return platform_driver_register(&dsu_pmu_driver);
++	ret = platform_driver_register(&dsu_pmu_driver);
++	if (ret)
++		cpuhp_remove_multi_state(dsu_pmu_cpuhp_state);
++
++	return ret;
+ }
+ 
+ static void __exit dsu_pmu_exit(void)
+diff --git a/drivers/pinctrl/mediatek/mtk-eint.c b/drivers/pinctrl/mediatek/mtk-eint.c
+index 564cfaee129d2..55e3305ede811 100644
+--- a/drivers/pinctrl/mediatek/mtk-eint.c
++++ b/drivers/pinctrl/mediatek/mtk-eint.c
+@@ -277,12 +277,15 @@ static struct irq_chip mtk_eint_irq_chip = {
+ 
+ static unsigned int mtk_eint_hw_init(struct mtk_eint *eint)
+ {
+-	void __iomem *reg = eint->base + eint->regs->dom_en;
++	void __iomem *dom_en = eint->base + eint->regs->dom_en;
++	void __iomem *mask_set = eint->base + eint->regs->mask_set;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < eint->hw->ap_num; i += 32) {
+-		writel(0xffffffff, reg);
+-		reg += 4;
++		writel(0xffffffff, dom_en);
++		writel(0xffffffff, mask_set);
++		dom_en += 4;
++		mask_set += 4;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index 55b56440a5a89..7e946c54681a5 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -390,8 +390,10 @@ int pinconf_generic_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	for_each_available_child_of_node(np_config, np) {
+ 		ret = pinconf_generic_dt_subnode_to_map(pctldev, np, map,
+ 					&reserved_maps, num_maps, type);
+-		if (ret < 0)
++		if (ret < 0) {
++			of_node_put(np);
+ 			goto exit;
++		}
+ 	}
+ 	return 0;
+ 
+diff --git a/drivers/platform/x86/mxm-wmi.c b/drivers/platform/x86/mxm-wmi.c
+index 35d8b9a939f9f..9c1893a703e62 100644
+--- a/drivers/platform/x86/mxm-wmi.c
++++ b/drivers/platform/x86/mxm-wmi.c
+@@ -48,13 +48,11 @@ int mxm_wmi_call_mxds(int adapter)
+ 		.xarg = 1,
+ 	};
+ 	struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
+-	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	acpi_status status;
+ 
+ 	printk("calling mux switch %d\n", adapter);
+ 
+-	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input,
+-				     &output);
++	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input, NULL);
+ 
+ 	if (ACPI_FAILURE(status))
+ 		return status;
+@@ -73,13 +71,11 @@ int mxm_wmi_call_mxmx(int adapter)
+ 		.xarg = 1,
+ 	};
+ 	struct acpi_buffer input = { (acpi_size)sizeof(args), &args };
+-	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	acpi_status status;
+ 
+ 	printk("calling mux switch %d\n", adapter);
+ 
+-	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input,
+-				     &output);
++	status = wmi_evaluate_method(MXM_WMMX_GUID, 0x0, adapter, &input, NULL);
+ 
+ 	if (ACPI_FAILURE(status))
+ 		return status;
+diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
+index b50f8f73fb478..0977ed356ed11 100644
+--- a/drivers/platform/x86/sony-laptop.c
++++ b/drivers/platform/x86/sony-laptop.c
+@@ -1913,14 +1913,21 @@ static int sony_nc_kbd_backlight_setup(struct platform_device *pd,
+ 		break;
+ 	}
+ 
+-	ret = sony_call_snc_handle(handle, probe_base, &result);
+-	if (ret)
+-		return ret;
++	/*
++	 * Only probe if there is a separate probe_base, otherwise the probe call
++	 * is equivalent to __sony_nc_kbd_backlight_mode_set(0), resulting in
++	 * the keyboard backlight being turned off.
++	 */
++	if (probe_base) {
++		ret = sony_call_snc_handle(handle, probe_base, &result);
++		if (ret)
++			return ret;
+ 
+-	if ((handle == 0x0137 && !(result & 0x02)) ||
+-			!(result & 0x01)) {
+-		dprintk("no backlight keyboard found\n");
+-		return 0;
++		if ((handle == 0x0137 && !(result & 0x02)) ||
++				!(result & 0x01)) {
++			dprintk("no backlight keyboard found\n");
++			return 0;
++		}
+ 	}
+ 
+ 	kbdbl_ctl = kzalloc(sizeof(*kbdbl_ctl), GFP_KERNEL);
+diff --git a/drivers/pnp/core.c b/drivers/pnp/core.c
+index 3bf18d7189750..131b925b820d2 100644
+--- a/drivers/pnp/core.c
++++ b/drivers/pnp/core.c
+@@ -160,14 +160,14 @@ struct pnp_dev *pnp_alloc_dev(struct pnp_protocol *protocol, int id,
+ 	dev->dev.coherent_dma_mask = dev->dma_mask;
+ 	dev->dev.release = &pnp_release_device;
+ 
+-	dev_set_name(&dev->dev, "%02x:%02x", dev->protocol->number, dev->number);
+-
+ 	dev_id = pnp_add_id(dev, pnpid);
+ 	if (!dev_id) {
+ 		kfree(dev);
+ 		return NULL;
+ 	}
+ 
++	dev_set_name(&dev->dev, "%02x:%02x", dev->protocol->number, dev->number);
++
+ 	return dev;
+ }
+ 
+diff --git a/drivers/power/avs/smartreflex.c b/drivers/power/avs/smartreflex.c
+index 8760477d0e8af..5ac122cd25b8c 100644
+--- a/drivers/power/avs/smartreflex.c
++++ b/drivers/power/avs/smartreflex.c
+@@ -984,6 +984,7 @@ static int omap_sr_probe(struct platform_device *pdev)
+ err_debugfs:
+ 	debugfs_remove_recursive(sr_info->dbg_dir);
+ err_list_del:
++	pm_runtime_disable(&pdev->dev);
+ 	list_del(&sr_info->node);
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index e43a7b3b570c4..9b98921a3b16a 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -945,8 +945,8 @@ create_triggers_failed:
+ register_cooler_failed:
+ 	psy_unregister_thermal(psy);
+ register_thermal_failed:
+-	device_del(dev);
+ wakeup_init_failed:
++	device_del(dev);
+ device_add_failed:
+ check_supplies_failed:
+ dev_set_name_failed:
+diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
+index a136a7ae77140..ee19c511adce4 100644
+--- a/drivers/rapidio/devices/rio_mport_cdev.c
++++ b/drivers/rapidio/devices/rio_mport_cdev.c
+@@ -1809,8 +1809,11 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
+ 		rio_init_dbell_res(&rdev->riores[RIO_DOORBELL_RESOURCE],
+ 				   0, 0xffff);
+ 	err = rio_add_device(rdev);
+-	if (err)
+-		goto cleanup;
++	if (err) {
++		put_device(&rdev->dev);
++		return err;
++	}
++
+ 	rio_dev_get(rdev);
+ 
+ 	return 0;
+@@ -1906,10 +1909,6 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
+ 
+ 	priv->md = chdev;
+ 
+-	mutex_lock(&chdev->file_mutex);
+-	list_add_tail(&priv->list, &chdev->file_list);
+-	mutex_unlock(&chdev->file_mutex);
+-
+ 	INIT_LIST_HEAD(&priv->db_filters);
+ 	INIT_LIST_HEAD(&priv->pw_filters);
+ 	spin_lock_init(&priv->fifo_lock);
+@@ -1918,6 +1917,7 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
+ 			  sizeof(struct rio_event) * MPORT_EVENT_DEPTH,
+ 			  GFP_KERNEL);
+ 	if (ret < 0) {
++		put_device(&chdev->dev);
+ 		dev_err(&chdev->dev, DRV_NAME ": kfifo_alloc failed\n");
+ 		ret = -ENOMEM;
+ 		goto err_fifo;
+@@ -1928,6 +1928,9 @@ static int mport_cdev_open(struct inode *inode, struct file *filp)
+ 	spin_lock_init(&priv->req_lock);
+ 	mutex_init(&priv->dma_lock);
+ #endif
++	mutex_lock(&chdev->file_mutex);
++	list_add_tail(&priv->list, &chdev->file_list);
++	mutex_unlock(&chdev->file_mutex);
+ 
+ 	filp->private_data = priv;
+ 	goto out;
+diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
+index fd7b517132ace..d2ddc173c5bd4 100644
+--- a/drivers/rapidio/rio-scan.c
++++ b/drivers/rapidio/rio-scan.c
+@@ -460,8 +460,12 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
+ 				   0, 0xffff);
+ 
+ 	ret = rio_add_device(rdev);
+-	if (ret)
+-		goto cleanup;
++	if (ret) {
++		if (rswitch)
++			kfree(rswitch->route_table);
++		put_device(&rdev->dev);
++		return NULL;
++	}
+ 
+ 	rio_dev_get(rdev);
+ 
+diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
+index 83406696c7aa7..4304142d87006 100644
+--- a/drivers/rapidio/rio.c
++++ b/drivers/rapidio/rio.c
+@@ -2271,11 +2271,16 @@ int rio_register_mport(struct rio_mport *port)
+ 	atomic_set(&port->state, RIO_DEVICE_RUNNING);
+ 
+ 	res = device_register(&port->dev);
+-	if (res)
++	if (res) {
+ 		dev_err(&port->dev, "RIO: mport%d registration failed ERR=%d\n",
+ 			port->id, res);
+-	else
++		mutex_lock(&rio_mport_list_lock);
++		list_del(&port->node);
++		mutex_unlock(&rio_mport_list_lock);
++		put_device(&port->dev);
++	} else {
+ 		dev_dbg(&port->dev, "RIO: registered mport%d\n", port->id);
++	}
+ 
+ 	return res;
+ }
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 045075cd256ce..11656b3836748 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1197,7 +1197,13 @@ static int set_machine_constraints(struct regulator_dev *rdev)
+ 		if (rdev->supply_name && !rdev->supply)
+ 			return -EPROBE_DEFER;
+ 
+-		if (rdev->supply) {
++		/* If supplying regulator has already been enabled,
++		 * it's not intended to have use_count increment
++		 * when rdev is only boot-on.
++		 */
++		if (rdev->supply &&
++		    (rdev->constraints->always_on ||
++		     !regulator_is_enabled(rdev->supply))) {
+ 			ret = regulator_enable(rdev->supply);
+ 			if (ret < 0) {
+ 				_regulator_put(rdev->supply);
+@@ -1241,6 +1247,7 @@ static int set_supply(struct regulator_dev *rdev,
+ 
+ 	rdev->supply = create_regulator(supply_rdev, &rdev->dev, "SUPPLY");
+ 	if (rdev->supply == NULL) {
++		module_put(supply_rdev->owner);
+ 		err = -ENOMEM;
+ 		return err;
+ 	}
+@@ -1541,6 +1548,7 @@ static struct regulator_dev *regulator_dev_lookup(struct device *dev,
+ 		node = of_get_regulator(dev, supply);
+ 		if (node) {
+ 			r = of_find_regulator_by_node(node);
++			of_node_put(node);
+ 			if (r)
+ 				return r;
+ 
+diff --git a/drivers/regulator/da9211-regulator.c b/drivers/regulator/da9211-regulator.c
+index 6c122b3df5d03..8847d15c2b90f 100644
+--- a/drivers/regulator/da9211-regulator.c
++++ b/drivers/regulator/da9211-regulator.c
+@@ -469,6 +469,12 @@ static int da9211_i2c_probe(struct i2c_client *i2c,
+ 
+ 	chip->chip_irq = i2c->irq;
+ 
++	ret = da9211_regulator_init(chip);
++	if (ret < 0) {
++		dev_err(chip->dev, "Failed to initialize regulator: %d\n", ret);
++		return ret;
++	}
++
+ 	if (chip->chip_irq != 0) {
+ 		ret = devm_request_threaded_irq(chip->dev, chip->chip_irq, NULL,
+ 					da9211_irq_handler,
+@@ -483,11 +489,6 @@ static int da9211_i2c_probe(struct i2c_client *i2c,
+ 		dev_warn(chip->dev, "No IRQ configured\n");
+ 	}
+ 
+-	ret = da9211_regulator_init(chip);
+-
+-	if (ret < 0)
+-		dev_err(chip->dev, "Failed to initialize regulator: %d\n", ret);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/rtc/rtc-mxc_v2.c b/drivers/rtc/rtc-mxc_v2.c
+index 45c7366b72862..c16aa4a389e96 100644
+--- a/drivers/rtc/rtc-mxc_v2.c
++++ b/drivers/rtc/rtc-mxc_v2.c
+@@ -335,8 +335,10 @@ static int mxc_rtc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
+-	if (IS_ERR(pdata->rtc))
++	if (IS_ERR(pdata->rtc)) {
++		clk_disable_unprepare(pdata->clk);
+ 		return PTR_ERR(pdata->rtc);
++	}
+ 
+ 	pdata->rtc->ops = &mxc_rtc_ops;
+ 	pdata->rtc->range_max = U32_MAX;
+diff --git a/drivers/rtc/rtc-snvs.c b/drivers/rtc/rtc-snvs.c
+index 3cf011e120530..b1e13f2877ada 100644
+--- a/drivers/rtc/rtc-snvs.c
++++ b/drivers/rtc/rtc-snvs.c
+@@ -32,6 +32,14 @@
+ #define SNVS_LPPGDR_INIT	0x41736166
+ #define CNTR_TO_SECS_SH		15
+ 
++/* The maximum RTC clock cycles that are allowed to pass between two
++ * consecutive clock counter register reads. If the values are corrupted a
++ * bigger difference is expected. The RTC frequency is 32kHz. With 320 cycles
++ * we end at 10ms which should be enough for most cases. If it once takes
++ * longer than expected we do a retry.
++ */
++#define MAX_RTC_READ_DIFF_CYCLES	320
++
+ struct snvs_rtc_data {
+ 	struct rtc_device *rtc;
+ 	struct regmap *regmap;
+@@ -56,6 +64,7 @@ static u64 rtc_read_lpsrt(struct snvs_rtc_data *data)
+ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
+ {
+ 	u64 read1, read2;
++	s64 diff;
+ 	unsigned int timeout = 100;
+ 
+ 	/* As expected, the registers might update between the read of the LSB
+@@ -66,7 +75,8 @@ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
+ 	do {
+ 		read2 = read1;
+ 		read1 = rtc_read_lpsrt(data);
+-	} while (read1 != read2 && --timeout);
++		diff = read1 - read2;
++	} while (((diff < 0) || (diff > MAX_RTC_READ_DIFF_CYCLES)) && --timeout);
+ 	if (!timeout)
+ 		dev_err(&data->rtc->dev, "Timeout trying to get valid LPSRT Counter read\n");
+ 
+@@ -78,13 +88,15 @@ static u32 rtc_read_lp_counter(struct snvs_rtc_data *data)
+ static int rtc_read_lp_counter_lsb(struct snvs_rtc_data *data, u32 *lsb)
+ {
+ 	u32 count1, count2;
++	s32 diff;
+ 	unsigned int timeout = 100;
+ 
+ 	regmap_read(data->regmap, data->offset + SNVS_LPSRTCLR, &count1);
+ 	do {
+ 		count2 = count1;
+ 		regmap_read(data->regmap, data->offset + SNVS_LPSRTCLR, &count1);
+-	} while (count1 != count2 && --timeout);
++		diff = count1 - count2;
++	} while (((diff < 0) || (diff > MAX_RTC_READ_DIFF_CYCLES)) && --timeout);
+ 	if (!timeout) {
+ 		dev_err(&data->rtc->dev, "Timeout trying to get valid LPSRT Counter read\n");
+ 		return -ETIMEDOUT;
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index bee75ca7ff79b..e66439b6247a4 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -249,6 +249,7 @@ static int st_rtc_probe(struct platform_device *pdev)
+ 
+ 	rtc->clkrate = clk_get_rate(rtc->clk);
+ 	if (!rtc->clkrate) {
++		clk_disable_unprepare(rtc->clk);
+ 		dev_err(&pdev->dev, "Unable to fetch clock rate\n");
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c
+index f63c5c871d3dd..d9fdcdd6760b4 100644
+--- a/drivers/s390/net/ctcm_main.c
++++ b/drivers/s390/net/ctcm_main.c
+@@ -867,16 +867,9 @@ done:
+ /**
+  * Start transmission of a packet.
+  * Called from generic network device layer.
+- *
+- *  skb		Pointer to buffer containing the packet.
+- *  dev		Pointer to interface struct.
+- *
+- * returns 0 if packet consumed, !0 if packet rejected.
+- *         Note: If we return !0, then the packet is free'd by
+- *               the generic network layer.
+  */
+ /* first merge version - leaving both functions separated */
+-static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t ctcm_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ctcm_priv *priv = dev->ml_priv;
+ 
+@@ -919,7 +912,7 @@ static int ctcm_tx(struct sk_buff *skb, struct net_device *dev)
+ }
+ 
+ /* unmerged MPC variant of ctcm_tx */
+-static int ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	int len = 0;
+ 	struct ctcm_priv *priv = dev->ml_priv;
+diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
+index d8f99ff53a94f..09e019d76bdd7 100644
+--- a/drivers/s390/net/lcs.c
++++ b/drivers/s390/net/lcs.c
+@@ -1518,9 +1518,8 @@ lcs_txbuffer_cb(struct lcs_channel *channel, struct lcs_buffer *buffer)
+ /**
+  * Packet transmit function called by network stack
+  */
+-static int
+-__lcs_start_xmit(struct lcs_card *card, struct sk_buff *skb,
+-		 struct net_device *dev)
++static netdev_tx_t __lcs_start_xmit(struct lcs_card *card, struct sk_buff *skb,
++				    struct net_device *dev)
+ {
+ 	struct lcs_header *header;
+ 	int rc = NETDEV_TX_OK;
+@@ -1581,8 +1580,7 @@ out:
+ 	return rc;
+ }
+ 
+-static int
+-lcs_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t lcs_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct lcs_card *card;
+ 	int rc;
+diff --git a/drivers/s390/net/netiucv.c b/drivers/s390/net/netiucv.c
+index 5ce2424ca7290..e2984b54447b7 100644
+--- a/drivers/s390/net/netiucv.c
++++ b/drivers/s390/net/netiucv.c
+@@ -1344,15 +1344,8 @@ out:
+ /**
+  * Start transmission of a packet.
+  * Called from generic network device layer.
+- *
+- * @param skb Pointer to buffer containing the packet.
+- * @param dev Pointer to interface struct.
+- *
+- * @return 0 if packet consumed, !0 if packet rejected.
+- *         Note: If we return !0, then the packet is free'd by
+- *               the generic network layer.
+  */
+-static int netiucv_tx(struct sk_buff *skb, struct net_device *dev)
++static netdev_tx_t netiucv_tx(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct netiucv_priv *privptr = netdev_priv(dev);
+ 	int rc;
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index 6768b2e8148a2..c5c4805435f65 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -2519,6 +2519,7 @@ static int __init fcoe_init(void)
+ 
+ out_free:
+ 	mutex_unlock(&fcoe_config_mutex);
++	fcoe_transport_detach(&fcoe_sw_transport);
+ out_destroy:
+ 	destroy_workqueue(fcoe_wq);
+ 	return rc;
+diff --git a/drivers/scsi/fcoe/fcoe_sysfs.c b/drivers/scsi/fcoe/fcoe_sysfs.c
+index 5c8310bade619..dab025e3ed273 100644
+--- a/drivers/scsi/fcoe/fcoe_sysfs.c
++++ b/drivers/scsi/fcoe/fcoe_sysfs.c
+@@ -831,14 +831,15 @@ struct fcoe_ctlr_device *fcoe_ctlr_device_add(struct device *parent,
+ 
+ 	dev_set_name(&ctlr->dev, "ctlr_%d", ctlr->id);
+ 	error = device_register(&ctlr->dev);
+-	if (error)
+-		goto out_del_q2;
++	if (error) {
++		destroy_workqueue(ctlr->devloss_work_q);
++		destroy_workqueue(ctlr->work_q);
++		put_device(&ctlr->dev);
++		return NULL;
++	}
+ 
+ 	return ctlr;
+ 
+-out_del_q2:
+-	destroy_workqueue(ctlr->devloss_work_q);
+-	ctlr->devloss_work_q = NULL;
+ out_del_q:
+ 	destroy_workqueue(ctlr->work_q);
+ 	ctlr->work_q = NULL;
+@@ -1037,16 +1038,16 @@ struct fcoe_fcf_device *fcoe_fcf_device_add(struct fcoe_ctlr_device *ctlr,
+ 	fcf->selected = new_fcf->selected;
+ 
+ 	error = device_register(&fcf->dev);
+-	if (error)
+-		goto out_del;
++	if (error) {
++		put_device(&fcf->dev);
++		goto out;
++	}
+ 
+ 	fcf->state = FCOE_FCF_STATE_CONNECTED;
+ 	list_add_tail(&fcf->peers, &ctlr->fcfs);
+ 
+ 	return fcf;
+ 
+-out_del:
+-	kfree(fcf);
+ out:
+ 	return NULL;
+ }
+diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
+index 0fe21cbdf0ca7..13931c5c0eff2 100644
+--- a/drivers/scsi/hpsa.c
++++ b/drivers/scsi/hpsa.c
+@@ -8133,6 +8133,11 @@ static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
+ 		destroy_workqueue(h->rescan_ctlr_wq);
+ 		h->rescan_ctlr_wq = NULL;
+ 	}
++	if (h->monitor_ctlr_wq) {
++		destroy_workqueue(h->monitor_ctlr_wq);
++		h->monitor_ctlr_wq = NULL;
++	}
++
+ 	kfree(h);				/* init_one 1 */
+ }
+ 
+@@ -8481,8 +8486,8 @@ static void hpsa_event_monitor_worker(struct work_struct *work)
+ 
+ 	spin_lock_irqsave(&h->lock, flags);
+ 	if (!h->remove_in_progress)
+-		schedule_delayed_work(&h->event_monitor_work,
+-					HPSA_EVENT_MONITOR_INTERVAL);
++		queue_delayed_work(h->monitor_ctlr_wq, &h->event_monitor_work,
++				HPSA_EVENT_MONITOR_INTERVAL);
+ 	spin_unlock_irqrestore(&h->lock, flags);
+ }
+ 
+@@ -8527,7 +8532,7 @@ static void hpsa_monitor_ctlr_worker(struct work_struct *work)
+ 
+ 	spin_lock_irqsave(&h->lock, flags);
+ 	if (!h->remove_in_progress)
+-		schedule_delayed_work(&h->monitor_ctlr_work,
++		queue_delayed_work(h->monitor_ctlr_wq, &h->monitor_ctlr_work,
+ 				h->heartbeat_sample_interval);
+ 	spin_unlock_irqrestore(&h->lock, flags);
+ }
+@@ -8695,6 +8700,12 @@ reinit_after_soft_reset:
+ 		goto clean7;	/* aer/h */
+ 	}
+ 
++	h->monitor_ctlr_wq = hpsa_create_controller_wq(h, "monitor");
++	if (!h->monitor_ctlr_wq) {
++		rc = -ENOMEM;
++		goto clean7;
++	}
++
+ 	/*
+ 	 * At this point, the controller is ready to take commands.
+ 	 * Now, if reset_devices and the hard reset didn't work, try
+@@ -8826,7 +8837,11 @@ clean1:	/* wq/aer/h */
+ 		destroy_workqueue(h->rescan_ctlr_wq);
+ 		h->rescan_ctlr_wq = NULL;
+ 	}
+-	kfree(h);
++	if (h->monitor_ctlr_wq) {
++		destroy_workqueue(h->monitor_ctlr_wq);
++		h->monitor_ctlr_wq = NULL;
++	}
++	hpda_free_ctlr_info(h);
+ 	return rc;
+ }
+ 
+@@ -8973,6 +8988,7 @@ static void hpsa_remove_one(struct pci_dev *pdev)
+ 	cancel_delayed_work_sync(&h->event_monitor_work);
+ 	destroy_workqueue(h->rescan_ctlr_wq);
+ 	destroy_workqueue(h->resubmit_wq);
++	destroy_workqueue(h->monitor_ctlr_wq);
+ 
+ 	hpsa_delete_sas_host(h);
+ 
+@@ -9684,7 +9700,8 @@ static int hpsa_add_sas_host(struct ctlr_info *h)
+ 	return 0;
+ 
+ free_sas_phy:
+-	hpsa_free_sas_phy(hpsa_sas_phy);
++	sas_phy_free(hpsa_sas_phy->phy);
++	kfree(hpsa_sas_phy);
+ free_sas_port:
+ 	hpsa_free_sas_port(hpsa_sas_port);
+ free_sas_node:
+@@ -9720,10 +9737,12 @@ static int hpsa_add_sas_device(struct hpsa_sas_node *hpsa_sas_node,
+ 
+ 	rc = hpsa_sas_port_add_rphy(hpsa_sas_port, rphy);
+ 	if (rc)
+-		goto free_sas_port;
++		goto free_sas_rphy;
+ 
+ 	return 0;
+ 
++free_sas_rphy:
++	sas_rphy_free(rphy);
+ free_sas_port:
+ 	hpsa_free_sas_port(hpsa_sas_port);
+ 	device->sas_port = NULL;
+diff --git a/drivers/scsi/hpsa.h b/drivers/scsi/hpsa.h
+index 59e023696fffe..7aa7378f70ddb 100644
+--- a/drivers/scsi/hpsa.h
++++ b/drivers/scsi/hpsa.h
+@@ -300,6 +300,7 @@ struct ctlr_info {
+ 	int	needs_abort_tags_swizzled;
+ 	struct workqueue_struct *resubmit_wq;
+ 	struct workqueue_struct *rescan_ctlr_wq;
++	struct workqueue_struct *monitor_ctlr_wq;
+ 	atomic_t abort_cmds_available;
+ 	wait_queue_head_t event_sync_wait_queue;
+ 	struct mutex reset_mutex;
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index c6cde552b995e..5989c868bfe08 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -10854,11 +10854,19 @@ static struct notifier_block ipr_notifier = {
+  **/
+ static int __init ipr_init(void)
+ {
++	int rc;
++
+ 	ipr_info("IBM Power RAID SCSI Device Driver version: %s %s\n",
+ 		 IPR_DRIVER_VERSION, IPR_DRIVER_DATE);
+ 
+ 	register_reboot_notifier(&ipr_notifier);
+-	return pci_register_driver(&ipr_driver);
++	rc = pci_register_driver(&ipr_driver);
++	if (rc) {
++		unregister_reboot_notifier(&ipr_notifier);
++		return rc;
++	}
++
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 4d73a7f67dea9..0733ecc9f8781 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -3156,7 +3156,7 @@ static int resp_write_scat(struct scsi_cmnd *scp,
+ 		mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_FIELD_IN_CDB, 0);
+ 		return illegal_condition_result;
+ 	}
+-	lrdp = kzalloc(lbdof_blen, GFP_ATOMIC);
++	lrdp = kzalloc(lbdof_blen, GFP_ATOMIC | __GFP_NOWARN);
+ 	if (lrdp == NULL)
+ 		return SCSI_MLQUEUE_HOST_BUSY;
+ 	if (sdebug_verbose)
+diff --git a/drivers/scsi/snic/snic_disc.c b/drivers/scsi/snic/snic_disc.c
+index b106596cc0cf5..69c5e26a9d5b3 100644
+--- a/drivers/scsi/snic/snic_disc.c
++++ b/drivers/scsi/snic/snic_disc.c
+@@ -317,6 +317,9 @@ snic_tgt_create(struct snic *snic, struct snic_tgt_id *tgtid)
+ 			      ret);
+ 
+ 		put_device(&snic->shost->shost_gendev);
++		spin_lock_irqsave(snic->shost->host_lock, flags);
++		list_del(&tgt->list);
++		spin_unlock_irqrestore(snic->shost->host_lock, flags);
+ 		kfree(tgt);
+ 		tgt = NULL;
+ 
+diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
+index ba79b609aca21..248759612d7df 100644
+--- a/drivers/soc/qcom/Kconfig
++++ b/drivers/soc/qcom/Kconfig
+@@ -16,6 +16,7 @@ config QCOM_COMMAND_DB
+ config QCOM_GENI_SE
+ 	tristate "QCOM GENI Serial Engine Driver"
+ 	depends on ARCH_QCOM || COMPILE_TEST
++	select REGMAP_MMIO
+ 	help
+ 	  This driver is used to manage Generic Interface (GENI) firmware based
+ 	  Qualcomm Technologies, Inc. Universal Peripheral (QUP) Wrapper. This
+diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
+index 9f5ce52e6c161..b264c36b2c0e3 100644
+--- a/drivers/soc/ti/knav_qmss_queue.c
++++ b/drivers/soc/ti/knav_qmss_queue.c
+@@ -72,7 +72,7 @@ static DEFINE_MUTEX(knav_dev_lock);
+  * Newest followed by older ones. Search is done from start of the array
+  * until a firmware file is found.
+  */
+-const char *knav_acc_firmwares[] = {"ks2_qmss_pdsp_acc48.bin"};
++static const char * const knav_acc_firmwares[] = {"ks2_qmss_pdsp_acc48.bin"};
+ 
+ static bool device_ready;
+ bool knav_qmss_device_ready(void)
+diff --git a/drivers/soc/ux500/ux500-soc-id.c b/drivers/soc/ux500/ux500-soc-id.c
+index 6c1be74e5fcc6..86a1a3c408df0 100644
+--- a/drivers/soc/ux500/ux500-soc-id.c
++++ b/drivers/soc/ux500/ux500-soc-id.c
+@@ -159,20 +159,18 @@ static ssize_t ux500_get_process(struct device *dev,
+ static const char *db8500_read_soc_id(struct device_node *backupram)
+ {
+ 	void __iomem *base;
+-	void __iomem *uid;
+ 	const char *retstr;
++	u32 uid[5];
+ 
+ 	base = of_iomap(backupram, 0);
+ 	if (!base)
+ 		return NULL;
+-	uid = base + 0x1fc0;
++	memcpy_fromio(uid, base + 0x1fc0, sizeof(uid));
+ 
+ 	/* Throw these device-specific numbers into the entropy pool */
+-	add_device_randomness(uid, 0x14);
++	add_device_randomness(uid, sizeof(uid));
+ 	retstr = kasprintf(GFP_KERNEL, "%08x%08x%08x%08x%08x",
+-			 readl((u32 *)uid+0),
+-			 readl((u32 *)uid+1), readl((u32 *)uid+2),
+-			 readl((u32 *)uid+3), readl((u32 *)uid+4));
++			   uid[0], uid[1], uid[2], uid[3], uid[4]);
+ 	iounmap(base);
+ 	return retstr;
+ }
+diff --git a/drivers/staging/rtl8192e/rtllib_rx.c b/drivers/staging/rtl8192e/rtllib_rx.c
+index 3bfd63fe4ac10..004d89853a68c 100644
+--- a/drivers/staging/rtl8192e/rtllib_rx.c
++++ b/drivers/staging/rtl8192e/rtllib_rx.c
+@@ -1500,9 +1500,9 @@ static int rtllib_rx_Monitor(struct rtllib_device *ieee, struct sk_buff *skb,
+ 		hdrlen += 4;
+ 	}
+ 
+-	rtllib_monitor_rx(ieee, skb, rx_stats, hdrlen);
+ 	ieee->stats.rx_packets++;
+ 	ieee->stats.rx_bytes += skb->len;
++	rtllib_monitor_rx(ieee, skb, rx_stats, hdrlen);
+ 
+ 	return 1;
+ }
+diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
+index fb824c5174497..14b3daa5f17ee 100644
+--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c
+@@ -961,9 +961,11 @@ int ieee80211_rx(struct ieee80211_device *ieee, struct sk_buff *skb,
+ #endif
+ 
+ 	if (ieee->iw_mode == IW_MODE_MONITOR) {
++		unsigned int len = skb->len;
++
+ 		ieee80211_monitor_rx(ieee, skb, rx_stats);
+ 		stats->rx_packets++;
+-		stats->rx_bytes += skb->len;
++		stats->rx_bytes += len;
+ 		return 1;
+ 	}
+ 
+diff --git a/drivers/staging/wilc1000/wilc_sdio.c b/drivers/staging/wilc1000/wilc_sdio.c
+index e52c3bdeaf04b..b4d456f3fb1b9 100644
+--- a/drivers/staging/wilc1000/wilc_sdio.c
++++ b/drivers/staging/wilc1000/wilc_sdio.c
+@@ -18,6 +18,7 @@ static const struct sdio_device_id wilc_sdio_ids[] = {
+ 	{ SDIO_DEVICE(SDIO_VENDOR_ID_WILC, SDIO_DEVICE_ID_WILC) },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(sdio, wilc_sdio_ids);
+ 
+ #define WILC_SDIO_BLOCK_SIZE 512
+ 
+diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
+index 080adf7bcae43..47ffb485ff34c 100644
+--- a/drivers/tty/hvc/hvc_xen.c
++++ b/drivers/tty/hvc/hvc_xen.c
+@@ -52,17 +52,22 @@ static DEFINE_SPINLOCK(xencons_lock);
+ 
+ static struct xencons_info *vtermno_to_xencons(int vtermno)
+ {
+-	struct xencons_info *entry, *n, *ret = NULL;
++	struct xencons_info *entry, *ret = NULL;
++	unsigned long flags;
+ 
+-	if (list_empty(&xenconsoles))
+-			return NULL;
++	spin_lock_irqsave(&xencons_lock, flags);
++	if (list_empty(&xenconsoles)) {
++		spin_unlock_irqrestore(&xencons_lock, flags);
++		return NULL;
++	}
+ 
+-	list_for_each_entry_safe(entry, n, &xenconsoles, list) {
++	list_for_each_entry(entry, &xenconsoles, list) {
+ 		if (entry->vtermno == vtermno) {
+ 			ret  = entry;
+ 			break;
+ 		}
+ 	}
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return ret;
+ }
+@@ -223,7 +228,7 @@ static int xen_hvm_console_init(void)
+ {
+ 	int r;
+ 	uint64_t v = 0;
+-	unsigned long gfn;
++	unsigned long gfn, flags;
+ 	struct xencons_info *info;
+ 
+ 	if (!xen_hvm_domain())
+@@ -258,9 +263,9 @@ static int xen_hvm_console_init(void)
+ 		goto err;
+ 	info->vtermno = HVC_COOKIE;
+ 
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_add_tail(&info->list, &xenconsoles);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ err:
+@@ -283,6 +288,7 @@ static int xencons_info_pv_init(struct xencons_info *info, int vtermno)
+ static int xen_pv_console_init(void)
+ {
+ 	struct xencons_info *info;
++	unsigned long flags;
+ 
+ 	if (!xen_pv_domain())
+ 		return -ENODEV;
+@@ -299,9 +305,9 @@ static int xen_pv_console_init(void)
+ 		/* already configured */
+ 		return 0;
+ 	}
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	xencons_info_pv_init(info, HVC_COOKIE);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -309,6 +315,7 @@ static int xen_pv_console_init(void)
+ static int xen_initial_domain_console_init(void)
+ {
+ 	struct xencons_info *info;
++	unsigned long flags;
+ 
+ 	if (!xen_initial_domain())
+ 		return -ENODEV;
+@@ -323,9 +330,9 @@ static int xen_initial_domain_console_init(void)
+ 	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
+ 	info->vtermno = HVC_COOKIE;
+ 
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_add_tail(&info->list, &xenconsoles);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ }
+@@ -380,10 +387,12 @@ static void xencons_free(struct xencons_info *info)
+ 
+ static int xen_console_remove(struct xencons_info *info)
+ {
++	unsigned long flags;
++
+ 	xencons_disconnect_backend(info);
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_del(&info->list);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 	if (info->xbdev != NULL)
+ 		xencons_free(info);
+ 	else {
+@@ -464,6 +473,7 @@ static int xencons_probe(struct xenbus_device *dev,
+ {
+ 	int ret, devid;
+ 	struct xencons_info *info;
++	unsigned long flags;
+ 
+ 	devid = dev->nodename[strlen(dev->nodename) - 1] - '0';
+ 	if (devid == 0)
+@@ -482,9 +492,9 @@ static int xencons_probe(struct xenbus_device *dev,
+ 	ret = xencons_connect_backend(dev, info);
+ 	if (ret < 0)
+ 		goto error;
+-	spin_lock(&xencons_lock);
++	spin_lock_irqsave(&xencons_lock, flags);
+ 	list_add_tail(&info->list, &xenconsoles);
+-	spin_unlock(&xencons_lock);
++	spin_unlock_irqrestore(&xencons_lock, flags);
+ 
+ 	return 0;
+ 
+@@ -583,10 +593,12 @@ static int __init xen_hvc_init(void)
+ 
+ 	info->hvc = hvc_alloc(HVC_COOKIE, info->irq, ops, 256);
+ 	if (IS_ERR(info->hvc)) {
++		unsigned long flags;
++
+ 		r = PTR_ERR(info->hvc);
+-		spin_lock(&xencons_lock);
++		spin_lock_irqsave(&xencons_lock, flags);
+ 		list_del(&info->list);
+-		spin_unlock(&xencons_lock);
++		spin_unlock_irqrestore(&xencons_lock, flags);
+ 		if (info->irq)
+ 			unbind_from_irqhandler(info->irq, NULL);
+ 		kfree(info);
+diff --git a/drivers/tty/serial/altera_uart.c b/drivers/tty/serial/altera_uart.c
+index 0e487ce091ac9..d91f76b1d353e 100644
+--- a/drivers/tty/serial/altera_uart.c
++++ b/drivers/tty/serial/altera_uart.c
+@@ -199,9 +199,8 @@ static void altera_uart_set_termios(struct uart_port *port,
+ 	 */
+ }
+ 
+-static void altera_uart_rx_chars(struct altera_uart *pp)
++static void altera_uart_rx_chars(struct uart_port *port)
+ {
+-	struct uart_port *port = &pp->port;
+ 	unsigned char ch, flag;
+ 	unsigned short status;
+ 
+@@ -248,9 +247,8 @@ static void altera_uart_rx_chars(struct altera_uart *pp)
+ 	spin_lock(&port->lock);
+ }
+ 
+-static void altera_uart_tx_chars(struct altera_uart *pp)
++static void altera_uart_tx_chars(struct uart_port *port)
+ {
+-	struct uart_port *port = &pp->port;
+ 	struct circ_buf *xmit = &port->state->xmit;
+ 
+ 	if (port->x_char) {
+@@ -274,26 +272,25 @@ static void altera_uart_tx_chars(struct altera_uart *pp)
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+ 
+-	if (xmit->head == xmit->tail) {
+-		pp->imr &= ~ALTERA_UART_CONTROL_TRDY_MSK;
+-		altera_uart_update_ctrl_reg(pp);
+-	}
++	if (uart_circ_empty(xmit))
++		altera_uart_stop_tx(port);
+ }
+ 
+ static irqreturn_t altera_uart_interrupt(int irq, void *data)
+ {
+ 	struct uart_port *port = data;
+ 	struct altera_uart *pp = container_of(port, struct altera_uart, port);
++	unsigned long flags;
+ 	unsigned int isr;
+ 
+ 	isr = altera_uart_readl(port, ALTERA_UART_STATUS_REG) & pp->imr;
+ 
+-	spin_lock(&port->lock);
++	spin_lock_irqsave(&port->lock, flags);
+ 	if (isr & ALTERA_UART_STATUS_RRDY_MSK)
+-		altera_uart_rx_chars(pp);
++		altera_uart_rx_chars(port);
+ 	if (isr & ALTERA_UART_STATUS_TRDY_MSK)
+-		altera_uart_tx_chars(pp);
+-	spin_unlock(&port->lock);
++		altera_uart_tx_chars(port);
++	spin_unlock_irqrestore(&port->lock, flags);
+ 
+ 	return IRQ_RETVAL(isr);
+ }
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 7de4bed1ddba7..d1f4882d9f403 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1053,6 +1053,9 @@ static void pl011_dma_rx_callback(void *data)
+  */
+ static inline void pl011_dma_rx_stop(struct uart_amba_port *uap)
+ {
++	if (!uap->using_rx_dma)
++		return;
++
+ 	/* FIXME.  Just disable the DMA enable */
+ 	uap->dmacr &= ~UART011_RXDMAE;
+ 	pl011_write(uap->dmacr, uap, REG_DMACR);
+@@ -1768,8 +1771,17 @@ static void pl011_enable_interrupts(struct uart_amba_port *uap)
+ static void pl011_unthrottle_rx(struct uart_port *port)
+ {
+ 	struct uart_amba_port *uap = container_of(port, struct uart_amba_port, port);
++	unsigned long flags;
+ 
+-	pl011_enable_interrupts(uap);
++	spin_lock_irqsave(&uap->port.lock, flags);
++
++	uap->im = UART011_RTIM;
++	if (!pl011_dma_rx_running(uap))
++		uap->im |= UART011_RXIM;
++
++	pl011_write(uap->im, uap, REG_IMSC);
++
++	spin_unlock_irqrestore(&uap->port.lock, flags);
+ }
+ 
+ static int pl011_startup(struct uart_port *port)
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index e5ff30544bd0d..447990006d688 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -734,6 +734,7 @@ static void pch_request_dma(struct uart_port *port)
+ 	if (!chan) {
+ 		dev_err(priv->port.dev, "%s:dma_request_channel FAILS(Tx)\n",
+ 			__func__);
++		pci_dev_put(dma_dev);
+ 		return;
+ 	}
+ 	priv->chan_tx = chan;
+@@ -750,6 +751,7 @@ static void pch_request_dma(struct uart_port *port)
+ 			__func__);
+ 		dma_release_channel(priv->chan_tx);
+ 		priv->chan_tx = NULL;
++		pci_dev_put(dma_dev);
+ 		return;
+ 	}
+ 
+@@ -757,6 +759,8 @@ static void pch_request_dma(struct uart_port *port)
+ 	priv->rx_buf_virt = dma_alloc_coherent(port->dev, port->fifosize,
+ 				    &priv->rx_buf_dma, GFP_KERNEL);
+ 	priv->chan_rx = chan;
++
++	pci_dev_put(dma_dev);
+ }
+ 
+ static void pch_dma_rx_complete(void *arg)
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index ee480a1480e86..41fe45f2349ef 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -62,7 +62,7 @@
+ #define TEGRA_UART_TX_TRIG_4B			0x20
+ #define TEGRA_UART_TX_TRIG_1B			0x30
+ 
+-#define TEGRA_UART_MAXIMUM			5
++#define TEGRA_UART_MAXIMUM			8
+ 
+ /* Default UART setting when started: 115200 no parity, stop, 8 data bits */
+ #define TEGRA_UART_DEFAULT_BAUD			115200
+@@ -72,6 +72,8 @@
+ #define TEGRA_TX_PIO				1
+ #define TEGRA_TX_DMA				2
+ 
++#define TEGRA_UART_FCR_IIR_FIFO_EN		0x40
++
+ /**
+  * tegra_uart_chip_data: SOC specific data.
+  *
+@@ -84,6 +86,17 @@ struct tegra_uart_chip_data {
+ 	bool	tx_fifo_full_status;
+ 	bool	allow_txfifo_reset_fifo_mode;
+ 	bool	support_clk_src_div;
++	bool	fifo_mode_enable_status;
++	int	uart_max_port;
++	int	max_dma_burst_bytes;
++	int	error_tolerance_low_range;
++	int	error_tolerance_high_range;
++};
++
++struct tegra_baud_tolerance {
++	u32 lower_range_baud;
++	u32 upper_range_baud;
++	s32 tolerance;
+ };
+ 
+ struct tegra_uart_port {
+@@ -122,10 +135,19 @@ struct tegra_uart_port {
+ 	dma_cookie_t				rx_cookie;
+ 	unsigned int				tx_bytes_requested;
+ 	unsigned int				rx_bytes_requested;
++	struct tegra_baud_tolerance		*baud_tolerance;
++	int					n_adjustable_baud_rates;
++	int					required_rate;
++	int					configured_rate;
++	bool					use_rx_pio;
++	bool					use_tx_pio;
++	bool					rx_dma_active;
+ };
+ 
+ static void tegra_uart_start_next_tx(struct tegra_uart_port *tup);
+ static int tegra_uart_start_rx_dma(struct tegra_uart_port *tup);
++static void tegra_uart_dma_channel_free(struct tegra_uart_port *tup,
++					bool dma_to_memory);
+ 
+ static inline unsigned long tegra_uart_read(struct tegra_uart_port *tup,
+ 		unsigned long reg)
+@@ -243,6 +265,21 @@ static void tegra_uart_wait_sym_time(struct tegra_uart_port *tup,
+ 			tup->current_baud));
+ }
+ 
++static int tegra_uart_wait_fifo_mode_enabled(struct tegra_uart_port *tup)
++{
++	unsigned long iir;
++	unsigned int tmout = 100;
++
++	do {
++		iir = tegra_uart_read(tup, UART_IIR);
++		if (iir & TEGRA_UART_FCR_IIR_FIFO_EN)
++			return 0;
++		udelay(1);
++	} while (--tmout);
++
++	return -ETIMEDOUT;
++}
++
+ static void tegra_uart_fifo_reset(struct tegra_uart_port *tup, u8 fcr_bits)
+ {
+ 	unsigned long fcr = tup->fcr_shadow;
+@@ -258,6 +295,8 @@ static void tegra_uart_fifo_reset(struct tegra_uart_port *tup, u8 fcr_bits)
+ 		tegra_uart_write(tup, fcr, UART_FCR);
+ 		fcr |= UART_FCR_ENABLE_FIFO;
+ 		tegra_uart_write(tup, fcr, UART_FCR);
++		if (tup->cdata->fifo_mode_enable_status)
++			tegra_uart_wait_fifo_mode_enabled(tup);
+ 	}
+ 
+ 	/* Dummy read to ensure the write is posted */
+@@ -271,6 +310,37 @@ static void tegra_uart_fifo_reset(struct tegra_uart_port *tup, u8 fcr_bits)
+ 	tegra_uart_wait_cycle_time(tup, 32);
+ }
+ 
++static long tegra_get_tolerance_rate(struct tegra_uart_port *tup,
++				     unsigned int baud, long rate)
++{
++	int i;
++
++	for (i = 0; i < tup->n_adjustable_baud_rates; ++i) {
++		if (baud >= tup->baud_tolerance[i].lower_range_baud &&
++		    baud <= tup->baud_tolerance[i].upper_range_baud)
++			return (rate + (rate *
++				tup->baud_tolerance[i].tolerance) / 10000);
++	}
++
++	return rate;
++}
++
++static int tegra_check_rate_in_range(struct tegra_uart_port *tup)
++{
++	long diff;
++
++	diff = ((long)(tup->configured_rate - tup->required_rate) * 10000)
++		/ tup->required_rate;
++	if (diff < (tup->cdata->error_tolerance_low_range * 100) ||
++	    diff > (tup->cdata->error_tolerance_high_range * 100)) {
++		dev_err(tup->uport.dev,
++			"configured baud rate is out of range by %ld", diff);
++		return -EIO;
++	}
++
++	return 0;
++}
++
+ static int tegra_set_baudrate(struct tegra_uart_port *tup, unsigned int baud)
+ {
+ 	unsigned long rate;
+@@ -283,13 +353,22 @@ static int tegra_set_baudrate(struct tegra_uart_port *tup, unsigned int baud)
+ 
+ 	if (tup->cdata->support_clk_src_div) {
+ 		rate = baud * 16;
++		tup->required_rate = rate;
++
++		if (tup->n_adjustable_baud_rates)
++			rate = tegra_get_tolerance_rate(tup, baud, rate);
++
+ 		ret = clk_set_rate(tup->uart_clk, rate);
+ 		if (ret < 0) {
+ 			dev_err(tup->uport.dev,
+ 				"clk_set_rate() failed for rate %lu\n", rate);
+ 			return ret;
+ 		}
++		tup->configured_rate = clk_get_rate(tup->uart_clk);
+ 		divisor = 1;
++		ret = tegra_check_rate_in_range(tup);
++		if (ret < 0)
++			return ret;
+ 	} else {
+ 		rate = clk_get_rate(tup->uart_clk);
+ 		divisor = DIV_ROUND_CLOSEST(rate, baud * 16);
+@@ -440,12 +519,15 @@ static void tegra_uart_start_next_tx(struct tegra_uart_port *tup)
+ 	unsigned long count;
+ 	struct circ_buf *xmit = &tup->uport.state->xmit;
+ 
++	if (!tup->current_baud)
++		return;
++
+ 	tail = (unsigned long)&xmit->buf[xmit->tail];
+ 	count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE);
+ 	if (!count)
+ 		return;
+ 
+-	if (count < TEGRA_UART_MIN_DMA)
++	if (tup->use_tx_pio || count < TEGRA_UART_MIN_DMA)
+ 		tegra_uart_start_pio_tx(tup, count);
+ 	else if (BYTES_TO_ALIGN(tail) > 0)
+ 		tegra_uart_start_pio_tx(tup, BYTES_TO_ALIGN(tail));
+@@ -488,8 +570,9 @@ static void tegra_uart_stop_tx(struct uart_port *u)
+ 	if (tup->tx_in_progress != TEGRA_UART_TX_DMA)
+ 		return;
+ 
+-	dmaengine_terminate_all(tup->tx_dma_chan);
++	dmaengine_pause(tup->tx_dma_chan);
+ 	dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state);
++	dmaengine_terminate_all(tup->tx_dma_chan);
+ 	count = tup->tx_bytes_requested - state.residue;
+ 	async_tx_ack(tup->tx_dma_desc);
+ 	uart_xmit_advance(&tup->uport, count);
+@@ -555,11 +638,22 @@ static void tegra_uart_copy_rx_to_tty(struct tegra_uart_port *tup,
+ 				TEGRA_UART_RX_DMA_BUFFER_SIZE, DMA_TO_DEVICE);
+ }
+ 
++static void do_handle_rx_pio(struct tegra_uart_port *tup)
++{
++	struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port);
++	struct tty_port *port = &tup->uport.state->port;
++
++	tegra_uart_handle_rx_pio(tup, port);
++	if (tty) {
++		tty_flip_buffer_push(port);
++		tty_kref_put(tty);
++	}
++}
++
+ static void tegra_uart_rx_buffer_push(struct tegra_uart_port *tup,
+ 				      unsigned int residue)
+ {
+ 	struct tty_port *port = &tup->uport.state->port;
+-	struct tty_struct *tty = tty_port_tty_get(port);
+ 	unsigned int count;
+ 
+ 	async_tx_ack(tup->rx_dma_desc);
+@@ -568,11 +662,7 @@ static void tegra_uart_rx_buffer_push(struct tegra_uart_port *tup,
+ 	/* If we are here, DMA is stopped */
+ 	tegra_uart_copy_rx_to_tty(tup, port, count);
+ 
+-	tegra_uart_handle_rx_pio(tup, port);
+-	if (tty) {
+-		tty_flip_buffer_push(port);
+-		tty_kref_put(tty);
+-	}
++	do_handle_rx_pio(tup);
+ }
+ 
+ static void tegra_uart_rx_dma_complete(void *args)
+@@ -596,6 +686,7 @@ static void tegra_uart_rx_dma_complete(void *args)
+ 	if (tup->rts_active)
+ 		set_rts(tup, false);
+ 
++	tup->rx_dma_active = false;
+ 	tegra_uart_rx_buffer_push(tup, 0);
+ 	tegra_uart_start_rx_dma(tup);
+ 
+@@ -607,18 +698,30 @@ done:
+ 	spin_unlock_irqrestore(&u->lock, flags);
+ }
+ 
+-static void tegra_uart_handle_rx_dma(struct tegra_uart_port *tup)
++static void tegra_uart_terminate_rx_dma(struct tegra_uart_port *tup)
+ {
+ 	struct dma_tx_state state;
+ 
++	if (!tup->rx_dma_active) {
++		do_handle_rx_pio(tup);
++		return;
++	}
++
++	dmaengine_pause(tup->rx_dma_chan);
++	dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state);
++	dmaengine_terminate_all(tup->rx_dma_chan);
++
++	tegra_uart_rx_buffer_push(tup, state.residue);
++	tup->rx_dma_active = false;
++}
++
++static void tegra_uart_handle_rx_dma(struct tegra_uart_port *tup)
++{
+ 	/* Deactivate flow control to stop sender */
+ 	if (tup->rts_active)
+ 		set_rts(tup, false);
+ 
+-	dmaengine_terminate_all(tup->rx_dma_chan);
+-	dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state);
+-	tegra_uart_rx_buffer_push(tup, state.residue);
+-	tegra_uart_start_rx_dma(tup);
++	tegra_uart_terminate_rx_dma(tup);
+ 
+ 	if (tup->rts_active)
+ 		set_rts(tup, true);
+@@ -628,6 +731,9 @@ static int tegra_uart_start_rx_dma(struct tegra_uart_port *tup)
+ {
+ 	unsigned int count = TEGRA_UART_RX_DMA_BUFFER_SIZE;
+ 
++	if (tup->rx_dma_active)
++		return 0;
++
+ 	tup->rx_dma_desc = dmaengine_prep_slave_single(tup->rx_dma_chan,
+ 				tup->rx_dma_buf_phys, count, DMA_DEV_TO_MEM,
+ 				DMA_PREP_INTERRUPT);
+@@ -636,6 +742,7 @@ static int tegra_uart_start_rx_dma(struct tegra_uart_port *tup)
+ 		return -EIO;
+ 	}
+ 
++	tup->rx_dma_active = true;
+ 	tup->rx_dma_desc->callback = tegra_uart_rx_dma_complete;
+ 	tup->rx_dma_desc->callback_param = tup;
+ 	dma_sync_single_for_device(tup->uport.dev, tup->rx_dma_buf_phys,
+@@ -673,6 +780,7 @@ static irqreturn_t tegra_uart_isr(int irq, void *data)
+ 	struct uart_port *u = &tup->uport;
+ 	unsigned long iir;
+ 	unsigned long ier;
++	bool is_rx_start = false;
+ 	bool is_rx_int = false;
+ 	unsigned long flags;
+ 
+@@ -680,15 +788,17 @@ static irqreturn_t tegra_uart_isr(int irq, void *data)
+ 	while (1) {
+ 		iir = tegra_uart_read(tup, UART_IIR);
+ 		if (iir & UART_IIR_NO_INT) {
+-			if (is_rx_int) {
++			if (!tup->use_rx_pio && is_rx_int) {
+ 				tegra_uart_handle_rx_dma(tup);
+ 				if (tup->rx_in_progress) {
+ 					ier = tup->ier_shadow;
+ 					ier |= (UART_IER_RLSI | UART_IER_RTOIE |
+-						TEGRA_UART_IER_EORD);
++						TEGRA_UART_IER_EORD | UART_IER_RDI);
+ 					tup->ier_shadow = ier;
+ 					tegra_uart_write(tup, ier, UART_IER);
+ 				}
++			} else if (is_rx_start) {
++				tegra_uart_start_rx_dma(tup);
+ 			}
+ 			spin_unlock_irqrestore(&u->lock, flags);
+ 			return IRQ_HANDLED;
+@@ -707,17 +817,25 @@ static irqreturn_t tegra_uart_isr(int irq, void *data)
+ 
+ 		case 4: /* End of data */
+ 		case 6: /* Rx timeout */
+-		case 2: /* Receive */
+-			if (!is_rx_int) {
+-				is_rx_int = true;
++			if (!tup->use_rx_pio) {
++				is_rx_int = tup->rx_in_progress;
+ 				/* Disable Rx interrupts */
+ 				ier = tup->ier_shadow;
+-				ier |= UART_IER_RDI;
+-				tegra_uart_write(tup, ier, UART_IER);
+ 				ier &= ~(UART_IER_RDI | UART_IER_RLSI |
+ 					UART_IER_RTOIE | TEGRA_UART_IER_EORD);
+ 				tup->ier_shadow = ier;
+ 				tegra_uart_write(tup, ier, UART_IER);
++				break;
++			}
++			/* Fall through */
++		case 2: /* Receive */
++			if (!tup->use_rx_pio) {
++				is_rx_start = tup->rx_in_progress;
++				tup->ier_shadow  &= ~UART_IER_RDI;
++				tegra_uart_write(tup, tup->ier_shadow,
++						 UART_IER);
++			} else {
++				do_handle_rx_pio(tup);
+ 			}
+ 			break;
+ 
+@@ -736,7 +854,7 @@ static irqreturn_t tegra_uart_isr(int irq, void *data)
+ static void tegra_uart_stop_rx(struct uart_port *u)
+ {
+ 	struct tegra_uart_port *tup = to_tegra_uport(u);
+-	struct dma_tx_state state;
++	struct tty_port *port = &tup->uport.state->port;
+ 	unsigned long ier;
+ 
+ 	if (tup->rts_active)
+@@ -753,9 +871,11 @@ static void tegra_uart_stop_rx(struct uart_port *u)
+ 	tup->ier_shadow = ier;
+ 	tegra_uart_write(tup, ier, UART_IER);
+ 	tup->rx_in_progress = 0;
+-	dmaengine_terminate_all(tup->rx_dma_chan);
+-	dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state);
+-	tegra_uart_rx_buffer_push(tup, state.residue);
++
++	if (!tup->use_rx_pio)
++		tegra_uart_terminate_rx_dma(tup);
++	else
++		tegra_uart_handle_rx_pio(tup, port);
+ }
+ 
+ static void tegra_uart_hw_deinit(struct tegra_uart_port *tup)
+@@ -803,6 +923,14 @@ static void tegra_uart_hw_deinit(struct tegra_uart_port *tup)
+ 	tup->current_baud = 0;
+ 	spin_unlock_irqrestore(&tup->uport.lock, flags);
+ 
++	tup->rx_in_progress = 0;
++	tup->tx_in_progress = 0;
++
++	if (!tup->use_rx_pio)
++		tegra_uart_dma_channel_free(tup, true);
++	if (!tup->use_tx_pio)
++		tegra_uart_dma_channel_free(tup, false);
++
+ 	clk_disable_unprepare(tup->uart_clk);
+ }
+ 
+@@ -845,45 +973,62 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ 	 * programmed in the DMA registers.
+ 	 */
+ 	tup->fcr_shadow = UART_FCR_ENABLE_FIFO;
+-	tup->fcr_shadow |= UART_FCR_R_TRIG_01;
++
++	if (tup->use_rx_pio) {
++		tup->fcr_shadow |= UART_FCR_R_TRIG_11;
++	} else {
++		if (tup->cdata->max_dma_burst_bytes == 8)
++			tup->fcr_shadow |= UART_FCR_R_TRIG_10;
++		else
++			tup->fcr_shadow |= UART_FCR_R_TRIG_01;
++	}
++
+ 	tup->fcr_shadow |= TEGRA_UART_TX_TRIG_16B;
+ 	tegra_uart_write(tup, tup->fcr_shadow, UART_FCR);
+ 
+ 	/* Dummy read to ensure the write is posted */
+ 	tegra_uart_read(tup, UART_SCR);
+ 
+-	/*
+-	 * For all tegra devices (up to t210), there is a hardware issue that
+-	 * requires software to wait for 3 UART clock periods after enabling
+-	 * the TX fifo, otherwise data could be lost.
+-	 */
+-	tegra_uart_wait_cycle_time(tup, 3);
++	if (tup->cdata->fifo_mode_enable_status) {
++		ret = tegra_uart_wait_fifo_mode_enabled(tup);
++		if (ret < 0) {
++			dev_err(tup->uport.dev,
++				"Failed to enable FIFO mode: %d\n", ret);
++			return ret;
++		}
++	} else {
++		/*
++		 * For all tegra devices (up to t210), there is a hardware
++		 * issue that requires software to wait for 3 UART clock
++		 * periods after enabling the TX fifo, otherwise data could
++		 * be lost.
++		 */
++		tegra_uart_wait_cycle_time(tup, 3);
++	}
+ 
+ 	/*
+ 	 * Initialize the UART with default configuration
+ 	 * (115200, N, 8, 1) so that the receive DMA buffer may be
+ 	 * enqueued
+ 	 */
+-	tup->lcr_shadow = TEGRA_UART_DEFAULT_LSR;
+-	tegra_set_baudrate(tup, TEGRA_UART_DEFAULT_BAUD);
+-	tup->fcr_shadow |= UART_FCR_DMA_SELECT;
+-	tegra_uart_write(tup, tup->fcr_shadow, UART_FCR);
+-
+-	ret = tegra_uart_start_rx_dma(tup);
++	ret = tegra_set_baudrate(tup, TEGRA_UART_DEFAULT_BAUD);
+ 	if (ret < 0) {
+-		dev_err(tup->uport.dev, "Not able to start Rx DMA\n");
++		dev_err(tup->uport.dev, "Failed to set baud rate\n");
+ 		return ret;
+ 	}
++	if (!tup->use_rx_pio) {
++		tup->lcr_shadow = TEGRA_UART_DEFAULT_LSR;
++		tup->fcr_shadow |= UART_FCR_DMA_SELECT;
++		tegra_uart_write(tup, tup->fcr_shadow, UART_FCR);
++	} else {
++		tegra_uart_write(tup, tup->fcr_shadow, UART_FCR);
++	}
+ 	tup->rx_in_progress = 1;
+ 
+ 	/*
+ 	 * Enable IE_RXS for the receive status interrupts like line errors.
+ 	 * Enable IE_RX_TIMEOUT to get the bytes which cannot be DMA'd.
+ 	 *
+-	 * If using DMA mode, enable EORD instead of receive interrupt which
+-	 * will interrupt after the UART is done with the receive instead of
+-	 * the interrupt when the FIFO "threshold" is reached.
+-	 *
+ 	 * EORD is different interrupt than RX_TIMEOUT - RX_TIMEOUT occurs when
+ 	 * the DATA is sitting in the FIFO and couldn't be transferred to the
+ 	 * DMA as the DMA size alignment(4 bytes) is not met. EORD will be
+@@ -894,7 +1039,15 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ 	 * both the EORD as well as RX_TIMEOUT - SW sees RX_TIMEOUT first
+ 	 * then the EORD.
+ 	 */
+-	tup->ier_shadow = UART_IER_RLSI | UART_IER_RTOIE | TEGRA_UART_IER_EORD;
++	tup->ier_shadow = UART_IER_RLSI | UART_IER_RTOIE | UART_IER_RDI;
++
++	/*
++	 * If using DMA mode, enable EORD interrupt to notify about RX
++	 * completion.
++	 */
++	if (!tup->use_rx_pio)
++		tup->ier_shadow |= TEGRA_UART_IER_EORD;
++
+ 	tegra_uart_write(tup, tup->ier_shadow, UART_IER);
+ 	return 0;
+ }
+@@ -951,7 +1104,7 @@ static int tegra_uart_dma_channel_allocate(struct tegra_uart_port *tup,
+ 		}
+ 		dma_sconfig.src_addr = tup->uport.mapbase;
+ 		dma_sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+-		dma_sconfig.src_maxburst = 4;
++		dma_sconfig.src_maxburst = tup->cdata->max_dma_burst_bytes;
+ 		tup->rx_dma_chan = dma_chan;
+ 		tup->rx_dma_buf_virt = dma_buf;
+ 		tup->rx_dma_buf_phys = dma_phys;
+@@ -989,16 +1142,22 @@ static int tegra_uart_startup(struct uart_port *u)
+ 	struct tegra_uart_port *tup = to_tegra_uport(u);
+ 	int ret;
+ 
+-	ret = tegra_uart_dma_channel_allocate(tup, false);
+-	if (ret < 0) {
+-		dev_err(u->dev, "Tx Dma allocation failed, err = %d\n", ret);
+-		return ret;
++	if (!tup->use_tx_pio) {
++		ret = tegra_uart_dma_channel_allocate(tup, false);
++		if (ret < 0) {
++			dev_err(u->dev, "Tx Dma allocation failed, err = %d\n",
++				ret);
++			return ret;
++		}
+ 	}
+ 
+-	ret = tegra_uart_dma_channel_allocate(tup, true);
+-	if (ret < 0) {
+-		dev_err(u->dev, "Rx Dma allocation failed, err = %d\n", ret);
+-		goto fail_rx_dma;
++	if (!tup->use_rx_pio) {
++		ret = tegra_uart_dma_channel_allocate(tup, true);
++		if (ret < 0) {
++			dev_err(u->dev, "Rx Dma allocation failed, err = %d\n",
++				ret);
++			goto fail_rx_dma;
++		}
+ 	}
+ 
+ 	ret = tegra_uart_hw_init(tup);
+@@ -1016,9 +1175,11 @@ static int tegra_uart_startup(struct uart_port *u)
+ 	return 0;
+ 
+ fail_hw_init:
+-	tegra_uart_dma_channel_free(tup, true);
++	if (!tup->use_rx_pio)
++		tegra_uart_dma_channel_free(tup, true);
+ fail_rx_dma:
+-	tegra_uart_dma_channel_free(tup, false);
++	if (!tup->use_tx_pio)
++		tegra_uart_dma_channel_free(tup, false);
+ 	return ret;
+ }
+ 
+@@ -1040,12 +1201,6 @@ static void tegra_uart_shutdown(struct uart_port *u)
+ 	struct tegra_uart_port *tup = to_tegra_uport(u);
+ 
+ 	tegra_uart_hw_deinit(tup);
+-
+-	tup->rx_in_progress = 0;
+-	tup->tx_in_progress = 0;
+-
+-	tegra_uart_dma_channel_free(tup, true);
+-	tegra_uart_dma_channel_free(tup, false);
+ 	free_irq(u->irq, tup);
+ }
+ 
+@@ -1070,6 +1225,7 @@ static void tegra_uart_set_termios(struct uart_port *u,
+ 	struct clk *parent_clk = clk_get_parent(tup->uart_clk);
+ 	unsigned long parent_clk_rate = clk_get_rate(parent_clk);
+ 	int max_divider = (tup->cdata->support_clk_src_div) ? 0x7FFF : 0xFFFF;
++	int ret;
+ 
+ 	max_divider *= 16;
+ 	spin_lock_irqsave(&u->lock, flags);
+@@ -1142,7 +1298,11 @@ static void tegra_uart_set_termios(struct uart_port *u,
+ 			parent_clk_rate/max_divider,
+ 			parent_clk_rate/16);
+ 	spin_unlock_irqrestore(&u->lock, flags);
+-	tegra_set_baudrate(tup, baud);
++	ret = tegra_set_baudrate(tup, baud);
++	if (ret < 0) {
++		dev_err(tup->uport.dev, "Failed to set baud rate\n");
++		return;
++	}
+ 	if (tty_termios_baud_rate(termios))
+ 		tty_termios_encode_baud_rate(termios, baud, baud);
+ 	spin_lock_irqsave(&u->lock, flags);
+@@ -1210,6 +1370,11 @@ static int tegra_uart_parse_dt(struct platform_device *pdev,
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	int port;
++	int ret;
++	int index;
++	u32 pval;
++	int count;
++	int n_entries;
+ 
+ 	port = of_alias_get_id(np, "serial");
+ 	if (port < 0) {
+@@ -1220,6 +1385,54 @@ static int tegra_uart_parse_dt(struct platform_device *pdev,
+ 
+ 	tup->enable_modem_interrupt = of_property_read_bool(np,
+ 					"nvidia,enable-modem-interrupt");
++
++	index = of_property_match_string(np, "dma-names", "rx");
++	if (index < 0) {
++		tup->use_rx_pio = true;
++		dev_info(&pdev->dev, "RX in PIO mode\n");
++	}
++	index = of_property_match_string(np, "dma-names", "tx");
++	if (index < 0) {
++		tup->use_tx_pio = true;
++		dev_info(&pdev->dev, "TX in PIO mode\n");
++	}
++
++	n_entries = of_property_count_u32_elems(np, "nvidia,adjust-baud-rates");
++	if (n_entries > 0) {
++		tup->n_adjustable_baud_rates = n_entries / 3;
++		tup->baud_tolerance =
++		devm_kzalloc(&pdev->dev, (tup->n_adjustable_baud_rates) *
++			     sizeof(*tup->baud_tolerance), GFP_KERNEL);
++		if (!tup->baud_tolerance)
++			return -ENOMEM;
++		for (count = 0, index = 0; count < n_entries; count += 3,
++		     index++) {
++			ret =
++			of_property_read_u32_index(np,
++						   "nvidia,adjust-baud-rates",
++						   count, &pval);
++			if (!ret)
++				tup->baud_tolerance[index].lower_range_baud =
++				pval;
++			ret =
++			of_property_read_u32_index(np,
++						   "nvidia,adjust-baud-rates",
++						   count + 1, &pval);
++			if (!ret)
++				tup->baud_tolerance[index].upper_range_baud =
++				pval;
++			ret =
++			of_property_read_u32_index(np,
++						   "nvidia,adjust-baud-rates",
++						   count + 2, &pval);
++			if (!ret)
++				tup->baud_tolerance[index].tolerance =
++				(s32)pval;
++		}
++	} else {
++		tup->n_adjustable_baud_rates = 0;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1227,12 +1440,44 @@ static struct tegra_uart_chip_data tegra20_uart_chip_data = {
+ 	.tx_fifo_full_status		= false,
+ 	.allow_txfifo_reset_fifo_mode	= true,
+ 	.support_clk_src_div		= false,
++	.fifo_mode_enable_status	= false,
++	.uart_max_port			= 5,
++	.max_dma_burst_bytes		= 4,
++	.error_tolerance_low_range	= -4,
++	.error_tolerance_high_range	= 4,
+ };
+ 
+ static struct tegra_uart_chip_data tegra30_uart_chip_data = {
+ 	.tx_fifo_full_status		= true,
+ 	.allow_txfifo_reset_fifo_mode	= false,
+ 	.support_clk_src_div		= true,
++	.fifo_mode_enable_status	= false,
++	.uart_max_port			= 5,
++	.max_dma_burst_bytes		= 4,
++	.error_tolerance_low_range	= -4,
++	.error_tolerance_high_range	= 4,
++};
++
++static struct tegra_uart_chip_data tegra186_uart_chip_data = {
++	.tx_fifo_full_status		= true,
++	.allow_txfifo_reset_fifo_mode	= false,
++	.support_clk_src_div		= true,
++	.fifo_mode_enable_status	= true,
++	.uart_max_port			= 8,
++	.max_dma_burst_bytes		= 8,
++	.error_tolerance_low_range	= 0,
++	.error_tolerance_high_range	= 4,
++};
++
++static struct tegra_uart_chip_data tegra194_uart_chip_data = {
++	.tx_fifo_full_status		= true,
++	.allow_txfifo_reset_fifo_mode	= false,
++	.support_clk_src_div		= true,
++	.fifo_mode_enable_status	= true,
++	.uart_max_port			= 8,
++	.max_dma_burst_bytes		= 8,
++	.error_tolerance_low_range	= -2,
++	.error_tolerance_high_range	= 2,
+ };
+ 
+ static const struct of_device_id tegra_uart_of_match[] = {
+@@ -1242,6 +1487,12 @@ static const struct of_device_id tegra_uart_of_match[] = {
+ 	}, {
+ 		.compatible	= "nvidia,tegra20-hsuart",
+ 		.data		= &tegra20_uart_chip_data,
++	}, {
++		.compatible     = "nvidia,tegra186-hsuart",
++		.data		= &tegra186_uart_chip_data,
++	}, {
++		.compatible     = "nvidia,tegra194-hsuart",
++		.data		= &tegra194_uart_chip_data,
+ 	}, {
+ 	},
+ };
+@@ -1364,11 +1615,22 @@ static struct platform_driver tegra_uart_platform_driver = {
+ static int __init tegra_uart_init(void)
+ {
+ 	int ret;
++	struct device_node *node;
++	const struct of_device_id *match = NULL;
++	const struct tegra_uart_chip_data *cdata = NULL;
++
++	node = of_find_matching_node(NULL, tegra_uart_of_match);
++	if (node)
++		match = of_match_node(tegra_uart_of_match, node);
++	if (match)
++		cdata = match->data;
++	if (cdata)
++		tegra_uart_driver.nr = cdata->uart_max_port;
+ 
+ 	ret = uart_register_driver(&tegra_uart_driver);
+ 	if (ret < 0) {
+ 		pr_err("Could not register %s driver\n",
+-			tegra_uart_driver.driver_name);
++		       tegra_uart_driver.driver_name);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/tty/serial/sunsab.c b/drivers/tty/serial/sunsab.c
+index 72131b5e132eb..beca02c304988 100644
+--- a/drivers/tty/serial/sunsab.c
++++ b/drivers/tty/serial/sunsab.c
+@@ -1140,7 +1140,13 @@ static int __init sunsab_init(void)
+ 		}
+ 	}
+ 
+-	return platform_driver_register(&sab_driver);
++	err = platform_driver_register(&sab_driver);
++	if (err) {
++		kfree(sunsab_ports);
++		sunsab_ports = NULL;
++	}
++
++	return err;
+ }
+ 
+ static void __exit sunsab_exit(void)
+diff --git a/drivers/uio/uio_dmem_genirq.c b/drivers/uio/uio_dmem_genirq.c
+index a00b4aee6c799..b4b7fa05b29bc 100644
+--- a/drivers/uio/uio_dmem_genirq.c
++++ b/drivers/uio/uio_dmem_genirq.c
+@@ -113,8 +113,10 @@ static irqreturn_t uio_dmem_genirq_handler(int irq, struct uio_info *dev_info)
+ 	 * remember the state so we can allow user space to enable it later.
+ 	 */
+ 
++	spin_lock(&priv->lock);
+ 	if (!test_and_set_bit(0, &priv->flags))
+ 		disable_irq_nosync(irq);
++	spin_unlock(&priv->lock);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -128,20 +130,19 @@ static int uio_dmem_genirq_irqcontrol(struct uio_info *dev_info, s32 irq_on)
+ 	 * in the interrupt controller, but keep track of the
+ 	 * state to prevent per-irq depth damage.
+ 	 *
+-	 * Serialize this operation to support multiple tasks.
++	 * Serialize this operation to support multiple tasks and concurrency
++	 * with the irq handler on SMP systems.
+ 	 */
+ 
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 	if (irq_on) {
+ 		if (test_and_clear_bit(0, &priv->flags))
+ 			enable_irq(dev_info->irq);
+-		spin_unlock_irqrestore(&priv->lock, flags);
+ 	} else {
+-		if (!test_and_set_bit(0, &priv->flags)) {
+-			spin_unlock_irqrestore(&priv->lock, flags);
+-			disable_irq(dev_info->irq);
+-		}
++		if (!test_and_set_bit(0, &priv->flags))
++			disable_irq_nosync(dev_info->irq);
+ 	}
++	spin_unlock_irqrestore(&priv->lock, flags);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 465aabb6e96c8..65caee589e677 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -914,8 +914,13 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 
+ 	if (!dwc->ulpi_ready) {
+ 		ret = dwc3_core_ulpi_init(dwc);
+-		if (ret)
++		if (ret) {
++			if (ret == -ETIMEDOUT) {
++				dwc3_core_soft_reset(dwc);
++				ret = -EPROBE_DEFER;
++			}
+ 			goto err0;
++		}
+ 		dwc->ulpi_ready = true;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 1d10d29c115bc..ea877cba0763e 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -45,12 +45,25 @@ struct f_hidg {
+ 	unsigned short			report_desc_length;
+ 	char				*report_desc;
+ 	unsigned short			report_length;
++	/*
++	 * use_out_ep - if true, the OUT Endpoint (interrupt out method)
++	 *              will be used to receive reports from the host
++	 *              using functions with the "intout" suffix.
++	 *              Otherwise, the OUT Endpoint will not be configured
++	 *              and the SETUP/SET_REPORT method ("ssreport" suffix)
++	 *              will be used to receive reports.
++	 */
++	bool				use_out_ep;
+ 
+ 	/* recv report */
+-	struct list_head		completed_out_req;
+ 	spinlock_t			read_spinlock;
+ 	wait_queue_head_t		read_queue;
++	/* recv report - interrupt out only (use_out_ep == 1) */
++	struct list_head		completed_out_req;
+ 	unsigned int			qlen;
++	/* recv report - setup set_report only (use_out_ep == 0) */
++	char				*set_report_buf;
++	unsigned int			set_report_length;
+ 
+ 	/* send report */
+ 	spinlock_t			write_spinlock;
+@@ -58,7 +71,7 @@ struct f_hidg {
+ 	wait_queue_head_t		write_queue;
+ 	struct usb_request		*req;
+ 
+-	int				minor;
++	struct device			dev;
+ 	struct cdev			cdev;
+ 	struct usb_function		func;
+ 
+@@ -71,6 +84,14 @@ static inline struct f_hidg *func_to_hidg(struct usb_function *f)
+ 	return container_of(f, struct f_hidg, func);
+ }
+ 
++static void hidg_release(struct device *dev)
++{
++	struct f_hidg *hidg = container_of(dev, struct f_hidg, dev);
++
++	kfree(hidg->set_report_buf);
++	kfree(hidg);
++}
++
+ /*-------------------------------------------------------------------------*/
+ /*                           Static descriptors                            */
+ 
+@@ -79,7 +100,7 @@ static struct usb_interface_descriptor hidg_interface_desc = {
+ 	.bDescriptorType	= USB_DT_INTERFACE,
+ 	/* .bInterfaceNumber	= DYNAMIC */
+ 	.bAlternateSetting	= 0,
+-	.bNumEndpoints		= 2,
++	/* .bNumEndpoints	= DYNAMIC (depends on use_out_ep) */
+ 	.bInterfaceClass	= USB_CLASS_HID,
+ 	/* .bInterfaceSubClass	= DYNAMIC */
+ 	/* .bInterfaceProtocol	= DYNAMIC */
+@@ -140,7 +161,7 @@ static struct usb_ss_ep_comp_descriptor hidg_ss_out_comp_desc = {
+ 	/* .wBytesPerInterval   = DYNAMIC */
+ };
+ 
+-static struct usb_descriptor_header *hidg_ss_descriptors[] = {
++static struct usb_descriptor_header *hidg_ss_descriptors_intout[] = {
+ 	(struct usb_descriptor_header *)&hidg_interface_desc,
+ 	(struct usb_descriptor_header *)&hidg_desc,
+ 	(struct usb_descriptor_header *)&hidg_ss_in_ep_desc,
+@@ -150,6 +171,14 @@ static struct usb_descriptor_header *hidg_ss_descriptors[] = {
+ 	NULL,
+ };
+ 
++static struct usb_descriptor_header *hidg_ss_descriptors_ssreport[] = {
++	(struct usb_descriptor_header *)&hidg_interface_desc,
++	(struct usb_descriptor_header *)&hidg_desc,
++	(struct usb_descriptor_header *)&hidg_ss_in_ep_desc,
++	(struct usb_descriptor_header *)&hidg_ss_in_comp_desc,
++	NULL,
++};
++
+ /* High-Speed Support */
+ 
+ static struct usb_endpoint_descriptor hidg_hs_in_ep_desc = {
+@@ -176,7 +205,7 @@ static struct usb_endpoint_descriptor hidg_hs_out_ep_desc = {
+ 				      */
+ };
+ 
+-static struct usb_descriptor_header *hidg_hs_descriptors[] = {
++static struct usb_descriptor_header *hidg_hs_descriptors_intout[] = {
+ 	(struct usb_descriptor_header *)&hidg_interface_desc,
+ 	(struct usb_descriptor_header *)&hidg_desc,
+ 	(struct usb_descriptor_header *)&hidg_hs_in_ep_desc,
+@@ -184,6 +213,13 @@ static struct usb_descriptor_header *hidg_hs_descriptors[] = {
+ 	NULL,
+ };
+ 
++static struct usb_descriptor_header *hidg_hs_descriptors_ssreport[] = {
++	(struct usb_descriptor_header *)&hidg_interface_desc,
++	(struct usb_descriptor_header *)&hidg_desc,
++	(struct usb_descriptor_header *)&hidg_hs_in_ep_desc,
++	NULL,
++};
++
+ /* Full-Speed Support */
+ 
+ static struct usb_endpoint_descriptor hidg_fs_in_ep_desc = {
+@@ -210,7 +246,7 @@ static struct usb_endpoint_descriptor hidg_fs_out_ep_desc = {
+ 				       */
+ };
+ 
+-static struct usb_descriptor_header *hidg_fs_descriptors[] = {
++static struct usb_descriptor_header *hidg_fs_descriptors_intout[] = {
+ 	(struct usb_descriptor_header *)&hidg_interface_desc,
+ 	(struct usb_descriptor_header *)&hidg_desc,
+ 	(struct usb_descriptor_header *)&hidg_fs_in_ep_desc,
+@@ -218,6 +254,13 @@ static struct usb_descriptor_header *hidg_fs_descriptors[] = {
+ 	NULL,
+ };
+ 
++static struct usb_descriptor_header *hidg_fs_descriptors_ssreport[] = {
++	(struct usb_descriptor_header *)&hidg_interface_desc,
++	(struct usb_descriptor_header *)&hidg_desc,
++	(struct usb_descriptor_header *)&hidg_fs_in_ep_desc,
++	NULL,
++};
++
+ /*-------------------------------------------------------------------------*/
+ /*                                 Strings                                 */
+ 
+@@ -241,8 +284,8 @@ static struct usb_gadget_strings *ct_func_strings[] = {
+ /*-------------------------------------------------------------------------*/
+ /*                              Char Device                                */
+ 
+-static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+-			size_t count, loff_t *ptr)
++static ssize_t f_hidg_intout_read(struct file *file, char __user *buffer,
++				  size_t count, loff_t *ptr)
+ {
+ 	struct f_hidg *hidg = file->private_data;
+ 	struct f_hidg_req_list *list;
+@@ -258,15 +301,15 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ 
+ 	spin_lock_irqsave(&hidg->read_spinlock, flags);
+ 
+-#define READ_COND (!list_empty(&hidg->completed_out_req))
++#define READ_COND_INTOUT (!list_empty(&hidg->completed_out_req))
+ 
+ 	/* wait for at least one buffer to complete */
+-	while (!READ_COND) {
++	while (!READ_COND_INTOUT) {
+ 		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ 		if (file->f_flags & O_NONBLOCK)
+ 			return -EAGAIN;
+ 
+-		if (wait_event_interruptible(hidg->read_queue, READ_COND))
++		if (wait_event_interruptible(hidg->read_queue, READ_COND_INTOUT))
+ 			return -ERESTARTSYS;
+ 
+ 		spin_lock_irqsave(&hidg->read_spinlock, flags);
+@@ -316,6 +359,60 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ 	return count;
+ }
+ 
++#define READ_COND_SSREPORT (hidg->set_report_buf != NULL)
++
++static ssize_t f_hidg_ssreport_read(struct file *file, char __user *buffer,
++				    size_t count, loff_t *ptr)
++{
++	struct f_hidg *hidg = file->private_data;
++	char *tmp_buf = NULL;
++	unsigned long flags;
++
++	if (!count)
++		return 0;
++
++	spin_lock_irqsave(&hidg->read_spinlock, flags);
++
++	while (!READ_COND_SSREPORT) {
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++		if (file->f_flags & O_NONBLOCK)
++			return -EAGAIN;
++
++		if (wait_event_interruptible(hidg->read_queue, READ_COND_SSREPORT))
++			return -ERESTARTSYS;
++
++		spin_lock_irqsave(&hidg->read_spinlock, flags);
++	}
++
++	count = min_t(unsigned int, count, hidg->set_report_length);
++	tmp_buf = hidg->set_report_buf;
++	hidg->set_report_buf = NULL;
++
++	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
++	if (tmp_buf != NULL) {
++		count -= copy_to_user(buffer, tmp_buf, count);
++		kfree(tmp_buf);
++	} else {
++		count = -ENOMEM;
++	}
++
++	wake_up(&hidg->read_queue);
++
++	return count;
++}
++
++static ssize_t f_hidg_read(struct file *file, char __user *buffer,
++			   size_t count, loff_t *ptr)
++{
++	struct f_hidg *hidg = file->private_data;
++
++	if (hidg->use_out_ep)
++		return f_hidg_intout_read(file, buffer, count, ptr);
++	else
++		return f_hidg_ssreport_read(file, buffer, count, ptr);
++}
++
+ static void f_hidg_req_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+ 	struct f_hidg *hidg = (struct f_hidg *)ep->driver_data;
+@@ -439,14 +536,20 @@ static __poll_t f_hidg_poll(struct file *file, poll_table *wait)
+ 	if (WRITE_COND)
+ 		ret |= EPOLLOUT | EPOLLWRNORM;
+ 
+-	if (READ_COND)
+-		ret |= EPOLLIN | EPOLLRDNORM;
++	if (hidg->use_out_ep) {
++		if (READ_COND_INTOUT)
++			ret |= EPOLLIN | EPOLLRDNORM;
++	} else {
++		if (READ_COND_SSREPORT)
++			ret |= EPOLLIN | EPOLLRDNORM;
++	}
+ 
+ 	return ret;
+ }
+ 
+ #undef WRITE_COND
+-#undef READ_COND
++#undef READ_COND_SSREPORT
++#undef READ_COND_INTOUT
+ 
+ static int f_hidg_release(struct inode *inode, struct file *fd)
+ {
+@@ -473,7 +576,7 @@ static inline struct usb_request *hidg_alloc_ep_req(struct usb_ep *ep,
+ 	return alloc_ep_req(ep, length);
+ }
+ 
+-static void hidg_set_report_complete(struct usb_ep *ep, struct usb_request *req)
++static void hidg_intout_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+ 	struct f_hidg *hidg = (struct f_hidg *) req->context;
+ 	struct usb_composite_dev *cdev = hidg->func.config->cdev;
+@@ -508,6 +611,37 @@ free_req:
+ 	}
+ }
+ 
++static void hidg_ssreport_complete(struct usb_ep *ep, struct usb_request *req)
++{
++	struct f_hidg *hidg = (struct f_hidg *)req->context;
++	struct usb_composite_dev *cdev = hidg->func.config->cdev;
++	char *new_buf = NULL;
++	unsigned long flags;
++
++	if (req->status != 0 || req->buf == NULL || req->actual == 0) {
++		ERROR(cdev,
++		      "%s FAILED: status=%d, buf=%p, actual=%d\n",
++		      __func__, req->status, req->buf, req->actual);
++		return;
++	}
++
++	spin_lock_irqsave(&hidg->read_spinlock, flags);
++
++	new_buf = krealloc(hidg->set_report_buf, req->actual, GFP_ATOMIC);
++	if (new_buf == NULL) {
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++		return;
++	}
++	hidg->set_report_buf = new_buf;
++
++	hidg->set_report_length = req->actual;
++	memcpy(hidg->set_report_buf, req->buf, req->actual);
++
++	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
++	wake_up(&hidg->read_queue);
++}
++
+ static int hidg_setup(struct usb_function *f,
+ 		const struct usb_ctrlrequest *ctrl)
+ {
+@@ -555,7 +689,11 @@ static int hidg_setup(struct usb_function *f,
+ 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
+ 		  | HID_REQ_SET_REPORT):
+ 		VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength);
+-		goto stall;
++		if (hidg->use_out_ep)
++			goto stall;
++		req->complete = hidg_ssreport_complete;
++		req->context  = hidg;
++		goto respond;
+ 		break;
+ 
+ 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
+@@ -643,15 +781,18 @@ static void hidg_disable(struct usb_function *f)
+ 	unsigned long flags;
+ 
+ 	usb_ep_disable(hidg->in_ep);
+-	usb_ep_disable(hidg->out_ep);
+ 
+-	spin_lock_irqsave(&hidg->read_spinlock, flags);
+-	list_for_each_entry_safe(list, next, &hidg->completed_out_req, list) {
+-		free_ep_req(hidg->out_ep, list->req);
+-		list_del(&list->list);
+-		kfree(list);
++	if (hidg->out_ep) {
++		usb_ep_disable(hidg->out_ep);
++
++		spin_lock_irqsave(&hidg->read_spinlock, flags);
++		list_for_each_entry_safe(list, next, &hidg->completed_out_req, list) {
++			free_ep_req(hidg->out_ep, list->req);
++			list_del(&list->list);
++			kfree(list);
++		}
++		spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ 	}
+-	spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ 
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+ 	if (!hidg->write_pending) {
+@@ -697,8 +838,7 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 		}
+ 	}
+ 
+-
+-	if (hidg->out_ep != NULL) {
++	if (hidg->use_out_ep && hidg->out_ep != NULL) {
+ 		/* restart endpoint */
+ 		usb_ep_disable(hidg->out_ep);
+ 
+@@ -723,7 +863,7 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 					hidg_alloc_ep_req(hidg->out_ep,
+ 							  hidg->report_length);
+ 			if (req) {
+-				req->complete = hidg_set_report_complete;
++				req->complete = hidg_intout_complete;
+ 				req->context  = hidg;
+ 				status = usb_ep_queue(hidg->out_ep, req,
+ 						      GFP_ATOMIC);
+@@ -749,7 +889,8 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ 	}
+ 	return 0;
+ disable_out_ep:
+-	usb_ep_disable(hidg->out_ep);
++	if (hidg->out_ep)
++		usb_ep_disable(hidg->out_ep);
+ free_req_in:
+ 	if (req_in)
+ 		free_ep_req(hidg->in_ep, req_in);
+@@ -777,9 +918,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	struct usb_ep		*ep;
+ 	struct f_hidg		*hidg = func_to_hidg(f);
+ 	struct usb_string	*us;
+-	struct device		*device;
+ 	int			status;
+-	dev_t			dev;
+ 
+ 	/* maybe allocate device-global string IDs, and patch descriptors */
+ 	us = usb_gstrings_attach(c->cdev, ct_func_strings,
+@@ -801,14 +940,21 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 		goto fail;
+ 	hidg->in_ep = ep;
+ 
+-	ep = usb_ep_autoconfig(c->cdev->gadget, &hidg_fs_out_ep_desc);
+-	if (!ep)
+-		goto fail;
+-	hidg->out_ep = ep;
++	hidg->out_ep = NULL;
++	if (hidg->use_out_ep) {
++		ep = usb_ep_autoconfig(c->cdev->gadget, &hidg_fs_out_ep_desc);
++		if (!ep)
++			goto fail;
++		hidg->out_ep = ep;
++	}
++
++	/* used only if use_out_ep == 1 */
++	hidg->set_report_buf = NULL;
+ 
+ 	/* set descriptor dynamic values */
+ 	hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass;
+ 	hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol;
++	hidg_interface_desc.bNumEndpoints = hidg->use_out_ep ? 2 : 1;
+ 	hidg->protocol = HID_REPORT_PROTOCOL;
+ 	hidg->idle = 1;
+ 	hidg_ss_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
+@@ -839,9 +985,19 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	hidg_ss_out_ep_desc.bEndpointAddress =
+ 		hidg_fs_out_ep_desc.bEndpointAddress;
+ 
+-	status = usb_assign_descriptors(f, hidg_fs_descriptors,
+-			hidg_hs_descriptors, hidg_ss_descriptors,
+-			hidg_ss_descriptors);
++	if (hidg->use_out_ep)
++		status = usb_assign_descriptors(f,
++			hidg_fs_descriptors_intout,
++			hidg_hs_descriptors_intout,
++			hidg_ss_descriptors_intout,
++			hidg_ss_descriptors_intout);
++	else
++		status = usb_assign_descriptors(f,
++			hidg_fs_descriptors_ssreport,
++			hidg_hs_descriptors_ssreport,
++			hidg_ss_descriptors_ssreport,
++			hidg_ss_descriptors_ssreport);
++
+ 	if (status)
+ 		goto fail;
+ 
+@@ -855,21 +1011,11 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 
+ 	/* create char device */
+ 	cdev_init(&hidg->cdev, &f_hidg_fops);
+-	dev = MKDEV(major, hidg->minor);
+-	status = cdev_add(&hidg->cdev, dev, 1);
++	status = cdev_device_add(&hidg->cdev, &hidg->dev);
+ 	if (status)
+ 		goto fail_free_descs;
+ 
+-	device = device_create(hidg_class, NULL, dev, NULL,
+-			       "%s%d", "hidg", hidg->minor);
+-	if (IS_ERR(device)) {
+-		status = PTR_ERR(device);
+-		goto del;
+-	}
+-
+ 	return 0;
+-del:
+-	cdev_del(&hidg->cdev);
+ fail_free_descs:
+ 	usb_free_all_descriptors(f);
+ fail:
+@@ -956,6 +1102,7 @@ CONFIGFS_ATTR(f_hid_opts_, name)
+ 
+ F_HID_OPT(subclass, 8, 255);
+ F_HID_OPT(protocol, 8, 255);
++F_HID_OPT(no_out_endpoint, 8, 1);
+ F_HID_OPT(report_length, 16, 65535);
+ 
+ static ssize_t f_hid_opts_report_desc_show(struct config_item *item, char *page)
+@@ -1015,6 +1162,7 @@ CONFIGFS_ATTR_RO(f_hid_opts_, dev);
+ static struct configfs_attribute *hid_attrs[] = {
+ 	&f_hid_opts_attr_subclass,
+ 	&f_hid_opts_attr_protocol,
++	&f_hid_opts_attr_no_out_endpoint,
+ 	&f_hid_opts_attr_report_length,
+ 	&f_hid_opts_attr_report_desc,
+ 	&f_hid_opts_attr_dev,
+@@ -1098,8 +1246,7 @@ static void hidg_free(struct usb_function *f)
+ 
+ 	hidg = func_to_hidg(f);
+ 	opts = container_of(f->fi, struct f_hid_opts, func_inst);
+-	kfree(hidg->report_desc);
+-	kfree(hidg);
++	put_device(&hidg->dev);
+ 	mutex_lock(&opts->lock);
+ 	--opts->refcnt;
+ 	mutex_unlock(&opts->lock);
+@@ -1109,8 +1256,7 @@ static void hidg_unbind(struct usb_configuration *c, struct usb_function *f)
+ {
+ 	struct f_hidg *hidg = func_to_hidg(f);
+ 
+-	device_destroy(hidg_class, MKDEV(major, hidg->minor));
+-	cdev_del(&hidg->cdev);
++	cdev_device_del(&hidg->cdev, &hidg->dev);
+ 
+ 	usb_free_all_descriptors(f);
+ }
+@@ -1119,6 +1265,7 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
+ {
+ 	struct f_hidg *hidg;
+ 	struct f_hid_opts *opts;
++	int ret;
+ 
+ 	/* allocate and initialize one new instance */
+ 	hidg = kzalloc(sizeof(*hidg), GFP_KERNEL);
+@@ -1130,21 +1277,33 @@ static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
+ 	mutex_lock(&opts->lock);
+ 	++opts->refcnt;
+ 
+-	hidg->minor = opts->minor;
++	device_initialize(&hidg->dev);
++	hidg->dev.release = hidg_release;
++	hidg->dev.class = hidg_class;
++	hidg->dev.devt = MKDEV(major, opts->minor);
++	ret = dev_set_name(&hidg->dev, "hidg%d", opts->minor);
++	if (ret) {
++		--opts->refcnt;
++		mutex_unlock(&opts->lock);
++		return ERR_PTR(ret);
++	}
++
+ 	hidg->bInterfaceSubClass = opts->subclass;
+ 	hidg->bInterfaceProtocol = opts->protocol;
+ 	hidg->report_length = opts->report_length;
+ 	hidg->report_desc_length = opts->report_desc_length;
+ 	if (opts->report_desc) {
+-		hidg->report_desc = kmemdup(opts->report_desc,
+-					    opts->report_desc_length,
+-					    GFP_KERNEL);
++		hidg->report_desc = devm_kmemdup(&hidg->dev, opts->report_desc,
++						 opts->report_desc_length,
++						 GFP_KERNEL);
+ 		if (!hidg->report_desc) {
+-			kfree(hidg);
++			put_device(&hidg->dev);
++			--opts->refcnt;
+ 			mutex_unlock(&opts->lock);
+ 			return ERR_PTR(-ENOMEM);
+ 		}
+ 	}
++	hidg->use_out_ep = !opts->no_out_endpoint;
+ 
+ 	mutex_unlock(&opts->lock);
+ 
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 169e73ed128c8..21893c891c4f8 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -219,8 +219,9 @@ uvc_function_ep0_complete(struct usb_ep *ep, struct usb_request *req)
+ 
+ 		memset(&v4l2_event, 0, sizeof(v4l2_event));
+ 		v4l2_event.type = UVC_EVENT_DATA;
+-		uvc_event->data.length = req->actual;
+-		memcpy(&uvc_event->data.data, req->buf, req->actual);
++		uvc_event->data.length = min_t(unsigned int, req->actual,
++			sizeof(uvc_event->data.data));
++		memcpy(&uvc_event->data.data, req->buf, uvc_event->data.length);
+ 		v4l2_event_queue(&uvc->vdev, &v4l2_event);
+ 	}
+ }
+diff --git a/drivers/usb/gadget/function/u_hid.h b/drivers/usb/gadget/function/u_hid.h
+index 2f5ca4bfa7ff1..c090eb6c9759e 100644
+--- a/drivers/usb/gadget/function/u_hid.h
++++ b/drivers/usb/gadget/function/u_hid.h
+@@ -20,6 +20,7 @@ struct f_hid_opts {
+ 	int				minor;
+ 	unsigned char			subclass;
+ 	unsigned char			protocol;
++	unsigned char			no_out_endpoint;
+ 	unsigned short			report_length;
+ 	unsigned short			report_desc_length;
+ 	unsigned char			*report_desc;
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index 785822ecc3f1a..74c93fa7439c2 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -633,10 +633,10 @@ static void fotg210_request_error(struct fotg210_udc *fotg210)
+ static void fotg210_set_address(struct fotg210_udc *fotg210,
+ 				struct usb_ctrlrequest *ctrl)
+ {
+-	if (ctrl->wValue >= 0x0100) {
++	if (le16_to_cpu(ctrl->wValue) >= 0x0100) {
+ 		fotg210_request_error(fotg210);
+ 	} else {
+-		fotg210_set_dev_addr(fotg210, ctrl->wValue);
++		fotg210_set_dev_addr(fotg210, le16_to_cpu(ctrl->wValue));
+ 		fotg210_set_cxdone(fotg210);
+ 	}
+ }
+@@ -717,17 +717,17 @@ static void fotg210_get_status(struct fotg210_udc *fotg210,
+ 
+ 	switch (ctrl->bRequestType & USB_RECIP_MASK) {
+ 	case USB_RECIP_DEVICE:
+-		fotg210->ep0_data = 1 << USB_DEVICE_SELF_POWERED;
++		fotg210->ep0_data = cpu_to_le16(1 << USB_DEVICE_SELF_POWERED);
+ 		break;
+ 	case USB_RECIP_INTERFACE:
+-		fotg210->ep0_data = 0;
++		fotg210->ep0_data = cpu_to_le16(0);
+ 		break;
+ 	case USB_RECIP_ENDPOINT:
+ 		epnum = ctrl->wIndex & USB_ENDPOINT_NUMBER_MASK;
+ 		if (epnum)
+ 			fotg210->ep0_data =
+-				fotg210_is_epnstall(fotg210->ep[epnum])
+-				<< USB_ENDPOINT_HALT;
++				cpu_to_le16(fotg210_is_epnstall(fotg210->ep[epnum])
++					    << USB_ENDPOINT_HALT);
+ 		else
+ 			fotg210_request_error(fotg210);
+ 		break;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index 8e83995fc3bd2..b8fc818c154ae 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1629,8 +1629,6 @@ static int musb_gadget_vbus_draw(struct usb_gadget *gadget, unsigned mA)
+ {
+ 	struct musb	*musb = gadget_to_musb(gadget);
+ 
+-	if (!musb->xceiv->set_power)
+-		return -EOPNOTSUPP;
+ 	return usb_phy_set_power(musb->xceiv, mA);
+ }
+ 
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c7c6d5d1dfbc8..f1acc6f686a96 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -196,6 +196,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x16DC, 0x0015) }, /* W-IE-NE-R Plein & Baus GmbH CML Control, Monitoring and Data Logger */
+ 	{ USB_DEVICE(0x17A8, 0x0001) }, /* Kamstrup Optical Eye/3-wire */
+ 	{ USB_DEVICE(0x17A8, 0x0005) }, /* Kamstrup M-Bus Master MultiPort 250D */
++	{ USB_DEVICE(0x17A8, 0x0011) }, /* Kamstrup 444 MHz RF sniffer */
++	{ USB_DEVICE(0x17A8, 0x0013) }, /* Kamstrup 870 MHz RF sniffer */
+ 	{ USB_DEVICE(0x17A8, 0x0101) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Int Ant) */
+ 	{ USB_DEVICE(0x17A8, 0x0102) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Ext Ant) */
+ 	{ USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
+diff --git a/drivers/usb/serial/f81534.c b/drivers/usb/serial/f81534.c
+index db6c93c04b3c8..7f400b1bb8e6c 100644
+--- a/drivers/usb/serial/f81534.c
++++ b/drivers/usb/serial/f81534.c
+@@ -538,9 +538,6 @@ static int f81534_submit_writer(struct usb_serial_port *port, gfp_t mem_flags)
+ 
+ static u32 f81534_calc_baud_divisor(u32 baudrate, u32 clockrate)
+ {
+-	if (!baudrate)
+-		return 0;
+-
+ 	/* Round to nearest divisor */
+ 	return DIV_ROUND_CLOSEST(clockrate, baudrate);
+ }
+@@ -570,9 +567,14 @@ static int f81534_set_port_config(struct usb_serial_port *port,
+ 	u32 baud_list[] = {baudrate, old_baudrate, F81534_DEFAULT_BAUD_RATE};
+ 
+ 	for (i = 0; i < ARRAY_SIZE(baud_list); ++i) {
+-		idx = f81534_find_clk(baud_list[i]);
++		baudrate = baud_list[i];
++		if (baudrate == 0) {
++			tty_encode_baud_rate(tty, 0, 0);
++			return 0;
++		}
++
++		idx = f81534_find_clk(baudrate);
+ 		if (idx >= 0) {
+-			baudrate = baud_list[i];
+ 			tty_encode_baud_rate(tty, baudrate, baudrate);
+ 			break;
+ 		}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index ebb2936271299..57bef990382b1 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -255,6 +255,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EP06			0x0306
+ #define QUECTEL_PRODUCT_EM05G			0x030a
+ #define QUECTEL_PRODUCT_EM060K			0x030b
++#define QUECTEL_PRODUCT_EM05G_SG		0x0311
+ #define QUECTEL_PRODUCT_EM12			0x0512
+ #define QUECTEL_PRODUCT_RM500Q			0x0800
+ #define QUECTEL_PRODUCT_RM520N			0x0801
+@@ -1160,6 +1161,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
+ 	  .driver_info = RSVD(6) | ZLP },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_SG, 0xff),
++	  .driver_info = RSVD(6) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
+diff --git a/drivers/usb/storage/alauda.c b/drivers/usb/storage/alauda.c
+index 6b8edf6178df3..6fcf5fd2ff987 100644
+--- a/drivers/usb/storage/alauda.c
++++ b/drivers/usb/storage/alauda.c
+@@ -437,6 +437,8 @@ static int alauda_init_media(struct us_data *us)
+ 		+ MEDIA_INFO(us).blockshift + MEDIA_INFO(us).pageshift);
+ 	MEDIA_INFO(us).pba_to_lba = kcalloc(num_zones, sizeof(u16*), GFP_NOIO);
+ 	MEDIA_INFO(us).lba_to_pba = kcalloc(num_zones, sizeof(u16*), GFP_NOIO);
++	if (MEDIA_INFO(us).pba_to_lba == NULL || MEDIA_INFO(us).lba_to_pba == NULL)
++		return USB_STOR_TRANSPORT_ERROR;
+ 
+ 	if (alauda_reset_media(us) != USB_STOR_XFER_GOOD)
+ 		return USB_STOR_TRANSPORT_ERROR;
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index 76299b6ff06da..7605963f71ede 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -126,7 +126,7 @@ int typec_altmode_exit(struct typec_altmode *adev)
+ 	if (!adev || !adev->active)
+ 		return 0;
+ 
+-	if (!pdev->ops || !pdev->ops->enter)
++	if (!pdev->ops || !pdev->ops->exit)
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Moving to USB Safe State */
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index c29fc6844f845..d8bb6537c688e 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -78,12 +78,11 @@ static int vfio_platform_acpi_call_reset(struct vfio_platform_device *vdev,
+ 				  const char **extra_dbg)
+ {
+ #ifdef CONFIG_ACPI
+-	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+ 	struct device *dev = vdev->device;
+ 	acpi_handle handle = ACPI_HANDLE(dev);
+ 	acpi_status acpi_ret;
+ 
+-	acpi_ret = acpi_evaluate_object(handle, "_RST", NULL, &buffer);
++	acpi_ret = acpi_evaluate_object(handle, "_RST", NULL, NULL);
+ 	if (ACPI_FAILURE(acpi_ret)) {
+ 		if (extra_dbg)
+ 			*extra_dbg = acpi_format_exception(acpi_ret);
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 97c4319797d5c..afb0c9e4d7382 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2316,7 +2316,6 @@ config FB_SSD1307
+ 	select FB_SYS_COPYAREA
+ 	select FB_SYS_IMAGEBLIT
+ 	select FB_DEFERRED_IO
+-	select PWM
+ 	select FB_BACKLIGHT
+ 	help
+ 	  This driver implements support for the Solomon SSD1307
+diff --git a/drivers/video/fbdev/pm2fb.c b/drivers/video/fbdev/pm2fb.c
+index 8ae010f07d7da..0ec4be2f2e8c5 100644
+--- a/drivers/video/fbdev/pm2fb.c
++++ b/drivers/video/fbdev/pm2fb.c
+@@ -1529,8 +1529,10 @@ static int pm2fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	}
+ 
+ 	info = framebuffer_alloc(sizeof(struct pm2fb_par), &pdev->dev);
+-	if (!info)
+-		return -ENOMEM;
++	if (!info) {
++		err = -ENOMEM;
++		goto err_exit_disable;
++	}
+ 	default_par = info->par;
+ 
+ 	switch (pdev->device) {
+@@ -1711,6 +1713,8 @@ static int pm2fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	release_mem_region(pm2fb_fix.mmio_start, pm2fb_fix.mmio_len);
+  err_exit_neither:
+ 	framebuffer_release(info);
++ err_exit_disable:
++	pci_disable_device(pdev);
+ 	return retval;
+ }
+ 
+@@ -1737,6 +1741,7 @@ static void pm2fb_remove(struct pci_dev *pdev)
+ 	fb_dealloc_cmap(&info->cmap);
+ 	kfree(info->pixmap.addr);
+ 	framebuffer_release(info);
++	pci_disable_device(pdev);
+ }
+ 
+ static const struct pci_device_id pm2fb_id_table[] = {
+diff --git a/drivers/video/fbdev/uvesafb.c b/drivers/video/fbdev/uvesafb.c
+index 440a6636d8f0a..f6ebca8839127 100644
+--- a/drivers/video/fbdev/uvesafb.c
++++ b/drivers/video/fbdev/uvesafb.c
+@@ -1755,6 +1755,7 @@ static int uvesafb_probe(struct platform_device *dev)
+ out_unmap:
+ 	iounmap(info->screen_base);
+ out_mem:
++	arch_phys_wc_del(par->mtrr_handle);
+ 	release_mem_region(info->fix.smem_start, info->fix.smem_len);
+ out_reg:
+ 	release_region(0x3c0, 32);
+diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c
+index 5172fa5811476..be5e43be528aa 100644
+--- a/drivers/video/fbdev/vermilion/vermilion.c
++++ b/drivers/video/fbdev/vermilion/vermilion.c
+@@ -291,8 +291,10 @@ static int vmlfb_get_gpu(struct vml_par *par)
+ 
+ 	mutex_unlock(&vml_mutex);
+ 
+-	if (pci_enable_device(par->gpu) < 0)
++	if (pci_enable_device(par->gpu) < 0) {
++		pci_dev_put(par->gpu);
+ 		return -ENODEV;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/video/fbdev/via/via-core.c b/drivers/video/fbdev/via/via-core.c
+index b041eb27a9bff..3f74bc8af7b3e 100644
+--- a/drivers/video/fbdev/via/via-core.c
++++ b/drivers/video/fbdev/via/via-core.c
+@@ -774,7 +774,14 @@ static int __init via_core_init(void)
+ 		return ret;
+ 	viafb_i2c_init();
+ 	viafb_gpio_init();
+-	return pci_register_driver(&via_driver);
++	ret = pci_register_driver(&via_driver);
++	if (ret) {
++		viafb_gpio_exit();
++		viafb_i2c_exit();
++		return ret;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit via_core_exit(void)
+diff --git a/drivers/vme/bridges/vme_fake.c b/drivers/vme/bridges/vme_fake.c
+index 685a43bdc2a1d..06bfc7f0e0b62 100644
+--- a/drivers/vme/bridges/vme_fake.c
++++ b/drivers/vme/bridges/vme_fake.c
+@@ -1077,6 +1077,8 @@ static int __init fake_init(void)
+ 
+ 	/* We need a fake parent device */
+ 	vme_root = __root_device_register("vme", THIS_MODULE);
++	if (IS_ERR(vme_root))
++		return PTR_ERR(vme_root);
+ 
+ 	/* If we want to support more than one bridge at some point, we need to
+ 	 * dynamically allocate this so we get one per device.
+diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
+index 647d231d4422c..b1be12dc61eb8 100644
+--- a/drivers/vme/bridges/vme_tsi148.c
++++ b/drivers/vme/bridges/vme_tsi148.c
+@@ -1775,6 +1775,7 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
+ 	return 0;
+ 
+ err_dma:
++	list_del(&entry->list);
+ err_dest:
+ err_source:
+ err_align:
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index d138027034fd6..b6b3131cb079f 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -2100,8 +2100,8 @@ void xen_callback_vector(void)
+ void xen_callback_vector(void) {}
+ #endif
+ 
+-static bool fifo_events = true;
+-module_param(fifo_events, bool, 0);
++bool xen_fifo_events = true;
++module_param_named(fifo_events, xen_fifo_events, bool, 0);
+ 
+ static int xen_evtchn_cpu_prepare(unsigned int cpu)
+ {
+@@ -2130,10 +2130,12 @@ void __init xen_init_IRQ(void)
+ 	int ret = -EINVAL;
+ 	unsigned int evtchn;
+ 
+-	if (fifo_events)
++	if (xen_fifo_events)
+ 		ret = xen_evtchn_fifo_init();
+-	if (ret < 0)
++	if (ret < 0) {
+ 		xen_evtchn_2l_init();
++		xen_fifo_events = false;
++	}
+ 
+ 	xen_cpu_init_eoi(smp_processor_id());
+ 
+diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
+index 74ff28fda64dc..15ece1041d173 100644
+--- a/drivers/xen/privcmd.c
++++ b/drivers/xen/privcmd.c
+@@ -785,7 +785,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
+ 		goto out;
+ 	}
+ 
+-	pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL);
++	pfns = kcalloc(kdata.num, sizeof(*pfns), GFP_KERNEL | __GFP_NOWARN);
+ 	if (!pfns) {
+ 		rc = -ENOMEM;
+ 		goto out;
+diff --git a/fs/binfmt_aout.c b/fs/binfmt_aout.c
+index c3deb2e35f203..e7a9e8b56e71a 100644
+--- a/fs/binfmt_aout.c
++++ b/fs/binfmt_aout.c
+@@ -244,6 +244,7 @@ static int load_aout_binary(struct linux_binprm * bprm)
+ 	set_personality(PER_LINUX);
+ #endif
+ 	setup_new_exec(bprm);
++	install_exec_creds(bprm);
+ 
+ 	current->mm->end_code = ex.a_text +
+ 		(current->mm->start_code = N_TXTADDR(ex));
+@@ -256,7 +257,6 @@ static int load_aout_binary(struct linux_binprm * bprm)
+ 	if (retval < 0)
+ 		return retval;
+ 
+-	install_exec_creds(bprm);
+ 
+ 	if (N_MAGIC(ex) == OMAGIC) {
+ 		unsigned long text_addr, map_size;
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index b53bb3729ac1e..64d0b838085d3 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -357,6 +357,7 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ 		current->personality |= READ_IMPLIES_EXEC;
+ 
+ 	setup_new_exec(bprm);
++	install_exec_creds(bprm);
+ 
+ 	set_binfmt(&elf_fdpic_format);
+ 
+@@ -438,9 +439,9 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ 	current->mm->start_stack = current->mm->start_brk + stack_size;
+ #endif
+ 
+-	install_exec_creds(bprm);
+-	if (create_elf_fdpic_tables(bprm, current->mm,
+-				    &exec_params, &interp_params) < 0)
++	retval = create_elf_fdpic_tables(bprm, current->mm, &exec_params,
++					 &interp_params);
++	if (retval < 0)
+ 		goto error;
+ 
+ 	kdebug("- start_code  %lx", current->mm->start_code);
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index a6f97d86fb801..a909743b1a0e1 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -541,6 +541,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ 		/* OK, This is the point of no return */
+ 		set_personality(PER_LINUX_32BIT);
+ 		setup_new_exec(bprm);
++		install_exec_creds(bprm);
+ 	}
+ 
+ 	/*
+@@ -965,8 +966,6 @@ static int load_flat_binary(struct linux_binprm *bprm)
+ 		}
+ 	}
+ 
+-	install_exec_creds(bprm);
+-
+ 	set_binfmt(&flat_format);
+ 
+ #ifdef CONFIG_MMU
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 27a04f4925416..8fe7edd2b001f 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -42,10 +42,10 @@ static LIST_HEAD(entries);
+ static int enabled = 1;
+ 
+ enum {Enabled, Magic};
+-#define MISC_FMT_PRESERVE_ARGV0 (1 << 31)
+-#define MISC_FMT_OPEN_BINARY (1 << 30)
+-#define MISC_FMT_CREDENTIALS (1 << 29)
+-#define MISC_FMT_OPEN_FILE (1 << 28)
++#define MISC_FMT_PRESERVE_ARGV0 (1UL << 31)
++#define MISC_FMT_OPEN_BINARY (1UL << 30)
++#define MISC_FMT_CREDENTIALS (1UL << 29)
++#define MISC_FMT_OPEN_FILE (1UL << 28)
+ 
+ typedef struct {
+ 	struct list_head list;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 752b5d2652843..4f25133885676 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3234,13 +3234,10 @@ static long btrfs_ioctl_dev_info(struct btrfs_fs_info *fs_info,
+ 	di_args->bytes_used = btrfs_device_get_bytes_used(dev);
+ 	di_args->total_bytes = btrfs_device_get_total_bytes(dev);
+ 	memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid));
+-	if (dev->name) {
+-		strncpy(di_args->path, rcu_str_deref(dev->name),
+-				sizeof(di_args->path) - 1);
+-		di_args->path[sizeof(di_args->path) - 1] = 0;
+-	} else {
++	if (dev->name)
++		strscpy(di_args->path, rcu_str_deref(dev->name), sizeof(di_args->path));
++	else
+ 		di_args->path[0] = '\0';
+-	}
+ 
+ out:
+ 	rcu_read_unlock();
+diff --git a/fs/btrfs/rcu-string.h b/fs/btrfs/rcu-string.h
+index a97dc74a4d3de..02f15321cecc9 100644
+--- a/fs/btrfs/rcu-string.h
++++ b/fs/btrfs/rcu-string.h
+@@ -18,7 +18,11 @@ static inline struct rcu_string *rcu_string_strdup(const char *src, gfp_t mask)
+ 					 (len * sizeof(char)), mask);
+ 	if (!ret)
+ 		return ret;
+-	strncpy(ret->str, src, len);
++	/* Warn if the source got unexpectedly truncated. */
++	if (WARN_ON(strscpy(ret->str, src, len) < 0)) {
++		kfree(ret);
++		return NULL;
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index eb2f8e84ffc9e..80d248e88761d 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -1306,6 +1306,7 @@ static int find_extent_clone(struct send_ctx *sctx,
+ 	u64 disk_byte;
+ 	u64 num_bytes;
+ 	u64 extent_item_pos;
++	u64 extent_refs;
+ 	u64 flags = 0;
+ 	struct btrfs_file_extent_item *fi;
+ 	struct extent_buffer *eb = path->nodes[0];
+@@ -1373,14 +1374,22 @@ static int find_extent_clone(struct send_ctx *sctx,
+ 
+ 	ei = btrfs_item_ptr(tmp_path->nodes[0], tmp_path->slots[0],
+ 			    struct btrfs_extent_item);
++	extent_refs = btrfs_extent_refs(tmp_path->nodes[0], ei);
+ 	/*
+ 	 * Backreference walking (iterate_extent_inodes() below) is currently
+ 	 * too expensive when an extent has a large number of references, both
+ 	 * in time spent and used memory. So for now just fallback to write
+ 	 * operations instead of clone operations when an extent has more than
+ 	 * a certain amount of references.
++	 *
++	 * Also, if we have only one reference and only the send root as a clone
++	 * source - meaning no clone roots were given in the struct
++	 * btrfs_ioctl_send_args passed to the send ioctl - then it's our
++	 * reference and there's no point in doing backref walking which is
++	 * expensive, so exit early.
+ 	 */
+-	if (btrfs_extent_refs(tmp_path->nodes[0], ei) > SEND_MAX_EXTENT_REFS) {
++	if ((extent_refs == 1 && sctx->clone_roots_cnt == 1) ||
++	    extent_refs > SEND_MAX_EXTENT_REFS) {
+ 		ret = -ENOENT;
+ 		goto out;
+ 	}
+diff --git a/fs/char_dev.c b/fs/char_dev.c
+index 5fffd5050fb7a..2c3d519b21c23 100644
+--- a/fs/char_dev.c
++++ b/fs/char_dev.c
+@@ -553,7 +553,7 @@ int cdev_device_add(struct cdev *cdev, struct device *dev)
+ 	}
+ 
+ 	rc = device_add(dev);
+-	if (rc)
++	if (rc && dev->devt)
+ 		cdev_del(cdev);
+ 
+ 	return rc;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index b5aba1c895cb5..a7ef847f285d3 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2940,7 +2940,7 @@ cifs_set_cifscreds(struct smb_vol *vol __attribute__((unused)),
+ static struct cifs_ses *
+ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
+ {
+-	int rc = -ENOMEM;
++	int rc = 0;
+ 	unsigned int xid;
+ 	struct cifs_ses *ses;
+ 	struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
+@@ -2982,6 +2982,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb_vol *volume_info)
+ 		return ses;
+ 	}
+ 
++	rc = -ENOMEM;
++
+ 	cifs_dbg(FYI, "Existing smb sess not found\n");
+ 	ses = sesInfoAlloc();
+ 	if (ses == NULL)
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index 5b1c33d9283ae..f590149e21bae 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -481,6 +481,7 @@ smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.disposition = FILE_CREATE;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
++	oparms.mode = 0644;
+ 
+ 	rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, NULL,
+ 		       NULL);
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index 4fce1da7db23d..a57d080d2ba57 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -330,8 +330,8 @@ ssize_t debugfs_attr_read(struct file *file, char __user *buf,
+ }
+ EXPORT_SYMBOL_GPL(debugfs_attr_read);
+ 
+-ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
+-			 size_t len, loff_t *ppos)
++static ssize_t debugfs_attr_write_xsigned(struct file *file, const char __user *buf,
++			 size_t len, loff_t *ppos, bool is_signed)
+ {
+ 	struct dentry *dentry = F_DENTRY(file);
+ 	ssize_t ret;
+@@ -339,12 +339,28 @@ ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
+ 	ret = debugfs_file_get(dentry);
+ 	if (unlikely(ret))
+ 		return ret;
+-	ret = simple_attr_write(file, buf, len, ppos);
++	if (is_signed)
++		ret = simple_attr_write_signed(file, buf, len, ppos);
++	else
++		ret = simple_attr_write(file, buf, len, ppos);
+ 	debugfs_file_put(dentry);
+ 	return ret;
+ }
++
++ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
++			 size_t len, loff_t *ppos)
++{
++	return debugfs_attr_write_xsigned(file, buf, len, ppos, false);
++}
+ EXPORT_SYMBOL_GPL(debugfs_attr_write);
+ 
++ssize_t debugfs_attr_write_signed(struct file *file, const char __user *buf,
++			 size_t len, loff_t *ppos)
++{
++	return debugfs_attr_write_xsigned(file, buf, len, ppos, true);
++}
++EXPORT_SYMBOL_GPL(debugfs_attr_write_signed);
++
+ static struct dentry *debugfs_create_mode_unsafe(const char *name, umode_t mode,
+ 					struct dentry *parent, void *value,
+ 					const struct file_operations *fops,
+@@ -742,11 +758,11 @@ static int debugfs_atomic_t_get(void *data, u64 *val)
+ 	*val = atomic_read((atomic_t *)data);
+ 	return 0;
+ }
+-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t, debugfs_atomic_t_get,
+ 			debugfs_atomic_t_set, "%lld\n");
+-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL,
++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t_ro, debugfs_atomic_t_get, NULL,
+ 			"%lld\n");
+-DEFINE_DEBUGFS_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set,
++DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(fops_atomic_t_wo, NULL, debugfs_atomic_t_set,
+ 			"%lld\n");
+ 
+ /**
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 6938dff9f04bd..d8068c0e547dc 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -503,7 +503,7 @@ enum {
+  *
+  * It's not paranoia if the Murphy's Law really *is* out to get you.  :-)
+  */
+-#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1 << EXT4_INODE_##FLAG))
++#define TEST_FLAG_VALUE(FLAG) (EXT4_##FLAG##_FL == (1U << EXT4_INODE_##FLAG))
+ #define CHECK_FLAG_VALUE(FLAG) BUILD_BUG_ON(!TEST_FLAG_VALUE(FLAG))
+ 
+ static inline void ext4_check_flag_values(void)
+@@ -1041,6 +1041,9 @@ struct ext4_inode_info {
+ 	ext4_lblk_t i_da_metadata_calc_last_lblock;
+ 	int i_da_metadata_calc_len;
+ 
++	/* pending cluster reservations for bigalloc file systems */
++	struct ext4_pending_tree i_pending_tree;
++
+ 	/* on-disk additional length */
+ 	__u16 i_extra_isize;
+ 
+@@ -3228,10 +3231,6 @@ extern struct ext4_ext_path *ext4_find_extent(struct inode *, ext4_lblk_t,
+ 					      int flags);
+ extern void ext4_ext_drop_refs(struct ext4_ext_path *);
+ extern int ext4_ext_check_inode(struct inode *inode);
+-extern int ext4_find_delalloc_range(struct inode *inode,
+-				    ext4_lblk_t lblk_start,
+-				    ext4_lblk_t lblk_end);
+-extern int ext4_find_delalloc_cluster(struct inode *inode, ext4_lblk_t lblk);
+ extern ext4_lblk_t ext4_ext_next_allocated_block(struct ext4_ext_path *path);
+ extern int ext4_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ 			__u64 start, __u64 len);
+@@ -3242,6 +3241,7 @@ extern int ext4_swap_extents(handle_t *handle, struct inode *inode1,
+ 				struct inode *inode2, ext4_lblk_t lblk1,
+ 			     ext4_lblk_t lblk2,  ext4_lblk_t count,
+ 			     int mark_unwritten,int *err);
++extern int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu);
+ 
+ /* move_extent.c */
+ extern void ext4_double_down_write_data_sem(struct inode *first,
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 6c492fca60c41..1ad4c8eb82c19 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -2381,8 +2381,8 @@ ext4_ext_put_gap_in_cache(struct inode *inode, ext4_lblk_t hole_start,
+ {
+ 	struct extent_status es;
+ 
+-	ext4_es_find_delayed_extent_range(inode, hole_start,
+-					  hole_start + hole_len - 1, &es);
++	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
++				  hole_start + hole_len - 1, &es);
+ 	if (es.es_len) {
+ 		/* There's delayed extent containing lblock? */
+ 		if (es.es_lblk <= hole_start)
+@@ -3852,39 +3852,6 @@ out:
+ 	return ext4_mark_inode_dirty(handle, inode);
+ }
+ 
+-/**
+- * ext4_find_delalloc_range: find delayed allocated block in the given range.
+- *
+- * Return 1 if there is a delalloc block in the range, otherwise 0.
+- */
+-int ext4_find_delalloc_range(struct inode *inode,
+-			     ext4_lblk_t lblk_start,
+-			     ext4_lblk_t lblk_end)
+-{
+-	struct extent_status es;
+-
+-	ext4_es_find_delayed_extent_range(inode, lblk_start, lblk_end, &es);
+-	if (es.es_len == 0)
+-		return 0; /* there is no delay extent in this tree */
+-	else if (es.es_lblk <= lblk_start &&
+-		 lblk_start < es.es_lblk + es.es_len)
+-		return 1;
+-	else if (lblk_start <= es.es_lblk && es.es_lblk <= lblk_end)
+-		return 1;
+-	else
+-		return 0;
+-}
+-
+-int ext4_find_delalloc_cluster(struct inode *inode, ext4_lblk_t lblk)
+-{
+-	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+-	ext4_lblk_t lblk_start, lblk_end;
+-	lblk_start = EXT4_LBLK_CMASK(sbi, lblk);
+-	lblk_end = lblk_start + sbi->s_cluster_ratio - 1;
+-
+-	return ext4_find_delalloc_range(inode, lblk_start, lblk_end);
+-}
+-
+ /**
+  * Determines how many complete clusters (out of those specified by the 'map')
+  * are under delalloc and were reserved quota for.
+@@ -3943,7 +3910,8 @@ get_reserved_cluster_alloc(struct inode *inode, ext4_lblk_t lblk_start,
+ 		lblk_from = EXT4_LBLK_CMASK(sbi, lblk_start);
+ 		lblk_to = lblk_from + c_offset - 1;
+ 
+-		if (ext4_find_delalloc_range(inode, lblk_from, lblk_to))
++		if (ext4_es_scan_range(inode, &ext4_es_is_delayed, lblk_from,
++				       lblk_to))
+ 			allocated_clusters--;
+ 	}
+ 
+@@ -3953,7 +3921,8 @@ get_reserved_cluster_alloc(struct inode *inode, ext4_lblk_t lblk_start,
+ 		lblk_from = lblk_start + num_blks;
+ 		lblk_to = lblk_from + (sbi->s_cluster_ratio - c_offset) - 1;
+ 
+-		if (ext4_find_delalloc_range(inode, lblk_from, lblk_to))
++		if (ext4_es_scan_range(inode, &ext4_es_is_delayed, lblk_from,
++				       lblk_to))
+ 			allocated_clusters--;
+ 	}
+ 
+@@ -5108,8 +5077,10 @@ static int ext4_find_delayed_extent(struct inode *inode,
+ 	ext4_lblk_t block, next_del;
+ 
+ 	if (newes->es_pblk == 0) {
+-		ext4_es_find_delayed_extent_range(inode, newes->es_lblk,
+-				newes->es_lblk + newes->es_len - 1, &es);
++		ext4_es_find_extent_range(inode, &ext4_es_is_delayed,
++					  newes->es_lblk,
++					  newes->es_lblk + newes->es_len - 1,
++					  &es);
+ 
+ 		/*
+ 		 * No extent in extent-tree contains block @newes->es_pblk,
+@@ -5130,7 +5101,8 @@ static int ext4_find_delayed_extent(struct inode *inode,
+ 	}
+ 
+ 	block = newes->es_lblk + newes->es_len;
+-	ext4_es_find_delayed_extent_range(inode, block, EXT_MAX_BLOCKS, &es);
++	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, block,
++				  EXT_MAX_BLOCKS, &es);
+ 	if (es.es_len == 0)
+ 		next_del = EXT_MAX_BLOCKS;
+ 	else
+@@ -5991,3 +5963,90 @@ ext4_swap_extents(handle_t *handle, struct inode *inode1,
+ 	}
+ 	return replaced_count;
+ }
++
++/*
++ * ext4_clu_mapped - determine whether any block in a logical cluster has
++ *                   been mapped to a physical cluster
++ *
++ * @inode - file containing the logical cluster
++ * @lclu - logical cluster of interest
++ *
++ * Returns 1 if any block in the logical cluster is mapped, signifying
++ * that a physical cluster has been allocated for it.  Otherwise,
++ * returns 0.  Can also return negative error codes.  Derived from
++ * ext4_ext_map_blocks().
++ */
++int ext4_clu_mapped(struct inode *inode, ext4_lblk_t lclu)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct ext4_ext_path *path;
++	int depth, mapped = 0, err = 0;
++	struct ext4_extent *extent;
++	ext4_lblk_t first_lblk, first_lclu, last_lclu;
++
++	/*
++	 * if data can be stored inline, the logical cluster isn't
++	 * mapped - no physical clusters have been allocated, and the
++	 * file has no extents
++	 */
++	if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
++		return 0;
++
++	/* search for the extent closest to the first block in the cluster */
++	path = ext4_find_extent(inode, EXT4_C2B(sbi, lclu), NULL, 0);
++	if (IS_ERR(path)) {
++		err = PTR_ERR(path);
++		path = NULL;
++		goto out;
++	}
++
++	depth = ext_depth(inode);
++
++	/*
++	 * A consistent leaf must not be empty.  This situation is possible,
++	 * though, _during_ tree modification, and it's why an assert can't
++	 * be put in ext4_find_extent().
++	 */
++	if (unlikely(path[depth].p_ext == NULL && depth != 0)) {
++		EXT4_ERROR_INODE(inode,
++		    "bad extent address - lblock: %lu, depth: %d, pblock: %lld",
++				 (unsigned long) EXT4_C2B(sbi, lclu),
++				 depth, path[depth].p_block);
++		err = -EFSCORRUPTED;
++		goto out;
++	}
++
++	extent = path[depth].p_ext;
++
++	/* can't be mapped if the extent tree is empty */
++	if (extent == NULL)
++		goto out;
++
++	first_lblk = le32_to_cpu(extent->ee_block);
++	first_lclu = EXT4_B2C(sbi, first_lblk);
++
++	/*
++	 * Three possible outcomes at this point - found extent spanning
++	 * the target cluster, to the left of the target cluster, or to the
++	 * right of the target cluster.  The first two cases are handled here.
++	 * The last case indicates the target cluster is not mapped.
++	 */
++	if (lclu >= first_lclu) {
++		last_lclu = EXT4_B2C(sbi, first_lblk +
++				     ext4_ext_get_actual_len(extent) - 1);
++		if (lclu <= last_lclu) {
++			mapped = 1;
++		} else {
++			first_lblk = ext4_ext_next_allocated_block(path);
++			first_lclu = EXT4_B2C(sbi, first_lblk);
++			if (lclu == first_lclu)
++				mapped = 1;
++		}
++	}
++
++out:
++	ext4_ext_drop_refs(path);
++	kfree(path);
++
++	return err ? err : mapped;
++}
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 027c3e1b9f61a..441ee2e747d3e 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -142,6 +142,7 @@
+  */
+ 
+ static struct kmem_cache *ext4_es_cachep;
++static struct kmem_cache *ext4_pending_cachep;
+ 
+ static int __es_insert_extent(struct inode *inode, struct extent_status *newes);
+ static int __es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+@@ -233,30 +234,38 @@ static struct extent_status *__es_tree_search(struct rb_root *root,
+ }
+ 
+ /*
+- * ext4_es_find_delayed_extent_range: find the 1st delayed extent covering
+- * @es->lblk if it exists, otherwise, the next extent after @es->lblk.
++ * ext4_es_find_extent_range - find extent with specified status within block
++ *                             range or next extent following block range in
++ *                             extents status tree
+  *
+- * @inode: the inode which owns delayed extents
+- * @lblk: the offset where we start to search
+- * @end: the offset where we stop to search
+- * @es: delayed extent that we found
++ * @inode - file containing the range
++ * @matching_fn - pointer to function that matches extents with desired status
++ * @lblk - logical block defining start of range
++ * @end - logical block defining end of range
++ * @es - extent found, if any
++ *
++ * Find the first extent within the block range specified by @lblk and @end
++ * in the extents status tree that satisfies @matching_fn.  If a match
++ * is found, it's returned in @es.  If not, and a matching extent is found
++ * beyond the block range, it's returned in @es.  If no match is found, an
++ * extent is returned in @es whose es_lblk, es_len, and es_pblk components
++ * are 0.
+  */
+-void ext4_es_find_delayed_extent_range(struct inode *inode,
+-				 ext4_lblk_t lblk, ext4_lblk_t end,
+-				 struct extent_status *es)
++static void __es_find_extent_range(struct inode *inode,
++				   int (*matching_fn)(struct extent_status *es),
++				   ext4_lblk_t lblk, ext4_lblk_t end,
++				   struct extent_status *es)
+ {
+ 	struct ext4_es_tree *tree = NULL;
+ 	struct extent_status *es1 = NULL;
+ 	struct rb_node *node;
+ 
+-	BUG_ON(es == NULL);
+-	BUG_ON(end < lblk);
+-	trace_ext4_es_find_delayed_extent_range_enter(inode, lblk);
++	WARN_ON(es == NULL);
++	WARN_ON(end < lblk);
+ 
+-	read_lock(&EXT4_I(inode)->i_es_lock);
+ 	tree = &EXT4_I(inode)->i_es_tree;
+ 
+-	/* find extent in cache firstly */
++	/* see if the extent has been cached */
+ 	es->es_lblk = es->es_len = es->es_pblk = 0;
+ 	if (tree->cache_es) {
+ 		es1 = tree->cache_es;
+@@ -271,28 +280,133 @@ void ext4_es_find_delayed_extent_range(struct inode *inode,
+ 	es1 = __es_tree_search(&tree->root, lblk);
+ 
+ out:
+-	if (es1 && !ext4_es_is_delayed(es1)) {
++	if (es1 && !matching_fn(es1)) {
+ 		while ((node = rb_next(&es1->rb_node)) != NULL) {
+ 			es1 = rb_entry(node, struct extent_status, rb_node);
+ 			if (es1->es_lblk > end) {
+ 				es1 = NULL;
+ 				break;
+ 			}
+-			if (ext4_es_is_delayed(es1))
++			if (matching_fn(es1))
+ 				break;
+ 		}
+ 	}
+ 
+-	if (es1 && ext4_es_is_delayed(es1)) {
++	if (es1 && matching_fn(es1)) {
+ 		tree->cache_es = es1;
+ 		es->es_lblk = es1->es_lblk;
+ 		es->es_len = es1->es_len;
+ 		es->es_pblk = es1->es_pblk;
+ 	}
+ 
++}
++
++/*
++ * Locking for __es_find_extent_range() for external use
++ */
++void ext4_es_find_extent_range(struct inode *inode,
++			       int (*matching_fn)(struct extent_status *es),
++			       ext4_lblk_t lblk, ext4_lblk_t end,
++			       struct extent_status *es)
++{
++	trace_ext4_es_find_extent_range_enter(inode, lblk);
++
++	read_lock(&EXT4_I(inode)->i_es_lock);
++	__es_find_extent_range(inode, matching_fn, lblk, end, es);
++	read_unlock(&EXT4_I(inode)->i_es_lock);
++
++	trace_ext4_es_find_extent_range_exit(inode, es);
++}
++
++/*
++ * __es_scan_range - search block range for block with specified status
++ *                   in extents status tree
++ *
++ * @inode - file containing the range
++ * @matching_fn - pointer to function that matches extents with desired status
++ * @lblk - logical block defining start of range
++ * @end - logical block defining end of range
++ *
++ * Returns true if at least one block in the specified block range satisfies
++ * the criterion specified by @matching_fn, and false if not.  If at least
++ * one extent has the specified status, then there is at least one block
++ * in the cluster with that status.  Should only be called by code that has
++ * taken i_es_lock.
++ */
++static bool __es_scan_range(struct inode *inode,
++			    int (*matching_fn)(struct extent_status *es),
++			    ext4_lblk_t start, ext4_lblk_t end)
++{
++	struct extent_status es;
++
++	__es_find_extent_range(inode, matching_fn, start, end, &es);
++	if (es.es_len == 0)
++		return false;   /* no matching extent in the tree */
++	else if (es.es_lblk <= start &&
++		 start < es.es_lblk + es.es_len)
++		return true;
++	else if (start <= es.es_lblk && es.es_lblk <= end)
++		return true;
++	else
++		return false;
++}
++/*
++ * Locking for __es_scan_range() for external use
++ */
++bool ext4_es_scan_range(struct inode *inode,
++			int (*matching_fn)(struct extent_status *es),
++			ext4_lblk_t lblk, ext4_lblk_t end)
++{
++	bool ret;
++
++	read_lock(&EXT4_I(inode)->i_es_lock);
++	ret = __es_scan_range(inode, matching_fn, lblk, end);
++	read_unlock(&EXT4_I(inode)->i_es_lock);
++
++	return ret;
++}
++
++/*
++ * __es_scan_clu - search cluster for block with specified status in
++ *                 extents status tree
++ *
++ * @inode - file containing the cluster
++ * @matching_fn - pointer to function that matches extents with desired status
++ * @lblk - logical block in cluster to be searched
++ *
++ * Returns true if at least one extent in the cluster containing @lblk
++ * satisfies the criterion specified by @matching_fn, and false if not.  If at
++ * least one extent has the specified status, then there is at least one block
++ * in the cluster with that status.  Should only be called by code that has
++ * taken i_es_lock.
++ */
++static bool __es_scan_clu(struct inode *inode,
++			  int (*matching_fn)(struct extent_status *es),
++			  ext4_lblk_t lblk)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	ext4_lblk_t lblk_start, lblk_end;
++
++	lblk_start = EXT4_LBLK_CMASK(sbi, lblk);
++	lblk_end = lblk_start + sbi->s_cluster_ratio - 1;
++
++	return __es_scan_range(inode, matching_fn, lblk_start, lblk_end);
++}
++
++/*
++ * Locking for __es_scan_clu() for external use
++ */
++bool ext4_es_scan_clu(struct inode *inode,
++		      int (*matching_fn)(struct extent_status *es),
++		      ext4_lblk_t lblk)
++{
++	bool ret;
++
++	read_lock(&EXT4_I(inode)->i_es_lock);
++	ret = __es_scan_clu(inode, matching_fn, lblk);
+ 	read_unlock(&EXT4_I(inode)->i_es_lock);
+ 
+-	trace_ext4_es_find_delayed_extent_range_exit(inode, es);
++	return ret;
+ }
+ 
+ static void ext4_es_list_add(struct inode *inode)
+@@ -1250,3 +1364,242 @@ static int es_reclaim_extents(struct ext4_inode_info *ei, int *nr_to_scan)
+ 	ei->i_es_tree.cache_es = NULL;
+ 	return nr_shrunk;
+ }
++
++#ifdef ES_DEBUG__
++static void ext4_print_pending_tree(struct inode *inode)
++{
++	struct ext4_pending_tree *tree;
++	struct rb_node *node;
++	struct pending_reservation *pr;
++
++	printk(KERN_DEBUG "pending reservations for inode %lu:", inode->i_ino);
++	tree = &EXT4_I(inode)->i_pending_tree;
++	node = rb_first(&tree->root);
++	while (node) {
++		pr = rb_entry(node, struct pending_reservation, rb_node);
++		printk(KERN_DEBUG " %u", pr->lclu);
++		node = rb_next(node);
++	}
++	printk(KERN_DEBUG "\n");
++}
++#else
++#define ext4_print_pending_tree(inode)
++#endif
++
++int __init ext4_init_pending(void)
++{
++	ext4_pending_cachep = kmem_cache_create("ext4_pending_reservation",
++					   sizeof(struct pending_reservation),
++					   0, (SLAB_RECLAIM_ACCOUNT), NULL);
++	if (ext4_pending_cachep == NULL)
++		return -ENOMEM;
++	return 0;
++}
++
++void ext4_exit_pending(void)
++{
++	kmem_cache_destroy(ext4_pending_cachep);
++}
++
++void ext4_init_pending_tree(struct ext4_pending_tree *tree)
++{
++	tree->root = RB_ROOT;
++}
++
++/*
++ * __get_pending - retrieve a pointer to a pending reservation
++ *
++ * @inode - file containing the pending cluster reservation
++ * @lclu - logical cluster of interest
++ *
++ * Returns a pointer to a pending reservation if it's a member of
++ * the set, and NULL if not.  Must be called holding i_es_lock.
++ */
++static struct pending_reservation *__get_pending(struct inode *inode,
++						 ext4_lblk_t lclu)
++{
++	struct ext4_pending_tree *tree;
++	struct rb_node *node;
++	struct pending_reservation *pr = NULL;
++
++	tree = &EXT4_I(inode)->i_pending_tree;
++	node = (&tree->root)->rb_node;
++
++	while (node) {
++		pr = rb_entry(node, struct pending_reservation, rb_node);
++		if (lclu < pr->lclu)
++			node = node->rb_left;
++		else if (lclu > pr->lclu)
++			node = node->rb_right;
++		else if (lclu == pr->lclu)
++			return pr;
++	}
++	return NULL;
++}
++
++/*
++ * __insert_pending - adds a pending cluster reservation to the set of
++ *                    pending reservations
++ *
++ * @inode - file containing the cluster
++ * @lblk - logical block in the cluster to be added
++ *
++ * Returns 0 on successful insertion and -ENOMEM on failure.  If the
++ * pending reservation is already in the set, returns successfully.
++ */
++static int __insert_pending(struct inode *inode, ext4_lblk_t lblk)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct ext4_pending_tree *tree = &EXT4_I(inode)->i_pending_tree;
++	struct rb_node **p = &tree->root.rb_node;
++	struct rb_node *parent = NULL;
++	struct pending_reservation *pr;
++	ext4_lblk_t lclu;
++	int ret = 0;
++
++	lclu = EXT4_B2C(sbi, lblk);
++	/* search to find parent for insertion */
++	while (*p) {
++		parent = *p;
++		pr = rb_entry(parent, struct pending_reservation, rb_node);
++
++		if (lclu < pr->lclu) {
++			p = &(*p)->rb_left;
++		} else if (lclu > pr->lclu) {
++			p = &(*p)->rb_right;
++		} else {
++			/* pending reservation already inserted */
++			goto out;
++		}
++	}
++
++	pr = kmem_cache_alloc(ext4_pending_cachep, GFP_ATOMIC);
++	if (pr == NULL) {
++		ret = -ENOMEM;
++		goto out;
++	}
++	pr->lclu = lclu;
++
++	rb_link_node(&pr->rb_node, parent, p);
++	rb_insert_color(&pr->rb_node, &tree->root);
++
++out:
++	return ret;
++}
++
++/*
++ * __remove_pending - removes a pending cluster reservation from the set
++ *                    of pending reservations
++ *
++ * @inode - file containing the cluster
++ * @lblk - logical block in the pending cluster reservation to be removed
++ *
++ * Returns successfully if pending reservation is not a member of the set.
++ */
++static void __remove_pending(struct inode *inode, ext4_lblk_t lblk)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct pending_reservation *pr;
++	struct ext4_pending_tree *tree;
++
++	pr = __get_pending(inode, EXT4_B2C(sbi, lblk));
++	if (pr != NULL) {
++		tree = &EXT4_I(inode)->i_pending_tree;
++		rb_erase(&pr->rb_node, &tree->root);
++		kmem_cache_free(ext4_pending_cachep, pr);
++	}
++}
++
++/*
++ * ext4_remove_pending - removes a pending cluster reservation from the set
++ *                       of pending reservations
++ *
++ * @inode - file containing the cluster
++ * @lblk - logical block in the pending cluster reservation to be removed
++ *
++ * Locking for external use of __remove_pending.
++ */
++void ext4_remove_pending(struct inode *inode, ext4_lblk_t lblk)
++{
++	struct ext4_inode_info *ei = EXT4_I(inode);
++
++	write_lock(&ei->i_es_lock);
++	__remove_pending(inode, lblk);
++	write_unlock(&ei->i_es_lock);
++}
++
++/*
++ * ext4_is_pending - determine whether a cluster has a pending reservation
++ *                   on it
++ *
++ * @inode - file containing the cluster
++ * @lblk - logical block in the cluster
++ *
++ * Returns true if there's a pending reservation for the cluster in the
++ * set of pending reservations, and false if not.
++ */
++bool ext4_is_pending(struct inode *inode, ext4_lblk_t lblk)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	struct ext4_inode_info *ei = EXT4_I(inode);
++	bool ret;
++
++	read_lock(&ei->i_es_lock);
++	ret = (bool)(__get_pending(inode, EXT4_B2C(sbi, lblk)) != NULL);
++	read_unlock(&ei->i_es_lock);
++
++	return ret;
++}
++
++/*
++ * ext4_es_insert_delayed_block - adds a delayed block to the extents status
++ *                                tree, adding a pending reservation where
++ *                                needed
++ *
++ * @inode - file containing the newly added block
++ * @lblk - logical block to be added
++ * @allocated - indicates whether a physical cluster has been allocated for
++ *              the logical cluster that contains the block
++ *
++ * Returns 0 on success, negative error code on failure.
++ */
++int ext4_es_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk,
++				 bool allocated)
++{
++	struct extent_status newes;
++	int err = 0;
++
++	es_debug("add [%u/1) delayed to extent status tree of inode %lu\n",
++		 lblk, inode->i_ino);
++
++	newes.es_lblk = lblk;
++	newes.es_len = 1;
++	ext4_es_store_pblock_status(&newes, ~0, EXTENT_STATUS_DELAYED);
++	trace_ext4_es_insert_delayed_block(inode, &newes, allocated);
++
++	ext4_es_insert_extent_check(inode, &newes);
++
++	write_lock(&EXT4_I(inode)->i_es_lock);
++
++	err = __es_remove_extent(inode, lblk, lblk);
++	if (err != 0)
++		goto error;
++retry:
++	err = __es_insert_extent(inode, &newes);
++	if (err == -ENOMEM && __es_shrink(EXT4_SB(inode->i_sb),
++					  128, EXT4_I(inode)))
++		goto retry;
++	if (err != 0)
++		goto error;
++
++	if (allocated)
++		__insert_pending(inode, lblk);
++
++error:
++	write_unlock(&EXT4_I(inode)->i_es_lock);
++
++	ext4_es_print_tree(inode);
++	ext4_print_pending_tree(inode);
++
++	return err;
++}
+diff --git a/fs/ext4/extents_status.h b/fs/ext4/extents_status.h
+index 8efdeb903d6ba..9d3c676ec623a 100644
+--- a/fs/ext4/extents_status.h
++++ b/fs/ext4/extents_status.h
+@@ -78,6 +78,51 @@ struct ext4_es_stats {
+ 	struct percpu_counter es_stats_shk_cnt;
+ };
+ 
++/*
++ * Pending cluster reservations for bigalloc file systems
++ *
++ * A cluster with a pending reservation is a logical cluster shared by at
++ * least one extent in the extents status tree with delayed and unwritten
++ * status and at least one other written or unwritten extent.  The
++ * reservation is said to be pending because a cluster reservation would
++ * have to be taken in the event all blocks in the cluster shared with
++ * written or unwritten extents were deleted while the delayed and
++ * unwritten blocks remained.
++ *
++ * The set of pending cluster reservations is an auxiliary data structure
++ * used with the extents status tree to implement reserved cluster/block
++ * accounting for bigalloc file systems.  The set is kept in memory and
++ * records all pending cluster reservations.
++ *
++ * Its primary function is to avoid the need to read extents from the
++ * disk when invalidating pages as a result of a truncate, punch hole, or
++ * collapse range operation.  Page invalidation requires a decrease in the
++ * reserved cluster count if it results in the removal of all delayed
++ * and unwritten extents (blocks) from a cluster that is not shared with a
++ * written or unwritten extent, and no decrease otherwise.  Determining
++ * whether the cluster is shared can be done by searching for a pending
++ * reservation on it.
++ *
++ * Secondarily, it provides a potentially faster method for determining
++ * whether the reserved cluster count should be increased when a physical
++ * cluster is deallocated as a result of a truncate, punch hole, or
++ * collapse range operation.  The necessary information is also present
++ * in the extents status tree, but might be more rapidly accessed in
++ * the pending reservation set in many cases due to smaller size.
++ *
++ * The pending cluster reservation set is implemented as a red-black tree
++ * with the goal of minimizing per page search time overhead.
++ */
++
++struct pending_reservation {
++	struct rb_node rb_node;
++	ext4_lblk_t lclu;
++};
++
++struct ext4_pending_tree {
++	struct rb_root root;
++};
++
+ extern int __init ext4_init_es(void);
+ extern void ext4_exit_es(void);
+ extern void ext4_es_init_tree(struct ext4_es_tree *tree);
+@@ -90,11 +135,18 @@ extern void ext4_es_cache_extent(struct inode *inode, ext4_lblk_t lblk,
+ 				 unsigned int status);
+ extern int ext4_es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
+ 				 ext4_lblk_t len);
+-extern void ext4_es_find_delayed_extent_range(struct inode *inode,
+-					ext4_lblk_t lblk, ext4_lblk_t end,
+-					struct extent_status *es);
++extern void ext4_es_find_extent_range(struct inode *inode,
++				      int (*match_fn)(struct extent_status *es),
++				      ext4_lblk_t lblk, ext4_lblk_t end,
++				      struct extent_status *es);
+ extern int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
+ 				 struct extent_status *es);
++extern bool ext4_es_scan_range(struct inode *inode,
++			       int (*matching_fn)(struct extent_status *es),
++			       ext4_lblk_t lblk, ext4_lblk_t end);
++extern bool ext4_es_scan_clu(struct inode *inode,
++			     int (*matching_fn)(struct extent_status *es),
++			     ext4_lblk_t lblk);
+ 
+ static inline unsigned int ext4_es_status(struct extent_status *es)
+ {
+@@ -126,6 +178,16 @@ static inline int ext4_es_is_hole(struct extent_status *es)
+ 	return (ext4_es_type(es) & EXTENT_STATUS_HOLE) != 0;
+ }
+ 
++static inline int ext4_es_is_mapped(struct extent_status *es)
++{
++	return (ext4_es_is_written(es) || ext4_es_is_unwritten(es));
++}
++
++static inline int ext4_es_is_delonly(struct extent_status *es)
++{
++	return (ext4_es_is_delayed(es) && !ext4_es_is_unwritten(es));
++}
++
+ static inline void ext4_es_set_referenced(struct extent_status *es)
+ {
+ 	es->es_pblk |= ((ext4_fsblk_t)EXTENT_STATUS_REFERENCED) << ES_SHIFT;
+@@ -175,4 +237,12 @@ extern void ext4_es_unregister_shrinker(struct ext4_sb_info *sbi);
+ 
+ extern int ext4_seq_es_shrinker_info_show(struct seq_file *seq, void *v);
+ 
++extern int __init ext4_init_pending(void);
++extern void ext4_exit_pending(void);
++extern void ext4_init_pending_tree(struct ext4_pending_tree *tree);
++extern void ext4_remove_pending(struct inode *inode, ext4_lblk_t lblk);
++extern bool ext4_is_pending(struct inode *inode, ext4_lblk_t lblk);
++extern int ext4_es_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk,
++					bool allocated);
++
+ #endif /* _EXT4_EXTENTS_STATUS_H */
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index a5442528a60d2..0cc0d22c08569 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -148,6 +148,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ 	struct super_block *sb = inode->i_sb;
+ 	Indirect *p = chain;
+ 	struct buffer_head *bh;
++	unsigned int key;
+ 	int ret = -EIO;
+ 
+ 	*err = 0;
+@@ -156,7 +157,13 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ 	if (!p->key)
+ 		goto no_block;
+ 	while (--depth) {
+-		bh = sb_getblk(sb, le32_to_cpu(p->key));
++		key = le32_to_cpu(p->key);
++		if (key > ext4_blocks_count(EXT4_SB(sb)->s_es)) {
++			/* the block was out of range */
++			ret = -EFSCORRUPTED;
++			goto failure;
++		}
++		bh = sb_getblk(sb, key);
+ 		if (unlikely(!bh)) {
+ 			ret = -ENOMEM;
+ 			goto failure;
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 95139c992a540..17d120ac20100 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -207,6 +207,8 @@ void ext4_evict_inode(struct inode *inode)
+ 
+ 	trace_ext4_evict_inode(inode);
+ 
++	if (EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)
++		ext4_evict_ea_inode(inode);
+ 	if (inode->i_nlink) {
+ 		/*
+ 		 * When journalling data dirty buffers are tracked only in the
+@@ -598,8 +600,8 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+ 		if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
+ 		    !(status & EXTENT_STATUS_WRITTEN) &&
+-		    ext4_find_delalloc_range(inode, map->m_lblk,
+-					     map->m_lblk + map->m_len - 1))
++		    ext4_es_scan_range(inode, &ext4_es_is_delayed, map->m_lblk,
++				       map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+ 		ret = ext4_es_insert_extent(inode, map->m_lblk,
+ 					    map->m_len, map->m_pblk, status);
+@@ -722,8 +724,8 @@ found:
+ 				EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
+ 		if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
+ 		    !(status & EXTENT_STATUS_WRITTEN) &&
+-		    ext4_find_delalloc_range(inode, map->m_lblk,
+-					     map->m_lblk + map->m_len - 1))
++		    ext4_es_scan_range(inode, &ext4_es_is_delayed, map->m_lblk,
++				       map->m_lblk + map->m_len - 1))
+ 			status |= EXTENT_STATUS_DELAYED;
+ 		ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
+ 					    map->m_pblk, status);
+@@ -1715,7 +1717,7 @@ static void ext4_da_page_release_reservation(struct page *page,
+ 		lblk = (page->index << (PAGE_SHIFT - inode->i_blkbits)) +
+ 			((num_clusters - 1) << sbi->s_cluster_bits);
+ 		if (sbi->s_cluster_ratio == 1 ||
+-		    !ext4_find_delalloc_cluster(inode, lblk))
++		    !ext4_es_scan_clu(inode, &ext4_es_is_delayed, lblk))
+ 			ext4_da_release_space(inode, 1);
+ 
+ 		num_clusters--;
+@@ -1821,6 +1823,65 @@ static int ext4_bh_delay_or_unwritten(handle_t *handle, struct buffer_head *bh)
+ 	return (buffer_delay(bh) || buffer_unwritten(bh)) && buffer_dirty(bh);
+ }
+ 
++/*
++ * ext4_insert_delayed_block - adds a delayed block to the extents status
++ *                             tree, incrementing the reserved cluster/block
++ *                             count or making a pending reservation
++ *                             where needed
++ *
++ * @inode - file containing the newly added block
++ * @lblk - logical block to be added
++ *
++ * Returns 0 on success, negative error code on failure.
++ */
++static int ext4_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	int ret;
++	bool allocated = false;
++
++	/*
++	 * If the cluster containing lblk is shared with a delayed,
++	 * written, or unwritten extent in a bigalloc file system, it's
++	 * already been accounted for and does not need to be reserved.
++	 * A pending reservation must be made for the cluster if it's
++	 * shared with a written or unwritten extent and doesn't already
++	 * have one.  Written and unwritten extents can be purged from the
++	 * extents status tree if the system is under memory pressure, so
++	 * it's necessary to examine the extent tree if a search of the
++	 * extents status tree doesn't get a match.
++	 */
++	if (sbi->s_cluster_ratio == 1) {
++		ret = ext4_da_reserve_space(inode);
++		if (ret != 0)   /* ENOSPC */
++			goto errout;
++	} else {   /* bigalloc */
++		if (!ext4_es_scan_clu(inode, &ext4_es_is_delonly, lblk)) {
++			if (!ext4_es_scan_clu(inode,
++					      &ext4_es_is_mapped, lblk)) {
++				ret = ext4_clu_mapped(inode,
++						      EXT4_B2C(sbi, lblk));
++				if (ret < 0)
++					goto errout;
++				if (ret == 0) {
++					ret = ext4_da_reserve_space(inode);
++					if (ret != 0)   /* ENOSPC */
++						goto errout;
++				} else {
++					allocated = true;
++				}
++			} else {
++				allocated = true;
++			}
++		}
++	}
++
++	ret = ext4_es_insert_delayed_block(inode, lblk, allocated);
++
++errout:
++	return ret;
++}
++
+ /*
+  * This function is grabs code from the very beginning of
+  * ext4_map_blocks, but assumes that the caller is from delayed write
+@@ -1900,28 +1961,14 @@ static int ext4_da_map_blocks(struct inode *inode, sector_t iblock,
+ add_delayed:
+ 	if (retval == 0) {
+ 		int ret;
++
+ 		/*
+ 		 * XXX: __block_prepare_write() unmaps passed block,
+ 		 * is it OK?
+ 		 */
+-		/*
+-		 * If the block was allocated from previously allocated cluster,
+-		 * then we don't need to reserve it again. However we still need
+-		 * to reserve metadata for every block we're going to write.
+-		 */
+-		if (EXT4_SB(inode->i_sb)->s_cluster_ratio == 1 ||
+-		    !ext4_find_delalloc_cluster(inode, map->m_lblk)) {
+-			ret = ext4_da_reserve_space(inode);
+-			if (ret) {
+-				/* not enough space to reserve */
+-				retval = ret;
+-				goto out_unlock;
+-			}
+-		}
+ 
+-		ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
+-					    ~0, EXTENT_STATUS_DELAYED);
+-		if (ret) {
++		ret = ext4_insert_delayed_block(inode, map->m_lblk);
++		if (ret != 0) {
+ 			retval = ret;
+ 			goto out_unlock;
+ 		}
+@@ -3517,7 +3564,8 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ 			ext4_lblk_t end = map.m_lblk + map.m_len - 1;
+ 			struct extent_status es;
+ 
+-			ext4_es_find_delayed_extent_range(inode, map.m_lblk, end, &es);
++			ext4_es_find_extent_range(inode, &ext4_es_is_delayed,
++						  map.m_lblk, end, &es);
+ 
+ 			if (!es.es_len || es.es_lblk > end) {
+ 				/* entire range is a hole */
+@@ -4549,7 +4597,7 @@ int ext4_truncate(struct inode *inode)
+ 	trace_ext4_truncate_enter(inode);
+ 
+ 	if (!ext4_can_truncate(inode))
+-		return 0;
++		goto out_trace;
+ 
+ 	ext4_clear_inode_flag(inode, EXT4_INODE_EOFBLOCKS);
+ 
+@@ -4560,16 +4608,15 @@ int ext4_truncate(struct inode *inode)
+ 		int has_inline = 1;
+ 
+ 		err = ext4_inline_data_truncate(inode, &has_inline);
+-		if (err)
+-			return err;
+-		if (has_inline)
+-			return 0;
++		if (err || has_inline)
++			goto out_trace;
+ 	}
+ 
+ 	/* If we zero-out tail of the page, we have to create jinode for jbd2 */
+ 	if (inode->i_size & (inode->i_sb->s_blocksize - 1)) {
+-		if (ext4_inode_attach_jinode(inode) < 0)
+-			return 0;
++		err = ext4_inode_attach_jinode(inode);
++		if (err)
++			goto out_trace;
+ 	}
+ 
+ 	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+@@ -4578,8 +4625,10 @@ int ext4_truncate(struct inode *inode)
+ 		credits = ext4_blocks_for_truncate(inode);
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, credits);
+-	if (IS_ERR(handle))
+-		return PTR_ERR(handle);
++	if (IS_ERR(handle)) {
++		err = PTR_ERR(handle);
++		goto out_trace;
++	}
+ 
+ 	if (inode->i_size & (inode->i_sb->s_blocksize - 1))
+ 		ext4_block_truncate_page(handle, mapping, inode->i_size);
+@@ -4628,6 +4677,7 @@ out_stop:
+ 	ext4_mark_inode_dirty(handle, inode);
+ 	ext4_journal_stop(handle);
+ 
++out_trace:
+ 	trace_ext4_truncate_exit(inode);
+ 	return err;
+ }
+@@ -4663,9 +4713,17 @@ static int __ext4_get_inode_loc(struct inode *inode,
+ 	inodes_per_block = EXT4_SB(sb)->s_inodes_per_block;
+ 	inode_offset = ((inode->i_ino - 1) %
+ 			EXT4_INODES_PER_GROUP(sb));
+-	block = ext4_inode_table(sb, gdp) + (inode_offset / inodes_per_block);
+ 	iloc->offset = (inode_offset % inodes_per_block) * EXT4_INODE_SIZE(sb);
+ 
++	block = ext4_inode_table(sb, gdp);
++	if ((block <= le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) ||
++	    (block >= ext4_blocks_count(EXT4_SB(sb)->s_es))) {
++		ext4_error(sb, "Invalid inode table block %llu in "
++			   "block_group %u", block, iloc->block_group);
++		return -EFSCORRUPTED;
++	}
++	block += (inode_offset / inodes_per_block);
++
+ 	bh = sb_getblk(sb, block);
+ 	if (unlikely(!bh))
+ 		return -ENOMEM;
+@@ -6033,6 +6091,14 @@ static int __ext4_expand_extra_isize(struct inode *inode,
+ 		return 0;
+ 	}
+ 
++	/*
++	 * We may need to allocate external xattr block so we need quotas
++	 * initialized. Here we can be called with various locks held so we
++	 * cannot afford to initialize quotas ourselves. So just bail.
++	 */
++	if (dquot_initialize_needed(inode))
++		return -EAGAIN;
++
+ 	/* try to expand with EAs present */
+ 	error = ext4_expand_extra_isize_ea(inode, new_extra_isize,
+ 					   raw_inode, handle);
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 484cb68c34d92..fd0353017d897 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -169,7 +169,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	/* Protect extent tree against block allocations via delalloc */
+ 	ext4_double_down_write_data_sem(inode, inode_bl);
+ 
+-	if (inode_bl->i_nlink == 0) {
++	if (is_bad_inode(inode_bl) || !S_ISREG(inode_bl->i_mode)) {
+ 		/* this inode has never been used as a BOOT_LOADER */
+ 		set_nlink(inode_bl, 1);
+ 		i_uid_write(inode_bl, 0);
+@@ -449,6 +449,10 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 	if (ext4_is_quota_file(inode))
+ 		return err;
+ 
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
++
+ 	err = ext4_get_inode_loc(inode, &iloc);
+ 	if (err)
+ 		return err;
+@@ -464,10 +468,6 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 		brelse(iloc.bh);
+ 	}
+ 
+-	err = dquot_initialize(inode);
+-	if (err)
+-		return err;
+-
+ 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
+ 		EXT4_QUOTA_INIT_BLOCKS(sb) +
+ 		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a878b9a8d9ead..2c68591102bd4 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3868,6 +3868,9 @@ static int ext4_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return -EXDEV;
+ 
+ 	retval = dquot_initialize(old.dir);
++	if (retval)
++		return retval;
++	retval = dquot_initialize(old.inode);
+ 	if (retval)
+ 		return retval;
+ 	retval = dquot_initialize(new.dir);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 8737d1bcdb6ef..288213cad9143 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1567,8 +1567,8 @@ exit_journal:
+ 		int meta_bg = ext4_has_feature_meta_bg(sb);
+ 		sector_t old_gdb = 0;
+ 
+-		update_backups(sb, sbi->s_sbh->b_blocknr, (char *)es,
+-			       sizeof(struct ext4_super_block), 0);
++		update_backups(sb, ext4_group_first_block_no(sb, 0),
++			       (char *)es, sizeof(struct ext4_super_block), 0);
+ 		for (; gdb_num <= gdb_num_end; gdb_num++) {
+ 			struct buffer_head *gdb_bh;
+ 
+@@ -1775,7 +1775,7 @@ errout:
+ 		if (test_opt(sb, DEBUG))
+ 			printk(KERN_DEBUG "EXT4-fs: extended group to %llu "
+ 			       "blocks\n", ext4_blocks_count(es));
+-		update_backups(sb, EXT4_SB(sb)->s_sbh->b_blocknr,
++		update_backups(sb, ext4_group_first_block_no(sb, 0),
+ 			       (char *)es, sizeof(struct ext4_super_block), 0);
+ 	}
+ 	return err;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index f00cc301da369..73a431b6e720f 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1081,6 +1081,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
+ 		return NULL;
+ 
+ 	inode_set_iversion(&ei->vfs_inode, 1);
++	ei->i_flags = 0;
+ 	spin_lock_init(&ei->i_raw_lock);
+ 	INIT_LIST_HEAD(&ei->i_prealloc_list);
+ 	spin_lock_init(&ei->i_prealloc_lock);
+@@ -1094,6 +1095,7 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
+ 	ei->i_da_metadata_calc_len = 0;
+ 	ei->i_da_metadata_calc_last_lblock = 0;
+ 	spin_lock_init(&(ei->i_block_reservation_lock));
++	ext4_init_pending_tree(&ei->i_pending_tree);
+ #ifdef CONFIG_QUOTA
+ 	ei->i_reserved_quota = 0;
+ 	memset(&ei->i_dquot, 0, sizeof(ei->i_dquot));
+@@ -4302,30 +4304,31 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 		   ext4_has_feature_journal_needs_recovery(sb)) {
+ 		ext4_msg(sb, KERN_ERR, "required journal recovery "
+ 		       "suppressed and not mounted read-only");
+-		goto failed_mount_wq;
++		goto failed_mount3a;
+ 	} else {
+ 		/* Nojournal mode, all journal mount options are illegal */
+-		if (test_opt2(sb, EXPLICIT_JOURNAL_CHECKSUM)) {
+-			ext4_msg(sb, KERN_ERR, "can't mount with "
+-				 "journal_checksum, fs mounted w/o journal");
+-			goto failed_mount_wq;
+-		}
+ 		if (test_opt(sb, JOURNAL_ASYNC_COMMIT)) {
+ 			ext4_msg(sb, KERN_ERR, "can't mount with "
+ 				 "journal_async_commit, fs mounted w/o journal");
+-			goto failed_mount_wq;
++			goto failed_mount3a;
++		}
++
++		if (test_opt2(sb, EXPLICIT_JOURNAL_CHECKSUM)) {
++			ext4_msg(sb, KERN_ERR, "can't mount with "
++				 "journal_checksum, fs mounted w/o journal");
++			goto failed_mount3a;
+ 		}
+ 		if (sbi->s_commit_interval != JBD2_DEFAULT_MAX_COMMIT_AGE*HZ) {
+ 			ext4_msg(sb, KERN_ERR, "can't mount with "
+ 				 "commit=%lu, fs mounted w/o journal",
+ 				 sbi->s_commit_interval / HZ);
+-			goto failed_mount_wq;
++			goto failed_mount3a;
+ 		}
+ 		if (EXT4_MOUNT_DATA_FLAGS &
+ 		    (sbi->s_mount_opt ^ sbi->s_def_mount_opt)) {
+ 			ext4_msg(sb, KERN_ERR, "can't mount with "
+ 				 "data=, fs mounted w/o journal");
+-			goto failed_mount_wq;
++			goto failed_mount3a;
+ 		}
+ 		sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM;
+ 		clear_opt(sb, JOURNAL_CHECKSUM);
+@@ -4731,7 +4734,7 @@ static struct inode *ext4_get_journal_inode(struct super_block *sb,
+ 
+ 	jbd_debug(2, "Journal inode found at %p: %lld bytes\n",
+ 		  journal_inode, journal_inode->i_size);
+-	if (!S_ISREG(journal_inode->i_mode)) {
++	if (!S_ISREG(journal_inode->i_mode) || IS_ENCRYPTED(journal_inode)) {
+ 		ext4_msg(sb, KERN_ERR, "invalid journal inode");
+ 		iput(journal_inode);
+ 		return NULL;
+@@ -5839,6 +5842,20 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ 	return err;
+ }
+ 
++static inline bool ext4_check_quota_inum(int type, unsigned long qf_inum)
++{
++	switch (type) {
++	case USRQUOTA:
++		return qf_inum == EXT4_USR_QUOTA_INO;
++	case GRPQUOTA:
++		return qf_inum == EXT4_GRP_QUOTA_INO;
++	case PRJQUOTA:
++		return qf_inum >= EXT4_GOOD_OLD_FIRST_INO;
++	default:
++		BUG();
++	}
++}
++
+ static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
+ 			     unsigned int flags)
+ {
+@@ -5855,9 +5872,16 @@ static int ext4_quota_enable(struct super_block *sb, int type, int format_id,
+ 	if (!qf_inums[type])
+ 		return -EPERM;
+ 
++	if (!ext4_check_quota_inum(type, qf_inums[type])) {
++		ext4_error(sb, "Bad quota inum: %lu, type: %d",
++				qf_inums[type], type);
++		return -EUCLEAN;
++	}
++
+ 	qf_inode = ext4_iget(sb, qf_inums[type], EXT4_IGET_SPECIAL);
+ 	if (IS_ERR(qf_inode)) {
+-		ext4_error(sb, "Bad quota inode # %lu", qf_inums[type]);
++		ext4_error(sb, "Bad quota inode: %lu, type: %d",
++				qf_inums[type], type);
+ 		return PTR_ERR(qf_inode);
+ 	}
+ 
+@@ -5896,8 +5920,9 @@ static int ext4_enable_quotas(struct super_block *sb)
+ 			if (err) {
+ 				ext4_warning(sb,
+ 					"Failed to enable quota tracking "
+-					"(type=%d, err=%d). Please run "
+-					"e2fsck to fix.", type, err);
++					"(type=%d, err=%d, ino=%lu). "
++					"Please run e2fsck to fix.", type,
++					err, qf_inums[type]);
+ 				for (type--; type >= 0; type--) {
+ 					struct inode *inode;
+ 
+@@ -6165,6 +6190,10 @@ static int __init ext4_init_fs(void)
+ 	if (err)
+ 		return err;
+ 
++	err = ext4_init_pending();
++	if (err)
++		goto out6;
++
+ 	err = ext4_init_pageio();
+ 	if (err)
+ 		goto out5;
+@@ -6203,6 +6232,8 @@ out3:
+ out4:
+ 	ext4_exit_pageio();
+ out5:
++	ext4_exit_pending();
++out6:
+ 	ext4_exit_es();
+ 
+ 	return err;
+@@ -6220,6 +6251,7 @@ static void __exit ext4_exit_fs(void)
+ 	ext4_exit_system_zone();
+ 	ext4_exit_pageio();
+ 	ext4_exit_es();
++	ext4_exit_pending();
+ }
+ 
+ MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others");
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 497649c69bba7..0772941bbe925 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -434,6 +434,21 @@ error:
+ 	return err;
+ }
+ 
++/* Remove entry from mbcache when EA inode is getting evicted */
++void ext4_evict_ea_inode(struct inode *inode)
++{
++	struct mb_cache_entry *oe;
++
++	if (!EA_INODE_CACHE(inode))
++		return;
++	/* Wait for entry to get unused so that we can remove it */
++	while ((oe = mb_cache_entry_delete_or_get(EA_INODE_CACHE(inode),
++			ext4_xattr_inode_get_hash(inode), inode->i_ino))) {
++		mb_cache_entry_wait_unused(oe);
++		mb_cache_entry_put(EA_INODE_CACHE(inode), oe);
++	}
++}
++
+ static int
+ ext4_xattr_inode_verify_hashes(struct inode *ea_inode,
+ 			       struct ext4_xattr_entry *entry, void *buffer,
+@@ -1019,10 +1034,8 @@ static int ext4_xattr_ensure_credits(handle_t *handle, struct inode *inode,
+ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ 				       int ref_change)
+ {
+-	struct mb_cache *ea_inode_cache = EA_INODE_CACHE(ea_inode);
+ 	struct ext4_iloc iloc;
+ 	s64 ref_count;
+-	u32 hash;
+ 	int ret;
+ 
+ 	inode_lock(ea_inode);
+@@ -1047,14 +1060,6 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ 
+ 			set_nlink(ea_inode, 1);
+ 			ext4_orphan_del(handle, ea_inode);
+-
+-			if (ea_inode_cache) {
+-				hash = ext4_xattr_inode_get_hash(ea_inode);
+-				mb_cache_entry_create(ea_inode_cache,
+-						      GFP_NOFS, hash,
+-						      ea_inode->i_ino,
+-						      true /* reusable */);
+-			}
+ 		}
+ 	} else {
+ 		WARN_ONCE(ref_count < 0, "EA inode %lu ref_count=%lld",
+@@ -1067,12 +1072,6 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
+ 
+ 			clear_nlink(ea_inode);
+ 			ext4_orphan_add(handle, ea_inode);
+-
+-			if (ea_inode_cache) {
+-				hash = ext4_xattr_inode_get_hash(ea_inode);
+-				mb_cache_entry_delete(ea_inode_cache, hash,
+-						      ea_inode->i_ino);
+-			}
+ 		}
+ 	}
+ 
+@@ -1253,6 +1252,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ 	if (error)
+ 		goto out;
+ 
++retry_ref:
+ 	lock_buffer(bh);
+ 	hash = le32_to_cpu(BHDR(bh)->h_hash);
+ 	ref = le32_to_cpu(BHDR(bh)->h_refcount);
+@@ -1262,9 +1262,18 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ 		 * This must happen under buffer lock for
+ 		 * ext4_xattr_block_set() to reliably detect freed block
+ 		 */
+-		if (ea_block_cache)
+-			mb_cache_entry_delete(ea_block_cache, hash,
+-					      bh->b_blocknr);
++		if (ea_block_cache) {
++			struct mb_cache_entry *oe;
++
++			oe = mb_cache_entry_delete_or_get(ea_block_cache, hash,
++							  bh->b_blocknr);
++			if (oe) {
++				unlock_buffer(bh);
++				mb_cache_entry_wait_unused(oe);
++				mb_cache_entry_put(ea_block_cache, oe);
++				goto retry_ref;
++			}
++		}
+ 		get_bh(bh);
+ 		unlock_buffer(bh);
+ 
+@@ -1288,7 +1297,7 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ 				ce = mb_cache_entry_get(ea_block_cache, hash,
+ 							bh->b_blocknr);
+ 				if (ce) {
+-					ce->e_reusable = 1;
++					set_bit(MBE_REUSABLE_B, &ce->e_flags);
+ 					mb_cache_entry_put(ea_block_cache, ce);
+ 				}
+ 			}
+@@ -1446,6 +1455,9 @@ static struct inode *ext4_xattr_inode_create(handle_t *handle,
+ 		if (!err)
+ 			err = ext4_inode_attach_jinode(ea_inode);
+ 		if (err) {
++			if (ext4_xattr_inode_dec_ref(handle, ea_inode))
++				ext4_warning_inode(ea_inode,
++					"cleanup dec ref error %d", err);
+ 			iput(ea_inode);
+ 			return ERR_PTR(err);
+ 		}
+@@ -1872,6 +1884,8 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ #define header(x) ((struct ext4_xattr_header *)(x))
+ 
+ 	if (s->base) {
++		int offset = (char *)s->here - bs->bh->b_data;
++
+ 		BUFFER_TRACE(bs->bh, "get_write_access");
+ 		error = ext4_journal_get_write_access(handle, bs->bh);
+ 		if (error)
+@@ -1886,9 +1900,20 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ 			 * ext4_xattr_block_set() to reliably detect modified
+ 			 * block
+ 			 */
+-			if (ea_block_cache)
+-				mb_cache_entry_delete(ea_block_cache, hash,
+-						      bs->bh->b_blocknr);
++			if (ea_block_cache) {
++				struct mb_cache_entry *oe;
++
++				oe = mb_cache_entry_delete_or_get(ea_block_cache,
++					hash, bs->bh->b_blocknr);
++				if (oe) {
++					/*
++					 * Xattr block is getting reused. Leave
++					 * it alone.
++					 */
++					mb_cache_entry_put(ea_block_cache, oe);
++					goto clone_block;
++				}
++			}
+ 			ea_bdebug(bs->bh, "modifying in-place");
+ 			error = ext4_xattr_set_entry(i, s, handle, inode,
+ 						     true /* is_block */);
+@@ -1903,50 +1928,47 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ 			if (error)
+ 				goto cleanup;
+ 			goto inserted;
+-		} else {
+-			int offset = (char *)s->here - bs->bh->b_data;
++		}
++clone_block:
++		unlock_buffer(bs->bh);
++		ea_bdebug(bs->bh, "cloning");
++		s->base = kmemdup(BHDR(bs->bh), bs->bh->b_size, GFP_NOFS);
++		error = -ENOMEM;
++		if (s->base == NULL)
++			goto cleanup;
++		s->first = ENTRY(header(s->base)+1);
++		header(s->base)->h_refcount = cpu_to_le32(1);
++		s->here = ENTRY(s->base + offset);
++		s->end = s->base + bs->bh->b_size;
+ 
+-			unlock_buffer(bs->bh);
+-			ea_bdebug(bs->bh, "cloning");
+-			s->base = kmalloc(bs->bh->b_size, GFP_NOFS);
+-			error = -ENOMEM;
+-			if (s->base == NULL)
++		/*
++		 * If existing entry points to an xattr inode, we need
++		 * to prevent ext4_xattr_set_entry() from decrementing
++		 * ref count on it because the reference belongs to the
++		 * original block. In this case, make the entry look
++		 * like it has an empty value.
++		 */
++		if (!s->not_found && s->here->e_value_inum) {
++			ea_ino = le32_to_cpu(s->here->e_value_inum);
++			error = ext4_xattr_inode_iget(inode, ea_ino,
++				      le32_to_cpu(s->here->e_hash),
++				      &tmp_inode);
++			if (error)
+ 				goto cleanup;
+-			memcpy(s->base, BHDR(bs->bh), bs->bh->b_size);
+-			s->first = ENTRY(header(s->base)+1);
+-			header(s->base)->h_refcount = cpu_to_le32(1);
+-			s->here = ENTRY(s->base + offset);
+-			s->end = s->base + bs->bh->b_size;
+-
+-			/*
+-			 * If existing entry points to an xattr inode, we need
+-			 * to prevent ext4_xattr_set_entry() from decrementing
+-			 * ref count on it because the reference belongs to the
+-			 * original block. In this case, make the entry look
+-			 * like it has an empty value.
+-			 */
+-			if (!s->not_found && s->here->e_value_inum) {
+-				ea_ino = le32_to_cpu(s->here->e_value_inum);
+-				error = ext4_xattr_inode_iget(inode, ea_ino,
+-					      le32_to_cpu(s->here->e_hash),
+-					      &tmp_inode);
+-				if (error)
+-					goto cleanup;
+ 
+-				if (!ext4_test_inode_state(tmp_inode,
+-						EXT4_STATE_LUSTRE_EA_INODE)) {
+-					/*
+-					 * Defer quota free call for previous
+-					 * inode until success is guaranteed.
+-					 */
+-					old_ea_inode_quota = le32_to_cpu(
+-							s->here->e_value_size);
+-				}
+-				iput(tmp_inode);
+-
+-				s->here->e_value_inum = 0;
+-				s->here->e_value_size = 0;
++			if (!ext4_test_inode_state(tmp_inode,
++					EXT4_STATE_LUSTRE_EA_INODE)) {
++				/*
++				 * Defer quota free call for previous
++				 * inode until success is guaranteed.
++				 */
++				old_ea_inode_quota = le32_to_cpu(
++						s->here->e_value_size);
+ 			}
++			iput(tmp_inode);
++
++			s->here->e_value_inum = 0;
++			s->here->e_value_size = 0;
+ 		}
+ 	} else {
+ 		/* Allocate a buffer where we construct the new block. */
+@@ -2013,18 +2035,13 @@ inserted:
+ 				lock_buffer(new_bh);
+ 				/*
+ 				 * We have to be careful about races with
+-				 * freeing, rehashing or adding references to
+-				 * xattr block. Once we hold buffer lock xattr
+-				 * block's state is stable so we can check
+-				 * whether the block got freed / rehashed or
+-				 * not.  Since we unhash mbcache entry under
+-				 * buffer lock when freeing / rehashing xattr
+-				 * block, checking whether entry is still
+-				 * hashed is reliable. Same rules hold for
+-				 * e_reusable handling.
++				 * adding references to xattr block. Once we
++				 * hold buffer lock xattr block's state is
++				 * stable so we can check the additional
++				 * reference fits.
+ 				 */
+-				if (hlist_bl_unhashed(&ce->e_hash_list) ||
+-				    !ce->e_reusable) {
++				ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
++				if (ref > EXT4_XATTR_REFCOUNT_MAX) {
+ 					/*
+ 					 * Undo everything and check mbcache
+ 					 * again.
+@@ -2039,10 +2056,9 @@ inserted:
+ 					new_bh = NULL;
+ 					goto inserted;
+ 				}
+-				ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
+ 				BHDR(new_bh)->h_refcount = cpu_to_le32(ref);
+-				if (ref >= EXT4_XATTR_REFCOUNT_MAX)
+-					ce->e_reusable = 0;
++				if (ref == EXT4_XATTR_REFCOUNT_MAX)
++					clear_bit(MBE_REUSABLE_B, &ce->e_flags);
+ 				ea_bdebug(new_bh, "reusing; refcount now=%d",
+ 					  ref);
+ 				ext4_xattr_block_csum_set(inode, new_bh);
+@@ -2070,19 +2086,11 @@ inserted:
+ 
+ 			goal = ext4_group_first_block_no(sb,
+ 						EXT4_I(inode)->i_block_group);
+-
+-			/* non-extent files can't have physical blocks past 2^32 */
+-			if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-				goal = goal & EXT4_MAX_BLOCK_FILE_PHYS;
+-
+ 			block = ext4_new_meta_blocks(handle, inode, goal, 0,
+ 						     NULL, &error);
+ 			if (error)
+ 				goto cleanup;
+ 
+-			if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
+-				BUG_ON(block > EXT4_MAX_BLOCK_FILE_PHYS);
+-
+ 			ea_idebug(inode, "creating block %llu",
+ 				  (unsigned long long)block);
+ 
+@@ -2575,7 +2583,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 
+ 	is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
+ 	bs = kzalloc(sizeof(struct ext4_xattr_block_find), GFP_NOFS);
+-	buffer = kmalloc(value_size, GFP_NOFS);
++	buffer = kvmalloc(value_size, GFP_NOFS);
+ 	b_entry_name = kmalloc(entry->e_name_len + 1, GFP_NOFS);
+ 	if (!is || !bs || !buffer || !b_entry_name) {
+ 		error = -ENOMEM;
+@@ -2627,7 +2635,7 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
+ 	error = 0;
+ out:
+ 	kfree(b_entry_name);
+-	kfree(buffer);
++	kvfree(buffer);
+ 	if (is)
+ 		brelse(is->iloc.bh);
+ 	if (bs)
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index 990084e00374f..231ef308d10cb 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -190,6 +190,7 @@ extern void ext4_xattr_inode_array_free(struct ext4_xattr_inode_array *array);
+ 
+ extern int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ 			    struct ext4_inode *raw_inode, handle_t *handle);
++extern void ext4_evict_ea_inode(struct inode *inode);
+ 
+ extern const struct xattr_handler *ext4_xattr_handlers[];
+ 
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 6fbf0471323e7..7596fce92bef1 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1382,7 +1382,7 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
+ 		if (i + 1 < dpolicy->granularity)
+ 			break;
+ 
+-		if (i < DEFAULT_DISCARD_GRANULARITY && dpolicy->ordered)
++		if (i + 1 < DEFAULT_DISCARD_GRANULARITY && dpolicy->ordered)
+ 			return __issue_discard_cmd_orderly(sbi, dpolicy);
+ 
+ 		pend_list = &dcc->pend_list[i];
+diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
+index da243c84e93b0..ee2ea5532e69b 100644
+--- a/fs/hfs/inode.c
++++ b/fs/hfs/inode.c
+@@ -453,14 +453,16 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 		/* panic? */
+ 		return -EIO;
+ 
++	res = -EIO;
++	if (HFS_I(main_inode)->cat_key.CName.len > HFS_NAMELEN)
++		goto out;
+ 	fd.search_key->cat = HFS_I(main_inode)->cat_key;
+ 	if (hfs_brec_find(&fd))
+-		/* panic? */
+ 		goto out;
+ 
+ 	if (S_ISDIR(main_inode->i_mode)) {
+ 		if (fd.entrylength < sizeof(struct hfs_cat_dir))
+-			/* panic? */;
++			goto out;
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
+ 			   sizeof(struct hfs_cat_dir));
+ 		if (rec.type != HFS_CDR_DIR ||
+@@ -473,6 +475,8 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
+ 			    sizeof(struct hfs_cat_dir));
+ 	} else if (HFS_IS_RSRC(inode)) {
++		if (fd.entrylength < sizeof(struct hfs_cat_file))
++			goto out;
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
+ 			       sizeof(struct hfs_cat_file));
+ 		hfs_inode_write_fork(inode, rec.file.RExtRec,
+@@ -481,7 +485,7 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 				sizeof(struct hfs_cat_file));
+ 	} else {
+ 		if (fd.entrylength < sizeof(struct hfs_cat_file))
+-			/* panic? */;
++			goto out;
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset,
+ 			   sizeof(struct hfs_cat_file));
+ 		if (rec.type != HFS_CDR_FIL ||
+@@ -498,9 +502,10 @@ int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ 		hfs_bnode_write(fd.bnode, &rec, fd.entryoffset,
+ 			    sizeof(struct hfs_cat_file));
+ 	}
++	res = 0;
+ out:
+ 	hfs_find_exit(&fd);
+-	return 0;
++	return res;
+ }
+ 
+ static struct dentry *hfs_file_lookup(struct inode *dir, struct dentry *dentry,
+diff --git a/fs/hfs/trans.c b/fs/hfs/trans.c
+index 39f5e343bf4d4..fdb0edb8a607d 100644
+--- a/fs/hfs/trans.c
++++ b/fs/hfs/trans.c
+@@ -109,7 +109,7 @@ void hfs_asc2mac(struct super_block *sb, struct hfs_name *out, const struct qstr
+ 	if (nls_io) {
+ 		wchar_t ch;
+ 
+-		while (srclen > 0) {
++		while (srclen > 0 && dstlen > 0) {
+ 			size = nls_io->char2uni(src, srclen, &ch);
+ 			if (size < 0) {
+ 				ch = '?';
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index dd7ad9f13e3aa..db2e1c750199a 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -198,6 +198,8 @@ struct hfsplus_sb_info {
+ #define HFSPLUS_SB_HFSX		3
+ #define HFSPLUS_SB_CASEFOLD	4
+ #define HFSPLUS_SB_NOBARRIER	5
++#define HFSPLUS_SB_UID		6
++#define HFSPLUS_SB_GID		7
+ 
+ static inline struct hfsplus_sb_info *HFSPLUS_SB(struct super_block *sb)
+ {
+diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
+index d7ab9d8c4b674..c7073a1517d68 100644
+--- a/fs/hfsplus/inode.c
++++ b/fs/hfsplus/inode.c
+@@ -187,11 +187,11 @@ static void hfsplus_get_perms(struct inode *inode,
+ 	mode = be16_to_cpu(perms->mode);
+ 
+ 	i_uid_write(inode, be32_to_cpu(perms->owner));
+-	if (!i_uid_read(inode) && !mode)
++	if ((test_bit(HFSPLUS_SB_UID, &sbi->flags)) || (!i_uid_read(inode) && !mode))
+ 		inode->i_uid = sbi->uid;
+ 
+ 	i_gid_write(inode, be32_to_cpu(perms->group));
+-	if (!i_gid_read(inode) && !mode)
++	if ((test_bit(HFSPLUS_SB_GID, &sbi->flags)) || (!i_gid_read(inode) && !mode))
+ 		inode->i_gid = sbi->gid;
+ 
+ 	if (dir) {
+@@ -476,8 +476,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	if (type == HFSPLUS_FOLDER) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		if (fd->entrylength < sizeof(struct hfsplus_cat_folder))
+-			/* panic? */;
++		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_folder));
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		hfsplus_get_perms(inode, &folder->permissions, 1);
+@@ -497,8 +496,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
+ 	} else if (type == HFSPLUS_FILE) {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		if (fd->entrylength < sizeof(struct hfsplus_cat_file))
+-			/* panic? */;
++		WARN_ON(fd->entrylength < sizeof(struct hfsplus_cat_file));
+ 		hfs_bnode_read(fd->bnode, &entry, fd->entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 
+@@ -555,8 +553,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	if (S_ISDIR(main_inode->i_mode)) {
+ 		struct hfsplus_cat_folder *folder = &entry.folder;
+ 
+-		if (fd.entrylength < sizeof(struct hfsplus_cat_folder))
+-			/* panic? */;
++		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_folder));
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_folder));
+ 		/* simple node checks? */
+@@ -581,8 +578,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
+ 	} else {
+ 		struct hfsplus_cat_file *file = &entry.file;
+ 
+-		if (fd.entrylength < sizeof(struct hfsplus_cat_file))
+-			/* panic? */;
++		WARN_ON(fd.entrylength < sizeof(struct hfsplus_cat_file));
+ 		hfs_bnode_read(fd.bnode, &entry, fd.entryoffset,
+ 					sizeof(struct hfsplus_cat_file));
+ 		hfsplus_inode_write_fork(inode, &file->data_fork);
+diff --git a/fs/hfsplus/options.c b/fs/hfsplus/options.c
+index 047e05c575601..c94a58762ad6d 100644
+--- a/fs/hfsplus/options.c
++++ b/fs/hfsplus/options.c
+@@ -140,6 +140,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
+ 			if (!uid_valid(sbi->uid)) {
+ 				pr_err("invalid uid specified\n");
+ 				return 0;
++			} else {
++				set_bit(HFSPLUS_SB_UID, &sbi->flags);
+ 			}
+ 			break;
+ 		case opt_gid:
+@@ -151,6 +153,8 @@ int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi)
+ 			if (!gid_valid(sbi->gid)) {
+ 				pr_err("invalid gid specified\n");
+ 				return 0;
++			} else {
++				set_bit(HFSPLUS_SB_GID, &sbi->flags);
+ 			}
+ 			break;
+ 		case opt_part:
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 1014f2a24697f..f06796cad9aa8 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -168,7 +168,7 @@ int dbMount(struct inode *ipbmap)
+ 	struct bmap *bmp;
+ 	struct dbmap_disk *dbmp_le;
+ 	struct metapage *mp;
+-	int i;
++	int i, err;
+ 
+ 	/*
+ 	 * allocate/initialize the in-memory bmap descriptor
+@@ -183,8 +183,8 @@ int dbMount(struct inode *ipbmap)
+ 			   BMAPBLKNO << JFS_SBI(ipbmap->i_sb)->l2nbperpage,
+ 			   PSIZE, 0);
+ 	if (mp == NULL) {
+-		kfree(bmp);
+-		return -EIO;
++		err = -EIO;
++		goto err_kfree_bmp;
+ 	}
+ 
+ 	/* copy the on-disk bmap descriptor to its in-memory version. */
+@@ -194,9 +194,8 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+ 	bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+ 	if (!bmp->db_numag) {
+-		release_metapage(mp);
+-		kfree(bmp);
+-		return -EINVAL;
++		err = -EINVAL;
++		goto err_release_metapage;
+ 	}
+ 
+ 	bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+@@ -207,6 +206,16 @@ int dbMount(struct inode *ipbmap)
+ 	bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
+ 	bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
+ 	bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
++	if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) {
++		err = -EINVAL;
++		goto err_release_metapage;
++	}
++
++	if (((bmp->db_mapsize - 1) >> bmp->db_agl2size) > MAXAG) {
++		err = -EINVAL;
++		goto err_release_metapage;
++	}
++
+ 	for (i = 0; i < MAXAG; i++)
+ 		bmp->db_agfree[i] = le64_to_cpu(dbmp_le->dn_agfree[i]);
+ 	bmp->db_agsize = le64_to_cpu(dbmp_le->dn_agsize);
+@@ -227,6 +236,12 @@ int dbMount(struct inode *ipbmap)
+ 	BMAP_LOCK_INIT(bmp);
+ 
+ 	return (0);
++
++err_release_metapage:
++	release_metapage(mp);
++err_kfree_bmp:
++	kfree(bmp);
++	return err;
+ }
+ 
+ 
+diff --git a/fs/libfs.c b/fs/libfs.c
+index be57e64834e5a..fd5f6c1060595 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -864,8 +864,8 @@ out:
+ EXPORT_SYMBOL_GPL(simple_attr_read);
+ 
+ /* interpret the buffer as a number to call the set function with */
+-ssize_t simple_attr_write(struct file *file, const char __user *buf,
+-			  size_t len, loff_t *ppos)
++static ssize_t simple_attr_write_xsigned(struct file *file, const char __user *buf,
++			  size_t len, loff_t *ppos, bool is_signed)
+ {
+ 	struct simple_attr *attr;
+ 	unsigned long long val;
+@@ -886,7 +886,10 @@ ssize_t simple_attr_write(struct file *file, const char __user *buf,
+ 		goto out;
+ 
+ 	attr->set_buf[size] = '\0';
+-	ret = kstrtoull(attr->set_buf, 0, &val);
++	if (is_signed)
++		ret = kstrtoll(attr->set_buf, 0, &val);
++	else
++		ret = kstrtoull(attr->set_buf, 0, &val);
+ 	if (ret)
+ 		goto out;
+ 	ret = attr->set(attr->data, val);
+@@ -896,8 +899,21 @@ out:
+ 	mutex_unlock(&attr->mutex);
+ 	return ret;
+ }
++
++ssize_t simple_attr_write(struct file *file, const char __user *buf,
++			  size_t len, loff_t *ppos)
++{
++	return simple_attr_write_xsigned(file, buf, len, ppos, false);
++}
+ EXPORT_SYMBOL_GPL(simple_attr_write);
+ 
++ssize_t simple_attr_write_signed(struct file *file, const char __user *buf,
++			  size_t len, loff_t *ppos)
++{
++	return simple_attr_write_xsigned(file, buf, len, ppos, true);
++}
++EXPORT_SYMBOL_GPL(simple_attr_write_signed);
++
+ /**
+  * generic_fh_to_dentry - generic helper for the fh_to_dentry export operation
+  * @sb:		filesystem to do the file handle conversion on
+diff --git a/fs/mbcache.c b/fs/mbcache.c
+index 081ccf0caee35..2e2d4de4cf875 100644
+--- a/fs/mbcache.c
++++ b/fs/mbcache.c
+@@ -10,7 +10,7 @@
+ /*
+  * Mbcache is a simple key-value store. Keys need not be unique, however
+  * key-value pairs are expected to be unique (we use this fact in
+- * mb_cache_entry_delete()).
++ * mb_cache_entry_delete_or_get()).
+  *
+  * Ext2 and ext4 use this cache for deduplication of extended attribute blocks.
+  * Ext4 also uses it for deduplication of xattr values stored in inodes.
+@@ -89,12 +89,19 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ 		return -ENOMEM;
+ 
+ 	INIT_LIST_HEAD(&entry->e_list);
+-	/* One ref for hash, one ref returned */
+-	atomic_set(&entry->e_refcnt, 1);
++	/*
++	 * We create entry with two references. One reference is kept by the
++	 * hash table, the other reference is used to protect us from
++	 * mb_cache_entry_delete_or_get() until the entry is fully setup. This
++	 * avoids nesting of cache->c_list_lock into hash table bit locks which
++	 * is problematic for RT.
++	 */
++	atomic_set(&entry->e_refcnt, 2);
+ 	entry->e_key = key;
+ 	entry->e_value = value;
+-	entry->e_reusable = reusable;
+-	entry->e_referenced = 0;
++	entry->e_flags = 0;
++	if (reusable)
++		set_bit(MBE_REUSABLE_B, &entry->e_flags);
+ 	head = mb_cache_entry_head(cache, key);
+ 	hlist_bl_lock(head);
+ 	hlist_bl_for_each_entry(dup, dup_node, head, e_hash_list) {
+@@ -106,24 +113,41 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ 	}
+ 	hlist_bl_add_head(&entry->e_hash_list, head);
+ 	hlist_bl_unlock(head);
+-
+ 	spin_lock(&cache->c_list_lock);
+ 	list_add_tail(&entry->e_list, &cache->c_list);
+-	/* Grab ref for LRU list */
+-	atomic_inc(&entry->e_refcnt);
+ 	cache->c_entry_count++;
+ 	spin_unlock(&cache->c_list_lock);
++	mb_cache_entry_put(cache, entry);
+ 
+ 	return 0;
+ }
+ EXPORT_SYMBOL(mb_cache_entry_create);
+ 
+-void __mb_cache_entry_free(struct mb_cache_entry *entry)
++void __mb_cache_entry_free(struct mb_cache *cache, struct mb_cache_entry *entry)
+ {
++	struct hlist_bl_head *head;
++
++	head = mb_cache_entry_head(cache, entry->e_key);
++	hlist_bl_lock(head);
++	hlist_bl_del(&entry->e_hash_list);
++	hlist_bl_unlock(head);
+ 	kmem_cache_free(mb_entry_cache, entry);
+ }
+ EXPORT_SYMBOL(__mb_cache_entry_free);
+ 
++/*
++ * mb_cache_entry_wait_unused - wait to be the last user of the entry
++ *
++ * @entry - entry to work on
++ *
++ * Wait to be the last user of the entry.
++ */
++void mb_cache_entry_wait_unused(struct mb_cache_entry *entry)
++{
++	wait_var_event(&entry->e_refcnt, atomic_read(&entry->e_refcnt) <= 2);
++}
++EXPORT_SYMBOL(mb_cache_entry_wait_unused);
++
+ static struct mb_cache_entry *__entry_find(struct mb_cache *cache,
+ 					   struct mb_cache_entry *entry,
+ 					   u32 key)
+@@ -141,10 +165,10 @@ static struct mb_cache_entry *__entry_find(struct mb_cache *cache,
+ 	while (node) {
+ 		entry = hlist_bl_entry(node, struct mb_cache_entry,
+ 				       e_hash_list);
+-		if (entry->e_key == key && entry->e_reusable) {
+-			atomic_inc(&entry->e_refcnt);
++		if (entry->e_key == key &&
++		    test_bit(MBE_REUSABLE_B, &entry->e_flags) &&
++		    atomic_inc_not_zero(&entry->e_refcnt))
+ 			goto out;
+-		}
+ 		node = node->next;
+ 	}
+ 	entry = NULL;
+@@ -204,10 +228,9 @@ struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
+ 	head = mb_cache_entry_head(cache, key);
+ 	hlist_bl_lock(head);
+ 	hlist_bl_for_each_entry(entry, node, head, e_hash_list) {
+-		if (entry->e_key == key && entry->e_value == value) {
+-			atomic_inc(&entry->e_refcnt);
++		if (entry->e_key == key && entry->e_value == value &&
++		    atomic_inc_not_zero(&entry->e_refcnt))
+ 			goto out;
+-		}
+ 	}
+ 	entry = NULL;
+ out:
+@@ -216,7 +239,7 @@ out:
+ }
+ EXPORT_SYMBOL(mb_cache_entry_get);
+ 
+-/* mb_cache_entry_delete - remove a cache entry
++/* mb_cache_entry_delete - try to remove a cache entry
+  * @cache - cache we work with
+  * @key - key
+  * @value - value
+@@ -253,6 +276,43 @@ void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value)
+ }
+ EXPORT_SYMBOL(mb_cache_entry_delete);
+ 
++/* mb_cache_entry_delete_or_get - remove a cache entry if it has no users
++ * @cache - cache we work with
++ * @key - key
++ * @value - value
++ *
++ * Remove entry from cache @cache with key @key and value @value. The removal
++ * happens only if the entry is unused. The function returns NULL in case the
++ * entry was successfully removed or there's no entry in cache. Otherwise the
++ * function grabs a reference to the entry that we failed to delete because it
++ * still has users and returns it.
++ */
++struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
++						    u32 key, u64 value)
++{
++	struct mb_cache_entry *entry;
++
++	entry = mb_cache_entry_get(cache, key, value);
++	if (!entry)
++		return NULL;
++
++	/*
++	 * Drop the ref we got from mb_cache_entry_get() and the initial hash
++	 * ref if we are the last user
++	 */
++	if (atomic_cmpxchg(&entry->e_refcnt, 2, 0) != 2)
++		return entry;
++
++	spin_lock(&cache->c_list_lock);
++	if (!list_empty(&entry->e_list))
++		list_del_init(&entry->e_list);
++	cache->c_entry_count--;
++	spin_unlock(&cache->c_list_lock);
++	__mb_cache_entry_free(cache, entry);
++	return NULL;
++}
++EXPORT_SYMBOL(mb_cache_entry_delete_or_get);
++
+ /* mb_cache_entry_touch - cache entry got used
+  * @cache - cache the entry belongs to
+  * @entry - entry that got used
+@@ -262,7 +322,7 @@ EXPORT_SYMBOL(mb_cache_entry_delete);
+ void mb_cache_entry_touch(struct mb_cache *cache,
+ 			  struct mb_cache_entry *entry)
+ {
+-	entry->e_referenced = 1;
++	set_bit(MBE_REFERENCED_B, &entry->e_flags);
+ }
+ EXPORT_SYMBOL(mb_cache_entry_touch);
+ 
+@@ -280,34 +340,24 @@ static unsigned long mb_cache_shrink(struct mb_cache *cache,
+ 				     unsigned long nr_to_scan)
+ {
+ 	struct mb_cache_entry *entry;
+-	struct hlist_bl_head *head;
+ 	unsigned long shrunk = 0;
+ 
+ 	spin_lock(&cache->c_list_lock);
+ 	while (nr_to_scan-- && !list_empty(&cache->c_list)) {
+ 		entry = list_first_entry(&cache->c_list,
+ 					 struct mb_cache_entry, e_list);
+-		if (entry->e_referenced) {
+-			entry->e_referenced = 0;
++		/* Drop initial hash reference if there is no user */
++		if (test_bit(MBE_REFERENCED_B, &entry->e_flags) ||
++		    atomic_cmpxchg(&entry->e_refcnt, 1, 0) != 1) {
++			clear_bit(MBE_REFERENCED_B, &entry->e_flags);
+ 			list_move_tail(&entry->e_list, &cache->c_list);
+ 			continue;
+ 		}
+ 		list_del_init(&entry->e_list);
+ 		cache->c_entry_count--;
+-		/*
+-		 * We keep LRU list reference so that entry doesn't go away
+-		 * from under us.
+-		 */
+ 		spin_unlock(&cache->c_list_lock);
+-		head = mb_cache_entry_head(cache, entry->e_key);
+-		hlist_bl_lock(head);
+-		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
+-			hlist_bl_del_init(&entry->e_hash_list);
+-			atomic_dec(&entry->e_refcnt);
+-		}
+-		hlist_bl_unlock(head);
+-		if (mb_cache_entry_put(cache, entry))
+-			shrunk++;
++		__mb_cache_entry_free(cache, entry);
++		shrunk++;
+ 		cond_resched();
+ 		spin_lock(&cache->c_list_lock);
+ 	}
+@@ -399,11 +449,6 @@ void mb_cache_destroy(struct mb_cache *cache)
+ 	 * point.
+ 	 */
+ 	list_for_each_entry_safe(entry, next, &cache->c_list, e_list) {
+-		if (!hlist_bl_unhashed(&entry->e_hash_list)) {
+-			hlist_bl_del_init(&entry->e_hash_list);
+-			atomic_dec(&entry->e_refcnt);
+-		} else
+-			WARN_ON(1);
+ 		list_del(&entry->e_list);
+ 		WARN_ON(atomic_read(&entry->e_refcnt) != 1);
+ 		mb_cache_entry_put(cache, entry);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9a0f48f7f2b83..250fa88303fad 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1980,18 +1980,18 @@ static struct nfs4_opendata *nfs4_open_recoverdata_alloc(struct nfs_open_context
+ }
+ 
+ static int nfs4_open_recover_helper(struct nfs4_opendata *opendata,
+-		fmode_t fmode)
++				    fmode_t fmode)
+ {
+ 	struct nfs4_state *newstate;
++	struct nfs_server *server = NFS_SB(opendata->dentry->d_sb);
++	int openflags = opendata->o_arg.open_flags;
+ 	int ret;
+ 
+ 	if (!nfs4_mode_match_open_stateid(opendata->state, fmode))
+ 		return 0;
+-	opendata->o_arg.open_flags = 0;
+ 	opendata->o_arg.fmode = fmode;
+-	opendata->o_arg.share_access = nfs4_map_atomic_open_share(
+-			NFS_SB(opendata->dentry->d_sb),
+-			fmode, 0);
++	opendata->o_arg.share_access =
++		nfs4_map_atomic_open_share(server, fmode, openflags);
+ 	memset(&opendata->o_res, 0, sizeof(opendata->o_res));
+ 	memset(&opendata->c_res, 0, sizeof(opendata->c_res));
+ 	nfs4_init_opendata_res(opendata);
+@@ -2569,10 +2569,15 @@ static int _nfs4_open_expired(struct nfs_open_context *ctx, struct nfs4_state *s
+ 	struct nfs4_opendata *opendata;
+ 	int ret;
+ 
+-	opendata = nfs4_open_recoverdata_alloc(ctx, state,
+-			NFS4_OPEN_CLAIM_FH);
++	opendata = nfs4_open_recoverdata_alloc(ctx, state, NFS4_OPEN_CLAIM_FH);
+ 	if (IS_ERR(opendata))
+ 		return PTR_ERR(opendata);
++	/*
++	 * We're not recovering a delegation, so ask for no delegation.
++	 * Otherwise the recovery thread could deadlock with an outstanding
++	 * delegation return.
++	 */
++	opendata->o_arg.open_flags = O_DIRECT;
+ 	ret = nfs4_open_recover(opendata, state);
+ 	if (ret == -ESTALE)
+ 		d_drop(ctx->dentry);
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 5ab021f87ecf8..b9fbd01ef4cfe 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1247,6 +1247,8 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ 	if (IS_ERR(task)) {
+ 		printk(KERN_ERR "%s: kthread_run: %ld\n",
+ 			__func__, PTR_ERR(task));
++		if (!nfs_client_init_is_complete(clp))
++			nfs_mark_client_ready(clp, PTR_ERR(task));
+ 		nfs4_clear_state_manager_bit(clp);
+ 		nfs_put_client(clp);
+ 		module_put(THIS_MODULE);
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 56e48642c43eb..f0021e3b8efdd 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -4277,12 +4277,10 @@ static int decode_attr_security_label(struct xdr_stream *xdr, uint32_t *bitmap,
+ 		if (unlikely(!p))
+ 			goto out_overflow;
+ 		if (len < NFS4_MAXLABELLEN) {
+-			if (label) {
+-				if (label->len) {
+-					if (label->len < len)
+-						return -ERANGE;
+-					memcpy(label->label, p, len);
+-				}
++			if (label && label->len) {
++				if (label->len < len)
++					return -ERANGE;
++				memcpy(label->label, p, len);
+ 				label->len = len;
+ 				label->pi = pi;
+ 				label->lfs = lfs;
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 7ee417b685e98..519d994c0c4c0 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -800,7 +800,6 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ 	} else {
+ 		if (!conn->cb_xprt)
+ 			return -EINVAL;
+-		clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
+ 		clp->cl_cb_session = ses;
+ 		args.bc_xprt = conn->cb_xprt;
+ 		args.prognumber = clp->cl_cb_session->se_cb_prog;
+@@ -820,6 +819,9 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ 		rpc_shutdown_client(client);
+ 		return PTR_ERR(cred);
+ 	}
++
++	if (clp->cl_minorversion != 0)
++		clp->cl_cb_conn.cb_xprt = conn->cb_xprt;
+ 	clp->cl_cb_client = client;
+ 	clp->cl_cb_cred = cred;
+ 	return 0;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 96b79bd906313..c82f898325c1c 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3102,6 +3102,17 @@ nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
+ 	case nfserr_noent:
+ 		xdr_truncate_encode(xdr, start_offset);
+ 		goto skip_entry;
++	case nfserr_jukebox:
++		/*
++		 * The pseudoroot should only display dentries that lead to
++		 * exports. If we get EJUKEBOX here, then we can't tell whether
++		 * this entry should be included. Just fail the whole READDIR
++		 * with NFS4ERR_DELAY in that case, and hope that the situation
++		 * will resolve itself by the client's next attempt.
++		 */
++		if (cd->rd_fhp->fh_export->ex_flags & NFSEXP_V4ROOT)
++			goto fail;
++		/* fallthrough */
+ 	default:
+ 		/*
+ 		 * If the client requested the RDATTR_ERROR attribute,
+diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
+index fb61c33c60045..74ef3d313686f 100644
+--- a/fs/nilfs2/the_nilfs.c
++++ b/fs/nilfs2/the_nilfs.c
+@@ -13,6 +13,7 @@
+ #include <linux/blkdev.h>
+ #include <linux/backing-dev.h>
+ #include <linux/random.h>
++#include <linux/log2.h>
+ #include <linux/crc32.h>
+ #include "nilfs.h"
+ #include "segment.h"
+@@ -448,11 +449,33 @@ static int nilfs_valid_sb(struct nilfs_super_block *sbp)
+ 	return crc == le32_to_cpu(sbp->s_sum);
+ }
+ 
+-static int nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset)
++/**
++ * nilfs_sb2_bad_offset - check the location of the second superblock
++ * @sbp: superblock raw data buffer
++ * @offset: byte offset of second superblock calculated from device size
++ *
++ * nilfs_sb2_bad_offset() checks if the position on the second
++ * superblock is valid or not based on the filesystem parameters
++ * stored in @sbp.  If @offset points to a location within the segment
++ * area, or if the parameters themselves are not normal, it is
++ * determined to be invalid.
++ *
++ * Return Value: true if invalid, false if valid.
++ */
++static bool nilfs_sb2_bad_offset(struct nilfs_super_block *sbp, u64 offset)
+ {
+-	return offset < ((le64_to_cpu(sbp->s_nsegments) *
+-			  le32_to_cpu(sbp->s_blocks_per_segment)) <<
+-			 (le32_to_cpu(sbp->s_log_block_size) + 10));
++	unsigned int shift_bits = le32_to_cpu(sbp->s_log_block_size);
++	u32 blocks_per_segment = le32_to_cpu(sbp->s_blocks_per_segment);
++	u64 nsegments = le64_to_cpu(sbp->s_nsegments);
++	u64 index;
++
++	if (blocks_per_segment < NILFS_SEG_MIN_BLOCKS ||
++	    shift_bits > ilog2(NILFS_MAX_BLOCK_SIZE) - BLOCK_SIZE_BITS)
++		return true;
++
++	index = offset >> (shift_bits + BLOCK_SIZE_BITS);
++	do_div(index, blocks_per_segment);
++	return index < nsegments;
+ }
+ 
+ static void nilfs_release_super_block(struct the_nilfs *nilfs)
+diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
+index e7eb08ac42156..10d691530d83d 100644
+--- a/fs/ocfs2/stackglue.c
++++ b/fs/ocfs2/stackglue.c
+@@ -715,6 +715,8 @@ static struct ctl_table_header *ocfs2_table_header;
+ 
+ static int __init ocfs2_stack_glue_init(void)
+ {
++	int ret;
++
+ 	strcpy(cluster_stack_name, OCFS2_STACK_PLUGIN_O2CB);
+ 
+ 	ocfs2_table_header = register_sysctl_table(ocfs2_root_table);
+@@ -724,7 +726,11 @@ static int __init ocfs2_stack_glue_init(void)
+ 		return -ENOMEM; /* or something. */
+ 	}
+ 
+-	return ocfs2_sysfs_init();
++	ret = ocfs2_sysfs_init();
++	if (ret)
++		unregister_sysctl_table(ocfs2_table_header);
++
++	return ret;
+ }
+ 
+ static void __exit ocfs2_stack_glue_exit(void)
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index e24738c691f66..f79c015fa7cb8 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -254,6 +254,8 @@ out:
+ void orangefs_debugfs_cleanup(void)
+ {
+ 	debugfs_remove_recursive(debug_dir);
++	kfree(debug_help_string);
++	debug_help_string = NULL;
+ }
+ 
+ /* open ORANGEFS_KMOD_DEBUG_HELP_FILE */
+@@ -709,6 +711,7 @@ int orangefs_prepare_debugfs_help_string(int at_boot)
+ 		memset(debug_help_string, 0, DEBUG_HELP_STRING_SIZE);
+ 		strlcat(debug_help_string, new, string_size);
+ 		mutex_unlock(&orangefs_help_file_lock);
++		kfree(new);
+ 	}
+ 
+ 	rc = 0;
+diff --git a/fs/orangefs/orangefs-mod.c b/fs/orangefs/orangefs-mod.c
+index 85ef87245a872..c8818163e3922 100644
+--- a/fs/orangefs/orangefs-mod.c
++++ b/fs/orangefs/orangefs-mod.c
+@@ -141,7 +141,7 @@ static int __init orangefs_init(void)
+ 		gossip_err("%s: could not initialize device subsystem %d!\n",
+ 			   __func__,
+ 			   ret);
+-		goto cleanup_device;
++		goto cleanup_sysfs;
+ 	}
+ 
+ 	ret = register_filesystem(&orangefs_fs_type);
+@@ -153,11 +153,11 @@ static int __init orangefs_init(void)
+ 		goto out;
+ 	}
+ 
+-	orangefs_sysfs_exit();
+-
+-cleanup_device:
+ 	orangefs_dev_cleanup();
+ 
++cleanup_sysfs:
++	orangefs_sysfs_exit();
++
+ sysfs_init_failed:
+ 
+ debugfs_init_failed:
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 06dc649629c74..9fa64dbde97ad 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -561,28 +561,42 @@ static int ovl_create_or_link(struct dentry *dentry, struct inode *inode,
+ 			goto out_revert_creds;
+ 	}
+ 
+-	err = -ENOMEM;
+-	override_cred = prepare_creds();
+-	if (override_cred) {
++	if (!attr->hardlink) {
++		err = -ENOMEM;
++		override_cred = prepare_creds();
++		if (!override_cred)
++			goto out_revert_creds;
++		/*
++		 * In the creation cases (create, mkdir, mknod, symlink),
++		 * ovl should transfer current's fs{u,g}id to the underlying
++		 * fs, because the underlying fs wants to initialize its new
++		 * inode owner using current's fs{u,g}id. And in this
++		 * case, the @inode is a new inode that is initialized
++		 * in inode_init_owner() to current's fs{u,g}id. So use
++		 * the inode's i_{u,g}id to override the cred's fs{u,g}id.
++		 *
++		 * But in the other hardlink case, ovl_link() does not
++		 * create a new inode, so just use the ovl mounter's
++		 * fs{u,g}id.
++		 */
+ 		override_cred->fsuid = inode->i_uid;
+ 		override_cred->fsgid = inode->i_gid;
+-		if (!attr->hardlink) {
+-			err = security_dentry_create_files_as(dentry,
+-					attr->mode, &dentry->d_name, old_cred,
+-					override_cred);
+-			if (err) {
+-				put_cred(override_cred);
+-				goto out_revert_creds;
+-			}
++		err = security_dentry_create_files_as(dentry,
++				attr->mode, &dentry->d_name, old_cred,
++				override_cred);
++		if (err) {
++			put_cred(override_cred);
++			goto out_revert_creds;
+ 		}
+ 		put_cred(override_creds(override_cred));
+ 		put_cred(override_cred);
+-
+-		if (!ovl_dentry_is_whiteout(dentry))
+-			err = ovl_create_upper(dentry, inode, attr);
+-		else
+-			err = ovl_create_over_whiteout(dentry, inode, attr);
+ 	}
++
++	if (!ovl_dentry_is_whiteout(dentry))
++		err = ovl_create_upper(dentry, inode, attr);
++	else
++		err = ovl_create_over_whiteout(dentry, inode, attr);
++
+ out_revert_creds:
+ 	revert_creds(old_cred);
+ 	return err;
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 7910ae91f17e9..d27b7b97c4c17 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -245,7 +245,7 @@ static int propagate_one(struct mount *m)
+ 		}
+ 		do {
+ 			struct mount *parent = last_source->mnt_parent;
+-			if (last_source == first_source)
++			if (peers(last_source, first_source))
+ 				break;
+ 			done = parent->mnt_master == p;
+ 			if (done && peers(n, parent))
+diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
+index 503086f7f7c1b..d5fb6d95d4d40 100644
+--- a/fs/pstore/Kconfig
++++ b/fs/pstore/Kconfig
+@@ -117,6 +117,7 @@ config PSTORE_CONSOLE
+ config PSTORE_PMSG
+ 	bool "Log user space messages"
+ 	depends on PSTORE
++	select RT_MUTEXES
+ 	help
+ 	  When the option is enabled, pstore will export a character
+ 	  interface /dev/pmsg0 to log user space messages. On reboot
+diff --git a/fs/pstore/pmsg.c b/fs/pstore/pmsg.c
+index 24db02de17874..ffc13ea196d2a 100644
+--- a/fs/pstore/pmsg.c
++++ b/fs/pstore/pmsg.c
+@@ -15,9 +15,10 @@
+ #include <linux/device.h>
+ #include <linux/fs.h>
+ #include <linux/uaccess.h>
++#include <linux/rtmutex.h>
+ #include "internal.h"
+ 
+-static DEFINE_MUTEX(pmsg_lock);
++static DEFINE_RT_MUTEX(pmsg_lock);
+ 
+ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 			  size_t count, loff_t *ppos)
+@@ -36,9 +37,9 @@ static ssize_t write_pmsg(struct file *file, const char __user *buf,
+ 	if (!access_ok(VERIFY_READ, buf, count))
+ 		return -EFAULT;
+ 
+-	mutex_lock(&pmsg_lock);
++	rt_mutex_lock(&pmsg_lock);
+ 	ret = psinfo->write_user(&record, buf);
+-	mutex_unlock(&pmsg_lock);
++	rt_mutex_unlock(&pmsg_lock);
+ 	return ret ? ret : count;
+ }
+ 
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index bafbab2dd0392..33294dee7d7f6 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -753,6 +753,7 @@ static int ramoops_probe(struct platform_device *pdev)
+ 	/* Make sure we didn't get bogus platform data pointer. */
+ 	if (!pdata) {
+ 		pr_err("NULL platform data\n");
++		err = -EINVAL;
+ 		goto fail_out;
+ 	}
+ 
+@@ -760,6 +761,7 @@ static int ramoops_probe(struct platform_device *pdev)
+ 			!pdata->ftrace_size && !pdata->pmsg_size)) {
+ 		pr_err("The memory size and the record/console size must be "
+ 			"non-zero\n");
++		err = -EINVAL;
+ 		goto fail_out;
+ 	}
+ 
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 3c777ec80d470..60dff71804120 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -426,7 +426,11 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
+ 		phys_addr_t addr = page_start + i * PAGE_SIZE;
+ 		pages[i] = pfn_to_page(addr >> PAGE_SHIFT);
+ 	}
+-	vaddr = vmap(pages, page_count, VM_MAP, prot);
++	/*
++	 * VM_IOREMAP used here to bypass this region during vread()
++	 * and kmap_atomic() (i.e. kcore) to avoid __va() failures.
++	 */
++	vaddr = vmap(pages, page_count, VM_MAP | VM_IOREMAP, prot);
+ 	kfree(pages);
+ 
+ 	/*
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index ddb379abd919d..770a2b1434856 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -2298,28 +2298,62 @@ EXPORT_SYMBOL(dquot_quota_off);
+  *	Turn quotas on on a device
+  */
+ 
+-/*
+- * Helper function to turn quotas on when we already have the inode of
+- * quota file and no quota information is loaded.
+- */
+-static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
++static int vfs_setup_quota_inode(struct inode *inode, int type)
++{
++	struct super_block *sb = inode->i_sb;
++	struct quota_info *dqopt = sb_dqopt(sb);
++
++	if (is_bad_inode(inode))
++		return -EUCLEAN;
++	if (!S_ISREG(inode->i_mode))
++		return -EACCES;
++	if (IS_RDONLY(inode))
++		return -EROFS;
++	if (sb_has_quota_loaded(sb, type))
++		return -EBUSY;
++
++	dqopt->files[type] = igrab(inode);
++	if (!dqopt->files[type])
++		return -EIO;
++	if (!(dqopt->flags & DQUOT_QUOTA_SYS_FILE)) {
++		/* We don't want quota and atime on quota files (deadlocks
++		 * possible) Also nobody should write to the file - we use
++		 * special IO operations which ignore the immutable bit. */
++		inode_lock(inode);
++		inode->i_flags |= S_NOQUOTA;
++		inode_unlock(inode);
++		/*
++		 * When S_NOQUOTA is set, remove dquot references as no more
++		 * references can be added
++		 */
++		__dquot_drop(inode);
++	}
++	return 0;
++}
++
++static void vfs_cleanup_quota_inode(struct super_block *sb, int type)
++{
++	struct quota_info *dqopt = sb_dqopt(sb);
++	struct inode *inode = dqopt->files[type];
++
++	if (!(dqopt->flags & DQUOT_QUOTA_SYS_FILE)) {
++		inode_lock(inode);
++		inode->i_flags &= ~S_NOQUOTA;
++		inode_unlock(inode);
++	}
++	dqopt->files[type] = NULL;
++	iput(inode);
++}
++
++int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
+ 	unsigned int flags)
+ {
+ 	struct quota_format_type *fmt = find_quota_format(format_id);
+-	struct super_block *sb = inode->i_sb;
+ 	struct quota_info *dqopt = sb_dqopt(sb);
+ 	int error;
+ 
+ 	if (!fmt)
+ 		return -ESRCH;
+-	if (!S_ISREG(inode->i_mode)) {
+-		error = -EACCES;
+-		goto out_fmt;
+-	}
+-	if (IS_RDONLY(inode)) {
+-		error = -EROFS;
+-		goto out_fmt;
+-	}
+ 	if (!sb->s_op->quota_write || !sb->s_op->quota_read ||
+ 	    (type == PRJQUOTA && sb->dq_op->get_projid == NULL)) {
+ 		error = -EINVAL;
+@@ -2351,27 +2385,9 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
+ 		invalidate_bdev(sb->s_bdev);
+ 	}
+ 
+-	if (!(dqopt->flags & DQUOT_QUOTA_SYS_FILE)) {
+-		/* We don't want quota and atime on quota files (deadlocks
+-		 * possible) Also nobody should write to the file - we use
+-		 * special IO operations which ignore the immutable bit. */
+-		inode_lock(inode);
+-		inode->i_flags |= S_NOQUOTA;
+-		inode_unlock(inode);
+-		/*
+-		 * When S_NOQUOTA is set, remove dquot references as no more
+-		 * references can be added
+-		 */
+-		__dquot_drop(inode);
+-	}
+-
+-	error = -EIO;
+-	dqopt->files[type] = igrab(inode);
+-	if (!dqopt->files[type])
+-		goto out_file_flags;
+ 	error = -EINVAL;
+ 	if (!fmt->qf_ops->check_quota_file(sb, type))
+-		goto out_file_init;
++		goto out_fmt;
+ 
+ 	dqopt->ops[type] = fmt->qf_ops;
+ 	dqopt->info[type].dqi_format = fmt;
+@@ -2379,7 +2395,7 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
+ 	INIT_LIST_HEAD(&dqopt->info[type].dqi_dirty_list);
+ 	error = dqopt->ops[type]->read_file_info(sb, type);
+ 	if (error < 0)
+-		goto out_file_init;
++		goto out_fmt;
+ 	if (dqopt->flags & DQUOT_QUOTA_SYS_FILE) {
+ 		spin_lock(&dq_data_lock);
+ 		dqopt->info[type].dqi_flags |= DQF_SYS_FILE;
+@@ -2394,18 +2410,30 @@ static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
+ 		dquot_disable(sb, type, flags);
+ 
+ 	return error;
+-out_file_init:
+-	dqopt->files[type] = NULL;
+-	iput(inode);
+-out_file_flags:
+-	inode_lock(inode);
+-	inode->i_flags &= ~S_NOQUOTA;
+-	inode_unlock(inode);
+ out_fmt:
+ 	put_quota_format(fmt);
+ 
+ 	return error; 
+ }
++EXPORT_SYMBOL(dquot_load_quota_sb);
++
++/*
++ * Helper function to turn quotas on when we already have the inode of
++ * quota file and no quota information is loaded.
++ */
++static int vfs_load_quota_inode(struct inode *inode, int type, int format_id,
++	unsigned int flags)
++{
++	int err;
++
++	err = vfs_setup_quota_inode(inode, type);
++	if (err < 0)
++		return err;
++	err = dquot_load_quota_sb(inode->i_sb, type, format_id, flags);
++	if (err < 0)
++		vfs_cleanup_quota_inode(inode->i_sb, type);
++	return err;
++}
+ 
+ /* Reenable quotas on remount RW */
+ int dquot_resume(struct super_block *sb, int type)
+diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c
+index 959a066b7bb07..2843b7cf4d7af 100644
+--- a/fs/reiserfs/namei.c
++++ b/fs/reiserfs/namei.c
+@@ -695,6 +695,7 @@ static int reiserfs_create(struct inode *dir, struct dentry *dentry, umode_t mod
+ 
+ out_failed:
+ 	reiserfs_write_unlock(dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+@@ -778,6 +779,7 @@ static int reiserfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode
+ 
+ out_failed:
+ 	reiserfs_write_unlock(dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+@@ -876,6 +878,7 @@ static int reiserfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode
+ 	retval = journal_end(&th);
+ out_failed:
+ 	reiserfs_write_unlock(dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+@@ -1191,6 +1194,7 @@ static int reiserfs_symlink(struct inode *parent_dir,
+ 	retval = journal_end(&th);
+ out_failed:
+ 	reiserfs_write_unlock(parent_dir->i_sb);
++	reiserfs_security_free(&security);
+ 	return retval;
+ }
+ 
+diff --git a/fs/reiserfs/xattr_security.c b/fs/reiserfs/xattr_security.c
+index 20be9a0e5870e..59d87f9f72fb4 100644
+--- a/fs/reiserfs/xattr_security.c
++++ b/fs/reiserfs/xattr_security.c
+@@ -49,6 +49,7 @@ int reiserfs_security_init(struct inode *dir, struct inode *inode,
+ 	int error;
+ 
+ 	sec->name = NULL;
++	sec->value = NULL;
+ 
+ 	/* Don't add selinux attributes on xattrs - they'll never get used */
+ 	if (IS_PRIVATE(dir))
+@@ -94,7 +95,6 @@ int reiserfs_security_write(struct reiserfs_transaction_handle *th,
+ 
+ void reiserfs_security_free(struct reiserfs_security_handle *sec)
+ {
+-	kfree(sec->name);
+ 	kfree(sec->value);
+ 	sec->name = NULL;
+ 	sec->value = NULL;
+diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c
+index bcb67b0cabe7e..31f66053e2393 100644
+--- a/fs/sysv/itree.c
++++ b/fs/sysv/itree.c
+@@ -438,7 +438,7 @@ static unsigned sysv_nblocks(struct super_block *s, loff_t size)
+ 		res += blocks;
+ 		direct = 1;
+ 	}
+-	return blocks;
++	return res;
+ }
+ 
+ int sysv_getattr(const struct path *path, struct kstat *stat,
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index ec8089a313906..55a86120e756f 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -434,6 +434,12 @@ static int udf_get_block(struct inode *inode, sector_t block,
+ 		iinfo->i_next_alloc_goal++;
+ 	}
+ 
++	/*
++	 * Block beyond EOF and prealloc extents? Just discard preallocation
++	 * as it is not useful and complicates things.
++	 */
++	if (((loff_t)block) << inode->i_blkbits > iinfo->i_lenExtents)
++		udf_discard_prealloc(inode);
+ 	udf_clear_extent_cache(inode);
+ 	phys = inode_getblk(inode, block, &err, &new);
+ 	if (!phys)
+@@ -483,8 +489,6 @@ static int udf_do_extend_file(struct inode *inode,
+ 	uint32_t add;
+ 	int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+ 	struct super_block *sb = inode->i_sb;
+-	struct kernel_lb_addr prealloc_loc = {};
+-	uint32_t prealloc_len = 0;
+ 	struct udf_inode_info *iinfo;
+ 	int err;
+ 
+@@ -505,19 +509,6 @@ static int udf_do_extend_file(struct inode *inode,
+ 			~(sb->s_blocksize - 1);
+ 	}
+ 
+-	/* Last extent are just preallocated blocks? */
+-	if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
+-						EXT_NOT_RECORDED_ALLOCATED) {
+-		/* Save the extent so that we can reattach it to the end */
+-		prealloc_loc = last_ext->extLocation;
+-		prealloc_len = last_ext->extLength;
+-		/* Mark the extent as a hole */
+-		last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+-			(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+-		last_ext->extLocation.logicalBlockNum = 0;
+-		last_ext->extLocation.partitionReferenceNum = 0;
+-	}
+-
+ 	/* Can we merge with the previous extent? */
+ 	if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
+ 					EXT_NOT_RECORDED_NOT_ALLOCATED) {
+@@ -545,7 +536,7 @@ static int udf_do_extend_file(struct inode *inode,
+ 		 * more extents, we may need to enter possible following
+ 		 * empty indirect extent.
+ 		 */
+-		if (new_block_bytes || prealloc_len)
++		if (new_block_bytes)
+ 			udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
+ 	}
+ 
+@@ -579,17 +570,6 @@ static int udf_do_extend_file(struct inode *inode,
+ 	}
+ 
+ out:
+-	/* Do we have some preallocated blocks saved? */
+-	if (prealloc_len) {
+-		err = udf_add_aext(inode, last_pos, &prealloc_loc,
+-				   prealloc_len, 1);
+-		if (err)
+-			return err;
+-		last_ext->extLocation = prealloc_loc;
+-		last_ext->extLength = prealloc_len;
+-		count++;
+-	}
+-
+ 	/* last_pos should point to the last written extent... */
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ 		last_pos->offset -= sizeof(struct short_ad);
+@@ -605,13 +585,17 @@ out:
+ static void udf_do_extend_final_block(struct inode *inode,
+ 				      struct extent_position *last_pos,
+ 				      struct kernel_long_ad *last_ext,
+-				      uint32_t final_block_len)
++				      uint32_t new_elen)
+ {
+-	struct super_block *sb = inode->i_sb;
+ 	uint32_t added_bytes;
+ 
+-	added_bytes = final_block_len -
+-		      (last_ext->extLength & (sb->s_blocksize - 1));
++	/*
++	 * Extent already large enough? It may be already rounded up to block
++	 * size...
++	 */
++	if (new_elen <= (last_ext->extLength & UDF_EXTENT_LENGTH_MASK))
++		return;
++	added_bytes = new_elen - (last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
+ 	last_ext->extLength += added_bytes;
+ 	UDF_I(inode)->i_lenExtents += added_bytes;
+ 
+@@ -628,12 +612,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 	int8_t etype;
+ 	struct super_block *sb = inode->i_sb;
+ 	sector_t first_block = newsize >> sb->s_blocksize_bits, offset;
+-	unsigned long partial_final_block;
++	loff_t new_elen;
+ 	int adsize;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
+ 	struct kernel_long_ad extent;
+ 	int err = 0;
+-	int within_final_block;
++	bool within_last_ext;
+ 
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+ 		adsize = sizeof(struct short_ad);
+@@ -642,8 +626,17 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 	else
+ 		BUG();
+ 
++	/*
++	 * When creating hole in file, just don't bother with preserving
++	 * preallocation. It likely won't be very useful anyway.
++	 */
++	udf_discard_prealloc(inode);
++
+ 	etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
+-	within_final_block = (etype != -1);
++	within_last_ext = (etype != -1);
++	/* We don't expect extents past EOF... */
++	WARN_ON_ONCE(within_last_ext &&
++		     elen > ((loff_t)offset + 1) << inode->i_blkbits);
+ 
+ 	if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) ||
+ 	    (epos.bh && epos.offset == sizeof(struct allocExtDesc))) {
+@@ -659,19 +652,17 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
+ 		extent.extLength |= etype << 30;
+ 	}
+ 
+-	partial_final_block = newsize & (sb->s_blocksize - 1);
++	new_elen = ((loff_t)offset << inode->i_blkbits) |
++					(newsize & (sb->s_blocksize - 1));
+ 
+ 	/* File has extent covering the new size (could happen when extending
+ 	 * inside a block)?
+ 	 */
+-	if (within_final_block) {
++	if (within_last_ext) {
+ 		/* Extending file within the last file block */
+-		udf_do_extend_final_block(inode, &epos, &extent,
+-					  partial_final_block);
++		udf_do_extend_final_block(inode, &epos, &extent, new_elen);
+ 	} else {
+-		loff_t add = ((loff_t)offset << sb->s_blocksize_bits) |
+-			     partial_final_block;
+-		err = udf_do_extend_file(inode, &epos, &extent, add);
++		err = udf_do_extend_file(inode, &epos, &extent, new_elen);
+ 	}
+ 
+ 	if (err < 0)
+@@ -772,10 +763,11 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
+ 		goto out_free;
+ 	}
+ 
+-	/* Are we beyond EOF? */
++	/* Are we beyond EOF and preallocated extent? */
+ 	if (etype == -1) {
+ 		int ret;
+ 		loff_t hole_len;
++
+ 		isBeyondEOF = true;
+ 		if (count) {
+ 			if (c)
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index d13ded8e2c309..ef251622da137 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -1106,8 +1106,9 @@ static int udf_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		return -EINVAL;
+ 
+ 	ofi = udf_find_entry(old_dir, &old_dentry->d_name, &ofibh, &ocfi);
+-	if (IS_ERR(ofi)) {
+-		retval = PTR_ERR(ofi);
++	if (!ofi || IS_ERR(ofi)) {
++		if (IS_ERR(ofi))
++			retval = PTR_ERR(ofi);
+ 		goto end_rename;
+ 	}
+ 
+@@ -1116,8 +1117,7 @@ static int udf_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	brelse(ofibh.sbh);
+ 	tloc = lelb_to_cpu(ocfi.icb.extLocation);
+-	if (!ofi || udf_get_lb_pblock(old_dir->i_sb, &tloc, 0)
+-	    != old_inode->i_ino)
++	if (udf_get_lb_pblock(old_dir->i_sb, &tloc, 0) != old_inode->i_ino)
+ 		goto end_rename;
+ 
+ 	nfi = udf_find_entry(new_dir, &new_dentry->d_name, &nfibh, &ncfi);
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index 94220ba856285..b0c71edc83f70 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -120,60 +120,42 @@ void udf_truncate_tail_extent(struct inode *inode)
+ 
+ void udf_discard_prealloc(struct inode *inode)
+ {
+-	struct extent_position epos = { NULL, 0, {0, 0} };
++	struct extent_position epos = {};
++	struct extent_position prev_epos = {};
+ 	struct kernel_lb_addr eloc;
+ 	uint32_t elen;
+ 	uint64_t lbcount = 0;
+ 	int8_t etype = -1, netype;
+-	int adsize;
+ 	struct udf_inode_info *iinfo = UDF_I(inode);
++	int bsize = 1 << inode->i_blkbits;
+ 
+ 	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB ||
+-	    inode->i_size == iinfo->i_lenExtents)
++	    ALIGN(inode->i_size, bsize) == ALIGN(iinfo->i_lenExtents, bsize))
+ 		return;
+ 
+-	if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
+-		adsize = sizeof(struct short_ad);
+-	else if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_LONG)
+-		adsize = sizeof(struct long_ad);
+-	else
+-		adsize = 0;
+-
+ 	epos.block = iinfo->i_location;
+ 
+ 	/* Find the last extent in the file */
+-	while ((netype = udf_next_aext(inode, &epos, &eloc, &elen, 1)) != -1) {
+-		etype = netype;
++	while ((netype = udf_next_aext(inode, &epos, &eloc, &elen, 0)) != -1) {
++		brelse(prev_epos.bh);
++		prev_epos = epos;
++		if (prev_epos.bh)
++			get_bh(prev_epos.bh);
++
++		etype = udf_next_aext(inode, &epos, &eloc, &elen, 1);
+ 		lbcount += elen;
+ 	}
+ 	if (etype == (EXT_NOT_RECORDED_ALLOCATED >> 30)) {
+-		epos.offset -= adsize;
+ 		lbcount -= elen;
+-		extent_trunc(inode, &epos, &eloc, etype, elen, 0);
+-		if (!epos.bh) {
+-			iinfo->i_lenAlloc =
+-				epos.offset -
+-				udf_file_entry_alloc_offset(inode);
+-			mark_inode_dirty(inode);
+-		} else {
+-			struct allocExtDesc *aed =
+-				(struct allocExtDesc *)(epos.bh->b_data);
+-			aed->lengthAllocDescs =
+-				cpu_to_le32(epos.offset -
+-					    sizeof(struct allocExtDesc));
+-			if (!UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_STRICT) ||
+-			    UDF_SB(inode->i_sb)->s_udfrev >= 0x0201)
+-				udf_update_tag(epos.bh->b_data, epos.offset);
+-			else
+-				udf_update_tag(epos.bh->b_data,
+-					       sizeof(struct allocExtDesc));
+-			mark_buffer_dirty_inode(epos.bh, inode);
+-		}
++		udf_delete_aext(inode, prev_epos);
++		udf_free_blocks(inode->i_sb, inode, &eloc, 0,
++				DIV_ROUND_UP(elen, 1 << inode->i_blkbits));
+ 	}
+ 	/* This inode entry is in-memory only and thus we don't have to mark
+ 	 * the inode dirty */
+ 	iinfo->i_lenExtents = lbcount;
+ 	brelse(epos.bh);
++	brelse(prev_epos.bh);
+ }
+ 
+ static void udf_update_alloc_ext_desc(struct inode *inode,
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 470ee0af32007..5c3407e18e15a 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -1012,7 +1012,7 @@ static int xattr_list_one(char **buffer, ssize_t *remaining_size,
+ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ 			  char *buffer, size_t size)
+ {
+-	bool trusted = capable(CAP_SYS_ADMIN);
++	bool trusted = ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
+ 	struct simple_xattr *xattr;
+ 	ssize_t remaining_size = size;
+ 	int err = 0;
+diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
+index db72ad39853b9..737f5cb0dc840 100644
+--- a/include/asm-generic/tlb.h
++++ b/include/asm-generic/tlb.h
+@@ -61,6 +61,12 @@ struct mmu_table_batch {
+ extern void tlb_table_flush(struct mmu_gather *tlb);
+ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+ 
++void tlb_remove_table_sync_one(void);
++
++#else
++
++static inline void tlb_remove_table_sync_one(void) { }
++
+ #endif
+ 
+ /*
+diff --git a/include/linux/can/platform/sja1000.h b/include/linux/can/platform/sja1000.h
+index 5755ae5a47122..6a869682c1207 100644
+--- a/include/linux/can/platform/sja1000.h
++++ b/include/linux/can/platform/sja1000.h
+@@ -14,7 +14,7 @@
+ #define OCR_MODE_TEST     0x01
+ #define OCR_MODE_NORMAL   0x02
+ #define OCR_MODE_CLOCK    0x03
+-#define OCR_MODE_MASK     0x07
++#define OCR_MODE_MASK     0x03
+ #define OCR_TX0_INVERT    0x04
+ #define OCR_TX0_PULLDOWN  0x08
+ #define OCR_TX0_PULLUP    0x10
+diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
+index 2a4638bb40338..6ebc269e48ace 100644
+--- a/include/linux/debugfs.h
++++ b/include/linux/debugfs.h
+@@ -39,7 +39,7 @@ struct debugfs_regset32 {
+ 
+ extern struct dentry *arch_debugfs_dir;
+ 
+-#define DEFINE_DEBUGFS_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++#define DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, __is_signed)	\
+ static int __fops ## _open(struct inode *inode, struct file *file)	\
+ {									\
+ 	__simple_attr_check_format(__fmt, 0ull);			\
+@@ -50,10 +50,16 @@ static const struct file_operations __fops = {				\
+ 	.open	 = __fops ## _open,					\
+ 	.release = simple_attr_release,					\
+ 	.read	 = debugfs_attr_read,					\
+-	.write	 = debugfs_attr_write,					\
++	.write	 = (__is_signed) ? debugfs_attr_write_signed : debugfs_attr_write,	\
+ 	.llseek  = no_llseek,						\
+ }
+ 
++#define DEFINE_DEBUGFS_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++	DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, false)
++
++#define DEFINE_DEBUGFS_ATTRIBUTE_SIGNED(__fops, __get, __set, __fmt)	\
++	DEFINE_DEBUGFS_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, true)
++
+ typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);
+ 
+ #if defined(CONFIG_DEBUG_FS)
+@@ -96,6 +102,8 @@ ssize_t debugfs_attr_read(struct file *file, char __user *buf,
+ 			size_t len, loff_t *ppos);
+ ssize_t debugfs_attr_write(struct file *file, const char __user *buf,
+ 			size_t len, loff_t *ppos);
++ssize_t debugfs_attr_write_signed(struct file *file, const char __user *buf,
++			size_t len, loff_t *ppos);
+ 
+ struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
+                 struct dentry *new_dir, const char *new_name);
+@@ -246,6 +254,13 @@ static inline ssize_t debugfs_attr_write(struct file *file,
+ 	return -ENODEV;
+ }
+ 
++static inline ssize_t debugfs_attr_write_signed(struct file *file,
++					const char __user *buf,
++					size_t len, loff_t *ppos)
++{
++	return -ENODEV;
++}
++
+ static inline struct dentry *debugfs_rename(struct dentry *old_dir, struct dentry *old_dentry,
+                 struct dentry *new_dir, char *new_name)
+ {
+diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
+index dc4fd8a6644dd..3482f9365a4db 100644
+--- a/include/linux/eventfd.h
++++ b/include/linux/eventfd.h
+@@ -61,7 +61,7 @@ static inline struct eventfd_ctx *eventfd_ctx_fdget(int fd)
+ 	return ERR_PTR(-ENOSYS);
+ }
+ 
+-static inline int eventfd_signal(struct eventfd_ctx *ctx, int n)
++static inline int eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+ {
+ 	return -ENOSYS;
+ }
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 86f884e78b6b9..95b8ef09b76cf 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3353,7 +3353,7 @@ void simple_transaction_set(struct file *file, size_t n);
+  * All attributes contain a text representation of a numeric value
+  * that are accessed with the get() and set() functions.
+  */
+-#define DEFINE_SIMPLE_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++#define DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, __is_signed)	\
+ static int __fops ## _open(struct inode *inode, struct file *file)	\
+ {									\
+ 	__simple_attr_check_format(__fmt, 0ull);			\
+@@ -3364,10 +3364,16 @@ static const struct file_operations __fops = {				\
+ 	.open	 = __fops ## _open,					\
+ 	.release = simple_attr_release,					\
+ 	.read	 = simple_attr_read,					\
+-	.write	 = simple_attr_write,					\
++	.write	 = (__is_signed) ? simple_attr_write_signed : simple_attr_write,	\
+ 	.llseek	 = generic_file_llseek,					\
+ }
+ 
++#define DEFINE_SIMPLE_ATTRIBUTE(__fops, __get, __set, __fmt)		\
++	DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, false)
++
++#define DEFINE_SIMPLE_ATTRIBUTE_SIGNED(__fops, __get, __set, __fmt)	\
++	DEFINE_SIMPLE_ATTRIBUTE_XSIGNED(__fops, __get, __set, __fmt, true)
++
+ static inline __printf(1, 2)
+ void __simple_attr_check_format(const char *fmt, ...)
+ {
+@@ -3382,6 +3388,8 @@ ssize_t simple_attr_read(struct file *file, char __user *buf,
+ 			 size_t len, loff_t *ppos);
+ ssize_t simple_attr_write(struct file *file, const char __user *buf,
+ 			  size_t len, loff_t *ppos);
++ssize_t simple_attr_write_signed(struct file *file, const char __user *buf,
++				 size_t len, loff_t *ppos);
+ 
+ struct ctl_table;
+ int proc_nr_files(struct ctl_table *table, int write,
+diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
+index 20f1e3ff60130..591bc4cefe1d6 100644
+--- a/include/linux/mbcache.h
++++ b/include/linux/mbcache.h
+@@ -10,16 +10,29 @@
+ 
+ struct mb_cache;
+ 
++/* Cache entry flags */
++enum {
++	MBE_REFERENCED_B = 0,
++	MBE_REUSABLE_B
++};
++
+ struct mb_cache_entry {
+ 	/* List of entries in cache - protected by cache->c_list_lock */
+ 	struct list_head	e_list;
+-	/* Hash table list - protected by hash chain bitlock */
++	/*
++	 * Hash table list - protected by hash chain bitlock. The entry is
++	 * guaranteed to be hashed while e_refcnt > 0.
++	 */
+ 	struct hlist_bl_node	e_hash_list;
++	/*
++	 * Entry refcount. Once it reaches zero, entry is unhashed and freed.
++	 * While refcount > 0, the entry is guaranteed to stay in the hash and
++	 * e.g. mb_cache_entry_try_delete() will fail.
++	 */
+ 	atomic_t		e_refcnt;
+ 	/* Key in hash - stable during lifetime of the entry */
+ 	u32			e_key;
+-	u32			e_referenced:1;
+-	u32			e_reusable:1;
++	unsigned long		e_flags;
+ 	/* User provided value - stable during lifetime of the entry */
+ 	u64			e_value;
+ };
+@@ -29,16 +42,24 @@ void mb_cache_destroy(struct mb_cache *cache);
+ 
+ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ 			  u64 value, bool reusable);
+-void __mb_cache_entry_free(struct mb_cache_entry *entry);
+-static inline int mb_cache_entry_put(struct mb_cache *cache,
+-				     struct mb_cache_entry *entry)
++void __mb_cache_entry_free(struct mb_cache *cache,
++			   struct mb_cache_entry *entry);
++void mb_cache_entry_wait_unused(struct mb_cache_entry *entry);
++static inline void mb_cache_entry_put(struct mb_cache *cache,
++				      struct mb_cache_entry *entry)
+ {
+-	if (!atomic_dec_and_test(&entry->e_refcnt))
+-		return 0;
+-	__mb_cache_entry_free(entry);
+-	return 1;
++	unsigned int cnt = atomic_dec_return(&entry->e_refcnt);
++
++	if (cnt > 0) {
++		if (cnt <= 2)
++			wake_up_var(&entry->e_refcnt);
++		return;
++	}
++	__mb_cache_entry_free(cache, entry);
+ }
+ 
++struct mb_cache_entry *mb_cache_entry_delete_or_get(struct mb_cache *cache,
++						    u32 key, u64 value);
+ void mb_cache_entry_delete(struct mb_cache *cache, u32 key, u64 value);
+ struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *cache, u32 key,
+ 					  u64 value);
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index 5141657a0f7f6..c16352fbbe1fe 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -117,8 +117,10 @@ static inline void proc_remove(struct proc_dir_entry *de) {}
+ static inline int remove_proc_subtree(const char *name, struct proc_dir_entry *parent) { return 0; }
+ 
+ #define proc_create_net_data(name, mode, parent, ops, state_size, data) ({NULL;})
++#define proc_create_net_data_write(name, mode, parent, ops, write, state_size, data) ({NULL;})
+ #define proc_create_net(name, mode, parent, state_size, ops) ({NULL;})
+ #define proc_create_net_single(name, mode, parent, show, data) ({NULL;})
++#define proc_create_net_single_write(name, mode, parent, show, write, data) ({NULL;})
+ 
+ #endif /* CONFIG_PROC_FS */
+ 
+diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
+index 91e0b7624053e..ec10897f7f60c 100644
+--- a/include/linux/quotaops.h
++++ b/include/linux/quotaops.h
+@@ -99,6 +99,8 @@ int dquot_file_open(struct inode *inode, struct file *file);
+ 
+ int dquot_enable(struct inode *inode, int type, int format_id,
+ 	unsigned int flags);
++int dquot_load_quota_sb(struct super_block *sb, int type, int format_id,
++	unsigned int flags);
+ int dquot_quota_on(struct super_block *sb, int type, int format_id,
+ 	const struct path *path);
+ int dquot_quota_on_mount(struct super_block *sb, char *qf_name,
+diff --git a/include/linux/sunrpc/rpc_pipe_fs.h b/include/linux/sunrpc/rpc_pipe_fs.h
+index e90b9bd99ded5..396de2ef8767e 100644
+--- a/include/linux/sunrpc/rpc_pipe_fs.h
++++ b/include/linux/sunrpc/rpc_pipe_fs.h
+@@ -94,6 +94,11 @@ extern ssize_t rpc_pipe_generic_upcall(struct file *, struct rpc_pipe_msg *,
+ 				       char __user *, size_t);
+ extern int rpc_queue_upcall(struct rpc_pipe *, struct rpc_pipe_msg *);
+ 
++/* returns true if the msg is in-flight, i.e., already eaten by the peer */
++static inline bool rpc_msg_is_inflight(const struct rpc_pipe_msg *msg) {
++	return (msg->copied != 0 && list_empty(&msg->list));
++}
++
+ struct rpc_clnt;
+ extern struct dentry *rpc_create_client_dir(struct dentry *, const char *, struct rpc_clnt *);
+ extern int rpc_remove_client_dir(struct rpc_clnt *);
+diff --git a/include/linux/timerqueue.h b/include/linux/timerqueue.h
+index aff122f1062a8..e1a843cb16d49 100644
+--- a/include/linux/timerqueue.h
++++ b/include/linux/timerqueue.h
+@@ -35,7 +35,7 @@ struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
+ {
+ 	struct rb_node *leftmost = rb_first_cached(&head->rb_root);
+ 
+-	return rb_entry(leftmost, struct timerqueue_node, node);
++	return rb_entry_safe(leftmost, struct timerqueue_node, node);
+ }
+ 
+ static inline void timerqueue_init(struct timerqueue_node *node)
+diff --git a/include/media/dvbdev.h b/include/media/dvbdev.h
+index 881ca461b7bba..09279ed0051ea 100644
+--- a/include/media/dvbdev.h
++++ b/include/media/dvbdev.h
+@@ -126,6 +126,7 @@ struct dvb_adapter {
+  * struct dvb_device - represents a DVB device node
+  *
+  * @list_head:	List head with all DVB devices
++ * @ref:	reference counter
+  * @fops:	pointer to struct file_operations
+  * @adapter:	pointer to the adapter that holds this device node
+  * @type:	type of the device, as defined by &enum dvb_device_type.
+@@ -156,6 +157,7 @@ struct dvb_adapter {
+  */
+ struct dvb_device {
+ 	struct list_head list_head;
++	struct kref ref;
+ 	const struct file_operations *fops;
+ 	struct dvb_adapter *adapter;
+ 	enum dvb_device_type type;
+@@ -187,6 +189,20 @@ struct dvb_device {
+ 	void *priv;
+ };
+ 
++/**
++ * dvb_device_get - Increase dvb_device reference
++ *
++ * @dvbdev:	pointer to struct dvb_device
++ */
++struct dvb_device *dvb_device_get(struct dvb_device *dvbdev);
++
++/**
++ * dvb_device_put - Decrease dvb_device reference
++ *
++ * @dvbdev:	pointer to struct dvb_device
++ */
++void dvb_device_put(struct dvb_device *dvbdev);
++
+ /**
+  * dvb_register_adapter - Registers a new DVB adapter
+  *
+@@ -231,29 +247,17 @@ int dvb_register_device(struct dvb_adapter *adap,
+ /**
+  * dvb_remove_device - Remove a registered DVB device
+  *
+- * This does not free memory.  To do that, call dvb_free_device().
++ * This does not free memory. dvb_free_device() will do that when
++ * reference counter is empty
+  *
+  * @dvbdev:	pointer to struct dvb_device
+  */
+ void dvb_remove_device(struct dvb_device *dvbdev);
+ 
+-/**
+- * dvb_free_device - Free memory occupied by a DVB device.
+- *
+- * Call dvb_unregister_device() before calling this function.
+- *
+- * @dvbdev:	pointer to struct dvb_device
+- */
+-void dvb_free_device(struct dvb_device *dvbdev);
+ 
+ /**
+  * dvb_unregister_device - Unregisters a DVB device
+  *
+- * This is a combination of dvb_remove_device() and dvb_free_device().
+- * Using this function is usually a mistake, and is often an indicator
+- * for a use-after-free bug (when a userspace process keeps a file
+- * handle to a detached device).
+- *
+  * @dvbdev:	pointer to struct dvb_device
+  */
+ void dvb_unregister_device(struct dvb_device *dvbdev);
+diff --git a/include/net/mrp.h b/include/net/mrp.h
+index ef58b4a071900..c6c53370e390f 100644
+--- a/include/net/mrp.h
++++ b/include/net/mrp.h
+@@ -120,6 +120,7 @@ struct mrp_applicant {
+ 	struct sk_buff		*pdu;
+ 	struct rb_root		mad;
+ 	struct rcu_head		rcu;
++	bool			active;
+ };
+ 
+ struct mrp_port {
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index 0dfb174f707e9..20c9b8e77a571 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -2290,7 +2290,7 @@ TRACE_EVENT(ext4_es_remove_extent,
+ 		  __entry->lblk, __entry->len)
+ );
+ 
+-TRACE_EVENT(ext4_es_find_delayed_extent_range_enter,
++TRACE_EVENT(ext4_es_find_extent_range_enter,
+ 	TP_PROTO(struct inode *inode, ext4_lblk_t lblk),
+ 
+ 	TP_ARGS(inode, lblk),
+@@ -2312,7 +2312,7 @@ TRACE_EVENT(ext4_es_find_delayed_extent_range_enter,
+ 		  (unsigned long) __entry->ino, __entry->lblk)
+ );
+ 
+-TRACE_EVENT(ext4_es_find_delayed_extent_range_exit,
++TRACE_EVENT(ext4_es_find_extent_range_exit,
+ 	TP_PROTO(struct inode *inode, struct extent_status *es),
+ 
+ 	TP_ARGS(inode, es),
+@@ -2532,6 +2532,41 @@ TRACE_EVENT(ext4_es_shrink,
+ 		  __entry->scan_time, __entry->nr_skipped, __entry->retried)
+ );
+ 
++TRACE_EVENT(ext4_es_insert_delayed_block,
++	TP_PROTO(struct inode *inode, struct extent_status *es,
++		 bool allocated),
++
++	TP_ARGS(inode, es, allocated),
++
++	TP_STRUCT__entry(
++		__field(	dev_t,		dev		)
++		__field(	ino_t,		ino		)
++		__field(	ext4_lblk_t,	lblk		)
++		__field(	ext4_lblk_t,	len		)
++		__field(	ext4_fsblk_t,	pblk		)
++		__field(	char,		status		)
++		__field(	bool,		allocated	)
++	),
++
++	TP_fast_assign(
++		__entry->dev		= inode->i_sb->s_dev;
++		__entry->ino		= inode->i_ino;
++		__entry->lblk		= es->es_lblk;
++		__entry->len		= es->es_len;
++		__entry->pblk		= ext4_es_pblock(es);
++		__entry->status		= ext4_es_status(es);
++		__entry->allocated	= allocated;
++	),
++
++	TP_printk("dev %d,%d ino %lu es [%u/%u) mapped %llu status %s "
++		  "allocated %d",
++		  MAJOR(__entry->dev), MINOR(__entry->dev),
++		  (unsigned long) __entry->ino,
++		  __entry->lblk, __entry->len,
++		  __entry->pblk, show_extent_status(__entry->status),
++		  __entry->allocated)
++);
++
+ /* fsmap traces */
+ DECLARE_EVENT_CLASS(ext4_fsmap_class,
+ 	TP_PROTO(struct super_block *sb, u32 keydev, u32 agno, u64 bno, u64 len,
+diff --git a/include/uapi/linux/swab.h b/include/uapi/linux/swab.h
+index 7272f85d6d6ab..3736f2fe15418 100644
+--- a/include/uapi/linux/swab.h
++++ b/include/uapi/linux/swab.h
+@@ -3,7 +3,7 @@
+ #define _UAPI_LINUX_SWAB_H
+ 
+ #include <linux/types.h>
+-#include <linux/compiler.h>
++#include <linux/stddef.h>
+ #include <asm/bitsperlong.h>
+ #include <asm/swab.h>
+ 
+diff --git a/include/uapi/sound/asequencer.h b/include/uapi/sound/asequencer.h
+index a75e14edc957e..dbd60f48b4b01 100644
+--- a/include/uapi/sound/asequencer.h
++++ b/include/uapi/sound/asequencer.h
+@@ -344,10 +344,10 @@ typedef int __bitwise snd_seq_client_type_t;
+ #define	KERNEL_CLIENT	((__force snd_seq_client_type_t) 2)
+                         
+ 	/* event filter flags */
+-#define SNDRV_SEQ_FILTER_BROADCAST	(1<<0)	/* accept broadcast messages */
+-#define SNDRV_SEQ_FILTER_MULTICAST	(1<<1)	/* accept multicast messages */
+-#define SNDRV_SEQ_FILTER_BOUNCE		(1<<2)	/* accept bounce event in error */
+-#define SNDRV_SEQ_FILTER_USE_EVENT	(1<<31)	/* use event filter */
++#define SNDRV_SEQ_FILTER_BROADCAST	(1U<<0)	/* accept broadcast messages */
++#define SNDRV_SEQ_FILTER_MULTICAST	(1U<<1)	/* accept multicast messages */
++#define SNDRV_SEQ_FILTER_BOUNCE		(1U<<2)	/* accept bounce event in error */
++#define SNDRV_SEQ_FILTER_USE_EVENT	(1U<<31)	/* use event filter */
+ 
+ struct snd_seq_client_info {
+ 	int client;			/* client number to inquire */
+diff --git a/kernel/acct.c b/kernel/acct.c
+index 81f9831a78592..6d98aed403bac 100644
+--- a/kernel/acct.c
++++ b/kernel/acct.c
+@@ -331,6 +331,8 @@ static comp_t encode_comp_t(unsigned long value)
+ 		exp++;
+ 	}
+ 
++	if (exp > (((comp_t) ~0U) >> MANTSIZE))
++		return (comp_t) ~0U;
+ 	/*
+ 	 * Clean it up and polish it off.
+ 	 */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index ba66ea3ca7056..668e5492e4c4d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -9682,13 +9682,15 @@ static int pmu_dev_alloc(struct pmu *pmu)
+ 
+ 	pmu->dev->groups = pmu->attr_groups;
+ 	device_initialize(pmu->dev);
+-	ret = dev_set_name(pmu->dev, "%s", pmu->name);
+-	if (ret)
+-		goto free_dev;
+ 
+ 	dev_set_drvdata(pmu->dev, pmu);
+ 	pmu->dev->bus = &pmu_bus;
+ 	pmu->dev->release = pmu_dev_release;
++
++	ret = dev_set_name(pmu->dev, "%s", pmu->name);
++	if (ret)
++		goto free_dev;
++
+ 	ret = device_add(pmu->dev);
+ 	if (ret)
+ 		goto free_dev;
+diff --git a/kernel/gcov/gcc_4_7.c b/kernel/gcov/gcc_4_7.c
+index bb6ccf456e2df..1c2bd1f23536b 100644
+--- a/kernel/gcov/gcc_4_7.c
++++ b/kernel/gcov/gcc_4_7.c
+@@ -85,6 +85,7 @@ struct gcov_fn_info {
+  * @version: gcov version magic indicating the gcc version used for compilation
+  * @next: list head for a singly-linked list
+  * @stamp: uniquifying time stamp
++ * @checksum: unique object checksum
+  * @filename: name of the associated gcov data file
+  * @merge: merge functions (null for unused counter type)
+  * @n_functions: number of instrumented functions
+@@ -97,6 +98,10 @@ struct gcov_info {
+ 	unsigned int version;
+ 	struct gcov_info *next;
+ 	unsigned int stamp;
++ /* Since GCC 12.1 a checksum field is added. */
++#if (__GNUC__ >= 12)
++	unsigned int checksum;
++#endif
+ 	const char *filename;
+ 	void (*merge[GCOV_COUNTERS])(gcov_type *, unsigned int);
+ 	unsigned int n_functions;
+diff --git a/kernel/relay.c b/kernel/relay.c
+index 735cb208f023b..b7aa7df43955b 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -163,13 +163,13 @@ static struct rchan_buf *relay_create_buf(struct rchan *chan)
+ {
+ 	struct rchan_buf *buf;
+ 
+-	if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t *))
++	if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t))
+ 		return NULL;
+ 
+ 	buf = kzalloc(sizeof(struct rchan_buf), GFP_KERNEL);
+ 	if (!buf)
+ 		return NULL;
+-	buf->padding = kmalloc_array(chan->n_subbufs, sizeof(size_t *),
++	buf->padding = kmalloc_array(chan->n_subbufs, sizeof(size_t),
+ 				     GFP_KERNEL);
+ 	if (!buf->padding)
+ 		goto free_buf;
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 75ea1a5be31a8..113cbe1a20aa3 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1594,7 +1594,8 @@ blk_trace_event_print_binary(struct trace_iterator *iter, int flags,
+ 
+ static enum print_line_t blk_tracer_print_line(struct trace_iterator *iter)
+ {
+-	if (!(blk_tracer_flags.val & TRACE_BLK_OPT_CLASSIC))
++	if ((iter->ent->type != TRACE_BLK) ||
++	    !(blk_tracer_flags.val & TRACE_BLK_OPT_CLASSIC))
+ 		return TRACE_TYPE_UNHANDLED;
+ 
+ 	return print_one_line(iter, true);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 5b7a6e9b0ab6b..59ad83499ba25 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5803,7 +5803,20 @@ waitagain:
+ 
+ 		ret = print_trace_line(iter);
+ 		if (ret == TRACE_TYPE_PARTIAL_LINE) {
+-			/* don't print partial lines */
++			/*
++			 * If one print_trace_line() fills entire trace_seq in one shot,
++			 * trace_seq_to_user() will returns -EBUSY because save_len == 0,
++			 * In this case, we need to consume it, otherwise, loop will peek
++			 * this event next time, resulting in an infinite loop.
++			 */
++			if (save_len == 0) {
++				iter->seq.full = 0;
++				trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
++				trace_consume(iter);
++				break;
++			}
++
++			/* In other cases, don't print partial lines */
+ 			iter->seq.seq.len = save_len;
+ 			break;
+ 		}
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 48f85dab9ef19..17b15bd978579 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -5807,7 +5807,7 @@ enable:
+ 	/* Just return zero, not the number of registered triggers */
+ 	ret = 0;
+  out:
+-	if (ret == 0)
++	if (ret == 0 && glob[0])
+ 		hist_err_clear();
+ 
+ 	return ret;
+diff --git a/lib/notifier-error-inject.c b/lib/notifier-error-inject.c
+index eb4a04afea80a..125ea8ce23a46 100644
+--- a/lib/notifier-error-inject.c
++++ b/lib/notifier-error-inject.c
+@@ -14,7 +14,7 @@ static int debugfs_errno_get(void *data, u64 *val)
+ 	return 0;
+ }
+ 
+-DEFINE_SIMPLE_ATTRIBUTE(fops_errno, debugfs_errno_get, debugfs_errno_set,
++DEFINE_SIMPLE_ATTRIBUTE_SIGNED(fops_errno, debugfs_errno_get, debugfs_errno_set,
+ 			"%lld\n");
+ 
+ static struct dentry *debugfs_create_errno(const char *name, umode_t mode,
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index a74b1aae74618..f4cc874021da2 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -902,6 +902,7 @@ static int __init test_firmware_init(void)
+ 
+ 	rc = misc_register(&test_fw_misc_device);
+ 	if (rc) {
++		__test_firmware_config_free();
+ 		kfree(test_fw_config);
+ 		pr_err("could not register misc device: %d\n", rc);
+ 		return rc;
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 5dd14ef2e1de8..60f7df9875678 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -23,6 +23,19 @@
+ #include <asm/pgalloc.h>
+ #include "internal.h"
+ 
++/* gross hack for <=4.19 stable */
++#if defined(CONFIG_S390) || defined(CONFIG_ARM)
++static void tlb_remove_table_smp_sync(void *arg)
++{
++        /* Simply deliver the interrupt */
++}
++
++static void tlb_remove_table_sync_one(void)
++{
++        smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
++}
++#endif
++
+ enum scan_result {
+ 	SCAN_FAIL,
+ 	SCAN_SUCCEED,
+@@ -1045,6 +1058,7 @@ static void collapse_huge_page(struct mm_struct *mm,
+ 	_pmd = pmdp_collapse_flush(vma, address, pmd);
+ 	spin_unlock(pmd_ptl);
+ 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
++	tlb_remove_table_sync_one();
+ 
+ 	spin_lock(pte_ptl);
+ 	isolated = __collapse_huge_page_isolate(vma, address, pte);
+@@ -1289,12 +1303,20 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
+ 		 */
+ 		if (down_write_trylock(&mm->mmap_sem)) {
+ 			if (!khugepaged_test_exit(mm)) {
+-				spinlock_t *ptl = pmd_lock(mm, pmd);
++				spinlock_t *ptl;
++				unsigned long end = addr + HPAGE_PMD_SIZE;
++
++				mmu_notifier_invalidate_range_start(mm, addr,
++								    end);
++				ptl = pmd_lock(mm, pmd);
+ 				/* assume page table is clear */
+ 				_pmd = pmdp_collapse_flush(vma, addr, pmd);
+ 				spin_unlock(ptl);
+ 				mm_dec_nr_ptes(mm);
++				tlb_remove_table_sync_one();
+ 				pte_free(mm, pmd_pgtable(_pmd));
++				mmu_notifier_invalidate_range_end(mm, addr,
++								  end);
+ 			}
+ 			up_write(&mm->mmap_sem);
+ 		}
+diff --git a/mm/memory.c b/mm/memory.c
+index 800834cff4e67..b80ce6b3c8f49 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -362,6 +362,11 @@ static void tlb_remove_table_smp_sync(void *arg)
+ 	/* Simply deliver the interrupt */
+ }
+ 
++void tlb_remove_table_sync_one(void)
++{
++	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
++}
++
+ static void tlb_remove_table_one(void *table)
+ {
+ 	/*
+diff --git a/net/802/mrp.c b/net/802/mrp.c
+index 32f87d458f054..ce6e4774d3338 100644
+--- a/net/802/mrp.c
++++ b/net/802/mrp.c
+@@ -609,7 +609,10 @@ static void mrp_join_timer(struct timer_list *t)
+ 	spin_unlock(&app->lock);
+ 
+ 	mrp_queue_xmit(app);
+-	mrp_join_timer_arm(app);
++	spin_lock(&app->lock);
++	if (likely(app->active))
++		mrp_join_timer_arm(app);
++	spin_unlock(&app->lock);
+ }
+ 
+ static void mrp_periodic_timer_arm(struct mrp_applicant *app)
+@@ -623,11 +626,12 @@ static void mrp_periodic_timer(struct timer_list *t)
+ 	struct mrp_applicant *app = from_timer(app, t, periodic_timer);
+ 
+ 	spin_lock(&app->lock);
+-	mrp_mad_event(app, MRP_EVENT_PERIODIC);
+-	mrp_pdu_queue(app);
++	if (likely(app->active)) {
++		mrp_mad_event(app, MRP_EVENT_PERIODIC);
++		mrp_pdu_queue(app);
++		mrp_periodic_timer_arm(app);
++	}
+ 	spin_unlock(&app->lock);
+-
+-	mrp_periodic_timer_arm(app);
+ }
+ 
+ static int mrp_pdu_parse_end_mark(struct sk_buff *skb, int *offset)
+@@ -875,6 +879,7 @@ int mrp_init_applicant(struct net_device *dev, struct mrp_application *appl)
+ 	app->dev = dev;
+ 	app->app = appl;
+ 	app->mad = RB_ROOT;
++	app->active = true;
+ 	spin_lock_init(&app->lock);
+ 	skb_queue_head_init(&app->queue);
+ 	rcu_assign_pointer(dev->mrp_port->applicants[appl->type], app);
+@@ -903,6 +908,9 @@ void mrp_uninit_applicant(struct net_device *dev, struct mrp_application *appl)
+ 
+ 	RCU_INIT_POINTER(port->applicants[appl->type], NULL);
+ 
++	spin_lock_bh(&app->lock);
++	app->active = false;
++	spin_unlock_bh(&app->lock);
+ 	/* Delete timer and generate a final TX event to flush out
+ 	 * all pending messages before the applicant is gone.
+ 	 */
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 3d780220e2d19..e87777255c474 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -4331,7 +4331,7 @@ void hci_req_cmd_complete(struct hci_dev *hdev, u16 opcode, u8 status,
+ 			*req_complete_skb = bt_cb(skb)->hci.req_complete_skb;
+ 		else
+ 			*req_complete = bt_cb(skb)->hci.req_complete;
+-		kfree_skb(skb);
++		dev_kfree_skb_irq(skb);
+ 	}
+ 	spin_unlock_irqrestore(&hdev->cmd_q.lock, flags);
+ }
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index fd6cd47a6c5ab..fd95631205a6a 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -4193,7 +4193,8 @@ static inline int l2cap_config_req(struct l2cap_conn *conn,
+ 
+ 	chan->ident = cmd->ident;
+ 	l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, len, rsp);
+-	chan->num_conf_rsp++;
++	if (chan->num_conf_rsp < L2CAP_CONF_MAX_CONF_RSP)
++		chan->num_conf_rsp++;
+ 
+ 	/* Reset config buffer. */
+ 	chan->conf_len = 0;
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index e4eaf5d2acbd8..86edf512d4971 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -593,7 +593,7 @@ int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb)
+ 
+ 		ret = rfcomm_dlc_send_frag(d, frag);
+ 		if (ret < 0) {
+-			kfree_skb(frag);
++			dev_kfree_skb_irq(frag);
+ 			goto unlock;
+ 		}
+ 
+diff --git a/net/caif/cfctrl.c b/net/caif/cfctrl.c
+index a1e85f0321081..330cb2b087bb6 100644
+--- a/net/caif/cfctrl.c
++++ b/net/caif/cfctrl.c
+@@ -269,11 +269,15 @@ int cfctrl_linkup_request(struct cflayer *layer,
+ 	default:
+ 		pr_warn("Request setup of bad link type = %d\n",
+ 			param->linktype);
++		cfpkt_destroy(pkt);
+ 		return -EINVAL;
+ 	}
+ 	req = kzalloc(sizeof(*req), GFP_KERNEL);
+-	if (!req)
++	if (!req) {
++		cfpkt_destroy(pkt);
+ 		return -ENOMEM;
++	}
++
+ 	req->client_layer = user_layer;
+ 	req->cmd = CFCTRL_CMD_LINK_SETUP;
+ 	req->param = *param;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 5129e89f52bb2..32d0b8f14aabb 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2025,6 +2025,10 @@ static int __bpf_redirect_no_mac(struct sk_buff *skb, struct net_device *dev,
+ 
+ 	if (mlen) {
+ 		__skb_pull(skb, mlen);
++		if (unlikely(!skb->len)) {
++			kfree_skb(skb);
++			return -ERANGE;
++		}
+ 
+ 		/* At ingress, the mac header has already been pulled once.
+ 		 * At egress, skb_pospull_rcsum has to be done in case that
+@@ -2561,15 +2565,18 @@ static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
+ 
+ static int bpf_skb_generic_pop(struct sk_buff *skb, u32 off, u32 len)
+ {
++	void *old_data;
++
+ 	/* skb_ensure_writable() is not needed here, as we're
+ 	 * already working on an uncloned skb.
+ 	 */
+ 	if (unlikely(!pskb_may_pull(skb, off + len)))
+ 		return -ENOMEM;
+ 
+-	skb_postpull_rcsum(skb, skb->data + off, len);
+-	memmove(skb->data + len, skb->data, off);
++	old_data = skb->data;
+ 	__skb_pull(skb, len);
++	skb_postpull_rcsum(skb, old_data + off, len);
++	memmove(skb->data, old_data, off);
+ 
+ 	return 0;
+ }
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 4178fc28c2770..7f501dff4501c 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1953,6 +1953,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
+ 				insp = list;
+ 			} else {
+ 				/* Eaten partially. */
++				if (skb_is_gso(skb) && !list->head_frag &&
++				    skb_headlen(list))
++					skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
+ 
+ 				if (skb_shared(list)) {
+ 					/* Sucks! We need to fork list. :-( */
+diff --git a/net/core/stream.c b/net/core/stream.c
+index 7b411a91a81cf..58755528d39e1 100644
+--- a/net/core/stream.c
++++ b/net/core/stream.c
+@@ -196,6 +196,12 @@ void sk_stream_kill_queues(struct sock *sk)
+ 	/* First the read buffer. */
+ 	__skb_queue_purge(&sk->sk_receive_queue);
+ 
++	/* Next, the error queue.
++	 * We need to use queue lock, because other threads might
++	 * add packets to the queue without socket lock being held.
++	 */
++	skb_queue_purge(&sk->sk_error_queue);
++
+ 	/* Next, the write queue. */
+ 	WARN_ON(!skb_queue_empty(&sk->sk_write_queue));
+ 
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 0a69f92da71b8..457f619a34610 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -903,11 +903,25 @@ void inet_csk_prepare_forced_close(struct sock *sk)
+ }
+ EXPORT_SYMBOL(inet_csk_prepare_forced_close);
+ 
++static int inet_ulp_can_listen(const struct sock *sk)
++{
++	const struct inet_connection_sock *icsk = inet_csk(sk);
++
++	if (icsk->icsk_ulp_ops)
++		return -EINVAL;
++
++	return 0;
++}
++
+ int inet_csk_listen_start(struct sock *sk, int backlog)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 	struct inet_sock *inet = inet_sk(sk);
+-	int err = -EADDRINUSE;
++	int err;
++
++	err = inet_ulp_can_listen(sk);
++	if (unlikely(err))
++		return err;
+ 
+ 	reqsk_queue_alloc(&icsk->icsk_accept_queue);
+ 
+diff --git a/net/ipv4/udp_tunnel.c b/net/ipv4/udp_tunnel.c
+index 6539ff15e9a34..d03d74388870a 100644
+--- a/net/ipv4/udp_tunnel.c
++++ b/net/ipv4/udp_tunnel.c
+@@ -186,6 +186,7 @@ EXPORT_SYMBOL_GPL(udp_tunnel_xmit_skb);
+ void udp_tunnel_sock_release(struct socket *sock)
+ {
+ 	rcu_assign_sk_user_data(sock->sk, NULL);
++	synchronize_rcu();
+ 	kernel_sock_shutdown(sock, SHUT_RDWR);
+ 	sock_release(sock);
+ }
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index ad7bd40b6d533..44e9a240d6070 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -544,6 +544,7 @@ csum_copy_err:
+ static int rawv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6,
+ 				     struct raw6_sock *rp)
+ {
++	struct ipv6_txoptions *opt;
+ 	struct sk_buff *skb;
+ 	int err = 0;
+ 	int offset;
+@@ -561,6 +562,9 @@ static int rawv6_push_pending_frames(struct sock *sk, struct flowi6 *fl6,
+ 
+ 	offset = rp->offset;
+ 	total_len = inet_sk(sk)->cork.base.length;
++	opt = inet6_sk(sk)->cork.opt;
++	total_len -= opt ? opt->opt_flen : 0;
++
+ 	if (offset >= total_len - 1) {
+ 		err = -EINVAL;
+ 		ip6_flush_pending_frames(sk);
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
+index e3257077158f8..1b9df64c62360 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
+@@ -299,8 +299,8 @@ bitmap_ip_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ 			return -IPSET_ERR_BITMAP_RANGE;
+ 
+ 		pr_debug("mask_bits %u, netmask %u\n", mask_bits, netmask);
+-		hosts = 2 << (32 - netmask - 1);
+-		elements = 2 << (netmask - mask_bits - 1);
++		hosts = 2U << (32 - netmask - 1);
++		elements = 2UL << (netmask - mask_bits - 1);
+ 	}
+ 	if (elements > IPSET_BITMAP_MAX_RANGE + 1)
+ 		return -IPSET_ERR_BITMAP_RANGE_SIZE;
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 39fb01ee92220..8953b03d5a529 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1515,6 +1515,7 @@ static int nfc_genl_se_io(struct sk_buff *skb, struct genl_info *info)
+ 	u32 dev_idx, se_idx;
+ 	u8 *apdu;
+ 	size_t apdu_len;
++	int rc;
+ 
+ 	if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
+ 	    !info->attrs[NFC_ATTR_SE_INDEX] ||
+@@ -1528,25 +1529,37 @@ static int nfc_genl_se_io(struct sk_buff *skb, struct genl_info *info)
+ 	if (!dev)
+ 		return -ENODEV;
+ 
+-	if (!dev->ops || !dev->ops->se_io)
+-		return -ENOTSUPP;
++	if (!dev->ops || !dev->ops->se_io) {
++		rc = -EOPNOTSUPP;
++		goto put_dev;
++	}
+ 
+ 	apdu_len = nla_len(info->attrs[NFC_ATTR_SE_APDU]);
+-	if (apdu_len == 0)
+-		return -EINVAL;
++	if (apdu_len == 0) {
++		rc = -EINVAL;
++		goto put_dev;
++	}
+ 
+ 	apdu = nla_data(info->attrs[NFC_ATTR_SE_APDU]);
+-	if (!apdu)
+-		return -EINVAL;
++	if (!apdu) {
++		rc = -EINVAL;
++		goto put_dev;
++	}
+ 
+ 	ctx = kzalloc(sizeof(struct se_io_ctx), GFP_KERNEL);
+-	if (!ctx)
+-		return -ENOMEM;
++	if (!ctx) {
++		rc = -ENOMEM;
++		goto put_dev;
++	}
+ 
+ 	ctx->dev_idx = dev_idx;
+ 	ctx->se_idx = se_idx;
+ 
+-	return nfc_se_io(dev, se_idx, apdu, apdu_len, se_io_cb, ctx);
++	rc = nfc_se_io(dev, se_idx, apdu, apdu_len, se_io_cb, ctx);
++
++put_dev:
++	nfc_put_device(dev);
++	return rc;
+ }
+ 
+ static int nfc_genl_vendor_cmd(struct sk_buff *skb,
+@@ -1569,14 +1582,21 @@ static int nfc_genl_vendor_cmd(struct sk_buff *skb,
+ 	subcmd = nla_get_u32(info->attrs[NFC_ATTR_VENDOR_SUBCMD]);
+ 
+ 	dev = nfc_get_device(dev_idx);
+-	if (!dev || !dev->vendor_cmds || !dev->n_vendor_cmds)
++	if (!dev)
+ 		return -ENODEV;
+ 
++	if (!dev->vendor_cmds || !dev->n_vendor_cmds) {
++		err = -ENODEV;
++		goto put_dev;
++	}
++
+ 	if (info->attrs[NFC_ATTR_VENDOR_DATA]) {
+ 		data = nla_data(info->attrs[NFC_ATTR_VENDOR_DATA]);
+ 		data_len = nla_len(info->attrs[NFC_ATTR_VENDOR_DATA]);
+-		if (data_len == 0)
+-			return -EINVAL;
++		if (data_len == 0) {
++			err = -EINVAL;
++			goto put_dev;
++		}
+ 	} else {
+ 		data = NULL;
+ 		data_len = 0;
+@@ -1591,10 +1611,14 @@ static int nfc_genl_vendor_cmd(struct sk_buff *skb,
+ 		dev->cur_cmd_info = info;
+ 		err = cmd->doit(dev, data, data_len);
+ 		dev->cur_cmd_info = NULL;
+-		return err;
++		goto put_dev;
+ 	}
+ 
+-	return -EOPNOTSUPP;
++	err = -EOPNOTSUPP;
++
++put_dev:
++	nfc_put_device(dev);
++	return err;
+ }
+ 
+ /* message building helper */
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index e9a10a66b4cad..fbc5752472689 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -903,6 +903,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	struct sw_flow_mask mask;
+ 	struct sk_buff *reply;
+ 	struct datapath *dp;
++	struct sw_flow_key *key;
+ 	struct sw_flow_actions *acts;
+ 	struct sw_flow_match match;
+ 	u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
+@@ -930,24 +931,26 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	}
+ 
+ 	/* Extract key. */
+-	ovs_match_init(&match, &new_flow->key, false, &mask);
++	key = kzalloc(sizeof(*key), GFP_KERNEL);
++	if (!key) {
++		error = -ENOMEM;
++		goto err_kfree_key;
++	}
++
++	ovs_match_init(&match, key, false, &mask);
+ 	error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
+ 				  a[OVS_FLOW_ATTR_MASK], log);
+ 	if (error)
+ 		goto err_kfree_flow;
+ 
++	ovs_flow_mask_key(&new_flow->key, key, true, &mask);
++
+ 	/* Extract flow identifier. */
+ 	error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
+-				       &new_flow->key, log);
++				       key, log);
+ 	if (error)
+ 		goto err_kfree_flow;
+ 
+-	/* unmasked key is needed to match when ufid is not used. */
+-	if (ovs_identifier_is_key(&new_flow->id))
+-		match.key = new_flow->id.unmasked_key;
+-
+-	ovs_flow_mask_key(&new_flow->key, &new_flow->key, true, &mask);
+-
+ 	/* Validate actions. */
+ 	error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS],
+ 				     &new_flow->key, &acts, log);
+@@ -974,7 +977,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 	if (ovs_identifier_is_ufid(&new_flow->id))
+ 		flow = ovs_flow_tbl_lookup_ufid(&dp->table, &new_flow->id);
+ 	if (!flow)
+-		flow = ovs_flow_tbl_lookup(&dp->table, &new_flow->key);
++		flow = ovs_flow_tbl_lookup(&dp->table, key);
+ 	if (likely(!flow)) {
+ 		rcu_assign_pointer(new_flow->sf_acts, acts);
+ 
+@@ -1044,6 +1047,8 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (reply)
+ 		ovs_notify(&dp_flow_genl_family, reply, info);
++
++	kfree(key);
+ 	return 0;
+ 
+ err_unlock_ovs:
+@@ -1053,6 +1058,8 @@ err_kfree_acts:
+ 	ovs_nla_free_flow_actions(acts);
+ err_kfree_flow:
+ 	ovs_flow_free(new_flow, false);
++err_kfree_key:
++	kfree(key);
+ error:
+ 	return error;
+ }
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 0220a29350023..a7a09eb04d93b 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -689,7 +689,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ 			if (call->tx_total_len != -1 ||
+ 			    call->tx_pending ||
+ 			    call->tx_top != 0)
+-				goto error_put;
++				goto out_put_unlock;
+ 			call->tx_total_len = p.call.tx_total_len;
+ 		}
+ 	}
+diff --git a/net/sched/ematch.c b/net/sched/ematch.c
+index 113a133ee5440..5ba3548d2eb74 100644
+--- a/net/sched/ematch.c
++++ b/net/sched/ematch.c
+@@ -259,6 +259,8 @@ static int tcf_em_validate(struct tcf_proto *tp,
+ 			 * the value carried.
+ 			 */
+ 			if (em_hdr->flags & TCF_EM_SIMPLE) {
++				if (em->ops->datalen > 0)
++					goto errout;
+ 				if (data_len < sizeof(u32))
+ 					goto errout;
+ 				em->data = *(u32 *) data;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 424e70907b967..41c67cfd264fb 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1031,8 +1031,12 @@ skip:
+ 			unsigned long cl = cops->find(parent, classid);
+ 
+ 			if (cl) {
+-				err = cops->graft(parent, cl, new, &old,
+-						  extack);
++				if (new && new->ops == &noqueue_qdisc_ops) {
++					NL_SET_ERR_MSG(extack, "Cannot assign noqueue to a class");
++					err = -EINVAL;
++				} else {
++					err = cops->graft(parent, cl, new, &old, extack);
++				}
+ 			} else {
+ 				NL_SET_ERR_MSG(extack, "Specified class not found");
+ 				err = -ENOENT;
+diff --git a/net/sched/sch_atm.c b/net/sched/sch_atm.c
+index 9a1bfa13a6cd6..ff825f40ea048 100644
+--- a/net/sched/sch_atm.c
++++ b/net/sched/sch_atm.c
+@@ -394,10 +394,13 @@ static int atm_tc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 				result = tcf_classify(skb, fl, &res, true);
+ 				if (result < 0)
+ 					continue;
++				if (result == TC_ACT_SHOT)
++					goto done;
++
+ 				flow = (struct atm_flow_data *)res.class;
+ 				if (!flow)
+ 					flow = lookup_flow(sch, res.classid);
+-				goto done;
++				goto drop;
+ 			}
+ 		}
+ 		flow = NULL;
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index e61c48c1b37d6..c11e685396025 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -323,7 +323,7 @@ __gss_find_upcall(struct rpc_pipe *pipe, kuid_t uid, const struct gss_auth *auth
+ 	list_for_each_entry(pos, &pipe->in_downcall, list) {
+ 		if (!uid_eq(pos->uid, uid))
+ 			continue;
+-		if (auth && pos->auth->service != auth->service)
++		if (pos->auth->service != auth->service)
+ 			continue;
+ 		refcount_inc(&pos->count);
+ 		dprintk("RPC:       %s found msg %p\n", __func__, pos);
+@@ -677,6 +677,21 @@ out:
+ 	return err;
+ }
+ 
++static struct gss_upcall_msg *
++gss_find_downcall(struct rpc_pipe *pipe, kuid_t uid)
++{
++	struct gss_upcall_msg *pos;
++	list_for_each_entry(pos, &pipe->in_downcall, list) {
++		if (!uid_eq(pos->uid, uid))
++			continue;
++		if (!rpc_msg_is_inflight(&pos->msg))
++			continue;
++		refcount_inc(&pos->count);
++		return pos;
++	}
++	return NULL;
++}
++
+ #define MSG_BUF_MAXSIZE 1024
+ 
+ static ssize_t
+@@ -723,7 +738,7 @@ gss_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ 	err = -ENOENT;
+ 	/* Find a matching upcall */
+ 	spin_lock(&pipe->lock);
+-	gss_msg = __gss_find_upcall(pipe, uid, NULL);
++	gss_msg = gss_find_downcall(pipe, uid);
+ 	if (gss_msg == NULL) {
+ 		spin_unlock(&pipe->lock);
+ 		goto err_put_ctx;
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index d9d03881e4de5..ed6b2a155f44b 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1088,18 +1088,23 @@ static int gss_read_proxy_verf(struct svc_rqst *rqstp,
+ 		return res;
+ 
+ 	inlen = svc_getnl(argv);
+-	if (inlen > (argv->iov_len + rqstp->rq_arg.page_len))
++	if (inlen > (argv->iov_len + rqstp->rq_arg.page_len)) {
++		kfree(in_handle->data);
+ 		return SVC_DENIED;
++	}
+ 
+ 	pages = DIV_ROUND_UP(inlen, PAGE_SIZE);
+ 	in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL);
+-	if (!in_token->pages)
++	if (!in_token->pages) {
++		kfree(in_handle->data);
+ 		return SVC_DENIED;
++	}
+ 	in_token->page_base = 0;
+ 	in_token->page_len = inlen;
+ 	for (i = 0; i < pages; i++) {
+ 		in_token->pages[i] = alloc_page(GFP_KERNEL);
+ 		if (!in_token->pages[i]) {
++			kfree(in_handle->data);
+ 			gss_free_in_token_pages(in_token);
+ 			return SVC_DENIED;
+ 		}
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 0d7d149b1b1b0..1946bd13d5df7 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1267,7 +1267,7 @@ static int rpc_sockname(struct net *net, struct sockaddr *sap, size_t salen,
+ 		break;
+ 	default:
+ 		err = -EAFNOSUPPORT;
+-		goto out;
++		goto out_release;
+ 	}
+ 	if (err < 0) {
+ 		dprintk("RPC:       can't bind UDP socket (%d)\n", err);
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index 42ab3e2ac0602..2a8127f245e8d 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -1733,7 +1733,11 @@ static int vmci_transport_dgram_enqueue(
+ 	if (!dg)
+ 		return -ENOMEM;
+ 
+-	memcpy_from_msg(VMCI_DG_PAYLOAD(dg), msg, len);
++	err = memcpy_from_msg(VMCI_DG_PAYLOAD(dg), msg, len);
++	if (err) {
++		kfree(dg);
++		return err;
++	}
+ 
+ 	dg->dst = vmci_make_handle(remote_addr->svm_cid,
+ 				   remote_addr->svm_port);
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 07d053603e3a6..beba41f8a178c 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -3918,8 +3918,10 @@ static int __init regulatory_init_db(void)
+ 		return -EINVAL;
+ 
+ 	err = load_builtin_regdb_keys();
+-	if (err)
++	if (err) {
++		platform_device_unregister(reg_pdev);
+ 		return err;
++	}
+ 
+ 	/* We always try to get an update for the static regdomain */
+ 	err = regulatory_hint_core(cfg80211_world_regdom->alpha2);
+diff --git a/samples/vfio-mdev/mdpy-fb.c b/samples/vfio-mdev/mdpy-fb.c
+index a760e130bd0d6..8ad1aa13ddd90 100644
+--- a/samples/vfio-mdev/mdpy-fb.c
++++ b/samples/vfio-mdev/mdpy-fb.c
+@@ -109,7 +109,7 @@ static int mdpy_fb_probe(struct pci_dev *pdev,
+ 
+ 	ret = pci_request_regions(pdev, "mdpy-fb");
+ 	if (ret < 0)
+-		return ret;
++		goto err_disable_dev;
+ 
+ 	pci_read_config_dword(pdev, MDPY_FORMAT_OFFSET, &format);
+ 	pci_read_config_dword(pdev, MDPY_WIDTH_OFFSET,	&width);
+@@ -191,6 +191,9 @@ err_release_fb:
+ err_release_regions:
+ 	pci_release_regions(pdev);
+ 
++err_disable_dev:
++	pci_disable_device(pdev);
++
+ 	return ret;
+ }
+ 
+@@ -199,7 +202,10 @@ static void mdpy_fb_remove(struct pci_dev *pdev)
+ 	struct fb_info *info = pci_get_drvdata(pdev);
+ 
+ 	unregister_framebuffer(info);
++	iounmap(info->screen_base);
+ 	framebuffer_release(info);
++	pci_release_regions(pdev);
++	pci_disable_device(pdev);
+ }
+ 
+ static struct pci_device_id mdpy_fb_pci_table[] = {
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 8868c475205ff..80012d21f0386 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -869,8 +869,10 @@ static struct multi_transaction *multi_transaction_new(struct file *file,
+ 	if (!t)
+ 		return ERR_PTR(-ENOMEM);
+ 	kref_init(&t->count);
+-	if (copy_from_user(t->data, buf, size))
++	if (copy_from_user(t->data, buf, size)) {
++		put_multi_transaction(t);
+ 		return ERR_PTR(-EFAULT);
++	}
+ 
+ 	return t;
+ }
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index 3a4293c46ad53..c4b5d5e3a721a 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -1125,7 +1125,7 @@ ssize_t aa_remove_profiles(struct aa_ns *policy_ns, struct aa_label *subj,
+ 
+ 	if (!name) {
+ 		/* remove namespace - can only happen if fqname[0] == ':' */
+-		mutex_lock_nested(&ns->parent->lock, ns->level);
++		mutex_lock_nested(&ns->parent->lock, ns->parent->level);
+ 		__aa_bump_ns_revision(ns);
+ 		__aa_remove_ns(ns);
+ 		mutex_unlock(&ns->parent->lock);
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 612f737cee836..41da5ccc3f3eb 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -892,7 +892,7 @@ static int verify_header(struct aa_ext *e, int required, const char **ns)
+ 	 * if not specified use previous version
+ 	 * Mask off everything that is not kernel abi version
+ 	 */
+-	if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v7)) {
++	if (VERSION_LT(e->version, v5) || VERSION_GT(e->version, v8)) {
+ 		audit_iface(NULL, NULL, NULL, "unsupported interface version",
+ 			    e, error);
+ 		return error;
+diff --git a/security/device_cgroup.c b/security/device_cgroup.c
+index dc28914fa72e0..5ff31eeea68cf 100644
+--- a/security/device_cgroup.c
++++ b/security/device_cgroup.c
+@@ -79,6 +79,17 @@ free_and_exit:
+ 	return -ENOMEM;
+ }
+ 
++static void dev_exceptions_move(struct list_head *dest, struct list_head *orig)
++{
++	struct dev_exception_item *ex, *tmp;
++
++	lockdep_assert_held(&devcgroup_mutex);
++
++	list_for_each_entry_safe(ex, tmp, orig, list) {
++		list_move_tail(&ex->list, dest);
++	}
++}
++
+ /*
+  * called under devcgroup_mutex
+  */
+@@ -600,11 +611,13 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
+ 	int count, rc = 0;
+ 	struct dev_exception_item ex;
+ 	struct dev_cgroup *parent = css_to_devcgroup(devcgroup->css.parent);
++	struct dev_cgroup tmp_devcgrp;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	memset(&ex, 0, sizeof(ex));
++	memset(&tmp_devcgrp, 0, sizeof(tmp_devcgrp));
+ 	b = buffer;
+ 
+ 	switch (*b) {
+@@ -616,15 +629,27 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
+ 
+ 			if (!may_allow_all(parent))
+ 				return -EPERM;
+-			dev_exception_clean(devcgroup);
+-			devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
+-			if (!parent)
++			if (!parent) {
++				devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
++				dev_exception_clean(devcgroup);
+ 				break;
++			}
+ 
++			INIT_LIST_HEAD(&tmp_devcgrp.exceptions);
++			rc = dev_exceptions_copy(&tmp_devcgrp.exceptions,
++						 &devcgroup->exceptions);
++			if (rc)
++				return rc;
++			dev_exception_clean(devcgroup);
+ 			rc = dev_exceptions_copy(&devcgroup->exceptions,
+ 						 &parent->exceptions);
+-			if (rc)
++			if (rc) {
++				dev_exceptions_move(&devcgroup->exceptions,
++						    &tmp_devcgrp.exceptions);
+ 				return rc;
++			}
++			devcgroup->behavior = DEVCG_DEFAULT_ALLOW;
++			dev_exception_clean(&tmp_devcgrp);
+ 			break;
+ 		case DEVCG_DENY:
+ 			if (css_has_online_children(&devcgroup->css))
+diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
+index 4dfdccce497b6..ec814cbdae992 100644
+--- a/security/integrity/ima/ima_template.c
++++ b/security/integrity/ima/ima_template.c
+@@ -196,11 +196,11 @@ static int template_desc_init_fields(const char *template_fmt,
+ 	}
+ 
+ 	if (fields && num_fields) {
+-		*fields = kmalloc_array(i, sizeof(*fields), GFP_KERNEL);
++		*fields = kmalloc_array(i, sizeof(**fields), GFP_KERNEL);
+ 		if (*fields == NULL)
+ 			return -ENOMEM;
+ 
+-		memcpy(*fields, found_fields, i * sizeof(*fields));
++		memcpy(*fields, found_fields, i * sizeof(**fields));
+ 		*num_fields = i;
+ 	}
+ 
+@@ -266,8 +266,11 @@ static struct ima_template_desc *restore_template_fmt(char *template_name)
+ 
+ 	template_desc->name = "";
+ 	template_desc->fmt = kstrdup(template_name, GFP_KERNEL);
+-	if (!template_desc->fmt)
++	if (!template_desc->fmt) {
++		kfree(template_desc);
++		template_desc = NULL;
+ 		goto out;
++	}
+ 
+ 	spin_lock(&template_list);
+ 	list_add_tail_rcu(&template_desc->list, &defined_templates);
+diff --git a/sound/core/control_compat.c b/sound/core/control_compat.c
+index 3fc216644e0e8..00d826b048c4a 100644
+--- a/sound/core/control_compat.c
++++ b/sound/core/control_compat.c
+@@ -319,7 +319,9 @@ static int ctl_elem_read_user(struct snd_card *card,
+ 	err = snd_power_wait(card, SNDRV_CTL_POWER_D0);
+ 	if (err < 0)
+ 		goto error;
++	down_read(&card->controls_rwsem);
+ 	err = snd_ctl_elem_read(card, data);
++	up_read(&card->controls_rwsem);
+ 	if (err < 0)
+ 		goto error;
+ 	err = copy_ctl_value_to_user(userdata, valuep, data, type, count);
+@@ -347,7 +349,9 @@ static int ctl_elem_write_user(struct snd_ctl_file *file,
+ 	err = snd_power_wait(card, SNDRV_CTL_POWER_D0);
+ 	if (err < 0)
+ 		goto error;
++	down_write(&card->controls_rwsem);
+ 	err = snd_ctl_elem_write(card, file, data);
++	up_write(&card->controls_rwsem);
+ 	if (err < 0)
+ 		goto error;
+ 	err = copy_ctl_value_to_user(userdata, valuep, data, type, count);
+diff --git a/sound/drivers/mts64.c b/sound/drivers/mts64.c
+index b68e71ca7abda..7dceb1e1c3b48 100644
+--- a/sound/drivers/mts64.c
++++ b/sound/drivers/mts64.c
+@@ -830,6 +830,9 @@ static void snd_mts64_interrupt(void *private)
+ 	u8 status, data;
+ 	struct snd_rawmidi_substream *substream;
+ 
++	if (!mts)
++		return;
++
+ 	spin_lock(&mts->lock);
+ 	ret = mts64_read(mts->pardev->port);
+ 	data = ret & 0x00ff;
+diff --git a/sound/pci/asihpi/hpioctl.c b/sound/pci/asihpi/hpioctl.c
+index 3f06986fbecf8..d8c244a5dce05 100644
+--- a/sound/pci/asihpi/hpioctl.c
++++ b/sound/pci/asihpi/hpioctl.c
+@@ -359,7 +359,7 @@ int asihpi_adapter_probe(struct pci_dev *pci_dev,
+ 		pci_dev->device, pci_dev->subsystem_vendor,
+ 		pci_dev->subsystem_device, pci_dev->devfn);
+ 
+-	if (pci_enable_device(pci_dev) < 0) {
++	if (pcim_enable_device(pci_dev) < 0) {
+ 		dev_err(&pci_dev->dev,
+ 			"pci_enable_device failed, disabling device\n");
+ 		return -EIO;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index cbd5118570fd5..be9f1c4295cde 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1804,33 +1804,43 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ 
+ static int hdmi_parse_codec(struct hda_codec *codec)
+ {
+-	hda_nid_t nid;
++	hda_nid_t start_nid;
++	unsigned int caps;
+ 	int i, nodes;
+ 
+-	nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &nid);
+-	if (!nid || nodes < 0) {
++	nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &start_nid);
++	if (!start_nid || nodes < 0) {
+ 		codec_warn(codec, "HDMI: failed to get afg sub nodes\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	for (i = 0; i < nodes; i++, nid++) {
+-		unsigned int caps;
+-		unsigned int type;
++	/*
++	 * hdmi_add_pin() assumes total amount of converters to
++	 * be known, so first discover all converters
++	 */
++	for (i = 0; i < nodes; i++) {
++		hda_nid_t nid = start_nid + i;
+ 
+ 		caps = get_wcaps(codec, nid);
+-		type = get_wcaps_type(caps);
+ 
+ 		if (!(caps & AC_WCAP_DIGITAL))
+ 			continue;
+ 
+-		switch (type) {
+-		case AC_WID_AUD_OUT:
++		if (get_wcaps_type(caps) == AC_WID_AUD_OUT)
+ 			hdmi_add_cvt(codec, nid);
+-			break;
+-		case AC_WID_PIN:
++	}
++
++	/* discover audio pins */
++	for (i = 0; i < nodes; i++) {
++		hda_nid_t nid = start_nid + i;
++
++		caps = get_wcaps(codec, nid);
++
++		if (!(caps & AC_WCAP_DIGITAL))
++			continue;
++
++		if (get_wcaps_type(caps) == AC_WID_PIN)
+ 			hdmi_add_pin(codec, nid);
+-			break;
+-		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/soc/codecs/pcm512x.c b/sound/soc/codecs/pcm512x.c
+index 5272c81641c1c..310cfceab41fb 100644
+--- a/sound/soc/codecs/pcm512x.c
++++ b/sound/soc/codecs/pcm512x.c
+@@ -1471,7 +1471,7 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
+ 			if (val > 6) {
+ 				dev_err(dev, "Invalid pll-in\n");
+ 				ret = -EINVAL;
+-				goto err_clk;
++				goto err_pm;
+ 			}
+ 			pcm512x->pll_in = val;
+ 		}
+@@ -1480,7 +1480,7 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
+ 			if (val > 6) {
+ 				dev_err(dev, "Invalid pll-out\n");
+ 				ret = -EINVAL;
+-				goto err_clk;
++				goto err_pm;
+ 			}
+ 			pcm512x->pll_out = val;
+ 		}
+@@ -1489,12 +1489,12 @@ int pcm512x_probe(struct device *dev, struct regmap *regmap)
+ 			dev_err(dev,
+ 				"Error: both pll-in and pll-out, or none\n");
+ 			ret = -EINVAL;
+-			goto err_clk;
++			goto err_pm;
+ 		}
+ 		if (pcm512x->pll_in && pcm512x->pll_in == pcm512x->pll_out) {
+ 			dev_err(dev, "Error: pll-in == pll-out\n");
+ 			ret = -EINVAL;
+-			goto err_clk;
++			goto err_pm;
+ 		}
+ 	}
+ #endif
+diff --git a/sound/soc/codecs/rt298.c b/sound/soc/codecs/rt298.c
+index 06cdba4edfe2c..3181b91a025bc 100644
+--- a/sound/soc/codecs/rt298.c
++++ b/sound/soc/codecs/rt298.c
+@@ -1169,6 +1169,13 @@ static const struct dmi_system_id force_combo_jack_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Geminilake")
+ 		}
+ 	},
++	{
++		.ident = "Intel Kabylake R RVP",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Kabylake Client platform")
++		}
++	},
+ 	{ }
+ };
+ 
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index 6a2a58e107e37..9dd99d123e440 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -3217,8 +3217,6 @@ static int rt5670_i2c_probe(struct i2c_client *i2c,
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	pm_runtime_put(&i2c->dev);
+-
+ 	return 0;
+ err:
+ 	pm_runtime_disable(&i2c->dev);
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index e3e069277a3ff..13ef2bebf6da3 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -3715,7 +3715,12 @@ static irqreturn_t wm1811_jackdet_irq(int irq, void *data)
+ 	} else {
+ 		dev_dbg(component->dev, "Jack not detected\n");
+ 
++		/* Release wm8994->accdet_lock to avoid deadlock:
++		 * cancel_delayed_work_sync() takes wm8994->mic_work internal
++		 * lock and wm1811_mic_work takes wm8994->accdet_lock */
++		mutex_unlock(&wm8994->accdet_lock);
+ 		cancel_delayed_work_sync(&wm8994->mic_work);
++		mutex_lock(&wm8994->accdet_lock);
+ 
+ 		snd_soc_component_update_bits(component, WM8958_MICBIAS2,
+ 				    WM8958_MICB2_DISCH, WM8958_MICB2_DISCH);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index c4d19b88d17dc..2001bc774c643 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -437,6 +437,21 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{
++		/* Advantech MICA-071 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Advantech"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MICA-071"),
++		},
++		/* OVCD Th = 1500uA to reliable detect head-phones vs -set */
++		.driver_data = (void *)(BYT_RT5640_IN3_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_1500UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_DIFF_MIC |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ARCHOS"),
+diff --git a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+index cdb3940710379..9f8d2a00a1cdd 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
++++ b/sound/soc/mediatek/mt8173/mt8173-rt5650-rt5514.c
+@@ -199,14 +199,16 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
+ 	if (!mt8173_rt5650_rt5514_codecs[0].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 	mt8173_rt5650_rt5514_codecs[1].of_node =
+ 		of_parse_phandle(pdev->dev.of_node, "mediatek,audio-codec", 1);
+ 	if (!mt8173_rt5650_rt5514_codecs[1].of_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto out;
+ 	}
+ 	mt8173_rt5650_rt5514_codec_conf[0].of_node =
+ 		mt8173_rt5650_rt5514_codecs[1].of_node;
+@@ -218,6 +220,7 @@ static int mt8173_rt5650_rt5514_dev_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++out:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/pxa/mmp-pcm.c b/sound/soc/pxa/mmp-pcm.c
+index d2d4652de32cc..5969aa66410dc 100644
+--- a/sound/soc/pxa/mmp-pcm.c
++++ b/sound/soc/pxa/mmp-pcm.c
+@@ -90,7 +90,7 @@ static bool filter(struct dma_chan *chan, void *param)
+ 
+ 	devname = kasprintf(GFP_KERNEL, "%s.%d", dma_data->dma_res->name,
+ 		dma_data->ssp_id);
+-	if ((strcmp(dev_name(chan->device->dev), devname) == 0) &&
++	if (devname && (strcmp(dev_name(chan->device->dev), devname) == 0) &&
+ 		(chan->chan_id == dma_data->dma_res->start)) {
+ 		found = true;
+ 	}
+diff --git a/sound/soc/rockchip/rockchip_pdm.c b/sound/soc/rockchip/rockchip_pdm.c
+index ad16c8310dd38..7dfd1e6b2c25c 100644
+--- a/sound/soc/rockchip/rockchip_pdm.c
++++ b/sound/soc/rockchip/rockchip_pdm.c
+@@ -303,6 +303,7 @@ static int rockchip_pdm_runtime_resume(struct device *dev)
+ 
+ 	ret = clk_prepare_enable(pdm->hclk);
+ 	if (ret) {
++		clk_disable_unprepare(pdm->clk);
+ 		dev_err(pdm->dev, "hclock enable failed %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/sound/soc/rockchip/rockchip_spdif.c b/sound/soc/rockchip/rockchip_spdif.c
+index a89fe9b6463ba..5ac726da60158 100644
+--- a/sound/soc/rockchip/rockchip_spdif.c
++++ b/sound/soc/rockchip/rockchip_spdif.c
+@@ -89,6 +89,7 @@ static int __maybe_unused rk_spdif_runtime_resume(struct device *dev)
+ 
+ 	ret = clk_prepare_enable(spdif->hclk);
+ 	if (ret) {
++		clk_disable_unprepare(spdif->mclk);
+ 		dev_err(spdif->dev, "hclk clock enable failed %d\n", ret);
+ 		return ret;
+ 	}
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 2faf95d4bb75f..e01f3bf3ef171 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -458,8 +458,15 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ 		return err;
+ 
+ 	if (snd_soc_volsw_is_stereo(mc)) {
++		val2 = ucontrol->value.integer.value[1];
++
++		if (mc->platform_max && val2 > mc->platform_max)
++			return -EINVAL;
++		if (val2 > max)
++			return -EINVAL;
++
+ 		val_mask = mask << rshift;
+-		val2 = (ucontrol->value.integer.value[1] + min) & mask;
++		val2 = (val2 + min) & mask;
+ 		val2 = val2 << rshift;
+ 
+ 		err = snd_soc_component_update_bits(component, reg2, val_mask,
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index 67d74218d861b..2399d500b8812 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -318,7 +318,8 @@ static void line6_data_received(struct urb *urb)
+ 		for (;;) {
+ 			done =
+ 				line6_midibuf_read(mb, line6->buffer_message,
+-						LINE6_MIDI_MESSAGE_MAXLEN);
++						   LINE6_MIDI_MESSAGE_MAXLEN,
++						   LINE6_MIDIBUF_READ_RX);
+ 
+ 			if (done <= 0)
+ 				break;
+diff --git a/sound/usb/line6/midi.c b/sound/usb/line6/midi.c
+index e2cf55c53ea8b..6df1cf26e440c 100644
+--- a/sound/usb/line6/midi.c
++++ b/sound/usb/line6/midi.c
+@@ -48,7 +48,8 @@ static void line6_midi_transmit(struct snd_rawmidi_substream *substream)
+ 	int req, done;
+ 
+ 	for (;;) {
+-		req = min(line6_midibuf_bytes_free(mb), line6->max_packet_size);
++		req = min3(line6_midibuf_bytes_free(mb), line6->max_packet_size,
++			   LINE6_FALLBACK_MAXPACKETSIZE);
+ 		done = snd_rawmidi_transmit_peek(substream, chunk, req);
+ 
+ 		if (done == 0)
+@@ -60,7 +61,8 @@ static void line6_midi_transmit(struct snd_rawmidi_substream *substream)
+ 
+ 	for (;;) {
+ 		done = line6_midibuf_read(mb, chunk,
+-					  LINE6_FALLBACK_MAXPACKETSIZE);
++					  LINE6_FALLBACK_MAXPACKETSIZE,
++					  LINE6_MIDIBUF_READ_TX);
+ 
+ 		if (done == 0)
+ 			break;
+diff --git a/sound/usb/line6/midibuf.c b/sound/usb/line6/midibuf.c
+index c931d48801ebe..4622234723a6e 100644
+--- a/sound/usb/line6/midibuf.c
++++ b/sound/usb/line6/midibuf.c
+@@ -13,6 +13,7 @@
+ 
+ #include "midibuf.h"
+ 
++
+ static int midibuf_message_length(unsigned char code)
+ {
+ 	int message_length;
+@@ -24,12 +25,7 @@ static int midibuf_message_length(unsigned char code)
+ 
+ 		message_length = length[(code >> 4) - 8];
+ 	} else {
+-		/*
+-		   Note that according to the MIDI specification 0xf2 is
+-		   the "Song Position Pointer", but this is used by Line 6
+-		   to send sysex messages to the host.
+-		 */
+-		static const int length[] = { -1, 2, -1, 2, -1, -1, 1, 1, 1, 1,
++		static const int length[] = { -1, 2, 2, 2, -1, -1, 1, 1, 1, -1,
+ 			1, 1, 1, -1, 1, 1
+ 		};
+ 		message_length = length[code & 0x0f];
+@@ -129,7 +125,7 @@ int line6_midibuf_write(struct midi_buffer *this, unsigned char *data,
+ }
+ 
+ int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
+-		       int length)
++		       int length, int read_type)
+ {
+ 	int bytes_used;
+ 	int length1, length2;
+@@ -152,9 +148,22 @@ int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
+ 
+ 	length1 = this->size - this->pos_read;
+ 
+-	/* check MIDI command length */
+ 	command = this->buf[this->pos_read];
++	/*
++	   PODxt always has status byte lower nibble set to 0010,
++	   when it means to send 0000, so we correct if here so
++	   that control/program changes come on channel 1 and
++	   sysex message status byte is correct
++	 */
++	if (read_type == LINE6_MIDIBUF_READ_RX) {
++		if (command == 0xb2 || command == 0xc2 || command == 0xf2) {
++			unsigned char fixed = command & 0xf0;
++			this->buf[this->pos_read] = fixed;
++			command = fixed;
++		}
++	}
+ 
++	/* check MIDI command length */
+ 	if (command & 0x80) {
+ 		midi_length = midibuf_message_length(command);
+ 		this->command_prev = command;
+diff --git a/sound/usb/line6/midibuf.h b/sound/usb/line6/midibuf.h
+index 6ea21ffb66271..187f49c975c28 100644
+--- a/sound/usb/line6/midibuf.h
++++ b/sound/usb/line6/midibuf.h
+@@ -12,6 +12,9 @@
+ #ifndef MIDIBUF_H
+ #define MIDIBUF_H
+ 
++#define LINE6_MIDIBUF_READ_TX 0
++#define LINE6_MIDIBUF_READ_RX 1
++
+ struct midi_buffer {
+ 	unsigned char *buf;
+ 	int size;
+@@ -27,7 +30,7 @@ extern void line6_midibuf_destroy(struct midi_buffer *mb);
+ extern int line6_midibuf_ignore(struct midi_buffer *mb, int length);
+ extern int line6_midibuf_init(struct midi_buffer *mb, int size, int split);
+ extern int line6_midibuf_read(struct midi_buffer *mb, unsigned char *data,
+-			      int length);
++			      int length, int read_type);
+ extern void line6_midibuf_reset(struct midi_buffer *mb);
+ extern int line6_midibuf_write(struct midi_buffer *mb, unsigned char *data,
+ 			       int length);
+diff --git a/sound/usb/line6/pod.c b/sound/usb/line6/pod.c
+index dff8e7d5f3050..41cb655eb4a6f 100644
+--- a/sound/usb/line6/pod.c
++++ b/sound/usb/line6/pod.c
+@@ -169,8 +169,9 @@ static struct line6_pcm_properties pod_pcm_properties = {
+ 	.bytes_per_channel = 3 /* SNDRV_PCM_FMTBIT_S24_3LE */
+ };
+ 
++
+ static const char pod_version_header[] = {
+-	0xf2, 0x7e, 0x7f, 0x06, 0x02
++	0xf0, 0x7e, 0x7f, 0x06, 0x02
+ };
+ 
+ /* forward declarations: */
+diff --git a/tools/arch/parisc/include/uapi/asm/mman.h b/tools/arch/parisc/include/uapi/asm/mman.h
+index 1bd78758bde98..5d789369aeef8 100644
+--- a/tools/arch/parisc/include/uapi/asm/mman.h
++++ b/tools/arch/parisc/include/uapi/asm/mman.h
+@@ -1,20 +1,20 @@
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+ #ifndef TOOLS_ARCH_PARISC_UAPI_ASM_MMAN_FIX_H
+ #define TOOLS_ARCH_PARISC_UAPI_ASM_MMAN_FIX_H
+-#define MADV_DODUMP	70
++#define MADV_DODUMP	17
+ #define MADV_DOFORK	11
+-#define MADV_DONTDUMP   69
++#define MADV_DONTDUMP   16
+ #define MADV_DONTFORK	10
+ #define MADV_DONTNEED   4
+ #define MADV_FREE	8
+-#define MADV_HUGEPAGE	67
+-#define MADV_MERGEABLE   65
+-#define MADV_NOHUGEPAGE	68
++#define MADV_HUGEPAGE	14
++#define MADV_MERGEABLE  12
++#define MADV_NOHUGEPAGE 15
+ #define MADV_NORMAL     0
+ #define MADV_RANDOM     1
+ #define MADV_REMOVE	9
+ #define MADV_SEQUENTIAL 2
+-#define MADV_UNMERGEABLE 66
++#define MADV_UNMERGEABLE 13
+ #define MADV_WILLNEED   3
+ #define MAP_ANONYMOUS	0x10
+ #define MAP_DENYWRITE	0x0800
+diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h
+index b3e418afc21a2..71010922cd4d1 100644
+--- a/tools/perf/bench/bench.h
++++ b/tools/perf/bench/bench.h
+@@ -10,25 +10,13 @@ extern struct timeval bench__start, bench__end, bench__runtime;
+  * The madvise transparent hugepage constants were added in glibc
+  * 2.13. For compatibility with older versions of glibc, define these
+  * tokens if they are not already defined.
+- *
+- * PA-RISC uses different madvise values from other architectures and
+- * needs to be special-cased.
+  */
+-#ifdef __hppa__
+-# ifndef MADV_HUGEPAGE
+-#  define MADV_HUGEPAGE		67
+-# endif
+-# ifndef MADV_NOHUGEPAGE
+-#  define MADV_NOHUGEPAGE	68
+-# endif
+-#else
+ # ifndef MADV_HUGEPAGE
+ #  define MADV_HUGEPAGE		14
+ # endif
+ # ifndef MADV_NOHUGEPAGE
+ #  define MADV_NOHUGEPAGE	15
+ # endif
+-#endif
+ 
+ int bench_numa(int argc, const char **argv);
+ int bench_sched_messaging(int argc, const char **argv);
+diff --git a/tools/perf/tests/attr.py b/tools/perf/tests/attr.py
+index 44090a9a19f3b..3e07eee33b103 100644
+--- a/tools/perf/tests/attr.py
++++ b/tools/perf/tests/attr.py
+@@ -1,4 +1,3 @@
+-#! /usr/bin/python
+ # SPDX-License-Identifier: GPL-2.0
+ 
+ import os
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index 05cbdb6e6be8e..1e607403c94c1 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1962,7 +1962,7 @@ static int find_dso_sym(struct dso *dso, const char *sym_name, u64 *start,
+ 				*size = sym->start - *start;
+ 			if (idx > 0) {
+ 				if (*size)
+-					return 1;
++					return 0;
+ 			} else if (dso_sym_match(sym, sym_name, &cnt, idx)) {
+ 				print_duplicate_syms(dso, sym_name);
+ 				return -EINVAL;
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index 230e94bf7775a..6de57d9ee7cc2 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -267,26 +267,13 @@ static int die_get_attr_udata(Dwarf_Die *tp_die, unsigned int attr_name,
+ {
+ 	Dwarf_Attribute attr;
+ 
+-	if (dwarf_attr(tp_die, attr_name, &attr) == NULL ||
++	if (dwarf_attr_integrate(tp_die, attr_name, &attr) == NULL ||
+ 	    dwarf_formudata(&attr, result) != 0)
+ 		return -ENOENT;
+ 
+ 	return 0;
+ }
+ 
+-/* Get attribute and translate it as a sdata */
+-static int die_get_attr_sdata(Dwarf_Die *tp_die, unsigned int attr_name,
+-			      Dwarf_Sword *result)
+-{
+-	Dwarf_Attribute attr;
+-
+-	if (dwarf_attr(tp_die, attr_name, &attr) == NULL ||
+-	    dwarf_formsdata(&attr, result) != 0)
+-		return -ENOENT;
+-
+-	return 0;
+-}
+-
+ /**
+  * die_is_signed_type - Check whether a type DIE is signed or not
+  * @tp_die: a DIE of a type
+@@ -410,9 +397,9 @@ int die_get_data_member_location(Dwarf_Die *mb_die, Dwarf_Word *offs)
+ /* Get the call file index number in CU DIE */
+ static int die_get_call_fileno(Dwarf_Die *in_die)
+ {
+-	Dwarf_Sword idx;
++	Dwarf_Word idx;
+ 
+-	if (die_get_attr_sdata(in_die, DW_AT_call_file, &idx) == 0)
++	if (die_get_attr_udata(in_die, DW_AT_call_file, &idx) == 0)
+ 		return (int)idx;
+ 	else
+ 		return -ENOENT;
+@@ -421,9 +408,9 @@ static int die_get_call_fileno(Dwarf_Die *in_die)
+ /* Get the declared file index number in CU DIE */
+ static int die_get_decl_fileno(Dwarf_Die *pdie)
+ {
+-	Dwarf_Sword idx;
++	Dwarf_Word idx;
+ 
+-	if (die_get_attr_sdata(pdie, DW_AT_decl_file, &idx) == 0)
++	if (die_get_attr_udata(pdie, DW_AT_decl_file, &idx) == 0)
+ 		return (int)idx;
+ 	else
+ 		return -ENOENT;
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 5fba57c10eddc..8dde4369fbcdb 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1129,7 +1129,7 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ 			   (!used_opd && syms_ss->adjust_symbols)) {
+ 			GElf_Phdr phdr;
+ 
+-			if (elf_read_program_header(syms_ss->elf,
++			if (elf_read_program_header(runtime_ss->elf,
+ 						    (u64)sym.st_value, &phdr)) {
+ 				pr_warning("%s: failed to find program header for "
+ 					   "symbol: %s st_value: %#" PRIx64 "\n",
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index 406401f1acc25..a0b53309c5d8f 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -63,6 +63,7 @@ my %default = (
+     "STOP_TEST_AFTER"		=> 600,
+     "MAX_MONITOR_WAIT"		=> 1800,
+     "GRUB_REBOOT"		=> "grub2-reboot",
++    "GRUB_BLS_GET"		=> "grubby --info=ALL",
+     "SYSLINUX"			=> "extlinux",
+     "SYSLINUX_PATH"		=> "/boot/extlinux",
+     "CONNECT_TIMEOUT"		=> 25,
+@@ -123,6 +124,7 @@ my $last_grub_menu;
+ my $grub_file;
+ my $grub_number;
+ my $grub_reboot;
++my $grub_bls_get;
+ my $syslinux;
+ my $syslinux_path;
+ my $syslinux_label;
+@@ -292,6 +294,7 @@ my %option_map = (
+     "GRUB_MENU"			=> \$grub_menu,
+     "GRUB_FILE"			=> \$grub_file,
+     "GRUB_REBOOT"		=> \$grub_reboot,
++    "GRUB_BLS_GET"		=> \$grub_bls_get,
+     "SYSLINUX"			=> \$syslinux,
+     "SYSLINUX_PATH"		=> \$syslinux_path,
+     "SYSLINUX_LABEL"		=> \$syslinux_label,
+@@ -437,7 +440,7 @@ EOF
+     ;
+ $config_help{"REBOOT_TYPE"} = << "EOF"
+  Way to reboot the box to the test kernel.
+- Only valid options so far are "grub", "grub2", "syslinux", and "script".
++ Only valid options so far are "grub", "grub2", "grub2bls", "syslinux", and "script".
+ 
+  If you specify grub, it will assume grub version 1
+  and will search in /boot/grub/menu.lst for the title \$GRUB_MENU
+@@ -451,6 +454,8 @@ $config_help{"REBOOT_TYPE"} = << "EOF"
+  If you specify grub2, then you also need to specify both \$GRUB_MENU
+  and \$GRUB_FILE.
+ 
++ If you specify grub2bls, then you also need to specify \$GRUB_MENU.
++
+  If you specify syslinux, then you may use SYSLINUX to define the syslinux
+  command (defaults to extlinux), and SYSLINUX_PATH to specify the path to
+  the syslinux install (defaults to /boot/extlinux). But you have to specify
+@@ -476,6 +481,9 @@ $config_help{"GRUB_MENU"} = << "EOF"
+  menu must be a non-nested menu. Add the quotes used in the menu
+  to guarantee your selection, as the first menuentry with the content
+  of \$GRUB_MENU that is found will be used.
++
++ For grub2bls, \$GRUB_MENU is searched on the result of \$GRUB_BLS_GET
++ command for the lines that begin with "title".
+ EOF
+     ;
+ $config_help{"GRUB_FILE"} = << "EOF"
+@@ -692,7 +700,7 @@ sub get_mandatory_configs {
+ 	}
+     }
+ 
+-    if ($rtype eq "grub") {
++    if (($rtype eq "grub") or ($rtype eq "grub2bls")) {
+ 	get_mandatory_config("GRUB_MENU");
+     }
+ 
+@@ -1850,84 +1858,121 @@ sub run_scp_mod {
+     return run_scp($src, $dst, $cp_scp);
+ }
+ 
+-sub get_grub2_index {
++sub _get_grub_index {
++
++    my ($command, $target, $skip, $submenu) = @_;
+ 
+     return if (defined($grub_number) && defined($last_grub_menu) &&
+ 	       $last_grub_menu eq $grub_menu && defined($last_machine) &&
+ 	       $last_machine eq $machine);
+ 
+-    doprint "Find grub2 menu ... ";
++    doprint "Find $reboot_type menu ... ";
+     $grub_number = -1;
+ 
+     my $ssh_grub = $ssh_exec;
+-    $ssh_grub =~ s,\$SSH_COMMAND,cat $grub_file,g;
++    $ssh_grub =~ s,\$SSH_COMMAND,$command,g;
+ 
+     open(IN, "$ssh_grub |")
+-	or dodie "unable to get $grub_file";
++	or dodie "unable to execute $command";
+ 
+     my $found = 0;
+ 
++    my $submenu_number = 0;
++
+     while (<IN>) {
+-	if (/^menuentry.*$grub_menu/) {
++	if (/$target/) {
+ 	    $grub_number++;
+ 	    $found = 1;
+ 	    last;
+-	} elsif (/^menuentry\s|^submenu\s/) {
++	} elsif (defined($submenu) && /$submenu/) {
++		$submenu_number++;
++		$grub_number = -1;
++	} elsif (/$skip/) {
+ 	    $grub_number++;
+ 	}
+     }
+     close(IN);
+ 
+-    dodie "Could not find '$grub_menu' in $grub_file on $machine"
++    dodie "Could not find '$grub_menu' through $command on $machine"
+ 	if (!$found);
++    if ($submenu_number > 0) {
++	$grub_number = "$submenu_number>$grub_number";
++    }
+     doprint "$grub_number\n";
+     $last_grub_menu = $grub_menu;
+     $last_machine = $machine;
+ }
+ 
+-sub get_grub_index {
+-
+-    if ($reboot_type eq "grub2") {
+-	get_grub2_index;
+-	return;
+-    }
++sub get_grub2_index {
+ 
+-    if ($reboot_type ne "grub") {
+-	return;
+-    }
+     return if (defined($grub_number) && defined($last_grub_menu) &&
+ 	       $last_grub_menu eq $grub_menu && defined($last_machine) &&
+ 	       $last_machine eq $machine);
+ 
+-    doprint "Find grub menu ... ";
++    doprint "Find grub2 menu ... ";
+     $grub_number = -1;
+ 
+     my $ssh_grub = $ssh_exec;
+-    $ssh_grub =~ s,\$SSH_COMMAND,cat /boot/grub/menu.lst,g;
++    $ssh_grub =~ s,\$SSH_COMMAND,cat $grub_file,g;
+ 
+     open(IN, "$ssh_grub |")
+-	or dodie "unable to get menu.lst";
++	or dodie "unable to get $grub_file";
+ 
+     my $found = 0;
++    my $grub_menu_qt = quotemeta($grub_menu);
+ 
+     while (<IN>) {
+-	if (/^\s*title\s+$grub_menu\s*$/) {
++	if (/^menuentry.*$grub_menu_qt/) {
+ 	    $grub_number++;
+ 	    $found = 1;
+ 	    last;
+-	} elsif (/^\s*title\s/) {
++	} elsif (/^menuentry\s|^submenu\s/) {
+ 	    $grub_number++;
+ 	}
+     }
+     close(IN);
+ 
+-    dodie "Could not find '$grub_menu' in /boot/grub/menu on $machine"
++    dodie "Could not find '$grub_menu' in $grub_file on $machine"
+ 	if (!$found);
+     doprint "$grub_number\n";
+     $last_grub_menu = $grub_menu;
+     $last_machine = $machine;
+ }
+ 
++sub get_grub_index {
++
++    my $command;
++    my $target;
++    my $skip;
++    my $submenu;
++    my $grub_menu_qt;
++
++    if ($reboot_type !~ /^grub/) {
++	return;
++    }
++
++    $grub_menu_qt = quotemeta($grub_menu);
++
++    if ($reboot_type eq "grub") {
++	$command = "cat /boot/grub/menu.lst";
++	$target = '^\s*title\s+' . $grub_menu_qt . '\s*$';
++	$skip = '^\s*title\s';
++    } elsif ($reboot_type eq "grub2") {
++	$command = "cat $grub_file";
++	$target = '^\s*menuentry.*' . $grub_menu_qt;
++	$skip = '^\s*menuentry';
++	$submenu = '^\s*submenu\s';
++    } elsif ($reboot_type eq "grub2bls") {
++        $command = $grub_bls_get;
++        $target = '^title=.*' . $grub_menu_qt;
++        $skip = '^title=';
++    } else {
++	return;
++    }
++
++    _get_grub_index($command, $target, $skip, $submenu);
++}
++
+ sub wait_for_input
+ {
+     my ($fp, $time) = @_;
+@@ -1988,8 +2033,8 @@ sub reboot_to {
+ 
+     if ($reboot_type eq "grub") {
+ 	run_ssh "'(echo \"savedefault --default=$grub_number --once\" | grub --batch)'";
+-    } elsif ($reboot_type eq "grub2") {
+-	run_ssh "$grub_reboot $grub_number";
++    } elsif (($reboot_type eq "grub2") or ($reboot_type eq "grub2bls")) {
++	run_ssh "$grub_reboot \"'$grub_number'\"";
+     } elsif ($reboot_type eq "syslinux") {
+ 	run_ssh "$syslinux --once \\\"$syslinux_label\\\" $syslinux_path";
+     } elsif (defined $reboot_script) {
+@@ -3706,9 +3751,10 @@ sub test_this_config {
+     # .config to make sure it is missing the config that
+     # we had before
+     my %configs = %min_configs;
+-    delete $configs{$config};
++    $configs{$config} = "# $config is not set";
+     make_new_config ((values %configs), (values %keep_configs));
+     make_oldconfig;
++    delete $configs{$config};
+     undef %configs;
+     assign_configs \%configs, $output_config;
+ 
+@@ -4283,7 +4329,7 @@ for (my $i = 1; $i <= $opt{"NUM_TESTS"}; $i++) {
+ 
+     if (!$buildonly) {
+ 	$target = "$ssh_user\@$machine";
+-	if ($reboot_type eq "grub") {
++	if (($reboot_type eq "grub") or ($reboot_type eq "grub2bls")) {
+ 	    dodie "GRUB_MENU not defined" if (!defined($grub_menu));
+ 	} elsif ($reboot_type eq "grub2") {
+ 	    dodie "GRUB_MENU not defined" if (!defined($grub_menu));
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+index 6fed4cf2db81d..79d614f1fe8e4 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+@@ -45,11 +45,18 @@ cnt_trace() {
+ 
+ test_event_enabled() {
+     val=$1
++    check_times=10		# wait for 10 * SLEEP_TIME at most
+ 
+-    e=`cat $EVENT_ENABLE`
+-    if [ "$e" != $val ]; then
+-	fail "Expected $val but found $e"
+-    fi
++    while [ $check_times -ne 0 ]; do
++	e=`cat $EVENT_ENABLE`
++	if [ "$e" == $val ]; then
++	    return 0
++	fi
++	sleep $SLEEP_TIME
++	check_times=$((check_times - 1))
++    done
++
++    fail "Expected $val but found $e"
+ }
+ 
+ run_enable_disable() {
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index 9700281bee4c3..e7c205c8df56d 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -144,6 +144,11 @@ endef
+ clean:
+ 	$(CLEAN)
+ 
++# Enables to extend CFLAGS and LDFLAGS from command line, e.g.
++# make USERCFLAGS=-Werror USERLDFLAGS=-static
++CFLAGS += $(USERCFLAGS)
++LDFLAGS += $(USERLDFLAGS)
++
+ # When make O= with kselftest target from main level
+ # the following aren't defined.
+ #
+diff --git a/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c b/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
+index 1899bd85121f0..3f6d5be5bd145 100644
+--- a/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
++++ b/tools/testing/selftests/powerpc/dscr/dscr_sysfs_test.c
+@@ -27,6 +27,7 @@ static int check_cpu_dscr_default(char *file, unsigned long val)
+ 	rc = read(fd, buf, sizeof(buf));
+ 	if (rc == -1) {
+ 		perror("read() failed");
++		close(fd);
+ 		return 1;
+ 	}
+ 	close(fd);
+@@ -68,8 +69,10 @@ static int check_all_cpu_dscr_defaults(unsigned long val)
+ 		if (access(file, F_OK))
+ 			continue;
+ 
+-		if (check_cpu_dscr_default(file, val))
++		if (check_cpu_dscr_default(file, val)) {
++			closedir(sysfs);
+ 			return 1;
++		}
+ 	}
+ 	closedir(sysfs);
+ 	return 0;
+diff --git a/tools/testing/selftests/proc/proc-uptime-002.c b/tools/testing/selftests/proc/proc-uptime-002.c
+index e7ceabed7f51f..7d0aa22bdc12b 100644
+--- a/tools/testing/selftests/proc/proc-uptime-002.c
++++ b/tools/testing/selftests/proc/proc-uptime-002.c
+@@ -17,6 +17,7 @@
+ // while shifting across CPUs.
+ #undef NDEBUG
+ #include <assert.h>
++#include <errno.h>
+ #include <unistd.h>
+ #include <sys/syscall.h>
+ #include <stdlib.h>
+@@ -54,7 +55,7 @@ int main(void)
+ 		len += sizeof(unsigned long);
+ 		free(m);
+ 		m = malloc(len);
+-	} while (sys_sched_getaffinity(0, len, m) == -EINVAL);
++	} while (sys_sched_getaffinity(0, len, m) == -1 && errno == EINVAL);
+ 
+ 	fd = open("/proc/uptime", O_RDONLY);
+ 	assert(fd >= 0);



Thread overview: 332+ messages
2023-01-18 11:11 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2024-04-18  3:06 [gentoo-commits] proj/linux-patches:4.19 commit in: / Alice Ferrazzi
2023-09-02  9:59 Mike Pagano
2023-08-30 15:00 Mike Pagano
2023-08-16 16:59 Mike Pagano
2023-08-11 11:58 Mike Pagano
2023-08-08 18:43 Mike Pagano
2023-07-24 20:30 Mike Pagano
2023-06-28 10:29 Mike Pagano
2023-06-21 14:55 Alice Ferrazzi
2023-06-14 10:21 Mike Pagano
2023-06-09 11:32 Mike Pagano
2023-05-30 12:57 Mike Pagano
2023-05-17 11:14 Mike Pagano
2023-05-17 11:01 Mike Pagano
2023-05-10 17:59 Mike Pagano
2023-04-26  9:35 Alice Ferrazzi
2023-04-20 11:17 Alice Ferrazzi
2023-04-05 11:41 Mike Pagano
2023-03-22 14:16 Alice Ferrazzi
2023-03-17 10:46 Mike Pagano
2023-03-13 11:35 Alice Ferrazzi
2023-03-11 16:01 Mike Pagano
2023-03-03 12:31 Mike Pagano
2023-02-25 11:41 Mike Pagano
2023-02-24  3:19 Alice Ferrazzi
2023-02-24  3:15 Alice Ferrazzi
2023-02-22 14:51 Alice Ferrazzi
2023-02-06 12:49 Mike Pagano
2023-01-24  7:16 Alice Ferrazzi
2022-12-14 12:15 Mike Pagano
2022-12-08 12:14 Alice Ferrazzi
2022-11-25 17:04 Mike Pagano
2022-11-23  9:39 Alice Ferrazzi
2022-11-10 17:58 Mike Pagano
2022-11-03 15:11 Mike Pagano
2022-11-01 19:48 Mike Pagano
2022-10-26 11:41 Mike Pagano
2022-10-05 11:59 Mike Pagano
2022-09-28  9:18 Mike Pagano
2022-09-20 12:03 Mike Pagano
2022-09-15 11:09 Mike Pagano
2022-09-05 12:06 Mike Pagano
2022-08-25 10:35 Mike Pagano
2022-08-11 12:36 Mike Pagano
2022-07-29 15:28 Mike Pagano
2022-07-21 20:12 Mike Pagano
2022-07-12 16:01 Mike Pagano
2022-07-07 16:18 Mike Pagano
2022-07-02 16:07 Mike Pagano
2022-06-25 10:22 Mike Pagano
2022-06-16 11:40 Mike Pagano
2022-06-14 16:02 Mike Pagano
2022-06-06 11:05 Mike Pagano
2022-05-27 12:24 Mike Pagano
2022-05-25 11:55 Mike Pagano
2022-05-18  9:50 Mike Pagano
2022-05-15 22:12 Mike Pagano
2022-05-12 11:30 Mike Pagano
2022-05-01 17:04 Mike Pagano
2022-04-27 12:03 Mike Pagano
2022-04-20 12:09 Mike Pagano
2022-04-15 13:11 Mike Pagano
2022-04-12 19:24 Mike Pagano
2022-03-28 10:59 Mike Pagano
2022-03-23 11:57 Mike Pagano
2022-03-16 13:27 Mike Pagano
2022-03-11 10:56 Mike Pagano
2022-03-08 18:30 Mike Pagano
2022-03-02 13:08 Mike Pagano
2022-02-26 21:14 Mike Pagano
2022-02-23 12:39 Mike Pagano
2022-02-16 12:47 Mike Pagano
2022-02-11 12:53 Mike Pagano
2022-02-11 12:46 Mike Pagano
2022-02-11 12:45 Mike Pagano
2022-02-11 12:37 Mike Pagano
2022-02-08 17:56 Mike Pagano
2022-01-29 17:45 Mike Pagano
2022-01-27 11:39 Mike Pagano
2022-01-11 13:14 Mike Pagano
2022-01-05 12:55 Mike Pagano
2021-12-29 13:11 Mike Pagano
2021-12-22 14:07 Mike Pagano
2021-12-14 10:36 Mike Pagano
2021-12-08 12:55 Mike Pagano
2021-12-01 12:51 Mike Pagano
2021-11-26 11:59 Mike Pagano
2021-11-12 14:16 Mike Pagano
2021-11-06 13:26 Mike Pagano
2021-11-02 19:32 Mike Pagano
2021-10-27 11:59 Mike Pagano
2021-10-20 13:26 Mike Pagano
2021-10-17 13:12 Mike Pagano
2021-10-13 15:00 Alice Ferrazzi
2021-10-09 21:33 Mike Pagano
2021-10-06 14:06 Mike Pagano
2021-09-26 14:13 Mike Pagano
2021-09-22 11:40 Mike Pagano
2021-09-20 22:05 Mike Pagano
2021-09-03 11:22 Mike Pagano
2021-09-03 10:08 Alice Ferrazzi
2021-08-26 14:06 Mike Pagano
2021-08-25 22:45 Mike Pagano
2021-08-25 20:41 Mike Pagano
2021-08-15 20:07 Mike Pagano
2021-08-12 11:51 Mike Pagano
2021-08-08 13:39 Mike Pagano
2021-08-04 11:54 Mike Pagano
2021-08-03 12:26 Mike Pagano
2021-07-31 10:34 Alice Ferrazzi
2021-07-28 12:37 Mike Pagano
2021-07-20 15:35 Alice Ferrazzi
2021-07-13 12:38 Mike Pagano
2021-07-11 14:45 Mike Pagano
2021-06-30 14:25 Mike Pagano
2021-06-16 12:22 Mike Pagano
2021-06-10 11:46 Mike Pagano
2021-06-03 10:32 Alice Ferrazzi
2021-05-26 12:05 Mike Pagano
2021-05-22 10:03 Mike Pagano
2021-05-07 11:40 Alice Ferrazzi
2021-04-30 19:02 Mike Pagano
2021-04-28 18:31 Mike Pagano
2021-04-28 11:44 Alice Ferrazzi
2021-04-16 11:15 Alice Ferrazzi
2021-04-14 11:22 Alice Ferrazzi
2021-04-10 13:24 Mike Pagano
2021-04-07 12:21 Mike Pagano
2021-03-30 14:17 Mike Pagano
2021-03-24 12:08 Mike Pagano
2021-03-22 15:50 Mike Pagano
2021-03-20 14:26 Mike Pagano
2021-03-17 16:21 Mike Pagano
2021-03-11 14:05 Mike Pagano
2021-03-07 15:15 Mike Pagano
2021-03-04 12:08 Mike Pagano
2021-02-23 14:31 Alice Ferrazzi
2021-02-13 15:28 Alice Ferrazzi
2021-02-10 10:03 Alice Ferrazzi
2021-02-07 14:40 Alice Ferrazzi
2021-02-03 23:43 Mike Pagano
2021-01-30 13:34 Alice Ferrazzi
2021-01-27 11:15 Mike Pagano
2021-01-23 16:36 Mike Pagano
2021-01-19 20:34 Mike Pagano
2021-01-17 16:20 Mike Pagano
2021-01-12 20:06 Mike Pagano
2021-01-09 12:57 Mike Pagano
2021-01-06 14:15 Mike Pagano
2020-12-30 12:52 Mike Pagano
2020-12-11 12:56 Mike Pagano
2020-12-08 12:06 Mike Pagano
2020-12-02 12:49 Mike Pagano
2020-11-24 14:40 Mike Pagano
2020-11-22 19:26 Mike Pagano
2020-11-18 19:56 Mike Pagano
2020-11-11 15:43 Mike Pagano
2020-11-10 13:56 Mike Pagano
2020-11-05 12:35 Mike Pagano
2020-11-01 20:29 Mike Pagano
2020-10-29 11:18 Mike Pagano
2020-10-17 10:17 Mike Pagano
2020-10-14 20:36 Mike Pagano
2020-10-07 12:50 Mike Pagano
2020-10-01 12:45 Mike Pagano
2020-09-26 22:07 Mike Pagano
2020-09-26 22:00 Mike Pagano
2020-09-24 15:58 Mike Pagano
2020-09-23 12:07 Mike Pagano
2020-09-17 15:01 Mike Pagano
2020-09-17 14:55 Mike Pagano
2020-09-12 17:59 Mike Pagano
2020-09-09 17:59 Mike Pagano
2020-09-03 11:37 Mike Pagano
2020-08-26 11:15 Mike Pagano
2020-08-21 10:49 Alice Ferrazzi
2020-08-19  9:36 Alice Ferrazzi
2020-08-12 23:36 Alice Ferrazzi
2020-08-07 19:16 Mike Pagano
2020-08-05 14:51 Thomas Deutschmann
2020-07-31 18:00 Mike Pagano
2020-07-29 12:33 Mike Pagano
2020-07-22 12:42 Mike Pagano
2020-07-16 11:17 Mike Pagano
2020-07-09 12:12 Mike Pagano
2020-07-01 12:14 Mike Pagano
2020-06-29 17:41 Mike Pagano
2020-06-25 15:07 Mike Pagano
2020-06-22 14:47 Mike Pagano
2020-06-10 21:27 Mike Pagano
2020-06-07 21:52 Mike Pagano
2020-06-03 11:41 Mike Pagano
2020-05-27 16:25 Mike Pagano
2020-05-20 11:30 Mike Pagano
2020-05-20 11:27 Mike Pagano
2020-05-14 11:30 Mike Pagano
2020-05-13 12:33 Mike Pagano
2020-05-11 22:50 Mike Pagano
2020-05-09 22:20 Mike Pagano
2020-05-06 11:46 Mike Pagano
2020-05-02 19:24 Mike Pagano
2020-04-29 17:57 Mike Pagano
2020-04-23 11:44 Mike Pagano
2020-04-21 11:15 Mike Pagano
2020-04-17 11:45 Mike Pagano
2020-04-15 17:09 Mike Pagano
2020-04-13 11:34 Mike Pagano
2020-04-02 15:24 Mike Pagano
2020-03-25 14:58 Mike Pagano
2020-03-20 11:57 Mike Pagano
2020-03-18 14:21 Mike Pagano
2020-03-16 12:23 Mike Pagano
2020-03-11 17:20 Mike Pagano
2020-03-05 16:23 Mike Pagano
2020-02-28 16:38 Mike Pagano
2020-02-24 11:06 Mike Pagano
2020-02-19 23:45 Mike Pagano
2020-02-14 23:52 Mike Pagano
2020-02-11 16:20 Mike Pagano
2020-02-05 17:05 Mike Pagano
2020-02-01 10:37 Mike Pagano
2020-02-01 10:30 Mike Pagano
2020-01-29 16:16 Mike Pagano
2020-01-27 14:25 Mike Pagano
2020-01-23 11:07 Mike Pagano
2020-01-17 19:56 Mike Pagano
2020-01-14 22:30 Mike Pagano
2020-01-12 15:00 Mike Pagano
2020-01-09 11:15 Mike Pagano
2020-01-04 19:50 Mike Pagano
2019-12-31 17:46 Mike Pagano
2019-12-21 15:03 Mike Pagano
2019-12-17 21:56 Mike Pagano
2019-12-13 12:35 Mike Pagano
2019-12-05 12:03 Alice Ferrazzi
2019-12-01 14:06 Thomas Deutschmann
2019-11-24 15:44 Mike Pagano
2019-11-20 19:36 Mike Pagano
2019-11-12 21:00 Mike Pagano
2019-11-10 16:20 Mike Pagano
2019-11-06 14:26 Mike Pagano
2019-10-29 12:04 Mike Pagano
2019-10-17 22:27 Mike Pagano
2019-10-11 17:04 Mike Pagano
2019-10-07 17:42 Mike Pagano
2019-10-05 11:42 Mike Pagano
2019-10-01 10:10 Mike Pagano
2019-09-21 17:11 Mike Pagano
2019-09-19 12:34 Mike Pagano
2019-09-19 10:04 Mike Pagano
2019-09-16 12:26 Mike Pagano
2019-09-10 11:12 Mike Pagano
2019-09-06 17:25 Mike Pagano
2019-08-29 14:15 Mike Pagano
2019-08-25 17:37 Mike Pagano
2019-08-23 22:18 Mike Pagano
2019-08-16 12:26 Mike Pagano
2019-08-16 12:13 Mike Pagano
2019-08-09 17:45 Mike Pagano
2019-08-06 19:19 Mike Pagano
2019-08-04 16:15 Mike Pagano
2019-07-31 15:09 Mike Pagano
2019-07-31 10:22 Mike Pagano
2019-07-28 16:27 Mike Pagano
2019-07-26 11:35 Mike Pagano
2019-07-21 14:41 Mike Pagano
2019-07-14 15:44 Mike Pagano
2019-07-10 11:05 Mike Pagano
2019-07-03 11:34 Mike Pagano
2019-06-25 10:53 Mike Pagano
2019-06-22 19:06 Mike Pagano
2019-06-19 17:17 Thomas Deutschmann
2019-06-17 19:22 Mike Pagano
2019-06-15 15:07 Mike Pagano
2019-06-11 12:42 Mike Pagano
2019-06-10 19:43 Mike Pagano
2019-06-09 16:19 Mike Pagano
2019-06-04 11:11 Mike Pagano
2019-05-31 15:02 Mike Pagano
2019-05-26 17:10 Mike Pagano
2019-05-22 11:02 Mike Pagano
2019-05-16 23:03 Mike Pagano
2019-05-14 21:00 Mike Pagano
2019-05-10 19:40 Mike Pagano
2019-05-08 10:06 Mike Pagano
2019-05-05 13:42 Mike Pagano
2019-05-04 18:28 Mike Pagano
2019-05-02 10:13 Mike Pagano
2019-04-27 17:36 Mike Pagano
2019-04-20 11:09 Mike Pagano
2019-04-19 19:51 Mike Pagano
2019-04-05 21:46 Mike Pagano
2019-04-03 10:59 Mike Pagano
2019-03-27 10:22 Mike Pagano
2019-03-23 20:23 Mike Pagano
2019-03-19 16:58 Mike Pagano
2019-03-13 22:08 Mike Pagano
2019-03-10 14:15 Mike Pagano
2019-03-06 19:06 Mike Pagano
2019-03-05 18:04 Mike Pagano
2019-02-27 11:23 Mike Pagano
2019-02-23 11:35 Mike Pagano
2019-02-23  0:46 Mike Pagano
2019-02-20 11:19 Mike Pagano
2019-02-16  0:42 Mike Pagano
2019-02-15 12:39 Mike Pagano
2019-02-12 20:53 Mike Pagano
2019-02-06 17:08 Mike Pagano
2019-01-31 11:28 Mike Pagano
2019-01-26 15:09 Mike Pagano
2019-01-22 23:06 Mike Pagano
2019-01-16 23:32 Mike Pagano
2019-01-13 19:29 Mike Pagano
2019-01-09 17:54 Mike Pagano
2018-12-29 18:55 Mike Pagano
2018-12-29  1:08 Mike Pagano
2018-12-21 14:58 Mike Pagano
2018-12-19 19:09 Mike Pagano
2018-12-17 11:42 Mike Pagano
2018-12-13 11:40 Mike Pagano
2018-12-08 13:17 Mike Pagano
2018-12-08 13:17 Mike Pagano
2018-12-05 20:16 Mike Pagano
2018-12-01 15:08 Mike Pagano
2018-11-27 16:16 Mike Pagano
2018-11-23 12:42 Mike Pagano
2018-11-21 12:30 Mike Pagano
2018-11-14  0:47 Mike Pagano
2018-11-14  0:47 Mike Pagano
2018-11-13 20:44 Mike Pagano
2018-11-04 16:22 Alice Ferrazzi

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1674040258.56b72a550443e7e9eefd7614b6850bd17794ddef.mpagano@gentoo \
    --to=mpagano@gentoo.org \
    --cc=gentoo-commits@lists.gentoo.org \
    --cc=gentoo-dev@lists.gentoo.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.